Error Estimates of Theoretical Models: a Guide
Dobaczewski, J; Reinhard, P -G
2014-01-01
This guide offers suggestions and insights on uncertainty quantification for nuclear structure models. We discuss a simple approach to statistical error estimates and strategies for assessing systematic errors, and show how to uncover inter-dependencies by correlation analysis. The basic concepts are illustrated through simple examples. By providing theoretical error bars on predicted quantities and using statistical methods to study correlations between observables, theory can significantly enhance the feedback between experiment and nuclear modeling.
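The kind of statistical error propagation and correlation analysis the guide describes can be sketched as follows. This is an illustrative toy, not the guide's own code: the linear model, noise level, and the two derived "observables" A and B are invented for demonstration.

```python
import numpy as np

# Illustrative sketch of statistical error bars and a correlation
# coefficient from a least-squares fit. The linear model, noise level,
# and the derived "observables" A and B are invented assumptions.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
true_p = np.array([2.0, -1.0])          # true slope and intercept
sigma = 0.1                             # assumed (known) data error
y = true_p[0] * x + true_p[1] + rng.normal(0.0, sigma, x.size)

# Design matrix for y = a*x + b; parameter covariance C = sigma^2 (J^T J)^-1
J = np.column_stack([x, np.ones_like(x)])
p_hat, *_ = np.linalg.lstsq(J, y, rcond=None)
cov = sigma**2 * np.linalg.inv(J.T @ J)

# Propagate to observables A = a + b and B = a - b: err^2 = g^T C g,
# where g is the gradient of the observable w.r.t. the parameters
gA, gB = np.array([1.0, 1.0]), np.array([1.0, -1.0])
errA = float(np.sqrt(gA @ cov @ gA))
errB = float(np.sqrt(gB @ cov @ gB))
corr = float(gA @ cov @ gB) / (errA * errB)   # correlation between A and B
print(p_hat, errA, errB, corr)
```

A correlation near ±1 would signal that the two predicted quantities carry essentially the same information about the model parameters, which is the inter-dependence the guide's correlation analysis is meant to uncover.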
Theoretical accuracy in cosmological growth estimation
Bose, Benjamin; Koyama, Kazuya; Hellwing, Wojciech A.; Zhao, Gong-Bo; Winther, Hans A.
2017-07-01
We elucidate the importance of the consistent treatment of gravity-model-specific nonlinearities when estimating the growth of cosmological structures from redshift space distortions (RSD). Within the context of standard perturbation theory (SPT), we compare the predictions of two theoretical templates with redshift space data from COLA (comoving Lagrangian acceleration) simulations in the normal branch of DGP gravity (nDGP) and general relativity (GR). Using COLA for these comparisons is validated using a suite of full N-body simulations for the same theories. The two theoretical templates correspond to the standard general relativistic perturbation equations and those same equations modeled within nDGP. Gravitational clustering nonlinear effects are accounted for by modeling the power spectrum up to one-loop order, and redshift space clustering anisotropy is modeled using the Taruya, Nishimichi and Saito (TNS) RSD model. Using this approach, we attempt to recover the simulation's fiducial logarithmic growth parameter f. By assigning the simulation data errors representing an idealized survey with a volume of 10 Gpc³/h³, we find the GR template is unable to recover the fiducial f to within 1σ at z = 1 when we match the data up to k_max = 0.195 h/Mpc. On the other hand, the DGP template recovers the fiducial value within 1σ. Further, we conduct the same analysis for sets of mock data generated for generalized models of modified gravity using SPT, where again we analyze the GR template's ability to recover the fiducial value. We find that for models with enhanced gravitational nonlinearity, the theoretical bias of the GR template becomes significant for stage IV surveys. Thus, we show that for the future large-data-volume galaxy surveys, the self-consistent modeling of non-GR gravity scenarios will be crucial in constraining theory parameters.
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included.
A System Theoretic Approach to Bandwidth Estimation
Liebeherr, Jorg; Fidler, Markus; Valaee, Shahrokh
2008-01-01
It is shown that bandwidth estimation in packet networks can be viewed in terms of min-plus linear system theory. The available bandwidth of a link or complete path is expressed in terms of a {\\em service curve}, which is a function that appears in the network calculus to express the service available to a traffic flow. The service curve is estimated based on measurements of a sequence of probing packets or passive measurements of a sample path of arrivals. It is shown that existing bandwidth estimation methods can be derived in the min-plus algebra of the network calculus, thus providing further mathematical justification for these methods. Principal difficulties of estimating available bandwidth from measurement of network probes are related to potential non-linearities of the underlying network. When networks are viewed as systems that operate either in a linear or in a non-linear regime, it is argued that probing schemes extract the most information at a point when the network crosses from a linear to a n...
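The min-plus system view described above can be sketched in a few lines. This is a minimal illustration of the algebra, not the paper's estimation procedure: the discrete-time rate-latency service curve and constant-rate probe arrivals are invented examples.

```python
# Minimal sketch of the min-plus (network calculus) view of a link.
# The rate-latency service curve and constant-rate probing arrivals
# below are invented examples, not the paper's measurement method.
def min_plus_conv(A, S):
    """Min-plus convolution: (A (x) S)(t) = min over 0<=s<=t of A(s) + S(t-s)."""
    T = len(A)
    return [min(A[s] + S[t - s] for s in range(t + 1)) for t in range(T)]

T = 20
R, L = 2.0, 3                                  # service rate and latency
S = [R * max(t - L, 0) for t in range(T)]      # rate-latency service curve
A = [1.0 * t for t in range(T)]                # constant-rate probe arrivals
# In a min-plus linear network, departures are bounded below by A (x) S
D_lower = min_plus_conv(A, S)
print(D_lower)
```

Estimating the service curve S from probe measurements then amounts to inverting this convolution, which is where the paper's linear-versus-nonlinear regime distinction matters.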
Potential benefits of remote sensing: Theoretical framework and empirical estimate
Eisgruber, L. M.
1972-01-01
A theoretical framework is outlined for estimating the social returns from research on and application of remote sensing. The approximate dollar magnitude of a particular application of remote sensing, namely estimates of corn, soybean, and wheat production, is given. Finally, some comments are made on the limitations of this procedure and on the implications of the results.
A Theoretical Approach for Estimating Fracture Toughness of Ductile Metals
Y.T. He; F. Li; G.Q. Zhang; L.J. Ernst; X.J. FU
2004-01-01
Fracture toughness is very important when applying Damage Tolerance Design and Assessment Techniques. The traditional testing approach for obtaining fracture toughness values is costly and time consuming. In order to estimate the fracture toughness of ductile metals, the fracture mechanics theory, materials plastic deformation theory and materials constructive relationships are employed here. A series of formulae and a theoretical approach are presented to calculate fracture toughness values of different materials in the plane stress and plane strain conditions. Compared with test results, evaluated values have a good agreement.
Negro, Francesco; Yavuz, Ş Utku; Farina, Dario
2014-01-01
Contractile properties of human motor units provide information on the force capacity and fatigability of muscles. The spike-triggered averaging technique (STA) is a conventional method used to estimate the twitch waveform of single motor units in vivo by averaging the joint force signal. Several limitations of this technique have been previously discussed in an empirical way, using simulated and experimental data. In this study, we provide a theoretical analysis of this technique in the frequency domain and describe its intrinsic limitations. By analyzing the analytical expression of STA, first we show that a certain degree of correlation between the motor unit activities prevents an accurate estimation of the twitch force, even from relatively long recordings. Second, we show that the quality of the twitch estimates by STA is highly related to the relative variability of the inter-spike intervals of motor unit action potentials. Interestingly, if this variability is extremely high, correct estimates could be obtained even for high discharge rates. However, for physiological inter-spike interval variability and discharge rate, the technique performs with relatively low estimation accuracy and high estimation variance. Finally, we show that the selection of the triggers that are most distant from the previous and next, which is often suggested, is not an effective way for improving STA estimates and in some cases can even be detrimental. These results show the intrinsic limitations of the STA technique and provide a theoretical framework for the design of new methods for the measurement of motor unit force twitch.
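The spike-triggered averaging procedure analyzed above can be sketched as follows. The twitch shape, discharge statistics, and noise level are invented for demonstration; because firing is ongoing, overlapping twitches contaminate the average, which is exactly the bias the paper analyzes.

```python
import numpy as np

# Illustrative sketch of spike-triggered averaging (STA) of a force
# signal. The twitch model, discharge statistics, and noise level are
# invented assumptions, not taken from the paper.
rng = np.random.default_rng(1)
fs = 1000                        # sampling rate, Hz
t = np.arange(0, 0.3, 1 / fs)    # 300 ms twitch support
twitch = t * np.exp(-t / 0.05)   # simple twitch shape (arbitrary units)

# Irregular spike train: gamma-distributed inter-spike intervals
isi = rng.gamma(shape=2.0, scale=0.05, size=300)
spikes = np.cumsum(isi)
n = int(spikes[-1] * fs) + len(twitch)
force = np.zeros(n)
for s in spikes:                 # force = superposition of twitches
    i = int(s * fs)
    force[i:i + len(twitch)] += twitch
force += rng.normal(0.0, 0.002, n)   # measurement noise

# STA: average the force in a fixed window following each spike
win = len(twitch)
epochs = [force[int(s * fs): int(s * fs) + win]
          for s in spikes if int(s * fs) + win <= n]
sta = np.mean(epochs, axis=0)
print(sta.shape, float(sta.max()))
```

Comparing `sta` against `twitch` in such a simulation shows the offset contributed by neighboring discharges, i.e., why STA accuracy depends on inter-spike interval variability.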
PART I: Theoretical Site Response Estimation for Microzoning Purposes
Triantafyllidis, P.; Suhadolc, P.; Hatzidimitriou, P. M.; Anastasiadis, A.; Theodulidis, N.
We estimate the theoretical site response along seven cross sections located in the city of Thessaloniki (Greece). For this purpose, the 2-D structural models used are based on the known geometry and the dynamic soil properties derived from borehole measurements and other geophysical techniques. Several double-couple sources have been employed to generate the seismic wavefield, and a hybrid method that combines modal summation with finite differences has been deployed to produce synthetic accelerograms up to a maximum frequency of 6 Hz for all components of motion. The ratios between the response spectra of signals derived for the 2-D local model and the corresponding spectra of signals derived for the 1-D bedrock reference model at the same site allow us to estimate the site response due to lateral heterogeneities. We interpret the results in terms of both the geological and geometrical features of the models and the characteristics of the wave propagation. The cases discussed confirm that the geometry and depth of the rock basement, along with the impedance contrast, are responsible for ground amplification phenomena such as edge effects and the generation and entrapment of local surface waves. Our analysis also confirms that peak ground acceleration is not well correlated with damage and that a substantially better estimator of possible damage is the spectral amplification.
Passalia, Claudio; Alfano, Orlando M.; Brandi, Rodolfo J. (E-mail: rbrandi@santafe-conicet.gov.ar) — INTEC, Instituto de Desarrollo Tecnológico para la Industria Química, CONICET-UNL, Güemes 3450, 3000 Santa Fe (Argentina); FICH, Departamento de Medio Ambiente, Facultad de Ingeniería y Ciencias Hídricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina)
2012-04-15
Highlights: • Indoor pollution control via photocatalytic reactors. • Scaling-up methodology based on previously determined mechanistic kinetics. • Radiation interchange model between catalytic walls using configuration factors. • Modeling and experimental validation of a complex geometry photocatalytic reactor. - Abstract: A methodology for modeling photocatalytic reactors for their application in indoor air pollution control is carried out. The methodology implies, firstly, the determination of intrinsic reaction kinetics for the removal of formaldehyde. This is achieved by means of a simple geometry, continuous reactor operating under kinetic control regime and steady state. The kinetic parameters were estimated from experimental data by means of a nonlinear optimization algorithm. The second step was the application of the obtained kinetic parameters to a very different photoreactor configuration. In this case, the reactor is a corrugated wall type using nanosize TiO₂ as catalyst irradiated by UV lamps that provided a spatially uniform radiation field. The radiative transfer within the reactor was modeled through a superficial emission model for the lamps, the ray tracing method and the computation of view factors. The velocity and concentration fields were evaluated by means of a commercial CFD tool (Fluent 12) where the radiation model was introduced externally. The results of the model were compared experimentally in a corrugated wall, bench scale reactor constructed in the laboratory. The overall pollutant conversion showed good agreement between model predictions and experiments, with a root mean square error less than 4%.
Theoretical Estimate of Hydride Affinities of Aromatic Carbonyl Compounds
AI Teng; ZHU Xiao-Qing; CHENG Jin-Pei
2003-01-01
Aromatic carbonyl compounds are among the most important organic compounds, and their reduction by hydride reagents such as LiAlH4 or NaBH4 is widely used in organic synthesis. The reactivity of carbonyl compounds generally increases in the following order: ketone < aldehyde, and amide < acid < ester < acid halide, which can be related to their hydride affinities (HA). In a previous paper, Robert [1] calculated the absolute HA of a series of small non-aromatic carbonyl compounds. In this paper, we use the DFT method at the B3LYP/6-311++G(2d,2p)//B3LYP/6-31+G* level to estimate the hydride affinities of five groups of aromatic carbonyl compounds. The detailed results are listed in Table 1.
Theoretical Formalism To Estimate the Positron Scattering Cross Section.
Singh, Suvam; Dutta, Sangita; Naghma, Rahla; Antony, Bobby
2016-07-21
A theoretical formalism is introduced in this article to calculate the total cross sections for positron scattering. This method incorporates positron-target interaction in the spherical complex optical potential formalism. The study of positron collision has been quite subtle until now. However, recently, it has emerged as an interesting area due to its role in atomic and molecular structure physics, astrophysics, and medicine. With the present method, the total cross sections for simple atoms C, N, and O and their diatomic molecules C2, N2, and O2 are obtained and compared with existing data. The total cross section obtained in the present work gives a more consistent shape and magnitude than existing theories. The characteristic dip below 10 eV is identified due to the positronium formation. The deviation of the present cross section with measurements at energies below 10 eV is attributed to the neglect of forward angle-discrimination effects in experiments, the inefficiency of additivity rule for molecules, empirical treatment of positronium formation, and the neglect of annihilation reactions. In spite of these deficiencies, the present results show consistent behavior and reasonable agreement with previous data, wherever available. Besides, this is the first computational model to report positron scattering cross sections over the energy range from 1 to 5000 eV.
Theoretical and experimental estimates of the Peierls stress
Nabarro, FRN
1997-03-01
…an error of a factor of 2 in this exponent in Peierls's original estimate. A revised estimate by Huntington introduced a further factor of 2. Three experimental estimates are available: from the Bordoni peaks (which agrees with the Huntington theory), from…
Theoretical and Experimental Estimations of Volumetric Inductive Phase Shift in Breast Cancer Tissue
González, C. A.; Lozano, L. M.; Uscanga, M. C.; Silva, J. G.; Polo, S. M.
2013-04-01
Impedance measurement based on magnetic induction for breast cancer detection has been proposed in several studies. This study evaluates, theoretically and experimentally, the use of a non-invasive technique based on magnetic induction for the detection of patho-physiological conditions in breast cancer tissue associated with its volumetric electrical conductivity changes, through inductive phase shift measurements. An induction coils-breast 3D pixel model was designed and tested. The model involves two circular coils coaxially centered and a human breast volume centrally placed with respect to the coils. A time-harmonic numerical simulation study addressed the effects of the frequency-dependent electrical properties of tumoral tissue on the volumetric inductive phase shift of the breast model measured with the circular coils as inductor and sensor elements. Experimentally, five female volunteer patients with infiltrating ductal carcinoma, previously diagnosed by the radiology and oncology departments of the Specialty Clinic for Women of the Mexican Army, were measured by an experimental inductive spectrometer using an ergonomic inductor-sensor coil designed to estimate the volumetric inductive phase shift in human breast tissue. Theoretical and experimental inductive phase shift estimations were developed at four frequencies: 0.01, 0.1, 1 and 10 MHz. The theoretical estimations were qualitatively in agreement with the experimental findings. Important increments in volumetric inductive phase shift measurements were evident at 0.01 MHz in both theoretical and experimental observations. The results suggest that the tested technique has the potential to detect pathological conditions in breast tissue associated with cancer by non-invasive monitoring. Further complementary studies are warranted to confirm these observations.
Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong
2015-08-01
For normal eyes without a history of ocular surgery, the traditional equations for calculating intraocular lens (IOL) power, such as SRK/T, Holladay, Haigis, and SRK-II, are all relatively accurate. However, for eyes that have undergone refractive surgery such as LASIK, or eyes diagnosed with keratoconus, these equations may produce significant postoperative refractive error, which may cause poor satisfaction after cataract surgery. Although some methods have been proposed to solve this problem, such as the Haigis-L equation [1] or using preoperative data (data before LASIK) to estimate the K value [2], no precise equations are available for these eyes. Here, we introduce a novel intraocular lens power estimation method based on accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopt the exactly measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens power for a patient with keratoconus and for another post-LASIK patient agreed very well with their visual capacity after cataract surgery.
Liarte, Danilo B.; Posen, Sam; Transtrum, Mark K.; Catelani, Gianluigi; Liepe, Matthias; Sethna, James P.
2017-03-01
Theoretical limits to the performance of superconductors in high magnetic fields parallel to their surfaces are of key relevance to current and future accelerating cavities, especially those made of new higher-Tc materials such as Nb3Sn, NbN, and MgB2. Indeed, beyond the so-called superheating field H_sh, flux will spontaneously penetrate even a perfect superconducting surface and ruin the performance. We present intuitive arguments and simple estimates for H_sh, and combine them with our previous rigorous calculations, which we summarize. We briefly discuss experimental measurements of the superheating field, comparing them to our estimates. We explore the effects of materials anisotropy and the danger of disorder in nucleating vortex entry. Will we need to control surface orientation in the layered compound MgB2? Can we estimate theoretically whether dirt and defects make these new materials fundamentally more challenging to optimize than niobium? Finally, we discuss and analyze recent proposals to use thin superconducting layers or laminates to enhance the performance of superconducting cavities. Flux entering a laminate can lead to so-called pancake vortices; we consider the physics of the dislocation motion and the potential re-annihilation or stabilization of these vortices after their entry.
Information-theoretic methods for estimating of complicated probability distributions
Zong, Zhi
2006-01-01
Mixing various disciplines frequently produces something profound and far-reaching; cybernetics is an often-quoted example. The mix of information theory, statistics, and computing technology has proved very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in many fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, and neur…
Note on a paradox in decision-theoretic interval estimation
Kabaila, Paul
2012-01-01
Confidence intervals are assessed according to two criteria, namely expected length and coverage probability. In an attempt to apply the decision-theoretic method to finding a good confidence interval, a loss function that is a linear combination of the interval length and the indicator function that the interval includes the parameter of interest has been proposed. We consider the particular case that the parameter of interest is the normal mean, when the variance is unknown. Casella, Hwang and Robert, Statistica Sinica, 1993, have shown that this loss function, combined with the standard noninformative prior, leads to a generalized Bayes rule that is a confidence interval for this parameter which has "paradoxical behaviour". We show that a simple modification of this loss function, combined with the same prior, leads to a generalized Bayes rule that is the usual confidence interval i.e. the "paradoxical behaviour" is removed.
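The loss function discussed above, a linear combination of interval length and an indicator of whether the interval covers the parameter, can be evaluated by Monte Carlo for the usual t-interval. The weight b, sample size, and replication count below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

# Monte Carlo sketch of the risk of the usual t-interval under the loss
# L(I, mu) = b * length(I) + 1{mu not in I}. The weight b, sample size n,
# and replication count are illustrative assumptions, not from the paper.
rng = np.random.default_rng(2)
n, mu, sigma = 10, 0.0, 1.0
t_crit = 2.262          # two-sided 95% Student-t quantile, n - 1 = 9 df
b = 0.2                 # trade-off weight: interval length vs non-coverage

losses, covered, reps = [], 0, 20000
for _ in range(reps):
    x = rng.normal(mu, sigma, n)
    m, s = x.mean(), x.std(ddof=1)
    half = t_crit * s / np.sqrt(n)          # interval half-width
    inside = (m - half) <= mu <= (m + half)
    covered += inside
    losses.append(b * 2 * half + (0.0 if inside else 1.0))

coverage = covered / reps
risk = float(np.mean(losses))    # estimated frequentist risk of the rule
print(coverage, risk)
```

Comparing such risk estimates across candidate interval rules is the decision-theoretic exercise in which the generalized Bayes rule's "paradoxical behaviour" was originally noticed.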
Uncertainty Estimates for Theoretical Atomic and Molecular Data
Chung, H.-K.; Bartschat, K.; Csaszar, A. G.; Drake, G. W. F.; Kirchner, T.; Kokoouline, V.; Tennyson, J.
2016-01-01
Sources of uncertainty are reviewed for calculated atomic and molecular data that are important for plasma modeling: atomic and molecular structure and cross sections for electron-atom, electron-molecule, and heavy particle collisions. We concentrate on model uncertainties due to approximations to the fundamental many-body quantum mechanical equations and we aim to provide guidelines to estimate uncertainties as a routine part of computations of data for structure and scattering.
Ahmed, Mousumi
Designing control techniques for nonlinear dynamic systems is a significant challenge. Approaches to designing a nonlinear controller are studied, and an extensive study of backstepping-based techniques is performed in this research with the purpose of tracking a moving target autonomously. Our main motivation is to explore controllers for cooperative and coordinating unmanned vehicles in a target tracking application. To start with, a general theoretical framework for target tracking is studied and a controller in a three-dimensional environment for a single UAV is designed. This research is primarily focused on finding a generalized method which can be applied to track almost any reference trajectory. The backstepping technique is employed to derive the controller for a simplified UAV kinematic model. This controller can compute three autopilot modes, i.e., velocity, ground heading (or course angle), and flight path angle, for tracking the unmanned vehicle. Numerical implementation is performed in MATLAB, with the assumption of perfect and full state information of the target, to investigate the accuracy of the proposed controller. This controller is then frozen for the multi-vehicle problem. Distributed, or decentralized, cooperative control is discussed in the context of multi-agent systems. A consensus-based cooperative control is studied; such a consensus-based control problem can be viewed through the concepts of algebraic graph theory. The communication structure between the UAVs is represented by a dynamic graph, where the UAVs are represented by the nodes and the communication links by the edges. The previously designed controller is augmented to account for the group, to obtain consensus based on their communication. A theoretical development of the controller for the cooperative group of UAVs is presented, and the simulation results for different communication topologies are shown. This research also investigates the cases where the communication…
Casey, Daniel
1984-10-01
This assessment addresses the impacts on wildlife populations and wildlife habitats of the Hungry Horse Dam project on the South Fork of the Flathead River, and previous mitigation of these losses. In order to develop and focus mitigation efforts, it was first necessary to estimate wildlife and wildlife habitat losses attributable to the construction and operation of the project. The purpose of this report was to document the best available information concerning the degree of impacts on target wildlife species. Indirect benefits to wildlife species not listed will be identified during the development of alternative mitigation measures. Wildlife species incurring positive impacts attributable to the project were identified.
del Val, Ioscani Jimenez; Polizzi, Karen M.; Kontoravdi, Cleo
2016-01-01
Glycosylation greatly influences the safety and efficacy of many of the highest-selling recombinant therapeutic proteins (rTPs). In order to define optimal cell culture feeding strategies that control rTP glycosylation, it is necessary to know how nucleotide sugars (NSs) are consumed towards host cell and rTP glycosylation. Here, we present a theoretical framework that integrates the reported glycoproteome of CHO cells, the number of N-linked and O-GalNAc glycosylation sites on individual host cell proteins (HCPs), and the carbohydrate content of CHO glycosphingolipids to estimate the demand of NSs towards CHO cell glycosylation. We have identified the most abundant N-linked and O-GalNAc CHO glycoproteins, obtained the weighted frequency of N-linked and O-GalNAc glycosites across the CHO cell proteome, and have derived stoichiometric coefficients for NS consumption towards CHO cell glycosylation. By combining the obtained stoichiometric coefficients with previously reported data for specific growth and productivity of CHO cells, we observe that the demand of NSs towards glycosylation is significant and, thus, is required to better understand the burden of glycosylation on cellular metabolism. The estimated demand of NSs towards CHO cell glycosylation can be used to rationally design feeding strategies that ensure optimal and consistent rTP glycosylation. PMID:27345611
Nielsen, Lars Hougaard; Løkkegaard, Ellen; Andreasen, Anne Helms;
2009-01-01
PURPOSE: Many studies which investigate the effect of drugs categorize the exposure variable into never, current, and previous use of the study drug. When prescription registries are used to make this categorization, the exposure variable can be misclassified, since the registries do not carry any information on the time of discontinuation of treatment. In this study, we investigated the amount of misclassification of exposure (never, current, previous use) to hormone therapy (HT) when the exposure variable was based on prescription data. Furthermore, we evaluated the significance… …with Hazard Ratios ranging from 1.68 to 1.78 for current use compared to never use. CONCLUSIONS: The findings suggest that it is possible to estimate the effect of never, current and previous use of HT on breast cancer using prescription data.
Slattery, Richard N.; Asquith, William H.; Gordon, John D.
2017-02-15
In 2016, the U.S. Geological Survey (USGS), in cooperation with the San Antonio Water System, began a study to refine previously derived estimates of groundwater outflows from Medina and Diversion Lakes in south-central Texas near San Antonio. When full, Medina and Diversion Lakes (hereinafter referred to as the Medina/Diversion Lake system) (fig. 1) impound approximately 255,000 acre-feet and 2,555 acre-feet of water, respectively. Most recharge to the Edwards aquifer occurs as seepage from streams as they cross the outcrop (recharge zone) of the aquifer (Slattery and Miller, 2017). Groundwater outflows from the Medina/Diversion Lake system have also long been recognized as a potentially important additional source of recharge. Puente (1978) published methods for estimating monthly and annual estimates of the potential recharge to the Edwards aquifer from the Medina/Diversion Lake system. During October 1995–September 1996, the USGS conducted a study to better define short-term rates of recharge and to reduce the error and uncertainty associated with estimates of monthly recharge from the Medina/Diversion Lake system (Lambert and others, 2000). As a follow-up to that study, Slattery and Miller (2017) published estimates of groundwater outflows from detailed water budgets for the Medina/Diversion Lake system during 1955–1964, 1995–1996, and 2001–2002. The water budgets were compiled for selected periods during which the water-budget components were inferred to be relatively stable and the influence of precipitation, stormwater runoff, and changes in storage was presumably minimal. Linear regression analysis techniques were used by Slattery and Miller (2017) to assess the relation between the stage in Medina Lake and groundwater outflows from the Medina/Diversion Lake system.
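The stage-outflow regression step described above can be sketched with ordinary least squares. All numbers below are synthetic placeholders, not values from the USGS study.

```python
import numpy as np

# Ordinary-least-squares sketch relating lake stage to groundwater
# outflow. The data points are synthetic placeholders, not values
# from the USGS study.
stage = np.array([1050.0, 1055.0, 1060.0, 1064.0, 1060.0, 1058.0])  # stage, ft
outflow = np.array([20.0, 28.0, 40.0, 50.0, 41.0, 35.0])            # outflow, ft^3/s

# Fit outflow ~ slope * stage + intercept
slope, intercept = np.polyfit(stage, outflow, 1)
predicted = slope * stage + intercept
ss_res = float(np.sum((outflow - predicted) ** 2))
ss_tot = float(np.sum((outflow - outflow.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot          # coefficient of determination
print(slope, intercept, r2)
```

A positive slope with high r² would indicate that lake stage alone explains most of the variation in outflow, which is the kind of relation the water-budget analysis assesses.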
Martin, Elizabeth G; Palmer, Colin
2014-01-01
Air Space Proportion (ASP) is a measure of how much air is present within a bone, which allows for a quantifiable comparison of pneumaticity between specimens and species. Measured from zero to one, higher ASP means more air and less bone. Conventionally, it is estimated from measurements of the internal and external bone diameter, or by analyzing cross-sections. To date, the only pterosaur ASP study has been carried out by visual inspection of sectioned bones within matrix. Here, computed tomography (CT) scans are used to calculate ASP in a small sample of pterosaur wing bones (mainly phalanges) and to assess how the values change throughout the bone. These results show higher ASPs than previous pterosaur pneumaticity studies, and more significantly, higher ASP values in the heads of wing bones than the shaft. This suggests that pneumaticity has been underestimated previously in pterosaurs, birds, and other archosaurs when shaft cross-sections are used to estimate ASP. Furthermore, ASP in pterosaurs is higher than those found in birds and most sauropod dinosaurs, giving them among the highest ASP values of animals studied so far, supporting the view that pterosaurs were some of the most pneumatized animals to have lived. The high degree of pneumaticity found in pterosaurs is proposed to be a response to the wing bone bending stiffness requirements of flight rather than a means to reduce mass, as is often suggested. Mass reduction may be a secondary result of pneumaticity that subsequently aids flight.
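For a roughly tubular element, the conventional diameter-based route to ASP and the CT-based route both reduce to simple area ratios. A minimal sketch, with made-up numbers rather than the paper's measurements:

```python
# Two common ways to estimate Air Space Proportion (ASP); illustrative only.

def asp_from_diameters(d_inner, d_outer):
    """ASP as internal (air) cross-sectional area over total area,
    assuming circular sections: (d_inner / d_outer) ** 2."""
    if not 0 < d_inner < d_outer:
        raise ValueError("require 0 < d_inner < d_outer")
    return (d_inner / d_outer) ** 2

def asp_from_ct_counts(air_voxels, total_voxels):
    """ASP from segmented CT data: air voxels over all voxels
    enclosed by the bone's outer surface."""
    return air_voxels / total_voxels

# A thin-walled shaft: 8 mm internal diameter, 10 mm external diameter
print(round(asp_from_diameters(8.0, 10.0), 2))   # 0.64
print(asp_from_ct_counts(640, 1000))             # 0.64
```

The paper's point that shaft sections underestimate ASP corresponds to the first function being applied at the shaft only, while the CT count can cover the expanded bone ends.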
Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J
2006-09-01
A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods (linear least squares (LLS), weighted linear least squares (WLLS), and nonlinear least squares (NLS)) and their constrained counterparts are established through their respective objective functions and the higher order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights in designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel algorithms of full Newton-type for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percentage of relative error in estimating the trace and a lower reduced χ² value than the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion-weighted signals is orientation dependent when the signal-to-noise ratio (SNR) is low…
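The LLS member of the family above can be sketched concretely. This is an assumed minimal acquisition (one b=0 image plus six gradient directions) and a plain least-squares solve, not the authors' full Newton-type implementation:

```python
import numpy as np

# LLS fit of the diffusion tensor from ln S_i = ln S0 - b_i g_i^T D g_i,
# using a minimal scheme: one b=0 image plus six gradient directions.
bvals = np.array([0.0] + [1000.0] * 6)            # s/mm^2
gdirs = np.array([[0, 0, 0],
                  [1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 0], [1, 0, 1], [0, 1, 1]], float)
gdirs[1:] /= np.linalg.norm(gdirs[1:], axis=1, keepdims=True)

D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])        # prolate tensor, mm^2/s
S0 = 100.0
quad = np.einsum('ij,jk,ik->i', gdirs, D_true, gdirs)
S = S0 * np.exp(-bvals * quad)                    # noiseless signals

# Design matrix for the parameters [ln S0, Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
gx, gy, gz = gdirs.T
X = np.column_stack([np.ones_like(bvals),
                     -bvals * gx**2, -bvals * gy**2, -bvals * gz**2,
                     -2 * bvals * gx * gy, -2 * bvals * gx * gz,
                     -2 * bvals * gy * gz])
beta, *_ = np.linalg.lstsq(X, np.log(S), rcond=None)
trace_est = beta[1] + beta[2] + beta[3]
print(abs(trace_est - np.trace(D_true)) < 1e-9)   # True: exact for noiseless data
```

With noisy data the log transform makes the errors heteroscedastic, which is exactly what WLLS and NLS address.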
Voigt, Andreas Jauernik; Santos, Ilmar
2012-01-01
This paper gives an original theoretical and experimental contribution to the issue of reducing force estimation errors, which arise when applying Active Magnetic Bearings (AMBs) with pole embedded Hall sensors for force quantification purposes. Motivated by the prospect of increasing the usability...
Influence of parameter estimation uncertainty in Kriging: Part 1 - Theoretical Development
E. Todini
2001-01-01
This paper deals with a theoretical approach to assessing the effects of parameter estimation uncertainty both on Kriging estimates and on their estimated error variance. Although a comprehensive treatment of parameter estimation uncertainty is covered by full Bayesian Kriging at the cost of extensive numerical integration, the proposed approach has a wide field of application, given its relative simplicity. The approach is based upon a truncated Taylor expansion approximation and, within the limits of the proposed approximation, the conventional Kriging estimates are shown to be biased for all variograms, the bias depending upon the second order derivatives with respect to the parameters times the variance-covariance matrix of the parameter estimates. A new Maximum Likelihood (ML) estimator for semi-variogram parameters in ordinary Kriging, based upon the assumption of a multi-normal distribution of the Kriging cross-validation errors, is introduced as a means of estimating the parameter variance-covariance matrix. Keywords: Kriging, maximum likelihood, parameter estimation, uncertainty
Liarte, Danilo B; Transtrum, Mark K; Catelani, Gianluigi; Liepe, Matthias; Sethna, James P
2016-01-01
We review our work on theoretical limits to the performance of superconductors in high magnetic fields parallel to their surfaces. These limits are of key relevance to current and future accelerating cavities, especially those made of new higher-$T_c$ materials such as Nb$_3$Sn, NbN, and MgB$_2$. We summarize our calculations of the so-called superheating field $H_{\\mathrm{sh}}$, beyond which flux will spontaneously penetrate even a perfect superconducting surface and ruin the performance. We briefly discuss experimental measurements of the superheating field, comparing to our estimates. We explore the effects of materials anisotropy and disorder. Will we need to control surface orientation in the layered compound MgB$_2$? Can we estimate theoretically whether dirt and defects make these new materials fundamentally more challenging to optimize than niobium? Finally, we discuss and analyze recent proposals to use thin superconducting layers or laminates to enhance the performance of superconducting cavities. T...
Theoretical estimates of spherical and chromatic aberration in photoemission electron microscopy.
Fitzgerald, J P S; Word, R C; Könenkamp, R
2016-01-01
We present theoretical estimates of the mean coefficients of spherical and chromatic aberration for low energy photoemission electron microscopy (PEEM). Using simple analytic models, we find that the aberration coefficients depend primarily on the difference between the photon energy and the photoemission threshold, as expected. However, the shape of the photoelectron spectral distribution impacts the coefficients by up to 30%. These estimates should allow more precise correction of aberration in PEEM in experimental situations where the aberration coefficients and precise electron energy distribution cannot be readily measured.
Varadarajan, Divya; Haldar, Justin P
2017-08-19
The data measured in diffusion MRI can be modeled as the Fourier transform of the Ensemble Average Propagator (EAP), a probability distribution that summarizes the molecular diffusion behavior of the spins within each voxel. This Fourier relationship is potentially advantageous because of the extensive theory that has been developed to characterize the sampling requirements, accuracy, and stability of linear Fourier reconstruction methods. However, existing diffusion MRI data sampling and signal estimation methods have largely been developed and tuned without the benefit of such theory, instead relying on approximations, intuition, and extensive empirical evaluation. This paper aims to address this discrepancy by introducing a novel theoretical signal processing framework for diffusion MRI. The new framework can be used to characterize arbitrary linear diffusion estimation methods with arbitrary q-space sampling, and can be used to theoretically evaluate and compare the accuracy, resolution, and noise-resilience of different data acquisition and parameter estimation techniques. The framework is based on the EAP, and makes very limited modeling assumptions. As a result, the approach can even provide new insight into the behavior of model-based linear diffusion estimation methods in contexts where the modeling assumptions are inaccurate. The practical usefulness of the proposed framework is illustrated using both simulated and real diffusion MRI data in applications such as choosing between different parameter estimation methods and choosing between different q-space sampling schemes.
A theoretical estimation for the optimal network robustness measure R against malicious node attacks
Ma, Liangliang; Liu, Jing; Duan, Boping; Zhou, Mingxing
2015-07-01
In a recent work (Schneider C. M. et al., Proc. Natl. Acad. Sci. U.S.A., 108 (2011) 3838), Schneider et al. introduced an effective measure R to evaluate network robustness against malicious attacks on nodes. Taking R as the objective function, they used a heuristic algorithm to optimize the network robustness. In this paper, a theoretical analysis is conducted to estimate the value of R for different types of networks, including regular networks, WS networks, ER networks, and BA networks. The experimental results show that the theoretical value of R is approximately equal to that of optimized networks. Furthermore, the theoretical analysis also shows that regular networks are more robust than the other types of networks. To validate this result, a heuristic method is proposed to optimize the network structure, in which the degree distribution can be changed while the number of nodes and edges remains invariant. The optimization results show that the degree of most nodes in the optimal networks is close to the average degree, and the optimal network topology is close to regular networks, which confirms the theoretical analysis.
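The measure R can be sketched directly from its definition. The graph below is a toy 6-node ring (one common reading of the recalculated highest-degree attack is used, with ties broken by node id so the run is deterministic), not one of the network classes analyzed in the paper:

```python
from collections import deque

# Sketch of Schneider et al.'s robustness measure
#   R = (1/N) * sum_{Q=1}^{N} s(Q),
# where s(Q) is the fraction of nodes in the largest connected component
# after removing Q nodes in decreasing order of (recomputed) degree.

def largest_component_size(adj, alive):
    best, seen = 0, set()
    for start in alive:
        if start in seen:
            continue
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def robustness_R(adj):
    n = len(adj)
    alive = set(adj)
    total = 0.0
    for _ in range(n):
        # attack the highest-degree surviving node (ties: smallest id)
        target = min(alive, key=lambda u: (-sum(v in alive for v in adj[u]), u))
        alive.discard(target)
        total += largest_component_size(adj, alive) / n
    return total / n

ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}  # 6-node regular ring
print(round(robustness_R(ring), 3))  # 0.306
```

Recomputing degrees after every removal matters: the attack adapts to the damaged network, which is the scenario R was designed for.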
Hebert, Randy S; Smith, Cheri G; Wright, Scott M
2003-03-04
High rates of authorship misrepresentation have been documented among medical trainees. To assess misrepresentation among internal medicine residency applicants while comparing searches used by previous authors (searches 1 and 2) to a more comprehensive strategy (search 3). Review of 497 residency applications. Two university-based internal medicine residency programs. Search 1 was limited to MEDLINE. Search 2 added Current Contents, Science Citation Index, and BIOSIS and included searching journals by hand. Search 3 added seven other databases and contacts to librarians, editors, and coauthors. 224 applicants reported 634 articles; 630 (99%) were verified. The number of applicants with misrepresented citations varied depending on the search used (56 applicants [25%] in search 1 vs. 34 applicants [15%] in search 2 vs. 4 applicants [1.8%] in search 3). Using a comprehensive search, we found substantially less misrepresentation than had been reported. Previous studies probably overestimated the magnitude of the problem.
Panigrahi, Swapnesh; Fade, Julien; Ramachandran, Hema; Alouini, Mehdi
2016-07-11
The efficiency of using intensity modulated light for the estimation of scattering properties of a turbid medium and for ballistic photon discrimination is theoretically quantified in this article. Using the diffusion model for modulated photon transport and considering a noisy quadrature demodulation scheme, the minimum-variance bounds on estimation of parameters of interest are analytically derived and analyzed. The existence of a variance-minimizing optimal modulation frequency is shown and its evolution with the properties of the intervening medium is derived and studied. Furthermore, a metric is defined to quantify the efficiency of ballistic photon filtering which may be sought when imaging through turbid media. The analytical derivation of this metric shows that the minimum modulation frequency required to attain significant ballistic discrimination depends only on the reduced scattering coefficient of the medium in a linear fashion for a highly scattering medium.
Parmar, Jyotsana J; Das, Dibyendu; Padinhateeri, Ranjith
2016-02-29
It is being increasingly realized that nucleosome organization on DNA crucially regulates DNA-protein interactions and the resulting gene expression. While the spatial character of the nucleosome positioning on DNA has been experimentally and theoretically studied extensively, the temporal character is poorly understood. Accounting for ATPase activity and DNA-sequence effects on nucleosome kinetics, we develop a theoretical method to estimate the time of continuous exposure of binding sites of non-histone proteins (e.g. transcription factors and TATA binding proteins) along any genome. Applying the method to Saccharomyces cerevisiae, we show that the exposure timescales are determined by cooperative dynamics of multiple nucleosomes, and their behavior is often different from expectations based on static nucleosome occupancy. Examining exposure times in the promoters of GAL1 and PHO5, we show that our theoretical predictions are consistent with known experiments. We apply our method genome-wide and discover huge gene-to-gene variability of mean exposure times of TATA boxes and patches adjacent to TSS (+1 nucleosome region); the resulting timescale distributions have non-exponential tails.
An Estimation Theoretic Approach for Sparsity Pattern Recovery in the Noisy Setting
Hormati, Ali; Mohajer, Soheil; Vetterli, Martin
2009-01-01
Compressed sensing deals with the reconstruction of sparse signals using a small number of linear measurements. One of the main challenges in compressed sensing is to find the support of a sparse signal. In the literature, several bounds on the scaling law of the number of measurements for successful support recovery have been derived where the main focus is on random Gaussian measurement matrices. In this paper, we investigate the noisy support recovery problem from an estimation theoretic point of view, where no specific assumption is made on the underlying measurement matrix. The linear measurements are perturbed by additive white Gaussian noise. We define the output of a support estimator to be a set of position values in increasing order. We set the error between the true and estimated supports as the $\\ell_2$-norm of their difference. On the one hand, this choice allows us to use the machinery behind the $\\ell_2$-norm error metric and on the other hand, converts the support recovery into a more intuitiv...
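The paper's support-error convention can be made concrete in a few lines; the supports below are arbitrary examples, not data from the paper:

```python
# Support-recovery error as described above: represent each support as its
# sorted positions and score an estimate by the l2 norm of the difference
# vector (both supports are assumed to have the same size k).

def support_error(true_support, est_support):
    t, e = sorted(true_support), sorted(est_support)
    assert len(t) == len(e), "supports must have equal size k"
    return sum((a - b) ** 2 for a, b in zip(t, e)) ** 0.5

print(support_error({2, 7, 11}, {2, 7, 11}))   # 0.0 for exact recovery
print(support_error({2, 7, 11}, {2, 8, 11}))   # 1.0: one position off by one
```

Unlike a plain hit/miss count, this metric penalizes an estimated position in proportion to how far it lands from the true one, which is what makes the recovery problem "more intuitive" in the estimation-theoretic setting.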
Maziero, G C; Baunwart, C; Toledo, M C
2001-05-01
The theoretical maximum daily intakes (TMDI) of the phenolic antioxidants butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and tert-butylhydroquinone (TBHQ) in Brazil were estimated using food consumption data derived from a household economic survey and a packaged goods market survey. The estimates were based on the maximum levels of use of the food additives specified in national food standards. The calculated intakes of the three additives for the mean consumer were below the ADIs. Estimates of TMDI for BHA, BHT and TBHQ ranged from 0.09 to 0.15, 0.05 to 0.10 and 0.07 to 0.12 mg/kg of body weight, respectively. To check whether the additives are actually used at their maximum authorized levels, analytical determinations of these compounds in selected food categories were carried out using HPLC with UV detection. BHT and TBHQ concentrations in foodstuffs considered to be representative sources of these antioxidants in the diet were below the respective maximum permitted levels. BHA was not detected in any of the analysed samples. Based on the maximal approach and on the analytical data, it is unlikely that the current ADIs of BHA (0.5 mg/kg body weight), BHT (0.3 mg/kg body weight) and TBHQ (0.7 mg/kg body weight) will be exceeded in practice by the average Brazilian consumer.
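The TMDI arithmetic is simple to illustrate. The consumption figures and use levels below are hypothetical, not the survey data or the Brazilian limits:

```python
# Hedged TMDI illustration: sum, over food categories, the daily
# consumption times the maximum permitted additive level, then normalize
# by body weight. All numbers here are invented for the example.

MAX_LEVEL_BHA = {          # mg additive per kg food (illustrative limits)
    "vegetable_oil": 200.0,
    "margarine": 100.0,
    "snack_foods": 200.0,
}
DAILY_INTAKE_KG = {        # kg of each food per day (hypothetical consumer)
    "vegetable_oil": 0.020,
    "margarine": 0.010,
    "snack_foods": 0.015,
}

def tmdi_mg_per_kg_bw(levels, intakes, body_weight_kg=60.0):
    total_mg = sum(levels[f] * intakes[f] for f in levels)
    return total_mg / body_weight_kg

tmdi = tmdi_mg_per_kg_bw(MAX_LEVEL_BHA, DAILY_INTAKE_KG)
print(round(tmdi, 3), "mg/kg bw/day")  # 0.133, to be compared with the ADI
```

Because every category is assumed to contain the additive at its legal maximum, the TMDI is by construction an upper bound on real intake, which is why the analytical HPLC check in the study pushes the estimate further down.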
Reducing thermal mismatch stress in anodically bonded silicon-glass wafers: theoretical estimation
Sinev, Leonid S.; Ryabov, Vladimir T.
2017-01-01
This paper reports a theoretical study and estimates of thermal mismatch stress reduction in anodically bonded silicon-glass stacks through justifiable selection of the bonding temperature and glass thickness. This can be done only after a prior thorough study of the temperature dependence of the linear thermal expansion coefficients of the glass and silicon to be used. By analyzing this dependence for several glass brands, we show that the usual idea of lowering the bonding process temperature to solve the thermal mismatch stress problem can fail. Interchanging glass brands during device design is shown to produce very contrasting changes in residual stresses. These results are in good agreement with finite-element modeling. We also report that there is a ratio of glass to silicon wafer thickness that minimizes thermal mismatch stress at the unbonded side of the silicon, independent of the bonding or working temperature chosen.
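The basic bookkeeping can be sketched with a crude constant-CTE, thin-layer estimate. Note that treating the expansion coefficients as constants is precisely the simplification the paper warns against; the material values below are rough, generic figures, not the paper's data:

```python
# Crude sketch (not the paper's finite-element treatment): the mismatch
# strain accumulated on cooling from the bonding temperature is the CTE
# difference times the temperature drop, and a thin-layer biaxial stress
# estimate follows from silicon's biaxial modulus E / (1 - nu).
# CTE values are rough room-temperature figures, treated as constant.

ALPHA_SI = 2.6e-6       # 1/K, silicon (approximate)
ALPHA_GLASS = 3.3e-6    # 1/K, a borosilicate glass (approximate)
E_SI = 170e9            # Pa, Young's modulus of silicon (approximate)
NU_SI = 0.28

def mismatch_stress(bonding_temp_c, working_temp_c=25.0):
    d_alpha = ALPHA_GLASS - ALPHA_SI
    strain = d_alpha * (bonding_temp_c - working_temp_c)
    return E_SI / (1 - NU_SI) * strain      # biaxial stress, Pa

for t_bond in (300.0, 400.0):
    print(f"{t_bond:.0f} C -> {mismatch_stress(t_bond) / 1e6:.1f} MPa")
```

In this constant-CTE toy model, stress always falls with bonding temperature; the paper's point is that once the real, temperature-dependent CTE curves are integrated, that monotonic picture can break down.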
Sasaki, Tomohiko; Kondo, Osamu
2016-09-01
Recent theoretical progress potentially refutes past claims that paleodemographic estimations are flawed by statistical problems, including age mimicry and sample bias due to differential preservation. The life expectancy at age 15 of the Jomon period prehistoric populace in Japan was initially estimated to have been ∼16 years while a more recent analysis suggested 31.5 years. In this study, we provide alternative results based on a new methodology. The material comprises 234 mandibular canines from Jomon period skeletal remains and a reference sample of 363 mandibular canines of recent-modern Japanese. Dental pulp reduction is used as the age-indicator, which because of tooth durability is presumed to minimize the effect of differential preservation. Maximum likelihood estimation, which theoretically avoids age mimicry, was applied. Our methods also adjusted for the known pulp volume reduction rate among recent-modern Japanese to provide a better fit for observations in the Jomon period sample. Without adjustment for the known rate in pulp volume reduction, estimates of Jomon life expectancy at age 15 were dubiously long. However, when the rate was adjusted, the estimate results in a value that falls within the range of modern hunter-gatherers, with significantly better fit to the observations. The rate-adjusted result of 32.2 years more likely represents the true life expectancy of the Jomon people at age 15, than the result without adjustment. Considering ∼7% rate of antemortem loss of the mandibular canine observed in our Jomon period sample, actual life expectancy at age 15 may have been as high as ∼35.3 years.
Estimation-theoretic approach to delayed decoding of predictively encoded video sequences.
Han, Jingning; Melkote, Vinay; Rose, Kenneth
2013-03-01
Current video coders employ predictive coding with motion compensation to exploit temporal redundancies in the signal. In particular, blocks along a motion trajectory are modeled as an auto-regressive (AR) process, and it is generally assumed that the prediction errors are temporally independent and approximate the innovations of this process. Thus, zero-delay encoding and decoding is considered efficient. This paper is premised on the largely ignored fact that these prediction errors are, in fact, temporally dependent due to quantization effects in the prediction loop. It presents an estimation-theoretic delayed decoding scheme, which exploits information from future frames to improve the reconstruction quality of the current frame. In contrast to the standard decoder that reproduces every block instantaneously once the corresponding quantization indices of residues are available, the proposed delayed decoder efficiently combines all accessible (including any future) information in an appropriately derived probability density function, to obtain the optimal delayed reconstruction per transform coefficient. Experiments demonstrate significant gains over the standard decoder. Requisite information about the source AR model is estimated in a spatio-temporally adaptive manner from a bit-stream conforming to the H.264/AVC standard, i.e., no side information needs to be sent to the decoder in order to employ the proposed approach, so that compatibility with the standard syntax and existing encoders is retained.
Godisov, O N; Yudkin, M I; Gerasimov, S F; Feofilov, G A
1994-01-01
Contradictory demands raised by the application of different types of sensitive detectors in the 5 layers of the Inner Tracking System (ITS) for ALICE stipulate the simultaneous use of different heat-drain schemes: gaseous cooling of the 1st layer (uniform heat production over the sensitive surface) and evaporative cooling for the 2nd-5th layers (localised heat production). The latter system is also required for the thermostabilization of the Si-drift detectors to within 0.1 degree C. Theoretical estimates of gaseous, evaporative and liquid cooling systems are given for all ITS layers. The results of the experiments performed on evaporative and liquid heat-drain systems are presented and discussed. The major technical problems of the evaporative systems' design are considered: i) control of liquid supply; ii) vapour pressure control. Two concepts of evaporative systems are proposed: 1) a one-channel system for joint transfer of the two phases (liquid + gas); 2) a two-channel system with separate transfer of the phases. Both sy...
Chang, W. Y.; Chen, K. P.
2016-12-01
The main purpose of this study is to theoretically estimate the death toll caused by collapsed buildings in different regions of Taiwan from future earthquakes, according to empirical data from the 1999 magnitude 7.6 Chi-Chi earthquake that occurred in Taiwan. The results are presented as a quadratic equation that relates collapsed buildings to Modified Mercalli Intensity (MMI), which is then matched with a novel reliability function. Two zones subject to high numbers of collapsed buildings are found: one extends from Hsinchu southward to Taichung, Nantou, Chiayi, and Tainan in western Taiwan, and the other extends from Ilan southward to Hualian and Taitung in eastern Taiwan. These zones are also characterized by low b-values. We also present the expected probability of collapsed buildings as a function of waiting time in ten major metropolitan areas of Taiwan. The results exhibit relatively low expected probabilities in Tainan, Kaohsiung, and Hengchun; hence, the expected death toll due to collapsed buildings is very low (e.g., the expected death toll in Kaohsiung is zero). However, a relatively high number of collapsed buildings is found for most other areas. These results should be of use to government regulators and practicing engineers in enforcing appropriate building codes to effectively mitigate potential seismic hazards.
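A quadratic intensity-damage relation of the kind described above can be sketched with an ordinary polynomial fit. The collapse counts below are invented for illustration, not the Chi-Chi data:

```python
import numpy as np

# Sketch only: fit a quadratic relation between Modified Mercalli
# Intensity and the number of collapsed buildings, then interpolate.
# The counts are hypothetical, chosen to grow sharply with intensity.

mmi       = np.array([5.0, 6.0, 7.0, 8.0, 9.0])
collapsed = np.array([2.0, 15.0, 60.0, 150.0, 320.0])  # invented counts

coeffs = np.polyfit(mmi, collapsed, 2)   # [c2, c1, c0] for c2*I^2 + c1*I + c0
predict = np.poly1d(coeffs)
print(round(float(predict(8.5)), 1))     # interpolated estimate at MMI 8.5
```

Damage counts rising faster than linearly with intensity is what makes the leading coefficient positive, and is why small differences in expected intensity translate into large differences in expected losses between regions.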
Comparison of nine theoretical models for estimating the mechanical power output in cycling
González‐Haro, Carlos; Ballarini, P A Galilea; Soria, M; Drobnic, F; Escanero, J F
2007-01-01
Objective To assess which of the equations used to estimate mechanical power output for a wide aerobic range of exercise intensities gives the closest value to that measured with the SRM training system. Methods Thirty four triathletes and endurance cyclists of both sexes (mean (SD) age 24 (5) years, height 176.3 (6.6) cm, weight 69.4 (7.6) kg and Vo2max 61.5 (5.9) ml/kg/min) performed three incremental tests, one in the laboratory and two in the velodrome. The mean mechanical power output measured with the SRM training system in the velodrome tests corresponding to each stage of the tests was compared with the values theoretically estimated using the nine most referenced equations in literature (Whitt (Ergonomics 1971;14:419–24); Di Prampero et al (J Appl Physiol 1979;47:201–6); Whitt and Wilson (Bicycling science. Cambridge: MIT Press, 1982); Kyle (Racing with the sun. Philadelphia: Society of Automotive Engineers, 1991:43–50); Menard (First International Congress on Science and Cycling Skills, Malaga, 1992); Olds et al (J Appl Physiol 1995;78:1596–611; J Appl Physiol 1993;75:730–7); Broker (USOC Sport Science and Technology Report 1–24, 1994); Candau et al (Med Sci Sports Exerc 1999;31:1441–7)). This comparison was made using the mean squared error of prediction, the systematic error and the random error. Results The equations of Candau et al, Di Prampero et al, Olds et al (J Appl Physiol 1993;75:730–7) and Whitt gave a moderate mean squared error of prediction (12.7%, 21.6%, 13.2% and 16.5%, respectively) and a low random error (0.5%, 0.6%, 0.7% and 0.8%, respectively). Conclusions The equations of Candau et al and Di Prampero et al give the best estimate of mechanical power output when compared with measurements obtained with the SRM training system. PMID:17341588
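The error decomposition used for the comparison above (mean squared error of prediction split into a systematic and a random part) can be sketched with made-up numbers, not the study's data:

```python
import math

# Sketch: compare a theoretical power-output equation against measured
# (SRM) values via the mean squared error of prediction (MSEP), split
# into a systematic part (mean bias) and a random part, each expressed
# relative to the mean measured power. All values are invented.

measured  = [150.0, 200.0, 250.0, 300.0, 350.0]   # SRM power, W
predicted = [160.0, 205.0, 260.0, 295.0, 365.0]   # equation estimate, W

n = len(measured)
errors = [p - m for p, m in zip(predicted, measured)]
bias = sum(errors) / n                              # systematic error, W
msep = sum(e * e for e in errors) / n               # W^2
random_var = msep - bias * bias                     # residual spread, W^2

mean_measured = sum(measured) / n
print(f"MSEP: {100 * math.sqrt(msep) / mean_measured:.1f}% of mean power")
print(f"systematic: {100 * bias / mean_measured:.1f}%")
print(f"random: {100 * math.sqrt(random_var) / mean_measured:.1f}%")
```

The decomposition msep = bias² + random_var is exact, which is why an equation can have a moderate MSEP yet a very low random error, as reported for the best equations in the study.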
Theoretical estimate on tensor-polarization asymmetry in proton-deuteron Drell-Yan process
Kumano, S.; Song, Qin-Tao
2016-09-01
Tensor-polarized parton distribution functions are new quantities in spin-1 hadrons such as the deuteron, and they could probe new quark-gluon dynamics in hadron and nuclear physics. In charged-lepton deep inelastic scattering, they are studied by the twist-2 structure functions b1 and b2. The HERMES Collaboration found unexpectedly large b1 values compared to a naive theoretical expectation based on the standard deuteron model. The situation should be significantly improved in the near future by an approved experiment to measure b1 at Thomas Jefferson National Accelerator Facility (JLab). There is also an interesting indication in the HERMES result that finite antiquark tensor polarization exists. It could play an important role in solving a mechanism on tensor structure in the quark-gluon level. The tensor-polarized antiquark distributions are not easily determined from the charged-lepton deep inelastic scattering; however, they can be measured in a proton-deuteron Drell-Yan process with a tensor-polarized deuteron target. In this article, we estimate the tensor-polarization asymmetry for a possible Fermilab Main-Injector experiment by using optimum tensor-polarized parton distribution functions to explain the HERMES measurement. We find that the asymmetry is typically a few percent. If it is measured, it could probe new hadron physics, and such studies could create an interesting field of high-energy spin physics. In addition, we find that a significant tensor-polarized gluon distribution should exist due to Q2 evolution, even if it were zero at a low Q2 scale. The tensor-polarized gluon distribution has never been observed, so it is an interesting future project.
Game-theoretic estimate of a radio system's efficiency based on an entropy approach
Marigodov, V. K.
2011-01-01
This paper presents a game-theoretic synthesis of a radio communications system in a conflict situation, modeling the interaction between the operators of the radio communications system and a radio masking system while taking into consideration the information limitations imposed on the differential entropies of the players’ mixed strategies.
Group theoretic structures in the estimation of an unknown unitary transformation
Chiribella, G
2010-01-01
This paper presents a series of general results about the optimal estimation of physical transformations in a given symmetry group. In particular, it is shown how the different symmetries of the problem determine different properties of the optimal estimation strategy. The paper also contains a discussion about the role of entanglement between the representation and multiplicity spaces and about the optimality of square-root measurements.
Errors in second moments estimated from monostatic Doppler sodar winds. I. Theoretical description
Kristensen, Leif; Gaynor, J. E.
1986-01-01
…geometric parameters, pulse repetition rate, pulsing sequence, mean wind velocity and an estimate of the turbulence length scale. The model results indicate an optimum monostatic configuration that minimizes the bias due to temporal and spatial separation. A correction for pulse volume averaging is also derived, which requires a turbulence length scale and an estimate of the pulse volume diameter. A monostatic sodar technique, based on radar VAD methods, which also minimizes the bias caused by spatial and temporal separations in the sampling volumes, is proposed.
Rosholm, A; Hyldstrup, L; Backsgaard, L
2002-01-01
A new automated radiogrammetric method to estimate bone mineral density (BMD) from a single radiograph of the hand and forearm is described. Five regions of interest in radius, ulna and the three middle metacarpal bones are identified and approximately 1800 geometrical measurements from these bon...
Exploring super-gaussianity towards robust information-theoretical time delay estimation
Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos;
2013-01-01
Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most of the existing systems are based on the generalized cross-correlation method assuming gaussianity of the source. It has been shown that the distribution of speech, captured with far...
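The generalized cross-correlation baseline mentioned above can be sketched in its most common PHAT-weighted form. This is the standard textbook technique, not the paper's super-Gaussian estimator, and the signals are synthetic:

```python
import numpy as np

# Minimal GCC-PHAT sketch: estimate the delay between two channels from
# the phase of their cross-power spectrum. Positive return value means
# `sig` lags `ref`.

def gcc_phat_delay(sig, ref, fs=1.0):
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n)
    R = np.fft.rfft(ref, n)
    G = S * np.conj(R)
    G /= np.abs(G) + 1e-12          # PHAT weighting: keep phase only
    cc = np.fft.irfft(G, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

rng = np.random.default_rng(1)
src = rng.standard_normal(1024)
delay = 5                            # samples: y lags x by 5
x = src
y = np.concatenate((np.zeros(delay), src[:-delay]))
print(gcc_phat_delay(y, x))          # 5.0
```

The PHAT weighting whitens the spectrum so the peak depends only on phase, which is also why the Gaussianity assumption on the source distribution matters for its statistical optimality.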
A mechanical model of wing and theoretical estimate of taper factor for three gliding birds
Moosarreza Shamsyeh Zahedi; Mir Yaseen Ali Khan
2007-03-01
We tested a mechanical model of wing, which was constructed using measurements of wingspan and wing area taken from three species of gliding birds. In this model, we estimated the taper factors of the wings for the jackdaw (Corvus monedula), Harris’ hawk (Parabuteo unicinctus) and Laggar falcon (Falco jugger) as 1.8, 1.5 and 1.8, respectively. Likewise, by applying linear regression and curve estimation methods to the data, as well as estimating the taper factors and the angle between the humerus and the body, we calculated the relationship between wingspan, wing area and the speed necessary to meet the aerodynamic requirements of sustained flight. In addition, we calculated the relationship between the speed, wing area and wingspan for a specific angle between the humerus and the body over the range of stall speed to maximum speed of gliding flight. We then compared the results for these three species of gliding birds. These comparisons suggest that the aerodynamic characteristics of Harris’ hawk wings are similar to those of the falcon but different from those of the jackdaw. This paper also presents two simple equations to estimate the minimum angle between the humerus and the body as well as the minimum span ratio of a bird in gliding flight.
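The speed-area relationship at the core of such models follows from the standard steady-glide lift balance. This is textbook aerodynamics, not the paper's wing model, and the bird mass, wing area and lift coefficient below are illustrative guesses:

```python
import math

# Steady-glide lift balance: in equilibrium the weight is carried by
# lift, m*g = 0.5 * rho * V^2 * S * CL, so the speed required for a
# given wing area S is V = sqrt(2*m*g / (rho * S * CL)).

def glide_speed(mass_kg, wing_area_m2, cl, rho=1.225, g=9.81):
    return math.sqrt(2 * mass_kg * g / (rho * wing_area_m2 * cl))

# Hypothetical jackdaw-sized bird: 0.24 kg, 0.06 m^2 wing area, CL = 0.8
v = glide_speed(0.24, 0.06, 0.8)
print(f"{v:.1f} m/s")
```

Because V scales with 1/sqrt(S), a bird that flexes its wings (reducing span and area at a given humerus angle) must glide faster, which is the trade-off the paper's span-ratio equations capture.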
Granovska Lyudmyla Mykolayivna
2016-02-01
The article presents a theoretical analysis of the concept of an indicator system, examining in detail its components: society, nature, and the economy. General indicators of social, ecological, and economic development are considered. Indicators are shown to form an index system of ecological, economic, and social development that makes it possible to analyze and monitor the implementation of regional sustainable development policy. Drawing on domestic and foreign experience, and accounting for regional peculiarities, the system of indicators characterizing the level of regional sustainability has been refined. Both the regional peculiarities of the natural resource potential and the impact of economic activity on that potential are taken into account.
A theoretical estimate of intrinsic ellipticity bispectra induced by angular momenta alignments
Merkel, Philipp M
2014-01-01
Intrinsically aligned galaxy shapes are one of the most important systematics in cosmic shear measurements. So far, theoretical studies of intrinsic alignments have focused almost exclusively on their statistics at the two-point level. Results from numerical simulations, however, suggest that third-order measures may be even more strongly affected. We therefore investigate the (angular) bispectrum of intrinsic alignments. In our fully analytical study we describe intrinsic galaxy ellipticities by a physical alignment model based on tidal torque theory. We derive expressions for the various combinations of intrinsic and gravitationally induced ellipticities, i.e. III-, GII- and GGI-alignments, and compare our results to the shear bispectrum, the GGG-term. The latter is computed using hyper-extended perturbation theory. Considering equilateral and squeezed configurations, we find that for a Euclid-like survey intrinsic alignments (III-alignments) start to dominate on angular scales smaller than 20 arcmin and 13 ...
Rosholm, A; Hyldstrup, L; Backsgaard, L
2002-01-01
A new automated radiogrammetric method to estimate bone mineral density (BMD) from a single radiograph of the hand and forearm is described. Five regions of interest in the radius, ulna and the three middle metacarpal bones are identified, and approximately 1800 geometrical measurements from these bones … -ray absorptiometry (r = 0.86, p … Relative to this age-related loss, the reported short … sites and a precision that potentially allows for relatively short observation intervals.
Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro
2017-10-01
The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful for estimating the reliability of predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum attainable r value degrades as the data uncertainty increases. The corresponding confidence interval of r is determined using the Fisher r→Z transform.
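The Fisher r→Z construction used above can be sketched in a few lines. This is a generic illustration of the transform, not the authors' code, and the sample values (r = 0.80, n = 30) are invented for the example.

```python
import math
from statistics import NormalDist

def r_confidence_interval(r, n, conf=0.95):
    """Confidence interval for a Pearson r from n paired observations,
    via the Fisher r -> Z transform: Z = atanh(r) is approximately
    normal with standard error 1 / sqrt(n - 3)."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    zc = NormalDist().inv_cdf(0.5 + conf / 2.0)  # e.g. 1.96 for 95%
    # Transform the symmetric interval in Z back to the r scale.
    return math.tanh(z - zc * se), math.tanh(z + zc * se)

lo, hi = r_confidence_interval(0.80, n=30)  # roughly (0.62, 0.90)
```

Note that the interval is asymmetric around r, reflecting the bounded range of the correlation coefficient; it narrows as n grows.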
(author not listed)
2002-01-01
Based on studies of Reed-Solomon codes and orthogonal space-time block codes over a Rayleigh fading channel, a theoretical method for estimating the performance of Reed-Solomon codes concatenated with orthogonal space-time block codes is presented in this paper. An upper bound on the bit error rate is also obtained. Computer simulations show that the required signal-to-noise ratio is reduced by about 15 dB or more when orthogonal space-time block codes are concatenated with Reed-Solomon (15,6) codes over a Rayleigh fading channel, at a bit error rate of 10^-4.
Ankowski, Artur M; Benhar, Omar; Caballero, Juan A; Giusti, Carlotta; González-Jiménez, Raúl; Megias, Guillermo D; Meucci, Andrea
2015-01-01
Free nucleons propagating in water are known to produce gamma rays, which form a background to the searches for diffuse supernova neutrinos and sterile neutrinos carried out with Cherenkov detectors. As a consequence, the process of nucleon knockout induced by neutral-current quasielastic interactions of atmospheric (anti)neutrinos with oxygen needs to be under control at the quantitative level in the background simulations of the ongoing and future experiments. In this paper, we provide a quantitative assessment of the uncertainty associated with the theoretical description of the nuclear cross sections, estimating it from the discrepancies between the predictions of different models.
Theoretical, observational, and isotopic estimates of the lifetime of the solar nebula
Podosek, Frank A.; Cassen, Patrick
1994-01-01
There are a variety of isotopic data for meteorites which suggest that the protostellar nebula existed and was involved in making planetary materials for some 10^7 yr or more. Many cosmochemists, however, advocate alternative interpretations of such data in order to comply with a perceived constraint, from theoretical considerations, that the nebula existed only for a much shorter time, usually stated as less than or equal to 10^6 yr. In this paper, we review evidence relevant to solar nebula duration which is available through three different disciplines: theoretical modeling of star formation, isotopic data from meteorites, and astronomical observations of T Tauri stars. Theoretical models based on observations of present star-forming regions indicate that stars like the Sun form by dynamical gravitational collapse of dense cores of cold molecular clouds in the interstellar medium. The collapse to a star and disk occurs rapidly, on a time scale of the order of 10^5 yr. Disks evolve by dissipating energy while redistributing angular momentum, but it is difficult to predict the rate of evolution, particularly for disks of low mass (compared to the star) which nonetheless still contain enough material to account for the observed planetary system. There is no compelling evidence, from available theories of disk structure and evolution, that the solar nebula must have evolved rapidly and could not have persisted for more than 1 Ma. In considering chronologically relevant isotopic data for meteorites, we focus on three methodologies: absolute ages by U-Pb/Pb-Pb, and relative ages by short-lived radionuclides (especially Al-26) and by evolution of Sr-87/Sr-86. Two kinds of meteoritic materials, refractory inclusions such as CAIs and differentiated meteorites (eucrites and angrites), appear to have experienced potentially datable nebular events. In both cases, the most straightforward interpretations of the available data indicate
Fábio A. Miessi Sanches
2009-03-01
In this paper we set up a model of regional banking competition based on Bresnahan (1982), Lau (1982) and Nakane (2002). The structural model is estimated using data from eight Brazilian states and a dynamic panel. The results show that on average the level of competition in the Brazilian banking system is high, even though the null of perfect competition can be rejected at the usual significance levels. This result also prevails at the state level: Rio Grande do Sul, São Paulo, Rio de Janeiro, Pernambuco and Minas Gerais have a high degree of competition.
Wang Yi-Bo; Wang Shang-Wu; Zeng Xin-Wu
2012-01-01
One of the common characteristics of electrothermal breakdown in an underwater discharge acoustic source (UDAS) is the existence of a pre-breakdown heating phase. In our experiment, two phenomena were observed: (1) the breakdown time shows high randomness and obeys a "double-peak" stochastic distribution; (2) higher salt concentration reduces the residual voltage and causes 100% non-breakdown. The mechanism of electrothermal breakdown is analysed. To specify the end of the pre-breakdown heating phase, a "border boiling" assumption is proposed, in which the breakdown time is taken to be the time needed to heat the border water around the initial arc to 773 K. Based on this "border boiling" assumption, a numerical simulation is performed to evaluate the effects of two heating mechanisms: Joule heating from the ionic current, and radiation heating from the initial arc. The simulation results support the theoretical explanations of the two experimental phenomena: (1) the stochastic distribution of the radius of the initial arc results in the randomness of the breakdown time; (2) the difference in efficiency between radiation heating and Joule heating determines that, at higher salt concentration, more energy is consumed in the pre-breakdown heating phase.
Khalilian, Morteza; Navidbakhsh, Mahdi; Valojerdi, Mojtaba Rezazadeh; Chizari, Mahmoud; Yazdi, Poopak Eftekhari
2010-04-06
The zona pellucida (ZP) is the spherical layer that surrounds the mammalian oocyte. The physical hardness of this layer plays a crucial role in fertilization and is largely unknown because of the lack of appropriate measuring and modelling methods. The aim of this study is to measure the biomechanical properties of the ZP of human/mouse ovum and to test the hypothesis that Young's modulus of the ZP varies with fertilization. Young's moduli of ZP are determined before and after fertilization by using the micropipette aspiration technique, coupled with theoretical models of the oocyte as an elastic incompressible half-space (half-space model), an elastic compressible bilayer (layered model) or an elastic compressible shell (shell model). Comparison of the models shows that incorporation of the layered geometry of the ovum and the compressibility of the ZP in the layered and shell models may provide a means of more accurately characterizing ZP elasticity. Evaluation of results shows that although the results of the models are different, all confirm that the hardening of ZP will increase following fertilization. As can be seen, different choices of models and experimental parameters can affect the interpretation of experimental data and lead to differing mechanical properties.
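The simplest of the three models above, the elastic incompressible half-space, has a classical closed form in the micropipette aspiration literature (a Theret-style relation, E = 3·a·ΔP·Φ / (2π·L)). The sketch below is that generic half-space estimate, not the authors' layered or shell models, and the pressure, radius and length values are invented for illustration.

```python
import math

def youngs_modulus_half_space(pressure_pa, pipette_radius_m,
                              aspirated_length_m, phi=2.1):
    """Half-space estimate of Young's modulus from micropipette aspiration:
    E = 3 * a * dP * phi / (2 * pi * L), where a is the pipette inner
    radius, dP the aspiration pressure, L the aspirated tongue length,
    and phi ~ 2.1 a geometry factor (an assumed typical value)."""
    return (3.0 * pipette_radius_m * pressure_pa * phi
            / (2.0 * math.pi * aspirated_length_m))

# Illustrative numbers: 200 Pa suction, 25 um pipette, 10 um tongue.
e_estimate = youngs_modulus_half_space(200.0, 25e-6, 10e-6)  # ~500 Pa
```

The estimate is linear in the aspiration pressure and inversely proportional to the aspirated length, which is why hardening of the ZP after fertilization shows up as a shorter tongue at a given pressure.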
Theoretical Estimation of Thermal Effects in Drilling of Woven Carbon Fiber Composite
José Díaz-Álvarez
2014-06-01
Carbon Fiber Reinforced Polymer (CFRP) composites are extensively used in structural applications due to their attractive properties. Although components are usually made near net shape, machining processes are needed to achieve dimensional tolerance and assembly requirements. Drilling is a common operation required for subsequent mechanical joining of the components. CFRPs are vulnerable to processing-induced damage, mainly delamination, fiber pull-out, and thermal degradation, with drilling-induced defects being one of the main causes of component rejection during manufacturing. Despite the importance of analyzing the thermal phenomena involved in the machining of composites, only a few authors have focused their attention on this problem, most of them using an experimental approach. The temperature of the workpiece can affect the surface quality of the component, and its measurement during processing is difficult. Estimating the amount of heat generated during drilling is therefore important; however, numerical modeling of drilling processes involves a high computational cost. This paper presents a combined approach to the thermal analysis of composite drilling, using both an analytical estimate of the heat generated during drilling and numerical modeling of heat propagation. Promising results for indirect detection of the risk of thermal damage, through the measurement of thrust force and cutting torque, are obtained.
Theoretical model estimation of guest diffusion in Metal-Organic Frameworks (MOFs)
Zheng, Bin
2015-08-11
Characterizing molecular diffusion in nanoporous matrices is critical to understanding the novel chemical and physical properties of metal-organic frameworks (MOFs). In this paper, we developed a theoretical model to rapidly and accurately compute the diffusion rate of guest molecules in a zeolitic imidazolate framework-8 (ZIF-8). The ideal gas or equilibrium solution diffusion model was modified to account for the periodic medium by introducing the probability of a guest passing through the framework gate. The only input to our model is the energy barrier for guests passing through the MOF's gate. Molecular dynamics (MD) methods were employed to gather the guest density profile, which was then used to deduce the energy barrier values. This produced reliable results from a simulation time of 5 picoseconds, much shorter than with pure MD methods (on the millisecond scale). We also used density functional theory (DFT) methods to obtain the energy profile of guests passing through gates, as this does not require specifying a force field for the MOF degrees of freedom. In the DFT calculations we considered only one gate of the MOF at a time, which greatly reduced the computational cost. Based on the obtained energy barrier values, we computed the diffusion rates of alkanes and alcohols in ZIF-8 using our model, in good agreement with experimental results and with values calculated from the standard MD model. Our model obtains accurate diffusion rates for guests in MOFs at lower computational cost and shorter calculation time, and is thus especially attractive for high-throughput computational screening of the dynamic performance of guests in a framework.
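The barrier-controlled gate passage described above can be illustrated with a generic transition-state-theory hopping sketch: a guest crosses a gate with barrier E_b at an Arrhenius rate, and cage-to-cage hops give a macroscopic diffusion coefficient. The attempt frequency and cage spacing below are rough illustrative assumptions, not values from the paper.

```python
import math

KB = 8.617333262e-5  # Boltzmann constant [eV/K]

def hop_rate(barrier_ev, attempt_hz=1e12, temp_k=300.0):
    """Transition-state-theory rate of crossing a gate barrier [1/s].
    attempt_hz ~ 1e12 is an assumed typical attempt frequency."""
    return attempt_hz * math.exp(-barrier_ev / (KB * temp_k))

def diffusion_coefficient(barrier_ev, jump_m=7e-10, **kw):
    """Isotropic 3D hopping between cages: D = a^2 * k / 6 [m^2/s],
    with a an assumed cage-to-cage jump distance of ~7 Angstrom."""
    return jump_m ** 2 * hop_rate(barrier_ev, **kw) / 6.0
```

The exponential dependence on the barrier is the point: a few tenths of an eV difference in the gate barrier changes the predicted diffusion rate by orders of magnitude, which is why an accurate barrier from the density profile or DFT is the model's only critical input.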
Simona Laura DRAGOȘ
2013-06-01
The insurance sector is an increasingly important component of national economic and financial development. Nevertheless, insurance consumption and density, for both life and non-life insurance, vary greatly across countries. The academic literature frequently treats the importance of economic and demographic factors. This study evaluates the institutional determinants of insurance demand using OLS multiple regression models on a sample of 31 European countries. The econometric estimates show that, in line with theory, a country's level of corruption is decisive for the development of non-life insurance. For life insurance, business freedom, fiscal freedom and government spending are the most relevant explanatory variables. The article emphasizes the mechanism through which the significant institutional factors influence a country's insurance density.
LI Yong; WANG Chao
2007-01-01
A simple model for estimating groundwater discharge and nutrient flux from a nearshore unconfined aquifer to a lake was studied. The aquifer was assumed to be homogeneous, isotropic and permeable, with a thickness approximately equal to the depth of the lake. The distributions of the hydraulic gradient and the specific discharge along a transect of the discharge zone are discussed. Results show that the groundwater discharge pattern varies with the inclination angle of the lakeshore bottom. For a shallow lake with a gently sloping bottom, the rate of groundwater discharge to the lake is not constant across the discharge zone; rather, the discharge is concentrated in a narrow portion of the littoral zone where the Dupuit assumptions are invalid. The width of the discharge zone correlates with aquifer thickness and the slope of the lake bottom. The distribution functions of the hydraulic gradient and groundwater discharge rates vary exponentially with offshore distance.
A theoretical approach to estimate the annual lightning hazards on human beings
Gomes, Chandima; Kadir, M. Z. A. Ab
2011-08-01
This study provides a detailed account of the stepwise development of an empirical equation to estimate the number of lightning casualties in a given region. The factors considered in the development of the formula are lightning density, population density and the urbanization of the region. The unknown constants of the equation have been evaluated by applying state-wise lightning death records and information on lightning density distribution in the USA. The annual lightning death toll calculated for Sri Lanka using the developed empirical equation is in good agreement with the figure reported for the country from actual collected data. The paper also discusses the limitations of the empirical equations developed to calculate lightning density from the isokeraunic level as input parameter.
Leontina Pavaloaia
2012-10-01
Mineral resources are an important natural resource whose exploitation, unless rational, can lead to their exhaustion and the collapse of sustainable development. Given the importance of mineral resources and the uncertainty in estimating extant reserves, they have been analyzed by several national and international institutions. In this article we present a few aspects concerning approaches to mineral resource reserves at the national and international level, considering both economic aspects and those concerned with the definition, classification and aggregation of mineral resource reserves by various specialized institutions. At present there are attempts to harmonize practices concerning these aspects, for the purpose of presenting correct and comparable information.
Theoretical estimation of the impact velocity during a PWR spent fuel drop in water conditions
Kwon, Oh Joon; Park, Nam Gyu; Lee, Seong Ki; Kim, Jae Ik [KEPCO NF, Daejeon (Korea, Republic of)
2016-06-15
The spent fuel stored in the pool is vulnerable to external impacts, since severe reactor conditions degrade the structural integrity of the fuel. Therefore, accidents during shipping and handling should be considered. In an extreme case, a fuel assembly drop can happen accidentally while handling nuclear fuel in the spent fuel pool. Rod failure during such a drop accident can be evaluated by calculating the impact force acting on the fuel assembly at the bottom of the spent fuel pool. The impact force can be evaluated from the impact velocity at the bottom of the spent fuel pool. Since the fuel rods occupy most of the weight and volume of a nuclear fuel assembly, information on the rods is important for estimating the hydraulic resistance force. In this study, the hydraulic force acting on a 3×3 short rod bundle model during a drop accident is calculated, and the result is verified by comparison with numerical simulations. The methodology suggested by this study is expected to be useful for evaluating the integrity of spent fuel.
Measurement and theoretical estimation of induced activity in natIn by high energy neutrons
Maitreyee Nandy; P K Sarkar; N Nakao; T Shibata
2009-10-01
Induced radioactivity in natural indium (natIn) foils exposed to high-energy neutrons was measured at the KENS facility, KEK, Japan, where a 16.7 cm thick W target was bombarded by 500 MeV protons. The high-energy neutrons thus produced irradiated the In targets, placed at different depths inside a 4 m thick concrete shield at the beam exit. The measured activities were compared with results calculated using the nuclear reaction model codes ALICE-91 and EMPIRE-2.18. To estimate the induced activity, excitation functions of the various radionuclides were calculated with the two codes and folded with the appropriate neutron energy distribution at different depths of the concrete shield. The calculated excitation functions of a given nuclide were found, in some cases, to vary widely from one another. The performance of the codes for different input parameters, such as level densities and inverse cross-sections, is reported in this paper. Our analysis shows that neither of the two codes reproduced all the measured activities satisfactorily, indicating that further improvements in the adopted models are required.
Stavrev, Pavel; Stavreva, Nadejda; Ruggieri, Ruggero; Nahum, Alan
2015-08-01
We have compared two methods of estimating the cellular radiosensitivity of a heterogeneous tumour, namely, via cell-survival and via tumour control probability (TCP) pseudo-experiments. It is assumed that there exists intra-tumour variability in radiosensitivity and that the tumour consists predominantly of radiosensitive cells and a small number of radio-resistant cells. Using a multi-component, linear-quadratic (LQ) model of cell kill, a pseudo-experimental cell-survival versus dose curve is derived. This curve is then fitted with a mono-component LQ model describing the response of a homogeneous cell population. For the assumed variation in radiosensitivity it is shown that the composite pseudo-experimental survival curve is well approximated by the survival curve of cells with uniform radiosensitivity. For the same initial cell radiosensitivity distribution several pseudo-experimental TCP curves are simulated corresponding to different fractionation regimes. The TCP model used accounts for clonogen proliferation during a fractionated treatment. The set of simulated TCP curves is then fitted with a mono-component TCP model. As in the cell survival experiment the fit with a mono-component model assuming uniform radiosensitivity is shown to be highly acceptable. However, the best-fit values of cellular radiosensitivity produced via the two methods are very different. The cell-survival pseudo-experiment yields a high radiosensitivity value, while the TCP pseudo-experiment shows that the dose-response is dominated by the most resistant sub-population in the tumour, even when this is just a small fraction of the total.
Bernabeu, Ana M; Fernández-Fernández, Sandra; Rey, Daniel
2016-08-15
In oiled sandy beaches, unrecovered fuel can be buried up to several metres. This study proposes a theoretical approach to oil burial estimation along the intertidal area. First, our results revealed the existence of two main patterns in seasonal beach profile behaviour. Type A is characterized by intertidal slopes of time-constant steepness which advance/recede parallel to themselves in response to changing wave conditions. Type B is characterized by slopes of time-varying steepness which intersect at a given point in the intertidal area. This finding has a direct influence on the definition of oil depth. Type A pattern exhibits oil burial along the entire intertidal area following decreasing wave energy, while the type B pattern combines burial in high intertidal and exhumation in mid and/or low intertidal zones, depending on the position of the intersection point. These outcomes should be incorporated as key tools in future oil spill management programs.
Chesson, Harrell W; Gift, Thomas L; Owusu-Edusei, Kwame; Tao, Guoyu; Johnson, Ana P; Kent, Charlotte K
2011-10-01
We conducted a literature review of studies of the economic burden of sexually transmitted diseases in the United States. The annual direct medical cost of sexually transmitted diseases (including human immunodeficiency virus) has been estimated to be $16.9 billion (range: $13.9-$23.0 billion) in 2010 US dollars.
O.A. Bilovodska
2010-12-01
This article presents a comparative analysis of existing approaches to the evaluation of enterprise strategies. In addition, a theoretical-methodical approach to evaluating enterprise strategy is improved, based on accounting for the strategic aim of the enterprise as well as the interests of the manufacturer and the consumer of goods.
Ganot, Noam; Gal-Yam, Avishay; Ofek, Eran O.; Sagiv, Ilan; Waxman, Eli; Lapid, Ofer [Department of Particle Physics and Astrophysics, Faculty of Physics, The Weizmann Institute of Science, Rehovot 76100 (Israel); Kulkarni, Shrinivas R.; Kasliwal, Mansi M. [Cahill Center for Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States); Ben-Ami, Sagi [Smithsonian Astrophysical Observatory, Harvard-Smithsonian Ctr. for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Chelouche, Doron; Rafter, Stephen [Physics Department, Faculty of Natural Sciences, University of Haifa, 31905 Haifa (Israel); Behar, Ehud; Laor, Ari [Physics Department, Technion Israel Institute of Technology, 32000 Haifa (Israel); Poznanski, Dovi; Nakar, Ehud; Maoz, Dan [School of Physics and Astronomy, Tel Aviv University, 69978 Tel Aviv (Israel); Trakhtenbrot, Benny [Institute for Astronomy, ETH Zurich, Wolfgang-Pauli-Strasse 27 Zurich 8093 (Switzerland); Neill, James D.; Barlow, Thomas A.; Martin, Christofer D., E-mail: noam.ganot@gmail.com [California Institute of Technology, 1200 East California Boulevard, MC 278-17, Pasadena, CA 91125 (United States); Collaboration: ULTRASAT Science Team; WTTH consortium; GALEX Science Team; Palomar Transient Factory; and others
2016-03-20
The radius and surface composition of an exploding massive star, as well as the explosion energy per unit mass, can be measured using early UV observations of core-collapse supernovae (SNe). We present the first results from a simultaneous GALEX/PTF search for early ultraviolet (UV) emission from SNe. Six SNe II and one Type II superluminous SN (SLSN-II) are clearly detected in the GALEX near-UV (NUV) data. We compare our detection rate with theoretical estimates based on early, shock-cooling UV light curves calculated from models that fit existing Swift and GALEX observations well, combined with volumetric SN rates. We find that our observations are in good agreement with calculated rates assuming that red supergiants (RSGs) explode with fiducial radii of 500 R_⊙, explosion energies of 10^51 erg, and ejecta masses of 10 M_⊙. Exploding blue supergiants and Wolf-Rayet stars are poorly constrained. We describe how such observations can be used to derive the progenitor radius, surface composition, and explosion energy per unit mass of such SN events, and we demonstrate why UV observations are critical for such measurements. We use the fiducial RSG parameters to estimate the detection rate of SNe during the shock-cooling phase (<1 day after explosion) for several ground-based surveys (PTF, ZTF, and LSST). We show that the proposed wide-field UV explorer ULTRASAT mission is expected to find >85 SNe per year (∼0.5 SN per deg^2), independent of host galaxy extinction, down to an NUV detection limit of 21.5 mag AB. Our pilot GALEX/PTF project thus convincingly demonstrates that a dedicated, systematic SN survey at the NUV band is a compelling method to study how massive stars end their life.
Wonnapinij, Passorn; Chinnery, Patrick F; Samuels, David C
2010-04-09
In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference.
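The standard-error-of-variance idea above can be sketched with the classical normal-theory approximation SE(s²) ≈ s²·√(2/(n−1)). The numbers below only illustrate why small samples give unreliable variance estimates; they are not taken from the paper's data.

```python
import math

def variance_se(sample_var, n):
    """Approximate standard error of the sample variance s^2 from n
    observations, assuming roughly normal data:
    SE(s^2) ~ s^2 * sqrt(2 / (n - 1))."""
    return sample_var * math.sqrt(2.0 / (n - 1))

# Relative error bars shrink slowly with sample size:
# n = 20 still carries a ~32% relative SE; n = 50 brings it to ~20%.
for n in (10, 20, 50, 200):
    rel = variance_se(1.0, n)  # relative SE for unit variance
    print(f"n={n:3d}  SE/s^2 = {rel:.2f}")
```

Because the relative error decreases only as 1/√n, distinguishing two variances that differ by less than a factor of two requires sample sizes in the tens, consistent with the abstract's recommendation of more than 50 measurements.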
Tang, M.; Liu, Y.
2009-12-01
Although Mo isotopes have been increasingly used as a paleoredox proxy in the study of paleo-oceanographic condition changes (Barling et al., 2001; Siebert et al., 2003, 2005, 2006; Arnold et al., 2004; Poulson et al., 2006), some very basic aspects of Mo isotope geochemistry have not yet been established. First, although there are several previous studies on equilibrium Mo isotope fractionation factors (Tossell, 2005; Weeks et al., 2007; Wasylenki et al., 2008), these studies dealt with species in vacuum, and we find that solvation effects for Mo species in solution cannot be ignored; accurate Mo fractionation factors are therefore not yet determined. Second, apart from the dominant dissolved Mo species in seawater, known to be the molybdate ion (MoO4^2-), the forms of other possible minor species remain elusive. Third, the Mo removal mechanisms from seawater are known only for anoxic and euxinic conditions (e.g. Helz et al., 1996; Zheng et al., 2000); the Mo removal mechanism under oxic conditions is still debated. Fourth, the effects of adsorption on Mo isotope fractionation are almost completely unknown. Without this knowledge of adsorption fractionation, it is difficult to understand many of the distinct fractionations found in a number of geologic systems and to explain the exceptionally long residence time of Mo in seawater. A theoretical method based on the Urey model (the Bigeleisen-Mayer equation) and super-molecule clusters is used to evaluate the fractionation factors precisely. The B3LYP/(6-311+G(2df,p), LANL2DZ) level of theory is used for the frequency calculations. Twenty-four water molecules are used to form the supermolecules surrounding the Mo species. At least four different conformers for each supermolecule are used to prevent errors arising from the diversity of configurations in solution. This study provides accurate equilibrium Mo isotope fractionation factors between possible dissolved Mo species and the adsorbed Mo species on the
2002-03-01
According to models estimated separately for second-, third-, and fourth-birth rates in Norway, an increase took place from the mid-1970s to about 1990, given age and duration since last previous birth. A similar rise in the birth rates was seen in Sweden, except that the upturn at short durations was sharper. It is shown in this study, using Norwegian register data, that the increase partly reflects earlier changes in lower-order parity transitions. When models for each parity transition are estimated jointly, with a common unobserved factor included, there is no longer an upward trend in Norwegian second-birth rates, but a very weak decline, and the increase in the higher-order birth rates is strongly reduced compared to that found in the simpler approach.
Theoretical method for estimation of power loss due to mismatch in solar cell I-V characteristics
Srinivasamurthy, N.; Malathi, B.; Mathur, R.S.
1978-01-01
To generate power from a solar panel at a required voltage, a suitable number of cells must be connected in series and parallel. Cells produced in the same lot exhibit a mismatch in their I-V characteristics, and when such cells are connected in a series-parallel array, this mismatch causes a power loss. A theoretical approach is developed to compute this power loss, helping the designer select the proper combination of cells so as to minimize the power loss for any configuration of the solar panel.
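The series-string mismatch loss can be illustrated with a minimal single-diode sketch. The saturation current, thermal voltage and the roughly 5% spread in short-circuit currents are assumed values, and the model ignores series resistance and bypass diodes:

```python
import math

VT = 0.025693  # thermal voltage at ~298 K, volts
I0 = 1e-9      # assumed diode saturation current, amps

def cell_voltage(isc, i):
    """Ideal single-diode cell voltage at operating current i < isc."""
    return VT * math.log((isc - i) / I0 + 1.0)

def max_power_series(isc_list, steps=5000):
    """Maximum power of a series string; the string current is capped by
    the weakest cell (no bypass diodes in this sketch)."""
    i_cap = min(isc_list)
    best = 0.0
    for k in range(1, steps):
        i = i_cap * k / steps
        p = i * sum(cell_voltage(isc, i) for isc in isc_list)
        best = max(best, p)
    return best

# Hypothetical production lot with ~5% spread in short-circuit current.
lot = [3.00, 2.95, 3.05, 2.90]
p_string = max_power_series(lot)
p_ideal = sum(max_power_series([isc]) for isc in lot)
loss_pct = 100.0 * (p_ideal - p_string) / p_ideal
```

The mismatch loss is the gap between the sum of the individual maximum power points and the best operating point of the whole string, whose current is capped by the weakest cell.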
Frederiksen, Kirsten; Deltour, Isabelle; Schüz, Joachim
2012-12-10
Estimating exposure-outcome associations using laterality information on both exposure and outcome is an issue that arises when estimating associations between mobile phone use and brain tumour risk. The exposure is localized; a potential risk is therefore expected to exist primarily on the side of the head where the phone is usually held (ipsilateral exposure), and to a lesser extent on the opposite side of the head (contralateral exposure). Several measures of the associations with ipsilateral and contralateral exposure, dealing with different sampling designs, have been presented in the literature. This paper presents a general framework for the analysis of such studies using a likelihood-based approach in a competing-risks model setting. The approach clarifies the implicit assumptions required for the validity of the presented estimators, in particular that some approaches assume the risk with contralateral exposure to be zero. The performance of the estimators is illustrated in a simulation study, showing for instance that while some scenarios entail a loss of statistical power, others, in the case of a positive ipsilateral exposure-outcome association, would yield a negatively biased estimate of the contralateral exposure parameter, irrespective of any additional recall bias. In conclusion, our theoretical evaluations and the results of the simulation study emphasize the importance of setting up a formal model, which furthermore allows estimation in more complicated and perhaps more realistic exposure settings, such as taking into account exposure to both sides of the head.
Aurich, Nathassia K; Alves Filho, José O; Marques da Silva, Ana M; Franco, Alexandre R
2015-01-01
With resting-state functional MRI (rs-fMRI), a variety of post-processing methods can be used to quantify the human brain connectome; there is also a choice of which preprocessing steps to apply before calculating the functional connectivity of the brain. In this manuscript, we tested seven different preprocessing schemes and assessed the reliability between, and reproducibility within, the various strategies by means of graph theoretical measures. The different preprocessing schemes were tested on a publicly available dataset comprising rs-fMRI data of healthy controls. The brain was parcellated into 190 nodes and four graph theoretical (GT) measures were calculated: global efficiency (GEFF), characteristic path length (CPL), average clustering coefficient (ACC), and average local efficiency (ALE). Our findings indicate that results can differ significantly depending on which preprocessing steps are selected. We also found a dependence between motion and GT measurements in most preprocessing strategies. We conclude that censoring based on outliers within the functional time series, used as a preprocessing step, increases the reliability of GT measurements and reduces their dependence on head motion.
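The four GT measures named above have compact definitions that a short, dependency-free sketch can make concrete; the 5-node toy graph below stands in for the paper's 190-node parcellation:

```python
from collections import deque
from itertools import combinations

def shortest_paths(adj, src):
    """Breadth-first search: hop counts from src (unweighted graph)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        for nb in adj[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return dist

def global_efficiency(adj):
    """GEFF: mean of 1/d over ordered node pairs (0 if unreachable)."""
    n = len(adj)
    total = 0.0
    for src in adj:
        dist = shortest_paths(adj, src)
        total += sum(1.0 / d for v, d in dist.items() if v != src)
    return total / (n * (n - 1))

def characteristic_path_length(adj):
    """CPL: mean shortest-path length over all ordered node pairs."""
    n = len(adj)
    total = 0
    for src in adj:
        dist = shortest_paths(adj, src)
        total += sum(d for v, d in dist.items() if v != src)
    return total / (n * (n - 1))

def clustering(adj, node):
    """Fraction of a node's neighbour pairs that are themselves linked."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(sorted(nbrs), 2) if b in adj[a])
    return 2.0 * links / (k * (k - 1))

def local_efficiency(adj, node):
    """Global efficiency of the subgraph induced by the node's neighbours."""
    nbrs = adj[node]
    if len(nbrs) < 2:
        return 0.0
    sub = {v: adj[v] & nbrs for v in nbrs}
    return global_efficiency(sub)

# Toy 5-node graph standing in for the 190-node parcellation.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2, 4}, 4: {3}}

geff = global_efficiency(adj)
cpl = characteristic_path_length(adj)
acc = sum(clustering(adj, v) for v in adj) / len(adj)
ale = sum(local_efficiency(adj, v) for v in adj) / len(adj)
```

On this toy graph CPL = 1.5 and ALE = 0.6; on real connectomes the same functions are simply applied to the thresholded 190-node adjacency structure.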
Jensen, Jesper; Tan, Zheng-Hua
2014-01-01
the logarithm which is usually used for MFCC computation. The proposed method shows estimation performance identical to or better than that of state-of-the-art methods. It further shows comparable ASR performance, where the advantage of being able to use mel-frequency speech features based on a power non-linearity rather than a logarithm is demonstrated.
T.A. Dyachenko
2012-12-01
The concepts of the competitive market environment and of the strength of an enterprise's competitive position, and the connection between these terms, are defined. The subjects and factors that affect them are identified, and a methodical approach to their estimation is developed.
ZHANG; Shuwen; YUAN; Yeli
2004-01-01
Wave breaking statistics, such as the whitecap coverage and the average volume of broken seawater, are evaluated in terms of wave parameters by use of a wave breaking model (Yuan et al., 1988), taking the fifth-order Stokes wave as the analog of the original wave field. Based on the observed fact that breaking waves play an important role in the exchange of mass, momentum and energy between the atmosphere and the ocean, the influence of wave breaking on air-sea fluxes of heat and moisture is investigated. Theoretical expressions for the bubble-volume flux and the sea spray spectrum at the sea surface, and models for bubble-induced and spray-droplet-induced heat and moisture fluxes, are established. This work can be taken as a basis for further understanding the mechanism of air-sea coupling and its parameterization models.
Borkowski, Andrzej; Kosek, Wiesław
2015-12-01
The paper presents a summary of research activities in theoretical geodesy performed in Poland in the period 2011-2014. It contains the results of research on new methods of parameter estimation, a study of the robustness properties of the M-estimation, control network and deformation analysis, and geodetic time series analysis. The main achievements in geodetic parameter estimation involve a new model of M-estimation with probabilistic models of geodetic observations, a new Shift-Msplit estimation, which allows estimation of a vector of parameter differences, and the Shift-Msplit(+), a generalisation of Shift-Msplit estimation for the case where the design matrix A of the functional model does not have full column rank. New algorithms for coordinate conversion between Cartesian and geodetic coordinates, on both the rotational and the triaxial ellipsoid, can be mentioned as highlights of the research of the last four years. The new parameter estimation models developed have been adopted and successfully applied to control network and deformation analysis. New algorithms based on the wavelet, Fourier and Hilbert transforms were applied to find time-frequency characteristics of geodetic and geophysical time series, as well as time-frequency relations between them. Statistical properties of these time series are also presented using different statistical tests, as well as the 2nd, 3rd and 4th moments about the mean. New forecasting methods are presented that enable prediction of the considered time series in different frequency bands.
Qinyi Li
2015-10-01
The activation energy of particle aggregation in suspensions is a very important kinetic parameter in a wide range of science and engineering applications. At present, however, there is no theory that can predict the activation energy. Because the activation energy is often less than 10 kT (where k is the Boltzmann constant and T is the temperature), it is difficult to measure experimentally. In this study, a theory for calculating the activation energy is established. Experimental measurements of the activation energy of montmorillonite aggregation were performed at different electrolyte and particle concentrations using the dynamic light scattering (DLS) technique, and the validity of the theory was verified by the experiments. This study confirmed that both the DLS method for activation energy measurement and the theory for its calculation can be applied to suspensions of polydisperse, nonspherical particles. The average kinetic energy at the moment of particle collision in the aggregation process was found to be 0.2 kT, which is less than the instantaneous kinetic energy of a Brownian particle (0.5 kT) because of the viscous resistance of the water medium. This study also shows that adsorbed Na+ is strongly polarized in the electric field near the particle surface, and the polarization increases the effective charge of Na+ from +1 to +1.18.
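Since an activation barrier suppresses the aggregation rate by roughly a Boltzmann factor, a back-of-envelope estimate of the barrier in units of kT follows from the ratio of the fast (diffusion-limited) to slow (reaction-limited) rate constants. This is not the theory developed in the study, the rate constants below are invented, and Fuchs stability-ratio theory refines the order-unity pre-factor neglected here:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def activation_energy_in_kt(rate_fast, rate_slow):
    """Barrier height in units of kT from the stability ratio
    W = rate_fast / rate_slow, using Ea ~ kT * ln(W)
    (order-unity pre-factor neglected)."""
    return math.log(rate_fast / rate_slow)

# Invented DLS aggregation rate constants (same arbitrary units).
ea_kt = activation_energy_in_kt(rate_fast=4.0e-18, rate_slow=2.5e-19)
ea_joule = ea_kt * K_B * 298.15  # absolute value at 25 C
```

A ratio of 16 gives a barrier of ln(16), i.e. a few kT, consistent with the under-10 kT range quoted in the abstract.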
Califf, S.; Cully, C. M.
2016-07-01
Double-probe electric field measurements on board spacecraft present significant technical challenges, especially in the inner magnetosphere, where the ambient plasma characteristics can vary dramatically and alter the behavior of the instrument. We explore the shorting factor for the Time History of Events and Macroscale Interactions during Substorms (THEMIS) electric field instrument, a scale-factor error on the measured electric field due to coupling between the sensing spheres and the long wire booms, using both an empirical technique and simulations with varying levels of fidelity. The empirical data and simulations both show that there is effectively no shorting when the spacecraft is immersed in high-density plasma deep within the plasmasphere, and that shorting becomes more prominent as plasma density decreases and the Debye length increases outside the plasmasphere. However, there is a significant discrepancy between the data and theory for the shorting factor in low-density plasmas: the empirical estimate indicates ~0.7 shorting for long Debye lengths, but the simulations predict a shorting factor of ~0.94. This paper systematically steps through the empirical and modeling methods leading to the disagreement, with the intention of motivating further study on the topic.
B. Murali Babu
2015-03-01
The objective of this study is to develop a numerical model of an InGaAs quantum dot (QD) solar cell to describe the device characteristics. The developed model is based on Homotopy analysis, which provides self-consistent, nonlinear solutions to the 3D Poisson and Schrodinger equations. The exact potential and energy profile of the quantum dot account for the estimation of the current under dark conditions. The model is used to determine the photocurrent of the quantum dot solar cell under 1 sun, AM 1.5 conditions over a range of solar cell parameters such as optical generation lifetime, quantum dot concentration and number of quantum dot layers. The quantum wavelength and the quantum dot layers are used to calculate the photocurrent, recombination rate and conversion efficiency. The photocurrent is maximized at the optimum number of quantum dot layers and wavelength. The results show that the photocurrent is strongly sensitive to these dependences, and good agreement with experimental results was observed.
Bruce, Iain P; Karaman, M Muge; Rowe, Daniel B
2012-10-01
The acquisition of sub-sampled data from an array of receiver coils has become a common means of reducing data acquisition time in MRI. Of the various techniques used in parallel MRI, SENSitivity Encoding (SENSE) is one of the most common, making use of a complex-valued weighted least squares estimation to unfold the aliased images. It was recently shown in Bruce et al. [Magn. Reson. Imag. 29(2011):1267-1287] that when the SENSE model is represented in terms of a real-valued isomorphism, it assumes a skew-symmetric covariance between receiver coils, as well as an identity covariance structure between voxels. In this manuscript, we show that not only is the skew-symmetric coil covariance unlike that of real data, but the estimated covariance structure between voxels over a time series of experimental data is not an identity matrix. As such, a new model, entitled SENSE-ITIVE, is described with both revised coil and voxel covariance structures. Both the SENSE and SENSE-ITIVE models are represented in terms of real-valued isomorphisms, allowing for a statistical analysis of reconstructed voxel means, variances, and correlations resulting from the different coil and voxel covariance structures used in the reconstruction processes. It is shown through both theoretical and experimental illustrations that the misspecification of the coil and voxel covariance structures in the SENSE model results in a lower standard deviation in each voxel of the reconstructed images, and thus an artificial increase in SNR, compared to the standard deviation and SNR of the SENSE-ITIVE model, where both the coil and voxel covariances are appropriately accounted for. It is also shown that there are differences in the correlations induced by the reconstruction operations of both models, and consequently there are differences in the correlations estimated throughout the course of reconstructed time series. These differences in correlations could result in meaningful
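The SENSE unfolding itself is a small linear-algebra step. In the minimal case of two coils and reduction factor R = 2, the weighted least-squares estimate collapses to an exact 2x2 solve (the coil and voxel covariance structures debated above only matter once there are more coils than overlapped voxels); all sensitivities and voxel values below are made up:

```python
def solve2(a, b, c, d, y0, y1):
    """Cramer's rule for the 2x2 complex system [[a, b], [c, d]] x = y."""
    det = a * d - b * c
    return ((y0 * d - b * y1) / det, (a * y1 - c * y0) / det)

# Made-up complex coil sensitivities at the two overlapped voxel positions.
S = [[0.9 + 0.1j, 0.3 - 0.2j],   # coil 0
     [0.2 + 0.3j, 0.8 - 0.1j]]   # coil 1

x_true = (1.5 - 0.4j, 0.7 + 0.9j)  # true (unaliased) voxel values

# Aliased measurement from each coil: y_i = S[i][0]*x0 + S[i][1]*x1
y = [S[i][0] * x_true[0] + S[i][1] * x_true[1] for i in range(2)]

x_hat = solve2(S[0][0], S[0][1], S[1][0], S[1][1], y[0], y[1])
```

With noiseless data the unfold recovers the true voxel values exactly; with more coils than overlapped voxels, the assumed noise covariance weights the least-squares estimate and the modeling choices discussed above come into play.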
Blackmon, Heath; Demuth, Jeffery P
2016-02-01
The pace and direction of evolution in response to selection, drift, and mutation are governed by the genetic architecture that underlies trait variation. Consequently, much of evolutionary theory is predicated on assumptions about whether genes can be considered to act in isolation, or in the context of their genetic background. Evolutionary biologists have disagreed, sometimes heatedly, over which assumptions best describe evolution in nature. Methods for estimating genetic architectures that favor simpler (i.e., additive) models contribute to this debate. Here we address one important source of bias, model selection in line cross analysis (LCA). LCA estimates genetic parameters conditional on the best model chosen from a vast model space using relatively few line means. Current LCA approaches often favor simple models and ignore uncertainty in model choice. To address these issues we introduce Software for Analysis of Genetic Architecture (SAGA), which comprehensively assesses the potential model space, quantifies model selection uncertainty, and uses model weighted averaging to accurately estimate composite genetic effects. Using simulated data and previously published LCA studies, we demonstrate the utility of SAGA to more accurately define the components of complex genetic architectures, and show that traditional approaches have underestimated the importance of epistasis.
A. Langousis
2012-11-01
We focus on the special case of catchments covered by a single raingauge and develop a theoretical framework to obtain estimates of spatial rainfall averages conditional on rainfall measurements from a single location and on the flow conditions at the catchment outlet. In doing so we use: (a) statistical tools to identify and correct inconsistencies between daily rainfall occurrence and amount and the flow conditions at the outlet of the basin; (b) concepts from multifractal theory to relate the fraction of wet intervals in point rainfall measurements to that in spatial rainfall averages, while accounting for the shape and size of the catchment, the size, lifetime and advection velocity of rainfall-generating features, and the location of the raingauge inside the basin; and (c) semi-theoretical arguments to ensure consistency between rainfall and runoff volumes at an inter-annual level, implicitly accounting for spatial heterogeneities of rainfall caused by orographic influences. In an application study using point rainfall records from the Glafkos river basin in Western Greece, we find that the suggested approach demonstrates significant skill in resolving rainfall-runoff incompatibilities at a daily level, while reproducing the statistics of spatial rainfall averages at both monthly and annual time scales, independently of the location of the raingauge and the magnitude of the observed deviations between point rainfall measurements and spatial rainfall averages. The developed scheme should serve as an important tool for the effective calibration of rainfall-runoff models in basins covered by a single raingauge and should also improve hydrologic impact assessment at the river basin level under changing climatic conditions.
Khan, Mohammad F.; Rita, Shamima A.; Kayser, Md. Shahidulla; Islam, Md. Shariful; Asad, Sharmeen; Bin Rashid, Ridwan; Bari, Md. Abdul; Rahman, Muhammed M.; Al Aman, D. A. Anwar; Setu, Nurul I.; Banoo, Rebecca; Rashid, Mohammad A.
2017-04-01
A simple, rapid, economic, accurate and precise method for the estimation of rifampicin in a mixture with isoniazid and pyrazinamide by a UV spectrophotometric technique (guided by theoretical investigation of physicochemical properties) was developed and validated. Theoretical investigations revealed that isoniazid and pyrazinamide are both freely soluble in water and slightly soluble in ethyl acetate, whereas rifampicin is practically insoluble in water but freely soluble in ethyl acetate. This indicates that ethyl acetate is an effective solvent for the extraction of rifampicin from an aqueous mixture of isoniazid and pyrazinamide. A computational study indicated that a pH range of 6.0-8.0 would favor the extraction of rifampicin. Rifampicin was separated from isoniazid and pyrazinamide at pH 7.4 ± 0.1 by extraction with ethyl acetate, and the ethyl acetate layer was then analyzed at a λmax of 344.0 nm. The developed method was validated for linearity, accuracy and precision according to ICH guidelines. The proposed method exhibited good linearity over the concentration range of 2.5-35.0 µg/mL. The intra-day and inter-day precision in terms of % RSD ranged from 1.09-1.70% and 1.63-2.99%, respectively. The accuracy (in terms of recovery) of the method varied from 96.7 ± 0.9 to 101.1 ± 0.4%. The LOD and LOQ were found to be 0.83 and 2.52 µg/mL, respectively. In addition, the developed method was successfully applied to assay rifampicin combination brands (with isoniazid and pyrazinamide) available in Bangladesh.
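The ICH Q2(R1) limits quoted here follow mechanically from the calibration line: LOD = 3.3 σ/S and LOQ = 10 σ/S, with σ the residual standard deviation and S the slope. The calibration points below are fabricated for illustration (they are not the paper's data, though they span the same 2.5-35.0 µg/mL range):

```python
def linear_fit(x, y):
    """Ordinary least-squares line y = a + b*x; returns intercept,
    slope and the residual standard deviation (n - 2 dof)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    sigma = (sum(r * r for r in resid) / (n - 2)) ** 0.5
    return intercept, slope, sigma

# Fabricated calibration data over the same 2.5-35.0 ug/mL range.
conc = [2.5, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0]
absorbance = [0.062, 0.121, 0.243, 0.371, 0.488, 0.615, 0.731, 0.859]

intercept, slope, sigma = linear_fit(conc, absorbance)
lod = 3.3 * sigma / slope   # ICH Q2(R1) limit of detection
loq = 10.0 * sigma / slope  # ICH Q2(R1) limit of quantitation
```

By construction LOQ/LOD = 10/3.3 regardless of the data; the absolute values depend on how tightly the calibration points hug the fitted line.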
Leclercq, C; Arcella, D; Turrini, A
2000-12-01
The three recent EU directives which fixed maximum permitted levels (MPLs) for food additives across all member states also include a general obligation to establish national systems for monitoring the intake of these substances in order to evaluate the safety of their use. In this work, we considered additives with a primary antioxidant technological function for which an acceptable daily intake (ADI) has been established by the Scientific Committee for Food (SCF): gallates, butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and erythorbic acid. The potential intake of these additives in Italy was estimated by means of a hierarchical approach using, step by step, progressively more refined methods. The likelihood of exceeding the current ADI was very low for erythorbic acid, BHA and gallates. On the other hand, the theoretical maximum daily intake (TMDI) of BHT was above the current ADI. The three food categories found to be the main potential sources of BHT were "pastry, cake and biscuits", "chewing gums" and "vegetable oils and margarine"; together they contributed 74% of the TMDI. The actual use of BHT in these food categories is discussed, together with other aspects such as losses of this substance during processing and the percentage actually ingested in the case of chewing gums.
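The TMDI step of such a hierarchical screen is just a sum of average consumption times maximum permitted level over food categories, compared against the ADI scaled by body weight. All figures below are placeholders, not the Italian survey data or the actual MPLs or ADI:

```python
# Hypothetical per-capita consumption (kg/day) and additive maximum
# permitted levels (mg/kg food) -- placeholders for illustration only.
FOODS = {
    "pastry, cake and biscuits": 0.080,
    "chewing gums": 0.002,
    "vegetable oils and margarine": 0.030,
    "other permitted foods": 0.050,
}
MPL = {
    "pastry, cake and biscuits": 200.0,
    "chewing gums": 400.0,
    "vegetable oils and margarine": 100.0,
    "other permitted foods": 100.0,
}

def tmdi_mg(foods, mpl):
    """Theoretical maximum daily intake: every category consumed at its
    average level, with the additive present at the maximum permitted level."""
    return sum(amount * mpl[cat] for cat, amount in foods.items())

tmdi = tmdi_mg(FOODS, MPL)
adi_mg = 0.05 * 60.0  # illustrative ADI of 0.05 mg/kg bw for a 60 kg adult
exceeds = tmdi > adi_mg
```

Because every category is assumed to carry the additive at the MPL, the TMDI is deliberately conservative; exceeding the ADI at this step triggers the more refined intake estimates of the hierarchy, not a safety conclusion.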
Eldridge, D. L.; Guo, W.; Farquhar, J.
2016-12-01
We present theoretical calculations for all three isotope ratios of sulfur (33S/32S, 34S/32S, 36S/32S) at the B3LYP/6-31+G(d,p) level of theory for aqueous sulfur compounds modeled in 30-40 H2O clusters spanning the range of sulfur oxidation state (Sn, n = -2 to +6) for estimating equilibrium fractionation factors in aqueous systems. Computed 34β values based on major isotope (34S/32S) reduced partition function ratios (RPFRs) scale to first order with sulfur oxidation state and coordination, where 34β generally increase with higher oxidation state and increasing coordination of the sulfur atom. Exponents defining mass-dependent relationships based on β values (x/34κ = ln(xβ)/ln(34β), x = 33 or 36) conform to tight ranges over a wide range of temperature for all aqueous compounds (33/34κ ≈ 0.5148-0.5159, 36/34κ ≈ 1.89-1.90 for T ⩾ 0 °C). The exponents converge near a singular value for all compounds at the high-temperature limit (33/34κT→∞ = 0.51587 ± 0.00003 and 36/34κT→∞ = 1.8905 ± 0.0002; 1 s.d. of all computed compounds), and typically follow trends based on oxidation state and coordination similar to those seen in 34β values at lower temperatures. Theoretical equilibrium fractionation factors computed from these β-values are compared to experimental constraints for HSO3-T(aq)/SO2(g, aq), SO2(aq)/SO2(g), H2S(aq)/H2S(g), H2S(aq)/HS-(aq), SO42-(aq)/H2ST(aq), S2O32-(aq) (intramolecular), and S2O32-(aq)/H2ST(aq), and generally agree within a reasonable estimation of uncertainties. We make predictions of fractionation factors where other constraints are unavailable. Isotope partitioning of the isomers of protonated compounds in the sulfite and sulfoxylate systems depends strongly on whether protons are bound to either sulfur or oxygen atoms. The magnitude of the HSO3-T/SO32- major isotope (34S/32S) fractionation factor is predicted to increase with temperature from 0 to 70 °C due to the combined effects of the large magnitude (HS)O3
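The high-temperature limits quoted for the mass-dependent exponents follow from the isotope masses alone, since ln β becomes proportional to (1/m - 1/m') as T → ∞. A sketch using approximate atomic masses of the sulfur isotopes reproduces the quoted values:

```python
# Approximate atomic masses of the sulfur isotopes in unified mass units.
M32, M33, M34, M36 = 31.9720712, 32.9714590, 33.9678670, 35.9670812

def kappa_high_t(mass_x):
    """High-temperature limit of x/34-kappa = ln(x-beta)/ln(34-beta):
    since ln(beta) ~ (1/m - 1/m') as T -> infinity, the exponent
    reduces to a ratio of inverse-mass differences."""
    return (1.0 / M32 - 1.0 / mass_x) / (1.0 / M32 - 1.0 / M34)

k33 = kappa_high_t(M33)  # ~0.5159
k36 = kappa_high_t(M36)  # ~1.8905
```

That the cluster calculations converge on exactly these mass-ratio values at high temperature is a useful internal consistency check on the computed β values.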
Tong, Yujin; Guan, Mingzhi; Wang, Xingzhe
2017-04-01
The present study deals with the thermal characteristics and mechanical behaviors of low/high temperature superconducting (LTS/HTS) composite tapes during quench processes triggered by a spot heater. Based on the generalized thermoelastic theory, a dynamic thermoelastic model with a relaxation time is developed which takes into account the temperature dependence and the finite speed of heat propagation in superconducting tapes under cryogenic conditions. The analyses were performed using the finite element method to solve the coupled differential equations of dynamic heat conduction and elastic equilibrium. The results show that the thermoelastic behaviors are strongly related to the quench characteristics of the superconductors. As a quench occurs, the thermoelastic strain-rate exhibits an abrupt jump, with the time of its peak coinciding with the time at which the critical temperature is reached. Such a jump in strain-rate could serve as a means of estimating and detecting quench occurrence, and the theoretical predictions agree with existing experimental observations of thermoelastic strain-rate in LTS magnets. For an HTS tape, the thermoelastic strain-rate or temperature-rate variation likewise shows a small jump as the quench occurs. Additionally, the normal zone propagation velocities for the LTS/HTS tapes predicted from the critical temperature and thermoelastic strain-rate show quite good agreement with the results evaluated by Wilson's formula for an LTS tape and with experimental measurements for an HTS tape. The influences of the relaxation time of heat conduction and of thermoelastic coupling on the thermal distribution and strain profile are also discussed in detail.
Klammler, Harald; Layton, Leif; Nemer, Bassel; Hatfield, Kirk; Mohseni, Ana
2017-06-01
Hydraulic conductivity and its anisotropy are fundamental aquifer properties for groundwater flow and transport modeling. Current in-well or direct-push field measurement techniques allow for relatively quick determination of general conductivity profiles with depth. However, capabilities for identifying local scale conductivities in the horizontal and vertical directions are very limited. Here, we develop the theoretical basis for estimating horizontal and vertical conductivities from different types of steady-state single-well/probe injection tests under saturated conditions and in the absence of a well skin. We explore existing solutions and a recent semi-analytical solution approach to the flow problem under the assumption that the aquifer is locally homogeneous. The methods are based on the collection of an additional piece of information in the form of a second injection (or recirculation) test at a same location, or in the form of an additional head or flow observation along the well/probe. Results are represented in dimensionless charts for partial validation against approximate solutions and for practical application to test interpretation. The charts further allow for optimization of a test configuration to maximize sensitivity to anisotropy ratio. The two methods most sensitive to anisotropy are found to be (1) subsequent injection from a lateral screen and from the bottom of an otherwise cased borehole, and (2) single injection from a lateral screen with an additional head observation along the casing. Results may also be relevant for attributing consistent divergences in conductivity measurements from different testing methods applied at a same site or location to the potential effects of anisotropy. Some practical aspects are discussed and references are made to existing methods, which appear easily compatible with the proposed procedures.
Fast, Ivan
2017-06-01
declared for compacted metallic waste residual from the reprocessing of spent fuel assemblies. In Germany, the radionuclide declaration list for the disposal of used fuel assemblies is not yet specified. An estimation of the average radionuclide composition of the burnt-up fuel, including realistic inventory bandwidths for each of the relevant radionuclides, would be highly desirable beforehand. This information is needed for the development of proof tools for product quality control or safeguards, but also for the evaluation of various safety scenarios regarding radionuclide mobility or contamination. This work focuses on the development of a method for determining realistic radionuclide bandwidths in cases where no information on reactor design and operating data is available. Reactor parameters are classed as Primary Reactor Parameters, burn-up (BU) and cooling time (CT), which are considered to be known, and so-called Secondary Reactor Parameters (SRPs), of which nine are analysed: initial enrichment (IE), fuel density (FD), fuel temperature (FT), specific power (SP), downtime (DT), irradiation time (IT), moderator density (MD), moderator temperature (MT) and the boric acid concentration (BA) used in the water for reactor control. The modelling of radionuclide inventories is carried out with the burn-up code SCALE 6.1 using the nuclear data library ENDF/B-VII.0. The input data include the geometry of the fuel assembly and a set of the associated SRP values. The magnitude of the bandwidth varies significantly for different radionuclides and depends strongly on the primary parameters of burn-up and cooling time. The theoretical bandwidths are validated with experimental data. For this purpose, destructive radiochemical assay (RCA) data are taken from the Spent Fuel Isotopic Composition Database (SFCOMPO), which is maintained by the OECD Nuclear Energy Agency. There is, however, presently insufficient experimental data to validate the
Bruus, Henrik
in complexity, a proper theoretical understanding becomes increasingly important. The basic idea of the book is to provide a self-contained formulation of the theoretical framework of microfluidics, and at the same time give physical motivation and examples from lab-on-a-chip technology. After three chapters...
Interpretation and Implications of Previous Sea Pay Estimates
2015-04-01
amount of sea duty that can be gained from a rise in Career Sea Pay and Career Sea Pay Premium . The second was how to separate multiple effects of sea...linear pricing scheme that would induce personnel to reveal important information about their willingness to reenlist and their willingness to...undertake sea duty. Under this pricing mechanism, the Navy could fashion combinations of Selective Reenlistment Bonuses and sea pays that would achieve
Theoretical study of the C-H bond dissociation energy of acetylene
Taylor, Peter R.; Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.
1990-01-01
The authors present a theoretical study of the convergence of the C-H bond dissociation energy (D₀) of acetylene with respect to both the one- and n-particle spaces. Their best estimate for D₀ of 130.1 ± 1.0 kcal/mole is slightly below previous theoretical estimates, but substantially above the value determined using Stark anticrossing spectroscopy that is asserted to be an upper bound.
No Previous Public Services Required
Taylor, Kelley R.
2009-01-01
In 2007, the Supreme Court heard a case that involved the question of whether a school district could be required to reimburse parents who unilaterally placed their child in private school when the child had not previously received special education and related services in a public institution ("Board of Education v. Tom F."). The…
Rahman, Syed Faisal ur
2014-01-01
The paper discusses ISW estimates through the EMU-ASKAP survey. The main ideas covered include: (1) a discussion of the source distribution, confusion, position accuracy and shot noise (with the discussion focusing on SN ratios); (2) the selection of maximum redshift and maximum 'l' ranges in relation to SN ratios. Note: the complete abstract is available in the document.
Mikhailov, S V; Stefanis, N G
2016-01-01
We consider the pion-photon transition form factor at low to intermediate spacelike momenta within the theoretical framework of light-cone sum rules. We derive predictions which take into account all currently known contributions stemming from QCD perturbation theory up to next-to-next-to-leading order (NNLO), and we include all twist terms up to order six. In order to enable a more detailed comparison with forthcoming high-precision data, we also estimate the main systematic theoretical uncertainties stemming from various sources and discuss their influence on the calculations, in particular the dominant uncertainty related to the still uncalculated part of the NNLO contribution. The analysis also addresses, in broad terms, the role of the twist-two pion distribution amplitude derived with different approaches.
Green, M A; Wright, J C
1985-05-01
It has been clearly demonstrated that the rectal cooling curve does not obey Newton's Law, which is exponential. The first success in modelling rectal cooling mathematically was achieved by Marshall and Hoare [1]. An amendment was made to the simple exponential curve which led to a good mathematical model, exhibiting the three main sections of rectal cooling, i.e. lag, linear and quasi-exponential. The resultant method of postmortem interval estimation required a knowledge of the body mass and height. The present study has led to a totally different amendment to Newton's Law, which provides a means of postmortem interval estimation from body temperature data only. The derivation of the method, with a background on Newton's Law, follows.
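As an illustration of the contrast the abstract draws, simple Newtonian (single-exponential) cooling can be compared numerically with a double-exponential amendment of the Marshall-Hoare type. The functional form and parameter values below are illustrative assumptions, not the amendment derived in the paper:

```python
import math

def newton_cooling(t, T0=37.0, Ta=20.0, k=0.1):
    """Simple Newtonian cooling: single exponential decay toward
    ambient temperature Ta (t in hours, temperatures in deg C)."""
    return Ta + (T0 - Ta) * math.exp(-k * t)

def double_exponential_cooling(t, T0=37.0, Ta=20.0, k=0.1, p=0.5):
    """Illustrative double-exponential amendment (Marshall-Hoare form):
    the second exponential term reproduces the initial temperature
    'lag' plateau before quasi-exponential decay sets in."""
    frac = (p / (p - k)) * math.exp(-k * t) - (k / (p - k)) * math.exp(-p * t)
    return Ta + (T0 - Ta) * frac

# Both curves start at body temperature, but the double-exponential
# curve has zero initial slope (the lag phase), unlike Newton's law.
print(round(newton_cooling(0.0), 1))              # 37.0
print(round(double_exponential_cooling(0.0), 1))  # 37.0
```

The zero initial slope of the second curve is what produces the "lag" section of rectal cooling that a pure exponential cannot exhibit.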
Marc Vanderhaeghen
2007-04-01
The theoretical issues in the interpretation of the precision measurements of the nucleon-to-Delta transition by means of electromagnetic probes are highlighted. The results of these measurements are confronted with the state-of-the-art calculations based on chiral effective-field theories (EFT), lattice QCD, large-Nc relations, perturbative QCD, and QCD-inspired models. The link of the nucleon-to-Delta form factors to generalized parton distributions (GPDs) is also discussed.
Mikeš, Daniel
2010-05-01
Theoretical geology. Present-day geology is mostly empirical in nature. I claim that geology is by nature complex and that the empirical approach is bound to fail. Let us consider the input to be the set of ambient conditions and the output to be the sedimentary rock record. I claim that the output can only be deduced from the input if the relation from input to output is known. The fundamental question is therefore the following: can one predict the output from the input, i.e. can one predict the behaviour of a sedimentary system? If one can, then the empirical/deductive method has a chance; if one cannot, then that method is bound to fail. The fundamental problem to solve is therefore the following: how to predict the behaviour of a sedimentary system? It is interesting to observe that this question is never asked, and many a study is conducted by the empirical/deductive method; it seems that the empirical method has been accepted as appropriate without question. It is, however, easy to argue that a sedimentary system is by nature complex, that several input parameters vary at the same time, and that they can create similar output in the rock record. It follows trivially from these first principles that in such a case the deductive solution cannot be unique. At the same time, several geological methods depart precisely from the assumption that one particular variable is the dictator/driver and that the others are constant, even though the data do not support such an assumption. The method of "sequence stratigraphy" is a typical example of such a dogma. It can easily be argued that all interpretations resulting from a method built on uncertain or wrong assumptions are erroneous. Still, this method has survived for many years, notwithstanding all the criticism it has received. This is just one example from the present-day geological world, and it is not unique. Even the alternative methods criticising sequence stratigraphy actually depart from the same
Joos, Georg
1986-01-01
Among the finest, most comprehensive treatments of theoretical physics ever written, this classic volume comprises a superb introduction to the main branches of the discipline and offers solid grounding for further research in a variety of fields. Students will find no better one-volume coverage of so many essential topics; moreover, since its first publication, the book has been substantially revised and updated with additional material on Bessel functions, spherical harmonics, superconductivity, elastomers, and other subjects. The first four chapters review mathematical topics needed by theo
Fsusy and Field Theoretical Construction
Sedra, M B
2009-01-01
Following our previous work on fractional spin symmetries (FSS) [6,7], we consider here the construction of field-theoretical models that are invariant under the $D=2(1/3,1/3)$ supersymmetric algebra.
Stöltzner, Michael
Answering to the double-faced influence of string theory on mathematical practice and rigour, the mathematical physicists Arthur Jaffe and Frank Quinn have contemplated the idea that there exists a `theoretical' mathematics (alongside `theoretical' physics) whose basic structures and results still require independent corroboration by mathematical proof. In this paper, I shall take the Jaffe-Quinn debate mainly as a problem of mathematical ontology and analyse it against the backdrop of two philosophical views that are appreciative towards informal mathematical development and conjectural results: Lakatos's methodology of proofs and refutations and John von Neumann's opportunistic reading of Hilbert's axiomatic method. The comparison of both approaches shows that mitigating Lakatos's falsificationism makes his insights about mathematical quasi-ontology more relevant to 20th century mathematics in which new structures are introduced by axiomatisation and not necessarily motivated by informal ancestors. The final section discusses the consequences of string theorists' claim to finality for the theory's mathematical make-up. I argue that ontological reductionism as advocated by particle physicists and the quest for mathematically deeper axioms do not necessarily lead to identical results.
Theoretical Physics 1. Theoretical Mechanics
Dreizler, Reiner M.; Luedde, Cora S. [Frankfurt Univ. (Germany). Inst. fuer Theoretische Physik]
2010-07-01
After an introduction to basic concepts of mechanics more advanced topics build the major part of this book. Interspersed is a discussion of selected problems of motion. This is followed by a concise treatment of the Lagrangian and the Hamiltonian formulation of mechanics, as well as a brief excursion on chaotic motion. The last chapter deals with applications of the Lagrangian formulation to specific systems (coupled oscillators, rotating coordinate systems, rigid bodies). The level of this textbook is advanced undergraduate. The authors combine teaching experience of more than 40 years in all fields of Theoretical Physics and related mathematical disciplines and thorough knowledge in creating advanced eLearning content. The text is accompanied by an extensive collection of online material, in which the possibilities of the electronic medium are fully exploited, e.g. in the form of applets, 2D- and 3D-animations. (orig.)
Barney G. Glaser, Ph.D., Hon. Ph.D.
2009-11-01
Theoretical sorting has brought the analyst to the point of pent-up pressure to write: to see the months of work actualized in a "piece." But this is only a personal pressure. The goal of grounded theory methodology, above all, is to offer the results to the public, usually through one or more publications. We will focus on writing for publication, which is the most frequent way the analyst can tell how people are "buying" what really matters in sociology, or in other fields. Both feedback on and use of publications will be the best evaluation of the analyst's grounded theory. It will be his main source of criticism, constructive critique, and frequently of career rewards. In any case, he has to write to expand his audience beyond the limited number of close colleagues and students. Unless there is a publication, his work will be relegated to limited discussion, classroom presentation, or even private fantasy. The rigor and value of grounded theory work deserves publication. And many analysts have a stake in reaching wider publics, which makes their substantive grounded theory count.
Theoretical Mechanics Theoretical Physics 1
Dreizler, Reiner M
2011-01-01
After an introduction to basic concepts of mechanics more advanced topics build the major part of this book. Interspersed is a discussion of selected problems of motion. This is followed by a concise treatment of the Lagrangian and the Hamiltonian formulation of mechanics, as well as a brief excursion on chaotic motion. The last chapter deals with applications of the Lagrangian formulation to specific systems (coupled oscillators, rotating coordinate systems, rigid bodies). The level of this textbook is advanced undergraduate. The authors combine teaching experience of more than 40 years in all fields of Theoretical Physics and related mathematical disciplines and thorough knowledge in creating advanced eLearning content. The text is accompanied by an extensive collection of online material, in which the possibilities of the electronic medium are fully exploited, e.g. in the form of applets, 2D- and 3D-animations. - A collection of 74 problems with detailed step-by-step guidance towards the solutions. - A col...
Arndt, Channing; Simler, Kenneth R.
2010-01-01
An information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially, with the current approach tending to systematically overestimate (underestimate) poverty in urban (rural) zones.
Rates of induced abortion in Denmark according to age, previous births and previous abortions
Marie-Louise H. Hansen
2009-11-01
Background: Whereas the effects of various socio-demographic determinants on a woman's risk of having an abortion are relatively well documented, less attention has been given to the effect of previous abortions and births. Objective: To study the effect of previous abortions and births on Danish women's risk of an abortion, in addition to a number of demographic and personal characteristics. Data and methods: From the Fertility of Women and Couples Dataset we obtained data on the number of live births and induced abortions by year (1981-2001), age (16-39), county of residence and marital status. Logistic regression analysis was used to estimate the influence of the explanatory variables on the probability of having an abortion in a relevant year. Main findings and conclusion: A woman's risk of having an abortion increases with the number of previous births and previous abortions. Some interactions were found in the way a woman's risk of abortion varies with calendar year, age and parity. The risk of an abortion for women with no children decreases over time, while the risk of an abortion for women with children increases. Furthermore, the risk of an abortion decreases with age, but relatively more so for women with children compared to childless women. Trends for teenagers are discussed in a separate section.
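The logistic-regression setup described in the methods can be sketched as follows. The coefficient names and values are invented for illustration (only their signs mirror the reported directions of effect); they are not the study's estimates:

```python
import math

def abortion_probability(age, previous_births, previous_abortions,
                         b0=-2.0, b_age=-0.03, b_births=0.4, b_abort=0.6):
    """Logistic model of the probability of an abortion in a given year.
    All coefficients here are hypothetical; in the study they would be
    estimated from the register data by maximum likelihood."""
    logit = (b0 + b_age * age + b_births * previous_births
             + b_abort * previous_abortions)
    return 1.0 / (1.0 + math.exp(-logit))

# With these signs, risk increases with previous births and previous
# abortions and decreases with age, matching the reported directions.
p_base = abortion_probability(age=18, previous_births=0, previous_abortions=0)
p_hist = abortion_probability(age=18, previous_births=1, previous_abortions=1)
print(p_hist > p_base)  # True
```

Interactions (e.g. between parity and calendar year, as reported in the findings) would enter the same model as product terms in the logit.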
Venters, Tonia M.
2011-01-01
We present new theoretical estimates of the contribution of unresolved star-forming galaxies to the extragalactic gamma-ray background (EGB) as measured by EGRET and the Fermi-LAT. We employ several methods for determining the star-forming galaxy contribution to the EGB, including a method positing a correlation between the gamma-ray luminosity of a galaxy and its rate of star formation as calculated from the total infrared luminosity, and a method that makes use of a model of the evolution of the galaxy gas mass with cosmic time. We find that depending on the model, unresolved star-forming galaxies could contribute significantly to the EGB as measured by the Fermi-LAT at energies between approx. 300 MeV and approx. a few GeV. However, the overall spectrum of unresolved star-forming galaxies can explain neither the EGRET EGB spectrum at energies between 50 and 200 MeV nor the Fermi-LAT EGB spectrum at energies above approx. a few GeV.
77 FR 70176 - Previous Participation Certification
2012-11-23
... URBAN DEVELOPMENT Previous Participation Certification AGENCY: Office of the Chief Information Officer... digital submission of all data and certifications is available via HUD's secure Internet systems. However...: Previous Participation Certification. OMB Approval Number: 2502-0118. Form Numbers: HUD-2530 ....
Theoretical Astrophysics at Fermilab
2004-01-01
The Theoretical Astrophysics Group works on a broad range of topics ranging from string theory to data analysis in the Sloan Digital Sky Survey. The group is motivated by the belief that a deep understanding of fundamental physics is necessary to explain a wide variety of phenomena in the universe. During the three years 2001-2003 of our previous NASA grant, over 120 papers were written; ten of our postdocs went on to faculty positions; and we hosted or organized many workshops and conferences. Kolb and collaborators focused on the early universe, in particular models and ramifications of the theory of inflation. They also studied models with extra dimensions, new types of dark matter, and the second-order effects of super-horizon perturbations. Stebbins, Frieman, Hui, and Dodelson worked on phenomenological cosmology, extracting cosmological constraints from surveys such as the Sloan Digital Sky Survey. They also worked on theoretical topics such as weak lensing, reionization, and dark energy. This work has proved important to a number of experimental groups [including those at Fermilab] planning future observations. In general, the work of the Theoretical Astrophysics Group has served as a catalyst for experimental projects at Fermilab. An example of this is the Joint Dark Energy Mission. Fermilab is now a member of SNAP, and much of the work done here is by people formerly working on the accelerator. We have created an environment where many of these people made the transition from physics to astronomy. We also worked on many other topics related to NASA's focus: cosmic rays, dark matter, the Sunyaev-Zel'dovich effect, the galaxy distribution in the universe, and the Lyman-alpha forest. The group organized and hosted a number of conferences and workshops over the years covered by the grant. Among them were:
Theoretical physics 5 thermodynamics
Nolting, Wolfgang
2017-01-01
This concise textbook offers a clear and comprehensive introduction to thermodynamics, one of the core components of undergraduate physics courses. It follows on naturally from the previous volumes in this series, defining macroscopic variables, such as internal energy, entropy and pressure, together with thermodynamic principles. The first part of the book introduces the laws of thermodynamics and thermodynamic potentials. More complex themes are covered in the second part of the book, which describes phases and phase transitions in depth. Ideally suited to undergraduate students with some grounding in classical mechanics, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets. About the Theoretical Physics series Translated from the renowned and highly successful German editions, the eight volumes of this series cove...
Theoretical physics 3 electrodynamics
Nolting, Wolfgang
2016-01-01
This textbook offers a clear and comprehensive introduction to electrodynamics, one of the core components of undergraduate physics courses. It follows on naturally from the previous volumes in this series. The first part of the book describes the interaction of electric charges and magnetic moments by introducing electro- and magnetostatics. The second part of the book establishes deeper understanding of electrodynamics with the Maxwell equations, quasistationary fields and electromagnetic fields. All sections are accompanied by a detailed introduction to the math needed. Ideally suited to undergraduate students with some grounding in classical and analytical mechanics, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets. About the Theoretical Physics series Translated from the renowned and highly successful Germa...
Graph-theoretical matrices in chemistry
Janezic, Dusanka; Nikolic, Sonja; Trinajstic, Nenad
2015-01-01
Graph-Theoretical Matrices in Chemistry presents a systematic survey of graph-theoretical matrices and highlights their potential uses. This comprehensive volume is an updated, extended version of a former bestseller featuring a series of mathematical chemistry monographs. In this edition, nearly 200 graph-theoretical matrices are included. This second edition is organized like the previous one: after an introduction, graph-theoretical matrices are presented in five chapters: The Adjacency Matrix and Related Matrices, Incidence Matrices, The Distance Matrix and Related Matrices, Special Matrices
Subsequent pregnancy outcome after previous foetal death
Nijkamp, J. W.; Korteweg, F. J.; Holm, J. P.; Timmer, A.; Erwich, J. J. H. M.; van Pampus, M. G.
2013-01-01
Objective: A history of foetal death is a risk factor for complications and foetal death in subsequent pregnancies as most previous risk factors remain present and an underlying cause of death may recur. The purpose of this study was to evaluate subsequent pregnancy outcome after foetal death and to
Vibrations of twisted cantilevered plates - Summary of previous and current studies
Leissa, A. W.; Macbain, J. C.; Kielb, R. E.
1984-01-01
This work summarizes a comprehensive study made of the free vibrations of twisted, cantilevered plates of rectangular planform. Numerous theoretical and experimental investigations previously made by others have resulted in frequency results which disagree considerably. To clarify the problem, a joint industry/government/university research effort was initiated to obtain comprehensive theoretical and experimental results for models having useful ranges of aspect ratios, thickness ratios and twist angles. Theoretical data came from 19 independent computer analyses, including finite element, shell theory and beam theory idealizations. Two independent sets of experimental data were also obtained. The theoretical and experimental results are summarized and compared.
New study reveals twice as many asteroids as previously believed
2002-05-01
[Image: an artist's impression of the ISO spacecraft. Credit: ESA.] The ISO Deep Asteroid Search indicates that there are between 1.1 million and 1.9 million 'space rocks' larger than 1 kilometre in diameter in the so-called 'main asteroid belt', about twice as many as previously believed. However, astronomers think it is premature to revise current assessments of the risk of the Earth being hit by an asteroid. Despite being in our own Solar System, asteroids can be more difficult to study than very distant galaxies. With sizes of up to one thousand kilometres in diameter, the brightness of these rocky objects may vary considerably in just a few minutes. They move very quickly with respect to the stars; they have been dubbed 'vermin of the sky' because they often appear as trails on long-exposure images. This elusiveness explains why their actual number and size distribution remain uncertain. Most of the almost 40,000 asteroids catalogued so far (1) orbit the Sun forming the 'main asteroid belt', between Mars and Jupiter, too far to pose any threat to Earth. However, space-watchers do keep a closer eye on another category of asteroids, the 'Near Earth Asteroids' or 'NEAs', which are those whose orbits cross, or are likely to cross, that of our planet. The ISO Deep Asteroid Search (IDAS), the first systematic search for these objects performed in infrared light, focused on main belt asteroids. Because it is impossible to simply point the telescope at the whole main belt and count, astronomers choose selected regions of the belt and then use a theoretical model to extrapolate the data to the whole belt. Edward Tedesco (TerraSystems, Inc., New Hampshire, United States) and François-Xavier Desert (Observatoire de Grenoble, France) observed their selected main belt areas in 1996 and 1997 with ESA's ISO. They found that in the middle region of the belt the density of asteroids was 160 asteroids larger than 1 kilometre per square degree, an area of the
Induced vaginal birth after previous caesarean section
Akylbek Tussupkaliyev; Andrey Gayday; Bibigul Karimsakova; Saule Bermagambetova; Lunara Uteniyazova; Guldana Iztleuova; Gulkhanym Kusherbayeva; Meruyert Konakbayeva; Assylzada Merekeyeva; Zamira Imangaliyeva
2016-01-01
Introduction The rate of operative birth by Caesarean section is constantly rising. In Kazakhstan, it reaches 27 per cent. Research data confirm that the percentage of successful vaginal births after previous Caesarean section is 50–70 per cent. How safe the induction of vaginal birth after Caesarean (VBAC) remains unclear. Methodology The studied techniques of labour induction were amniotomy of the foetal bladder with the vulsellum ramus, intravaginal administra...
Cataract surgery in previously vitrectomized eyes.
Akinci, A; Batman, C; Zilelioglu, O
2008-05-01
To evaluate the results of extracapsular cataract extraction (ECCE) and phacoemulsification (PHACO) performed in previously vitrectomized eyes. In this retrospective study, 56 vitrectomized eyes that had ECCE and 60 vitrectomized eyes that had PHACO were included in the study group, while 65 eyes that had PHACO formed the control group. The evaluated parameters were the incidence of intra-operative and postoperative complications (IPC) and visual outcomes. Chi-squared, independent-samples and paired-samples tests were used for comparing the results. Deep anterior chamber (AC) was significantly more common in the PHACO group of vitrectomized eyes (PGVE) and was observed in eyes that had undergone extensive vitreous removal (p < 0.05); there was no significant difference between the ECCE group and the PGVE (p > 0.05). Some of the intra-operative conditions such as posterior synechiae and primary posterior capsular opacification (PCO), and postoperative complications such as retinal detachment (RD) and PCO, were significantly more common in vitrectomized eyes than in the controls (p < 0.05), with no significant difference between the ECCE group and the PGVE (p > 0.05). Deep AC is more common in eyes with extensive vitreous removal during PHACO than ECCE. Decreasing the bottle height is advised in this case. Except for this, the results of ECCE and PHACO are similar in previously vitrectomized eyes. Posterior synechiae, primary and postoperative PCO, and RD are more common in vitrectomized eyes than in the controls.
Books average previous decade of economic misery.
R Alexander Bentley
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
Obinutuzumab for previously untreated chronic lymphocytic leukemia.
Abraham, Jame; Stegner, Mark
2014-04-01
Obinutuzumab was approved by the Food and Drug Administration in late 2013 for use in combination with chlorambucil for the treatment of patients with previously untreated chronic lymphocytic leukemia (CLL). The approval was based on results of an open-label phase 3 trial that showed improved progression-free survival (PFS) with the combination of obinutuzumab plus chlorambucil compared with chlorambucil alone. Obinutuzumab is a monoclonal antibody that targets CD20 antigen expressed on the surface of pre B- and mature B-lymphocytes. After binding to CD20, obinutuzumab mediates B-cell lysis by engaging immune effector cells, directly activating intracellular death signaling pathways, and activating the complement cascade. Immune effector cell activities include antibody-dependent cellular cytotoxicity and antibody-dependent cellular phagocytosis.
Can previous learning alter future plasticity mechanisms?
Crestani, Ana Paula; Quillfeldt, Jorge Alberto
2016-02-01
The dynamic processes related to mnemonic plasticity have been extensively researched in the last decades. More recently, studies have attracted attention because they show an unusual plasticity mechanism that is independent of the receptor most usually related to first-time learning (that is, memory acquisition): the NMDA receptor. An interesting feature of this type of learning is that a previous experience may cause modifications in the plasticity mechanism of a subsequent learning, suggesting that prior experience in a very similar task triggers a memory acquisition process that does not depend on NMDARs. The intracellular molecular cascades necessary to assist the learning process seem to depend on the activation of hippocampal CP-AMPARs. Moreover, most of these studies were performed on hippocampus-dependent tasks, even though other brain areas, such as the basolateral amygdala, also display NMDAR-independent learning.
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
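The core computation, correlating a literary index against a trailing moving average of an economic misery index, can be sketched on synthetic data. The series and the 11-year window below are stand-ins chosen to mirror the paper's setup, not its data:

```python
def moving_average(series, window):
    """Trailing moving average over the previous `window` values."""
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series) + 1)]

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Synthetic stand-in for an annual economic misery index
# (inflation + unemployment); any varying series works here.
economic = [(i % 7) + (i % 3) for i in range(60)]
window = 11  # the paper's best-fit averaging window is ~11 years
literary = moving_average(economic, window)
aligned_econ = economic[window - 1:]  # align series end points
r = pearson_r(literary, aligned_econ)
print(len(literary) == len(economic) - window + 1)  # True
```

In the paper, `r` would be computed for a range of window lengths, with goodness of fit peaking at the 11-year window.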
Induced vaginal birth after previous caesarean section
Akylbek Tussupkaliyev
2016-11-01
Introduction: The rate of operative birth by Caesarean section is constantly rising. In Kazakhstan, it reaches 27 per cent. Research data confirm that the percentage of successful vaginal births after previous Caesarean section is 50-70 per cent. How safe the induction of vaginal birth after Caesarean (VBAC) is remains unclear. Methodology: The studied techniques of labour induction were amniotomy of the foetal bladder with the vulsellum ramus, intravaginal administration of E1 prostaglandin (Misoprostol), and intravenous infusion of Oxytocin-Richter. The readiness of the parturient canals was assessed by Bishop's score; the labour course was assessed by a partogram. The effectiveness of the labour induction techniques was assessed by the number of administered doses, the time of onset of regular labour, the course of labour and the postpartum period, the presence of complications, and the course of the early neonatal period, which implied the assessment of the child's condition as described in the newborn development record. The foetus was assessed by medical ultrasound and antenatal and intranatal cardiotocography (CTG). Obtained results were analysed with SAS statistical processing software. Results: The overall percentage of successful births with intravaginal administration of Misoprostol was 93 per cent (83) of cases. This percentage was higher than in the amniotomy group (relative risk (RR) 11.7) and was similar to the oxytocin group (RR 0.83). Amniotomy was effective in 54 per cent (39) of cases, when it induced regular labour. Intravenous oxytocin infusion was effective in 94 per cent (89) of cases. This percentage was higher than that with amniotomy (RR 12.5). Conclusions: The success of vaginal delivery after previous Caesarean section can be achieved in almost 70 per cent of cases. Moreover, labour induction does not decrease this indicator, which remains within population boundaries.
Natsume, Takahiro; Ishida, Masaki; Kitagawa, Kakuya; Nagata, Motonori; Sakuma, Hajime; Ichihara, Takashi
2015-11-01
The purpose of this study was to develop a method to determine time discrepancies between input and myocardial time-signal intensity (TSI) curves for accurate estimation of myocardial perfusion with first-pass contrast-enhanced MRI. Estimation of myocardial perfusion with contrast-enhanced MRI using kinetic models requires faithful recording of contrast content in the blood and myocardium. Typically, the arterial input function (AIF) is obtained by setting a region of interest in the left ventricular cavity. However, there is a small delay between the AIF and the myocardial curves, and such time discrepancies can lead to errors in flow estimation using Patlak plot analysis. In this study, the time discrepancies between the arterial TSI curve and the myocardial tissue TSI curve were estimated based on the compartment model. In the early phase after the arrival of the contrast agent in the myocardium, the relationship between rate constant K1 and the concentrations of Gd-DTPA contrast agent in the myocardium and arterial blood (LV blood) can be described by the equation K1={dCmyo(tpeak)/dt}/Ca(tpeak), where Cmyo(t) and Ca(t) are the relative concentrations of Gd-DTPA contrast agent in the myocardium and in the LV blood, respectively, and tpeak is the time corresponding to the peak of Ca(t). In the ideal case, the time corresponding to the maximum upslope of Cmyo(t), tmax, is equal to tpeak. In practice, however, there is a small difference in the arrival times of the contrast agent into the LV and into the myocardium. This difference was estimated to correspond to the difference between tpeak and tmax. The magnitudes of such time discrepancies and the effectiveness of the correction for these time discrepancies were measured in 18 subjects who underwent myocardial perfusion MRI under rest and stress conditions. The effects of the time discrepancies could be corrected effectively in the myocardial perfusion estimates.
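The delay-corrected K1 estimate described above (K1 = [dCmyo/dt at the maximum upslope] / Ca(tpeak), with the upslope taken at tmax rather than tpeak) can be sketched as follows. The sampled curves are synthetic and the one-sample-per-second grid is an idealization:

```python
def k1_with_delay_correction(t, ca, cmyo):
    """Estimate K1 from sampled arterial (ca) and myocardial (cmyo)
    time-signal intensity curves. The myocardial upslope is evaluated
    at tmax (time of maximum upslope) instead of tpeak (AIF peak),
    which corrects for the arrival-time delay between LV blood and
    myocardium described in the abstract."""
    i_peak = max(range(len(ca)), key=lambda i: ca[i])         # tpeak of AIF
    slopes = [(cmyo[i + 1] - cmyo[i]) / (t[i + 1] - t[i])
              for i in range(len(t) - 1)]
    i_max = max(range(len(slopes)), key=lambda i: slopes[i])  # tmax of tissue
    delay = t[i_max] - t[i_peak]                              # time discrepancy
    k1 = slopes[i_max] / ca[i_peak]
    return k1, delay

# Synthetic first-pass curves: a sharp arterial bolus peaking at t=3
# and a smaller myocardial response whose steepest rise comes one
# sample later, mimicking the delayed contrast arrival.
t = list(range(10))
ca   = [0, 2, 8, 10, 7, 4, 2, 1, 1, 1]
cmyo = [0, 0, 0, 0.1, 0.4, 1.0, 1.4, 1.6, 1.7, 1.7]
k1, delay = k1_with_delay_correction(t, ca, cmyo)
print(delay)  # 1
```

Without the correction, Patlak-type analysis would divide the upslope at tpeak by Ca(tpeak), underestimating the flow whenever the tissue curve lags the input function.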
Qualitative methods in theoretical physics
Maslov, Dmitrii
2017-01-01
This book comprises a set of tools which allow researchers and students to arrive at a qualitatively correct answer without undertaking lengthy calculations. In general, Qualitative Methods in Theoretical Physics is about combining approximate mathematical methods with fundamental principles of physics: conservation laws and symmetries. Readers will learn how to simplify problems, how to estimate results, and how to apply symmetry arguments and conduct dimensional analysis. A comprehensive problem set is included. The book will appeal to a wide range of students and researchers.
Previous gastric bypass surgery complicating total thyroidectomy.
Alfonso, Bianca; Jacobson, Adam S; Alon, Eran E; Via, Michael A
2015-03-01
Hypocalcemia is a well-known complication of total thyroidectomy. Patients who have previously undergone gastric bypass surgery may be at increased risk of hypocalcemia due to gastrointestinal malabsorption, secondary hyperparathyroidism, and an underlying vitamin D deficiency. We present the case of a 58-year-old woman who underwent a total thyroidectomy for the follicular variant of papillary thyroid carcinoma. Her history included Roux-en-Y gastric bypass surgery. Following the thyroid surgery, she developed postoperative hypocalcemia that required large doses of oral calcium carbonate (7.5 g/day), oral calcitriol (up to 4 μg/day), intravenous calcium gluconate (2.0 g/day), calcium citrate (2.0 g/day), and ergocalciferol (50,000 IU/day). Her serum calcium levels remained normal on this regimen after hospital discharge despite persistent hypoparathyroidism. Bariatric surgery patients who undergo thyroid surgery require aggressive supplementation to maintain normal serum calcium levels. Preoperative supplementation with calcium and vitamin D is strongly recommended.
Sebacinales everywhere: previously overlooked ubiquitous fungal endophytes.
Weiss, Michael; Sýkorová, Zuzana; Garnica, Sigisfredo; Riess, Kai; Martos, Florent; Krause, Cornelia; Oberwinkler, Franz; Bauer, Robert; Redecker, Dirk
2011-02-15
Inconspicuous basidiomycetes from the order Sebacinales are known to be involved in a puzzling variety of mutualistic plant-fungal symbioses (mycorrhizae), which presumably involve transport of mineral nutrients. Recently a few members of this fungal order not fitting this definition and commonly referred to as 'endophytes' have raised considerable interest by their ability to enhance plant growth and to increase resistance of their host plants against abiotic stress factors and fungal pathogens. Using DNA-based detection and electron microscopy, we show that Sebacinales are not only extremely versatile in their mycorrhizal associations, but are also almost universally present as symptomless endophytes. They occurred in field specimens of bryophytes, pteridophytes and all families of herbaceous angiosperms we investigated, including liverworts, wheat, maize, and the non-mycorrhizal model plant Arabidopsis thaliana. They were present in all habitats we studied on four continents. We even detected these fungi in herbarium specimens originating from pioneering field trips to North Africa in the 1830s/40s. No geographical or host patterns were detected. Our data suggest that the multitude of mycorrhizal interactions in Sebacinales may have arisen from an ancestral endophytic habit by specialization. Considering their proven beneficial influence on plant growth and their ubiquity, endophytic Sebacinales may be a previously unrecognized universal hidden force in plant ecosystems.
Surgery of intracranial aneurysms previously treated endovascularly.
Tirakotai, Wuttipong; Sure, Ulrich; Yin, Yuhua; Benes, Ludwig; Schulte, Dirk Michael; Bien, Siegfried; Bertalanffy, Helmut
2007-11-01
To perform a retrospective study on patients who underwent aneurysmal surgery following endovascular treatment. We performed a retrospective study on eight patients who underwent aneurysm surgery following endovascular treatment (or attempted treatment) with Guglielmi detachable coils (GDCs). The indications for surgery, surgical techniques, and clinical outcomes were analyzed. The indications for surgical treatment after GDC coiling of an aneurysm fell into three groups. First group: surgery for incompletely coiled aneurysms (n=4). Second group: surgery for mass effect on neural structures due to coil compaction or rebleeding (n=2). Third group: surgery for vascular complications after the endovascular procedure due to parent artery occlusion or thrombus propagation from the aneurysm (n=2). Aneurysm obliteration was achieved in all cases, as confirmed by postoperative angiography. Six patients had an excellent outcome and returned to their professions. One patient's visual acuity improved. One individual experienced right hemiparesis (grade IV/V) and hemihypesthesia. Microsurgical clipping is rarely necessary for previously coiled aneurysms. Surgical treatment is uncommonly required when an acute complication arises during endovascular treatment, or when there is a dynamic change in a residual aneurysm's configuration over time that is considered insecure.
[Electronic cigarettes - effects on health. Previous reports].
Napierała, Marta; Kulza, Maksymilian; Wachowiak, Anna; Jabłecka, Katarzyna; Florek, Ewa
2014-01-01
Electronic cigarettes (e-cigarettes) have recently become very popular on the tobacco-products market. These products are considered potentially less harmful than traditional tobacco products. However, current reports indicate that manufacturers' statements regarding the composition of e-liquids are not always sufficient, and consumers often lack reliable information on the quality of the product they use. This paper contains a review of previous reports on the composition of e-cigarettes and their impact on health. Most of the observed health effects were related to symptoms of the respiratory tract, mouth, throat, neurological complications, and sensory organs. Particularly hazardous effects of e-cigarettes included pneumonia, congestive heart failure, confusion, convulsions, hypotension, aspiration pneumonia, second-degree facial burns, blindness, chest pain, and rapid heartbeat. The literature contains no information on passive exposure to the aerosols released during e-cigarette use. Furthermore, information regarding long-term use of these products is also unavailable.
A previously undescribed pathway for pyrimidine catabolism.
Loh, Kevin D; Gyaneshwar, Prasad; Markenscoff Papadimitriou, Eirene; Fong, Rebecca; Kim, Kwang-Seo; Parales, Rebecca; Zhou, Zhongrui; Inwood, William; Kustu, Sydney
2006-03-28
The b1012 operon of Escherichia coli K-12, which is composed of seven unidentified ORFs, is one of the most highly expressed operons under control of nitrogen regulatory protein C. Examination of strains with lesions in this operon on Biolog Phenotype MicroArray (PM3) plates and subsequent growth tests indicated that they failed to use uridine or uracil as the sole nitrogen source and that the parental strain could use them at room temperature but not at 37 degrees C. A strain carrying an ntrB(Con) mutation, which elevates transcription of genes under nitrogen regulatory protein C control, could also grow on thymidine as the sole nitrogen source, whereas strains with lesions in the b1012 operon could not. Growth-yield experiments indicated that both nitrogens of uridine and thymidine were available. Studies with [(14)C]uridine indicated that a three-carbon waste product from the pyrimidine ring was excreted. After trimethylsilylation and gas chromatography, the waste product was identified by mass spectrometry as 3-hydroxypropionic acid. In agreement with this finding, 2-methyl-3-hydroxypropionic acid was released from thymidine. Both the number of available nitrogens and the waste products distinguished the pathway encoded by the b1012 operon from pyrimidine catabolic pathways described previously. We propose that the genes of this operon be named rutA-G for pyrimidine utilization. The product of the divergently transcribed gene, b1013, is a tetracycline repressor family regulator that controls transcription of the b1012 operon negatively.
Unified theoretical moment expressions for elution chromatography and frontal chromatography
YANG Gengliang; TAO Zuyi
1992-01-01
Unified theoretical moment expressions for elution chromatography and frontal chromatography, for the case in which the sorption process is described by a linear model, were derived. The moment expressions derived by previous authors can be obtained as special cases of these unified expressions. In this paper, a mathematical analysis is carried out to establish a unified theoretical basis for elution and frontal chromatography.
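For a linear sorption model, the theoretical moment expressions are compared against statistical moments computed directly from the measured elution curve. A small sketch with an illustrative Gaussian peak; the `peak_moments` helper and all numbers are assumptions for demonstration, not the paper's expressions:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integral of samples y over grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def peak_moments(t, c):
    """Zeroth moment (area), first absolute moment (mean retention time)
    and second central moment (peak variance) of an elution curve c(t)."""
    m0 = _trapz(c, t)
    mu1 = _trapz(t * c, t) / m0
    mu2 = _trapz((t - mu1) ** 2 * c, t) / m0
    return m0, mu1, mu2

# Illustrative Gaussian elution peak centered at t = 10 with variance 4.
t = np.linspace(0.0, 30.0, 3001)
c = np.exp(-((t - 10.0) ** 2) / (2.0 * 4.0))
m0, mu1, mu2 = peak_moments(t, c)
```

Matching measured `mu1` and `mu2` to the values a linear model predicts from the sorption parameters is how moment expressions of this kind are typically applied.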
Birth after previous cesarean delivery: short-term maternal outcomes.
Lydon-Rochelle, Mona T; Cahill, Alison G; Spong, Catherine Y
2010-08-01
An estimated 40% of the 1.3 million cesarean deliveries performed each year in the United States are repeat procedures. The appropriate clinical management approach for women with previous cesarean delivery remains challenging because options are limited. The risks and benefits of clinical management choices for the woman's health need to be quantified. Thus, we discuss the available published scientific data on (1) the short-term maternal outcomes of trial of labor after cesarean (TOLAC) and elective repeat cesarean delivery, (2) the differences between outcomes for both, (3) the important factors that influence these outcomes, and (4) successful vs. unsuccessful vaginal birth after cesarean. For women with a previous cesarean delivery, a successful trial of labor offers several distinct, consistently reproducible advantages compared with elective repeat cesarean delivery, including fewer hysterectomies, fewer thromboembolic events, lower blood transfusion rates, and shorter hospital stays. However, when a TOLAC fails, emergency cesarean is associated with increased rates of uterine rupture, hysterectomy, operative injury, blood transfusion, endometritis, and longer hospital stays. Care of women with a history of previous cesarean delivery involves a confluence of interactions between medical and nonmedical factors; however, the most important determinants of the short-term outcomes among these women are likely individualized counseling, accurate clinical diagnoses, and careful management during a trial of labor. We recommend a randomized controlled trial among women undergoing a TOLAC and a longitudinal cohort study among women with previous cesarean to evaluate adverse outcomes, with focused attention on both the mother and the infant.
Blatt, John M
2010-01-01
A classic work by two leading physicists and scientific educators endures as an uncommonly clear and cogent investigation and correlation of key aspects of theoretical nuclear physics. It is probably the most widely adopted book on the subject. The authors approach the subject as "the theoretical concepts, methods, and considerations which have been devised in order to interpret the experimental material and to advance our ability to predict and control nuclear phenomena." The present volume does not pretend to cover all aspects of theoretical nuclear physics. Its coverage is restricted to
2002-01-01
The proceedings contains 8 papers from the Conference on Theoretical Computer Science. Topics discussed include: query by committee, linear separation and random walks; hardness results for neural network approximation problems; a geometric approach to leveraging weak learners; mind change...
Order-theoretical connectivity
T. A. Richmond
1990-01-01
Order-theoretically connected posets are introduced and applied to create the notion of T-connectivity in ordered topological spaces. As special cases, T-connectivity contains classical connectivity, order-connectivity, and link-connectivity.
Consensus theoretic classification methods
Benediktsson, Jon A.; Swain, Philip H.
1992-01-01
Consensus theory is adopted as a means of classifying geographic data from multiple sources. The foundations and usefulness of different consensus theoretic methods are discussed in conjunction with pattern recognition. Weight selections for different data sources are considered and modeling of non-Gaussian data is investigated. The application of consensus theory in pattern recognition is tested on two data sets: 1) multisource remote sensing and geographic data and 2) very-high-dimensional remote sensing data. The results obtained using consensus theoretic methods are found to compare favorably with those obtained using well-known pattern recognition methods. The consensus theoretic methods can be applied in cases where the Gaussian maximum likelihood method cannot. Also, the consensus theoretic methods are computationally less demanding than the Gaussian maximum likelihood method and provide a means for weighting data sources differently.
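The simplest consensus rule of this kind is a weighted average of per-source class posteriors (a linear opinion pool). A minimal sketch; the weights, posterior values, and source names are illustrative assumptions, not the paper's data:

```python
import numpy as np

def linear_opinion_pool(posteriors, weights):
    """Consensus by weighted averaging of per-source class posteriors
    (rows = data sources, columns = classes)."""
    p = np.asarray(posteriors, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                    # normalize the source weights
    combined = w @ p                   # weighted average across sources
    return combined / combined.sum()   # renormalize to a distribution

# Two sources disagree; the more reliable source carries a higher weight.
p_remote = [0.7, 0.2, 0.1]            # e.g. a remote-sensing classifier
p_geo = [0.3, 0.4, 0.3]               # e.g. a geographic-data classifier
consensus = linear_opinion_pool([p_remote, p_geo], weights=[2.0, 1.0])
label = int(np.argmax(consensus))
```

Because the pool only needs per-source posteriors and weights, it applies even when a joint Gaussian model over all sources cannot be fit, which matches the advantage the abstract claims over Gaussian maximum likelihood.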
Theoretical and computational chemistry.
Meuwly, Markus
2010-01-01
Computer-based and theoretical approaches to chemical problems can provide atomistic understanding of complex processes at the molecular level. Examples ranging from rates of ligand-binding reactions in proteins to structural and energetic investigations of diastereomers relevant to organo-catalysis are discussed in the following. They highlight the range of application of theoretical and computational methods to current questions in chemical research.
Theoretical physics and astrophysics
Ginzburg, VL
1979-01-01
The aim of this book is to present, on the one hand, various topics in theoretical physics in depth - especially topics related to electrodynamics - and on the other hand to show how these topics find applications in various aspects of astrophysics. The first text on theoretical physics and astrophysical applications, it covers many recent advances including those in X-ray, γ-ray and radio-astronomy, with comprehensive coverage of the literature
Theoretically Founded Optimization of Auctioneer's Revenues in Expanding Auctions
Rabin, Jonathan; Shehory, Onn
The expanding auction is a multi-unit auction which provides the auctioneer with control over the outcome of the auction by means of dynamically adding items for sale. Previous research on the expanding auction has provided a numeric method to calculate a strategy that optimizes the auctioneer's revenue. In this paper, we analyze various theoretical properties of the expanding auction and compare it to VCG, a multi-unit auction protocol known in the art. We examine the effects of errors in the auctioneer's estimation of the buyers' maximal bidding values and prove a theoretical bound on the ratio between the revenue yielded by the Informed Decision Strategy (IDS) and the post-optimal strategy. We also analyze the relationship between the auction step and the optimal revenue and introduce a method of computing this optimizing step. We further compare the revenues yielded by the use of IDS with an expanding auction to those of the VCG mechanism and determine the conditions under which the former outperforms the latter. Our work provides new insight into the properties of the expanding auction. It further provides theoretically founded means for optimizing the auctioneer's revenue.
Previous violent events and mental health outcomes in Guatemala.
Puac-Polanco, Victor D; Lopez-Soto, Victor A; Kohn, Robert; Xie, Dawei; Richmond, Therese S; Branas, Charles C
2015-04-01
We analyzed a probability sample of Guatemalans to determine if a relationship exists between previous violent events and development of mental health outcomes in various sociodemographic groups, as well as during and after the Guatemalan Civil War. We used regression modeling, an interaction test, and complex survey design adjustments to estimate prevalences and test potential relationships between previous violent events and mental health. Many (20.6%) participants experienced at least 1 previous serious violent event. Witnessing someone severely injured or killed was the most common event. Depression was experienced by 4.2% of participants, with 6.5% experiencing anxiety, 6.4% an alcohol-related disorder, and 1.9% posttraumatic stress disorder (PTSD). Persons who experienced violence during the war had 4.3 times the adjusted odds of alcohol-related disorders (P < .05) and 4.0 times the adjusted odds of PTSD (P < .05) compared with the postwar period. Women, indigenous Maya, and urban dwellers had greater odds of experiencing postviolence mental health outcomes. Violence that began during the civil war and continues today has had a significant effect on the mental health of Guatemalans. However, mental health outcomes resulting from violent events decreased in the postwar period, suggesting a nation in recovery.
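The adjusted odds ratios reported in studies like this come from regression models with survey-design adjustments; the basic unadjusted calculation can be sketched from a 2x2 table. The counts below are purely illustrative, not the study's data:

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 table (a, b = exposed cases/controls,
    c, d = unexposed cases/controls) with a Woolf-type 95% CI."""
    orr = (a * d) / (b * c)
    se_log = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    half = 1.96 * se_log
    ci = (math.exp(math.log(orr) - half), math.exp(math.log(orr) + half))
    return orr, ci

# Illustrative counts only: 30/70 exposed cases/controls, 10/90 unexposed.
orr, ci = odds_ratio(30, 70, 10, 90)
```

An adjusted analysis would instead fit a logistic regression with covariates (sex, ethnicity, urbanicity) and survey weights, which is what the abstract's "adjusted odds" refer to.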
Previous Violent Events and Mental Health Outcomes in Guatemala
Puac-Polanco, Victor D.; Lopez-Soto, Victor A.; Kohn, Robert; Xie, Dawei; Richmond, Therese S.
2015-01-01
Objectives. We analyzed a probability sample of Guatemalans to determine if a relationship exists between previous violent events and development of mental health outcomes in various sociodemographic groups, as well as during and after the Guatemalan Civil War. Methods. We used regression modeling, an interaction test, and complex survey design adjustments to estimate prevalences and test potential relationships between previous violent events and mental health. Results. Many (20.6%) participants experienced at least 1 previous serious violent event. Witnessing someone severely injured or killed was the most common event. Depression was experienced by 4.2% of participants, with 6.5% experiencing anxiety, 6.4% an alcohol-related disorder, and 1.9% posttraumatic stress disorder (PTSD). Persons who experienced violence during the war had 4.3 times the adjusted odds of alcohol-related disorders (P < .05) and 4.0 times the adjusted odds of PTSD (P < .05) compared with the postwar period. Women, indigenous Maya, and urban dwellers had greater odds of experiencing postviolence mental health outcomes. Conclusions. Violence that began during the civil war and continues today has had a significant effect on the mental health of Guatemalans. However, mental health outcomes resulting from violent events decreased in the postwar period, suggesting a nation in recovery. PMID:25713973
Gottlieb, S.A.
1990-05-01
My research in lattice gauge theory during the past year is described. Several projects were completed dealing with QCD simulations including dynamical fermions. Under the DOE Grand Challenge program, a large-scale calculation of the QCD spectrum with two light flavors of dynamical staggered quarks was carried out. This calculation is one of the most significant efforts to date to take into account the effects of dynamical fermions. Smaller lattice spacing and lighter quark masses were used than in previous attempts. QCD thermodynamics was studied on the ST-100 array processor and on an ETA supercomputer at the John von Neumann Supercomputer Center. On the ST-100, a study with two flavors of dynamical staggered quarks with am_q = 0.025 and 0.0125 was carried out on a 12^3 × 8 lattice. These results give a rough estimate of the crossover couplings where we see the restoration of chiral symmetry. A study of QCD with dynamical Wilson fermions was carried out with N_t = 4 to try to bring the study of QCD with dynamical Wilson fermions to the level that has been attained with staggered fermions over the past two years. We have calculated screening lengths to elucidate the properties of the high-temperature phase. In the pure gluon theory, claims that the finite-temperature deconfinement transition is second order, rather than first order, were investigated using a finite-size scaling analysis. Our results support a first-order transition. Finally, work was done to port computer code to new environments involving parallelism in order to pursue more ambitious calculations on more powerful hardware than the ST-100 and ETA10 used for the calculations reported here.
Frisch, Matthias; Melchinger, Albrecht E
2008-01-01
Random intermating of F2 populations has been suggested for obtaining precise estimates of recombination frequencies between tightly linked loci. In a simulation study, sampling effects due to small population sizes in the intermating generations were found to abolish the advantages of random intermating that were reported in previous theoretical studies, which assumed an infinite population size. We propose a mating scheme for intermating with planned crosses that yields more precise estimates than those obtained under random intermating.
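At its core, estimating a recombination frequency from a cross is a binomial estimation problem, and precision depends on the effective number of informative meioses, which is what intermating designs try to increase. A deliberately simplified sketch (a single backcross-type count, not the authors' mating scheme; the counts are illustrative):

```python
import math

def recombination_estimate(n_recombinant, n_total):
    """Maximum-likelihood estimate of the recombination frequency from a
    count of recombinant gametes, with its binomial standard error."""
    r = n_recombinant / n_total
    se = math.sqrt(r * (1.0 - r) / n_total)
    return r, se

# 12 recombinants among 400 scored gametes (illustrative counts).
r, se = recombination_estimate(12, 400)
```

For tightly linked loci r is small, so the relative error (roughly sqrt((1 - r) / (r n))) is large; this is why the number of informative meioses accumulated over the intermating generations, and hence population size, drives the precision discussed in the abstract.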
Theoretic Study of CⅡ Recombination Line
彭永伦; 王民盛; 韩小英; 李家明
2004-01-01
Using the R-matrix method, we carry out theoretical calculations for the recombination line λ8794 Å (3d'-3p') of C II, which is important for estimating the abundance of carbon in planetary nebulae. Our calculations are based on three sets of target orbital bases, through which we elucidate the electron correlation and static polarization effects in the dielectronic recombination processes.
Global Polynomial Kernel Hazard Estimation
Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch;
2015-01-01
This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically...
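The abstract does not spell out the baseline estimator, but a standard kernel hazard estimator that a global correction like GPA would adjust is the Ramlau-Hansen type (smoothed Nelson-Aalen increments). A sketch under that assumption, with illustrative simulated data; the GPA adjustment itself is not reproduced here:

```python
import numpy as np

def kernel_hazard(grid, event_times, at_risk, bandwidth):
    """Ramlau-Hansen-type kernel hazard estimate: smooth the Nelson-Aalen
    increments dN/Y with an Epanechnikov kernel of the given bandwidth."""
    u = (grid[:, None] - event_times[None, :]) / bandwidth
    kern = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    increments = 1.0 / at_risk                 # Nelson-Aalen jumps at event times
    return (kern * increments).sum(axis=1) / bandwidth

# Illustrative sample: 500 event times from a unit-rate exponential,
# whose true hazard is constant and equal to 1.
rng = np.random.default_rng(0)
times = np.sort(rng.exponential(1.0, size=500))
at_risk = np.arange(len(times), 0, -1)         # size of the risk set at each event
grid = np.linspace(0.5, 1.5, 11)
h = kernel_hazard(grid, times, at_risk, bandwidth=0.4)
```

Away from the boundary the estimate hovers near the true hazard of 1; a global polynomial adjustment, as the abstract describes, would then correct the bias of exactly this kind of estimator.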
Sharrock, R; Gudjonsson, G H
1993-05-01
The main purpose of this study was to investigate the relationship between interrogative suggestibility and previous convictions among 108 defendants in criminal trials, using a path analysis technique. It was hypothesized that previous convictions, which may provide defendants with interrogative experiences, would correlate negatively with 'shift' as measured by the Gudjonsson Suggestibility Scale (Gudjonsson, 1984a), after intelligence and memory had been controlled for. The hypothesis was partially confirmed and the theoretical and practical implications of the findings are discussed.
Reflections on theoretical pragmatics
黄衍
2001-01-01
This paper provides a critical survey of theoretical pragmatics in contemporary linguistics. Topics addressed in the essay include the Anglo-American and European Continental schools of thought; neo-Gricean pragmatics and Relevance Theory; the pragmatics-semantics interface; and the pragmatics-syntax interface.
Particle Interferometry New Theoretical Results
Heinz, Ulrich W
1997-01-01
By measuring hadronic single-particle spectra and two-particle correlations in heavy-ion collisions, the size and dynamical state of the collision fireball at freeze-out can be reconstructed. I discuss the relevant theoretical methods and their limitations. By applying the formalism to recent pion correlation data from Pb+Pb collisions at CERN we demonstrate that the collision zone has undergone strong transverse growth before freeze-out (by a factor 2-3 in each direction), and that it expands both longitudinally and transversally. From the thermal and flow energy density at freeze-out the energy density at the onset of transverse expansion can be estimated from conservation laws. It comfortably exceeds the critical value for the transition to color deconfined matter.
Cohen, Andrew [Boston Univ., MA (United States); Schmaltz, Martin [Boston Univ., MA (United States); Katz, Emmanuel [Boston Univ., MA (United States); Rebbi, Claudio [Boston Univ., MA (United States); Glashow, Sheldon [Boston Univ., MA (United States); Brower, Richard [Boston Univ., MA (United States); Pi, So-Young [Boston Univ., MA (United States)
2016-09-30
This award supported a broadly based research effort in theoretical particle physics, including research aimed at uncovering the laws of nature at short (subatomic) and long (cosmological) distances. These theoretical developments apply to experiments in laboratories such as CERN, the facility that operates the Large Hadron Collider outside Geneva, as well as to cosmological investigations done using telescopes and satellites. The results reported here apply to physics beyond the so-called Standard Model of particle physics; physics of high energy collisions such as those observed at the Large Hadron Collider; theoretical and mathematical tools and frameworks for describing the laws of nature at short distances; cosmology and astrophysics; and analytic and computational methods to solve theories of short distance physics. Some specific research accomplishments include: theories of the electroweak interactions, the forces that give rise to many forms of radioactive decay; physics of the recently discovered Higgs boson; models and phenomenology of dark matter, the mysterious component of the universe that has so far been detected only by its gravitational effects; high-energy particles in astrophysics and cosmology; algorithmic research and computational methods for physics of and beyond the Standard Model; theory and applications of relativity and its possible limitations; topological effects in field theory and cosmology; and conformally invariant systems and AdS/CFT. This award also supported significant training of students and postdoctoral fellows to lead the research effort in particle theory for the coming decades. These students and fellows worked closely with other members of the group as well as theoretical and experimental colleagues throughout the physics community. Many of the research projects funded by this grant arose in response to recently obtained experimental results in the areas of particle physics and cosmology. We describe a few of
INFANTILISM: THEORETICAL CONSTRUCT AND OPERATIONALIZATION
Yelena V. Sabelnikova
2016-01-01
The aim of the presented research is to define and theoretically operationalize the concept of infantilism and its construct. The content of the theoretical construct «infantilism» is analyzed. Methods. The methods of theoretical research involve analysis and synthesis. The age and content criteria are analyzed in the context of childhood and adulthood. The traits that can be interpreted as adult infantile traits are described. Results. The characteristics of adult infantilism in the modern world, taking into account increasing information flows and socio-economic changes, are defined. A definition of the concept «infantilism», including its main features, is given: infantilism is defined as a personal organization that retains features and behavioral models of an earlier age period not adequate to the person's actual age stage, with emphasis on immaturity of the emotional and volitional sphere. Scientific novelty. The main psychological characteristics of adulthood are described as reflection, the requirement to work and professional activity, professional self-determination, possession of labor skills, the need for self-realization, and maturity of the emotional and volitional sphere. The following are considered objective characteristics of adulthood: transition to economic and territorial independence from the parental family, and adoption of new social roles such as worker, spouse, and parent. Two options for operationalizing the concept are distinguished: objective (the presence or absence in a person's real life of objective criteria of adulthood) and subjective (self-report of the subjective sense of the presence or absence of psychological characteristics of adulthood). The practical significance consists in an operationalization of the construct «infantilism», which at present has many competing interpretations; that operationalization is necessary for further analysis and for carrying out various studies.
Theoretical physics 1 classical mechanics
Nolting, Wolfgang
2016-01-01
This textbook offers a clear and comprehensive introduction to classical mechanics, one of the core components of undergraduate physics courses. The book starts with a thorough introduction to the mathematical tools needed, to make this textbook self-contained for learning. The second part of the book introduces the mechanics of the free mass point and details conservation principles. The third part expands on the previous material to cover the mechanics of many-particle systems. Finally, the mechanics of the rigid body is illustrated with rotational forces, inertia, and gyroscopic motion. Ideally suited to undergraduate students in their first year, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets. About the Theoretical Physics series Translated from the renowned and highly successful German editions, the eight volumes of this series...
Landy, David; Silbert, Noah; Goldin, Aleah
2013-01-01
Despite their importance in public discourse, numbers in the range of 1 million to 1 trillion are notoriously difficult to understand. We examine magnitude estimation by adult Americans when placing large numbers on a number line and when qualitatively evaluating descriptions of imaginary geopolitical scenarios. Prior theoretical conceptions…
Friedrich, Harald
2017-01-01
This expanded and updated well-established textbook contains an advanced presentation of quantum mechanics adapted to the requirements of modern atomic physics. It includes topics of current interest such as semiclassical theory, chaos, atom optics and Bose-Einstein condensation in atomic gases. In order to facilitate the consolidation of the material covered, various problems are included, together with complete solutions. The emphasis on theory enables the reader to appreciate the fundamental assumptions underlying standard theoretical constructs and to embark on independent research projects. The fourth edition of Theoretical Atomic Physics contains an updated treatment of the sections involving scattering theory and near-threshold phenomena manifest in the behaviour of cold atoms (and molecules). Special attention is given to the quantization of weakly bound states just below the continuum threshold and to low-energy scattering and quantum reflection just above. Particular emphasis is laid on the fundamen...
Compendium of theoretical physics
Wachter, Armin
2006-01-01
Mechanics, Electrodynamics, Quantum Mechanics, and Statistical Mechanics and Thermodynamics comprise the canonical undergraduate curriculum of theoretical physics. In Compendium of Theoretical Physics, Armin Wachter and Henning Hoeber offer a concise, rigorous and structured overview that will be invaluable for students preparing for their qualifying examinations, readers needing a supplement to standard textbooks, and research or industrial physicists seeking a bridge between extensive textbooks and formula books. The authors take an axiomatic-deductive approach to each topic, starting the discussion of each theory with its fundamental equations. By subsequently deriving the various physical relationships and laws in logical rather than chronological order, and by using a consistent presentation and notation throughout, they emphasize the connections between the individual theories. The reader’s understanding is then reinforced with exercises, solutions and topic summaries. Unique Features: Every topic is ...
Robustness - theoretical framework
Sørensen, John Dalsgaard; Rizzuto, Enrico; Faber, Michael H.
2010-01-01
More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure combined with increased requirements to efficiency in design and execution followed by increased risk of human errors has made the need of requirements to robustness of new struct...... of this fact sheet is to describe a theoretical and risk based framework to form the basis for quantification of robustness and for pre-normative guidelines....
Electrochemical kinetics theoretical aspects
Vetter, Klaus J
1967-01-01
Electrochemical Kinetics: Theoretical Aspects focuses on the processes, methodologies, reactions, and transformations in electrochemical kinetics. The book first offers information on electrochemical thermodynamics and the theory of overvoltage. Topics include equilibrium potentials, concepts and definitions, electrical double layer and electrocapillarity, and charge-transfer, diffusion, and reaction overvoltage. Crystallization overvoltage, total overvoltage, and resistance polarization are also discussed. The text then examines the methods of determining electrochemical reaction mechanisms
Theoretical Delay Time Distributions
Nelemans, Gijs; Bours, Madelon
2012-01-01
We briefly discuss the method of population synthesis to calculate theoretical delay time distributions of type Ia supernova progenitors. We also compare the results of the different research groups and conclude that although one of the main differences in the results for single degenerate progenitors is the retention efficiency with which accreted hydrogen is added to the white dwarf core, this cannot explain all the differences.
Theoretical Delay Time Distributions
Nelemans, Gijs; Toonen, Silvia; Bours, Madelon
2013-01-01
We briefly discuss the method of population synthesis to calculate theoretical delay time distributions of Type Ia supernova progenitors. We also compare the results of different research groups and conclude that, although one of the main differences in the results for single degenerate progenitors is the retention efficiency with which accreted hydrogen is added to the white dwarf core, this alone cannot explain all the differences.
Silicene: Recent theoretical advances
Lew Yan Voon, L. C.
2016-04-14
Silicene is a two-dimensional allotrope of silicon with a puckered hexagonal structure closely related to that of graphene, and it has been predicted to be stable. To date, it has been successfully grown in solution (functionalized) and on substrates. The goal of this review is to provide a summary of recent theoretical advances in the properties of both free-standing silicene and silicene in interaction with molecules and substrates, and of proposed device applications.
MARKETING MIX THEORETICAL ASPECTS
Margarita Išoraitė
2016-01-01
The aim of the article is to analyze theoretical aspects of the marketing mix. The article discusses the marketing mix as one of the main instruments for setting marketing objectives and budget measures. The importance of each element depends not only on the company and its activities, but also on the competition and time. All marketing elements are interrelated and should be seen as a whole in their actions. Some items may have greater importance than others; it depends main...
Theoretical numerical analysis
Wendroff, Burton
1966-01-01
Theoretical Numerical Analysis focuses on the presentation of numerical analysis as a legitimate branch of mathematics. The publication first elaborates on interpolation and quadrature and approximation. Discussions focus on the degree of approximation by polynomials, Chebyshev approximation, orthogonal polynomials and Gaussian quadrature, approximation by interpolation, nonanalytic interpolation and associated quadrature, and Hermite interpolation. The text then ponders on ordinary differential equations and solutions of equations. Topics include iterative methods for nonlinear systems, matri
Theoretical Developments in SUSY
Shifman, M.
2009-01-01
I am proud that I was personally acquainted with Julius Wess. We first met in 1999 when I was working on the Yuri Golfand Memorial Volume (The Many Faces of the Superworld, World Scientific, Singapore, 2000). I invited him to contribute, and he accepted this invitation with enthusiasm. After that, we met many times, mostly at various conferences in Germany and elsewhere. I was lucky to discuss with Julius questions of theoretical physics, and hear his recollections on how supersymmetry was born. In physics Julius was a visionary, who paved the way to generations of followers. In everyday life he was a kind and modest person, always ready to extend a helping hand to people who were in need of his help. I remember him telling me how concerned he was about the fate of theoretical physicists in Eastern Europe after the demise of communism. His ties with Israeli physicists bore a special character. I am honored by the opportunity to contribute an article to the Julius Wess Memorial Volume. I will review theoretical developments of the recent years in non-perturbative supersymmetry.
Biology is more theoretical than physics.
Gunawardena, Jeremy
2013-06-01
The word "theory" is used in at least two senses--to denote a body of widely accepted laws or principles, as in "Darwinian theory" or "quantum theory," and to suggest a speculative hypothesis, often relying on mathematical analysis, that has not been experimentally confirmed. It is often said that there is no place for the second kind of theory in biology and that biology is not theoretical but based on interpretation of data. Here, ideas from a previous essay are expanded upon to suggest, to the contrary, that the second kind of theory has always played a critical role and that biology, therefore, is a good deal more theoretical than physics.
Schlüter, M; Kerschhaggl, O; Wagner, F
1999-08-01
We present a general performance measure (information loss) for associative memories based on information theoretical concepts. This performance measure can be estimated, provided that mean values of observables have been determined for the associative memory. Then the estimation guarantees a minimal association quality. The formalism allows the application of the performance measure to complex systems where the relation between input and output of the associative memory is not explicitly known. Here we apply our formalism to the Hopfield model and estimate the storage capacity alpha(c) from the numerically determined information loss. In contrast to other numerical methods the whole overlap distribution is taken into account. Our numerical value alpha(c)=0.1379(4) for the storage capacity in the Hopfield model is below numerical values obtained previously. This indicates that the consideration of small remnant overlaps lowers the storage capacity of the Hopfield model.
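The storage-capacity estimate above can be illustrated with a minimal Hopfield simulation. This is a sketch of the standard Hebbian model with zero-temperature recall, not the information-loss formalism of the paper; all parameter values below are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def recall_overlap(N, alpha, steps=20):
    """Store P = alpha*N random patterns with the Hebb rule and return the
    overlap m = |s . xi_1| / N after zero-temperature recall of pattern 1."""
    P = max(1, int(alpha * N))
    xi = rng.choice([-1, 1], size=(P, N)).astype(float)
    W = xi.T @ xi / N                 # Hebbian couplings
    np.fill_diagonal(W, 0.0)          # no self-coupling
    s = xi[0].copy()                  # start exactly at the stored pattern
    for _ in range(steps):            # synchronous sign dynamics
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return abs(s @ xi[0]) / N

# Below the capacity alpha_c ~ 0.138 stored patterns are stable fixed points;
# well above it, recall degrades sharply.
m_low = recall_overlap(500, 0.05)     # well below capacity: m close to 1
m_high = recall_overlap(500, 0.30)    # well above capacity: m much smaller
```

Sweeping `alpha` across 0.138 and plotting the overlap reproduces the sharp capacity transition that the paper's information-loss estimate locates at alpha(c) = 0.1379(4).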
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Institute for Theoretical Physics
Giddings, S.B.; Ooguri, H.; Peet, A.W.; Schwarz, J.H.
1998-06-01
String theory is the only serious candidate for a unified description of all known fundamental particles and interactions, including gravity, in a single theoretical framework. Over the past two years, activity in this subject has grown rapidly, thanks to dramatic advances in understanding the dynamics of supersymmetric field theories and string theories. The cornerstone of these new developments is the discovery of duality which relates apparently different string theories and transforms difficult strongly coupled problems of one theory into weakly coupled problems of another theory.
Theoretical astrophysics an introduction
Bartelmann, Matthias
2013-01-01
A concise yet comprehensive introduction to the central theoretical concepts of modern astrophysics, presenting hydrodynamics, radiation, and stellar dynamics all in one textbook. Adopting a modular structure, the author illustrates a small number of fundamental physical methods and principles, which are sufficient to describe and understand a wide range of seemingly very diverse astrophysical phenomena and processes. For example, the formulae that define the macroscopic behavior of stellar systems are all derived in the same way from the microscopic distribution function. This function it
Shivamoggi, Bhimsen K
1998-01-01
"Although there are many texts and monographs on fluid dynamics, I do not know of any which is as comprehensive as the present book. It surveys nearly the entire field of classical fluid dynamics in an advanced, compact, and clear manner, and discusses the various conceptual and analytical models of fluid flow." - Foundations of Physics on the first edition. Theoretical Fluid Dynamics functions equally well as a graduate-level text and a professional reference. Steering a middle course between the empiricism of engineering and the abstractions of pure mathematics, the author focuses
Theoretical Optics An Introduction
Römer, Hartmann
2004-01-01
Starting from basic electrodynamics, this volume provides a solid, yet concise introduction to theoretical optics, containing topics such as nonlinear optics, light-matter interaction, and modern topics in quantum optics, including entanglement, cryptography, and quantum computation. The author, with many years of experience in teaching and research, goes way beyond the scope of traditional lectures, enabling readers to keep up with the current state of knowledge. Both content and presentation make it essential reading for graduate and PhD students as well as a valuable reference for researchers.
Theoretical solid state physics
Haug, Albert
2013-01-01
Theoretical Solid State Physics, Volume 1 focuses on the study of solid state physics. The volume first takes a look at the basic concepts and structures of solid state physics, including potential energies of solids, concept and classification of solids, and crystal structure. The book then explains single-electron approximation wherein the methods for calculating energy bands; electron in the field of crystal atoms; laws of motion of the electrons in solids; and electron statistics are discussed. The text describes general forms of solutions and relationships, including collective electron i
Over 400 previously undocumented Svalbard surge-type glaciers identified
Farnsworth, Wesley R.; Ingólfsson, Ólafur; Retelle, Michael; Schomacker, Anders
2016-07-01
Identifying glaciers that exhibit surge-type behavior is important when using evidence of ice front fluctuations as a proxy for reconstructing past climate oscillations. This study identifies previously undocumented surge-type glaciers in Svalbard, based on the presence of crevasse squeeze ridges in glacier forelands. Crevasse squeeze ridges are landforms suggested to be unique to surging glacier land systems. Estimates vary greatly as to the actual percentage of surge-type glaciers in Svalbard, and consequently their distribution pattern is poorly understood. A detailed survey of recent (2008-2012), high-resolution aerial imagery from TopoSvalbard, provided by the Norwegian Polar Institute, allowed for a survey of all the glacier forelands in Svalbard. Before our study, 277 individual glaciers in Svalbard had been documented to exhibit surge behavior. By using crevasse squeeze ridges as indicators of surge behavior, we have identified 431 additional glaciers that have surged. We suggest that this is a modest value as the unique surge landforms were not visible in approximately one-third of the forelands with documented surge histories. Limits to the crevasse squeeze ridge technique are presented and potential controlling factors for crevasse squeeze ridge formation/preservation are discussed.
Numerical Estimation of Information Theoretic Measures for Large Data Sets
2013-01-30
specific applications and to prove that performance was improving as trackers were tuned or new trackers acquired. The authors’ prior research has...community, the inability to recognize superior performance from trackers meant that it was difficult to acquire the right trackers for specific applications ...
Masochism: a clinical and theoretical overview.
Sack, R L; Miller, W
1975-08-01
This paper will review some of the theoretical and clinical features of masochism from an eclectic point of view. The topic of masochism has been taken up by authors of many perspectives because it addresses one of the anomalous, absurd, difficult-to-explain aspects of behavior for which no psychological system has an easy answer. Therefore, a wide-ranging literature on the topic of masochism is available. However, few previous reviewers have attempted to draw from a variety of disciplines and theoretical frameworks. In this review the historical development of the term and some of the psychoanalytic conceptualizations will be presented first. Since previous reviews of masochism from a strictly psychoanalytic perspective are adequate (Brenner, 1959; Eisenbud, 1967; Fenichel, 1945; Loewenstein, 1957; Panken, 1967), our discussions of masochism will be developed employing more extensively the interpersonal, social, learning theory, and biological perspectives.
Thomas, Hoben
1981-01-01
Psychophysicists neglect to consider how error should be characterized in applications of the power law. Failures of the power law to agree with certain theoretical predictions are examined. A power law with lognormal product structure is proposed and approximately unbiased parameter estimates given for several common estimation situations.…
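The lognormal product structure proposed above makes the power law linear in logs, so parameter estimation reduces to ordinary least squares. A sketch on simulated data (the constants are our own, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Power law with lognormal (multiplicative) error: psi = a * phi**b * exp(eps),
# eps ~ Normal(0, sigma^2).  Taking logs gives an ordinary linear model:
#   log psi = log a + b * log phi + eps
a_true, b_true, sigma = 2.0, 0.6, 0.1
phi = np.logspace(0.0, 2.0, 200)                 # stimulus intensities
psi = a_true * phi**b_true * np.exp(rng.normal(0.0, sigma, phi.size))

b_hat, log_a_hat = np.polyfit(np.log(phi), np.log(psi), 1)
a_hat = np.exp(log_a_hat)   # exponentiating the log-estimate is only
                            # approximately unbiased for a, which is why the
                            # paper discusses corrected estimators
```

The exponent `b` is estimated without bias by the log-log slope; the multiplicative scale `a` inherits a small bias from the exponentiation, the issue the abstract's "approximately unbiased" estimates address.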
Multisensor estimation: New distributed algorithms
K. N. Plataniotis
1996-01-01
The multisensor estimation problem is considered in this paper. New distributed algorithms, which are able to locally process the information and which deliver identical results to those generated by their centralized counterparts are presented. The algorithms can be used to provide robust and computationally efficient solutions to the multisensor estimation problem. The proposed distributed algorithms are theoretically interesting and computationally attractive.
Methodological Framework for Estimating the Correlation Dimension in HRV Signals
Bolea, Juan; Laguna, Pablo; Remartínez, José María; Rovira, Eva; Navarro, Augusto; Bailón, Raquel
2014-01-01
This paper presents a methodological framework for robust estimation of the correlation dimension in HRV signals. It includes (i) a fast algorithm for on-line computation of correlation sums; (ii) log-log curves fitting to a sigmoidal function for robust maximum slope estimation discarding the estimation according to fitting requirements; (iii) three different approaches for linear region slope estimation based on latter point; and (iv) exponential fitting for robust estimation of saturation level of slope series with increasing embedded dimension to finally obtain the correlation dimension estimate. Each approach for slope estimation leads to a correlation dimension estimate, called D^2, D^2⊥, and D^2max. D^2 and D^2max estimate the theoretical value of correlation dimension for the Lorenz attractor with relative error of 4%, and D^2⊥ with 1%. The three approaches are applied to HRV signals of pregnant women before spinal anesthesia for cesarean delivery in order to identify patients at risk for hypotension. D^2 keeps the 81% of accuracy previously described in the literature while D^2⊥ and D^2max approaches reach 91% of accuracy in the same database. PMID:24592284
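Step (i) of the framework rests on the correlation sum, and steps (ii)-(iii) on the slope of its log-log curve. A minimal Grassberger-Procaccia sketch (not the paper's full sigmoidal-fitting pipeline) checked on a set of known dimension:

```python
import numpy as np

def correlation_dimension(X, radii):
    """Grassberger-Procaccia sketch: the correlation sum C(r) is the fraction
    of point pairs closer than r; D2 is the slope of log C(r) vs log r."""
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))[np.triu_indices(len(X), k=1)]
    C = np.array([(dists < r).mean() for r in radii])
    keep = C > 0                       # only radii with nonzero counts
    slope, _ = np.polyfit(np.log(radii[keep]), np.log(C[keep]), 1)
    return slope

rng = np.random.default_rng(2)
theta = rng.uniform(0.0, 2.0 * np.pi, 800)
circle = np.c_[np.cos(theta), np.sin(theta)]   # a curve: true D2 = 1
radii = np.logspace(-1.5, -0.5, 10)            # radii inside the scaling region
D2 = correlation_dimension(circle, radii)
```

The paper's contribution is the robust version of this slope estimate: sigmoidal fitting of the log-log curve, rejection of estimates that fail the fitting requirements, and extrapolation of the saturation level over increasing embedding dimensions.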
Theoretical Particle Astrophysics
Kamionkowski, Marc
2013-08-07
The research carried out under this grant encompassed work on the early Universe, dark matter, and dark energy. We developed CMB probes for primordial baryon inhomogeneities, primordial non-Gaussianity, cosmic birefringence, gravitational lensing by density perturbations and gravitational waves, and departures from statistical isotropy. We studied the detectability of wiggles in the inflation potential in string-inspired inflation models. We studied novel dark-matter candidates and their phenomenology. This work helped advance the DoE's Cosmic Frontier (and also Energy and Intensity Frontiers) by finding synergies between a variety of different experimental efforts, by developing new searches, science targets, and analyses for existing/forthcoming experiments, and by generating ideas for new next-generation experiments.
Theoretical Molecular Biophysics
Scherer, Philipp
2010-01-01
"Theoretical Molecular Biophysics" is an advanced study book for students, shortly before or after completing undergraduate studies, in physics, chemistry or biology. It provides the tools for an understanding of elementary processes in biology, such as photosynthesis on a molecular level. A basic knowledge in mechanics, electrostatics, quantum theory and statistical physics is desirable. The reader will be exposed to basic concepts in modern biophysics such as entropic forces, phase separation, potentials of mean force, proton and electron transfer, heterogeneous reactions coherent and incoherent energy transfer as well as molecular motors. Basic concepts such as phase transitions of biopolymers, electrostatics, protonation equilibria, ion transport, radiationless transitions as well as energy- and electron transfer are discussed within the frame of simple models.
Social Security: Theoretical Aspects
O. I. Kashnik
2013-01-01
The paper looks at the phenomena of security and social security from the philosophical, sociological and psychological perspective. The undertaken analysis of domestic and foreign scientific materials demonstrates the need for interdisciplinary studies, including pedagogy and education, aimed at developing the guidelines for protecting the social system from destruction. The paper defines the indicators, security level indices and their assessment methods singled out from the analytical reports and security studies by the leading Russian sociological centers and international expert organizations, including the United Nations. The research is aimed at finding out adequate models of personal and social security control systems at various social levels. The theoretical concepts can be applied by teachers of the Bases of Life Safety course, and by managers and researchers developing assessment criteria and security indices for evaluating the educational environment, as well as methods of diagnostics and expertise of educational establishments from the security standpoint.
Demonetisation: Some Theoretical Perspectives
Waknis, Parag
2017-01-01
On November 8, 2016, the Prime Minister of India Narendra Modi declared currency denominations of Rs.500 and Rs.1000 to be illegal for use in transactions. These currency denominations together constituted almost 85% of total currency in circulation according to some estimates. Based on the essentiality of money and a segmented-markets model perspective, I analyze the effects of this surprise demonetisation policy on the Indian economy.
Muñoz-Jaramillo, Andrés; Martens, Petrus C H
2010-01-01
The turbulent magnetic diffusivity in the solar convection zone is one of the most poorly constrained ingredients of mean-field dynamo models. This lack of constraint has previously led to controversy regarding the most appropriate set of parameters, as different assumptions on the value of turbulent diffusivity lead to radically different solar cycle predictions. Typically, the dynamo community uses double step diffusivity profiles characterized by low values of diffusivity in the bulk of the convection zone. However, these low diffusivity values are not consistent with theoretical estimates based on mixing-length theory -- which suggest much higher values for turbulent diffusivity. To make matters worse, kinematic dynamo simulations cannot yield sustainable magnetic cycles using these theoretical estimates. In this work we show that magnetic cycles become viable if we combine the theoretically estimated diffusivity profile with magnetic quenching of the diffusivity. Furthermore, we find that the main featur...
THEORETICAL ASPECTS OF INNOVATION DEVELOPMENT
Evelina Šakalytė
2013-06-01
Purpose – Innovation is defined as an economic stimulus and the key factor of scientific and technological progress as well as international competitiveness. Therefore it is essential to identify theoretical conceptions and approaches of innovation development and review the use of the innovation value chain. Design/methodology/approach – analysis of scientific literature. Findings – Innovation plays a significant role in business growth and is a principal factor of survival in competition and an incentive of economic development. The importance of innovation value chain awareness has been brought out, as it significantly contributes to the successful development of innovation. Research limitations/implications – the theory of innovation in the manufacturing sector and the service sector has not yet been sufficiently distinguished. Practical implications – comprehension of innovation concepts and the innovation value chain helps to see multiple connections throughout the entire innovation process, from the beginning to the end. The information gained regarding the innovation value chain helps managers to focus on innovation as a complete product and to strengthen weaknesses. Originality/Value – The topic of innovation has been analyzed for several decades; however, a common understanding of the innovation concept has not yet been reached. The paper reveals a systematic conceptual overview of the comprehension of innovation and the importance of the innovation development process in today’s fast-changing competitive market. Keywords: innovation, innovation value chain, knowledge. Research type: scientific literature review.
Information theoretic preattentive saliency
Loog, Marco
2011-01-01
-driven density estimation. Given the feature descriptors or filter bank that one wants to use to describe the image content at every position, we provide a closed-form expression for the associated saliency at that location. This indeed makes explicit that what is considered salient depends on how, i.e. by means of which features, image information is described. We illustrate our result by determining a few specific saliency maps based on particular choices of features. One of them makes the link with the mapping underlying well-known Harris interest points, which is a result recently obtained in isolation...
22 CFR 40.91 - Certain aliens previously removed.
2010-04-01
... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Certain aliens previously removed. 40.91... IMMIGRANTS UNDER THE IMMIGRATION AND NATIONALITY ACT, AS AMENDED Aliens Previously Removed § 40.91 Certain aliens previously removed. (a) 5-year bar. An alien who has been found inadmissible, whether as a result...
Theoretical Approaches to Coping
Sofia Zyga
2013-01-01
Introduction: Dealing with stress requires conscious effort; it cannot be perceived as equal to an individual's spontaneous reactions. The intentional management of stress must not be confused with defense mechanisms. Coping differs from adjustment in that the latter is more general, has a broader meaning and includes diverse ways of facing a difficulty. Aim: An exploration of the definition of the term "coping", the function of the coping process, and its differentiation from other similar meanings through a literature review. Methodology: Three theoretical approaches to coping are introduced: the psychoanalytic approach, approaching by characteristics, and the Lazarus and Folkman interactive model. Results: The strategic methods of the coping approaches are described, and the article ends with a review of the approaches, including the functioning of the stress-coping process, the classification types of coping strategies in stress-inducing situations, and a criticism of coping approaches. Conclusions: The comparison of coping in different situations is difficult, if not impossible. The coping process is a slow process, so an individual may select one method of coping under one set of circumstances and a different strategy at some other time. Such selection of strategies takes place as the situation changes.
Improved estimates of the nuclear structure corrections in $\\mu$D
Hernandez, Oscar Javier; Bacca, Sonia; Dinur, Nir Nevo; Barnea, Nir
2014-01-01
We calculate the nuclear structure corrections to the Lamb shift in muonic deuterium by using state-of-the-art nucleon-nucleon potentials derived from chiral effective field theory. Our calculations complement previous theoretical work obtained from phenomenological potentials and the zero range approximation. The study of the chiral convergence order-by-order and the dependence on cutoff variations allows us to improve the estimates on the nuclear structure corrections and the theoretical uncertainty coming from nuclear potentials. This will enter the determination of the nuclear radius from ongoing muonic deuterium experiments at PSI.
Zhang Zhi; Li Jianxun; Liu Liu; Liu Zhaolei; Han Shan
2015-01-01
Since the features of low energy consumption and limited power supply are very important for wireless sensor networks (WSNs), the problems of distributed state estimation with quantized innovations are investigated in this paper. In the first place, the assumptions of prior and posterior probability density function (PDF) with quantized innovations in the previous papers are analyzed. After that, an innovative Gaussian mixture estimator is proposed. On this basis, this paper presents a Gaussian mixture state estimation algorithm based on quantized innovations for WSNs. In order to evaluate and compare the performance of this kind of state estimation algorithms for WSNs, the posterior Cramér–Rao lower bound (CRLB) with quantized innovations is put forward. Performance analysis and simulations show that the proposed Gaussian mixture state estimation algorithm is more efficient than the others under the same number of quantization levels, and the performance of these algorithms can be benchmarked by the theoretical lower bound.
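A widely cited member of this family of quantized-innovation estimators is the sign-of-innovation Kalman filter (SOI-KF), in which each sensor transmits a single bit per step. The scalar sketch below uses our own toy dynamics and the standard SOI-KF gain scaling sqrt(2/pi); treat the update formulas as an assumption to check against the literature, not as the paper's Gaussian mixture algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar random walk observed through 1-bit quantized innovations:
#   x[k+1] = x[k] + w,  w ~ N(0, Q);   y[k] = x[k] + v,  v ~ N(0, R)
# The sensor sends only b = sign(y - x_pred).
Q, R, n = 0.01, 1.0, 2000
x_true = np.cumsum(rng.normal(0.0, np.sqrt(Q), n))
y = x_true + rng.normal(0.0, np.sqrt(R), n)

x_hat, P = 0.0, 1.0
err = np.empty(n)
c = np.sqrt(2.0 / np.pi)                      # SOI-KF gain scaling
for k in range(n):
    P += Q                                    # time update
    b = np.sign(y[k] - x_hat)                 # the only transmitted bit
    s = np.sqrt(P + R)
    x_hat += c * (P / s) * b                  # 1-bit measurement update
    P -= (2.0 / np.pi) * P**2 / (P + R)       # covariance update
    err[k] = x_true[k] - x_hat

mse = np.mean(err[n // 2:] ** 2)              # steady-state squared error
```

Despite receiving one bit per measurement, the filter tracks the state with a steady-state error well below the raw measurement noise variance; multi-level quantization, as studied in the paper, closes most of the remaining gap to the unquantized filter.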
2011-01-01
In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
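As a toy instance of the rate-constant estimation described above (simulated data and constants are our own, not from the chapter): for a first-order reaction the model has an analytic solution, so a log-transform reduces the estimation to linear least squares.

```python
import numpy as np

rng = np.random.default_rng(4)

# First-order reaction A -> B obeys dC_A/dt = -k C_A, with analytic solution
#   C_A(t) = C0 * exp(-k t),
# so log C_A = log C0 - k t is linear in t.
k_true, C0 = 0.35, 1.0
t = np.linspace(0.0, 5.0, 50)
conc = C0 * np.exp(-k_true * t) + rng.normal(0.0, 0.005, t.size)

keep = conc > 0                               # the log requires positive data
k_hat = -np.polyfit(t[keep], np.log(conc[keep]), 1)[0]
```

For rate laws without analytic solutions this shortcut is unavailable, which is where the chapter's combination of dynamic model solution, optimisation, and orthogonal collocation comes in.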
The prevalence of previously undiagnosed leprosy in the general population of northwest Bangladesh
F.J. Moet (Fake); R.P. Schuring (Ron); D. Pahan (David); L. Oskam (Linda); J.H. Richardus (Jan Hendrik)
2008-01-01
Background: The prevalence of previously undiagnosed leprosy (PPUL) in the general population was determined to estimate the background level of leprosy in the population and to compare this with registered prevalence and the known PPUL in different levels of contacts of leprosy patients.
The Basic Theoretical Framework
Loeb, Abraham
Cosmology is by now a mature experimental science. We are privileged to live at a time when the story of genesis (how the Universe started and developed) can be critically explored by direct observations. Looking deep into the Universe through powerful telescopes, we can see images of the Universe when it was younger because of the finite time it takes light to travel to us from distant sources. Existing data sets include an image of the Universe when it was 0.4 million years old (in the form of the cosmic microwave background), as well as images of individual galaxies when the Universe was older than a billion years. But there is a serious challenge: in between these two epochs was a period when the Universe was dark, stars had not yet formed, and the cosmic microwave background no longer traced the distribution of matter. And this is precisely the most interesting period, when the primordial soup evolved into the rich zoo of objects we now see. The observers are moving ahead along several fronts. The first involves the construction of large infrared telescopes on the ground and in space that will provide us with new photos of the first galaxies. Current plans include ground-based telescopes which are 24-42 m in diameter, and NASA's successor to the Hubble Space Telescope, called the James Webb Space Telescope. In addition, several observational groups around the globe are constructing radio arrays that will be capable of mapping the three-dimensional distribution of cosmic hydrogen in the infant Universe. These arrays are aiming to detect the long-wavelength (redshifted 21-cm) radio emission from hydrogen atoms. The images from these antenna arrays will reveal how the non-uniform distribution of neutral hydrogen evolved with cosmic time and eventually was extinguished by the ultra-violet radiation from the first galaxies. Theoretical research has focused in recent years on predicting the expected signals for the above instruments and motivating these ambitious
Theoretically Optimal Distributed Anomaly Detection
National Aeronautics and Space Administration — A novel general framework for distributed anomaly detection with theoretical performance guarantees is proposed. Our algorithmic approach combines existing anomaly...
Theoretical models for supernovae
Woosley, S.E.; Weaver, T.A.
1981-09-21
The results of recent numerical simulations of supernova explosions are presented and a variety of topics discussed. Particular emphasis is given to (i) the nucleosynthesis expected from intermediate mass (10 M_sun <= M <= 100 M_sun) Type II supernovae and detonating white dwarf models for Type I supernovae, (ii) a realistic estimate of the gamma-line fluxes expected from this nucleosynthesis, (iii) the continued evolution, in one and two dimensions, of intermediate mass stars wherein iron core collapse does not lead to a strong, mass-ejecting shock wave, and (iv) the evolution and explosion of very massive stars (M >= 100 M_sun) of both Population I and III. In one dimension, nuclear burning following a failed core bounce does not appear likely to lead to a supernova explosion although, in two dimensions, a combination of rotation and nuclear burning may do so. Near solar proportions of elements from neon to calcium and very brilliant optical displays may be created by hypernovae, the explosions of stars in the mass range 100 M_sun to 300 M_sun. Above approx. 300 M_sun a black hole is created by stellar collapse following carbon ignition. Still more massive stars may be copious producers of ^4He and ^14N prior to their collapse via the pair instability.
Improved estimation of radiated axions from cosmological axionic strings
Hiramatsu, Takashi; Sekiguchi, Toyokazu; Yamaguchi, Masahide; Yokoyama, Jun'ichi
2010-01-01
Cosmological evolution of the axionic string network is analyzed in terms of field-theoretic simulations in a box of 512^3 grids, the largest ever, using a new and more efficient identification scheme for global strings. The scaling parameter is found to be xi = 0.87 +- 0.14, in agreement with previous results. The energy spectrum is calculated precisely using a pseudo power spectrum estimator, which significantly reduces the error in the mean reciprocal comoving momentum. The resultant constraint on the axion decay constant leads to f_a <= 3*10^11 GeV. We also discuss implications for the early Universe.
Topics in modern physics theoretical foundations
Walecka, John Dirk
2013-01-01
While the two previous books entitled Introduction to Modern Physics: Theoretical Foundations and Advanced Modern Physics: Theoretical Foundations exposed the reader to the foundations and frontiers of today's physics, the goal of this third volume is to cover in some detail several topics omitted in the essentially linear progression of the first two. This book is divided into three parts. Part 1 is on quantum mechanics. Analytic solutions to the Schrödinger equation are developed for some basic systems. The analysis is then formalized, concluding with a set of postulates for the theory. Part 2 is on applications of quantum mechanics: approximation methods for bound states, scattering theory, time-dependent perturbation theory, and electromagnetic radiation and quantum electrodynamics. Part 3 covers some selected topics in relativistic quantum field theory: discrete symmetries, the Heisenberg picture, and the Feynman rules for quantum chromodynamics. The three volumes in this series taken together provide ...
Computability-theoretic learning complexity.
Case, John; Kötzing, Timo
2012-07-28
Initially discussed are some of Alan Turing's wonderfully profound and influential ideas about mind and mechanism, including their connection to the main topic of the present study, which is within the field of computability-theoretic learning theory. Herein is investigated the part of this field concerned with the algorithmic, trial-and-error inference of eventually correct programs for functions from their data points. As to the main content of this study: in prior papers, beginning with the seminal work by Freivalds et al. in 1995, the notion of intrinsic complexity is used to analyse the learning complexity of sets of functions in a Gold-style learning setting. Herein are pointed out some weaknesses of this notion. Offered is an alternative based on epitomizing sets of functions: sets that are learnable under a given learning criterion, but not under other criteria that are not at least as powerful. To capture the idea of epitomizing sets, new reducibility notions are given based on robust learning (closure of learning under certain sets of computable operators). Various degrees of epitomizing sets are characterized as the sets complete with respect to corresponding reducibility notions! These characterizations also provide an easy method for showing sets to be epitomizers, and they are then employed to prove several sets to be epitomizing. Furthermore, a scheme is provided to generate easily very strong epitomizers for a multitude of learning criteria. These strong epitomizers are the so-called self-learning sets, previously applied by Case & Kötzing in 2010. These strong epitomizers can be easily generated and employed in a myriad of settings to witness with certainty the strict separation in learning power between the criteria so epitomized and other not as powerful criteria!
28 CFR 10.5 - Incorporation of papers previously filed.
2010-07-01
... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Incorporation of papers previously filed... CARRYING ON ACTIVITIES WITHIN THE UNITED STATES Registration Statement § 10.5 Incorporation of papers previously filed. Papers and documents already filed with the Attorney General pursuant to the said act...
2 CFR 1.215 - Relationship to previous issuances.
2010-01-01
... 2 Grants and Agreements 1 2010-01-01 2010-01-01 false Relationship to previous issuances. 1.215 Section 1.215 Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction toSubtitle A § 1.215 Relationship to previous issuances. Although some of the guidance was...
49 CFR 236.1031 - Previously approved PTC systems.
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Previously approved PTC systems. 236.1031 Section... Train Control Systems § 236.1031 Previously approved PTC systems. (a) Any PTC system fully implemented and operational prior to March 16, 2010, may receive PTC System Certification if the applicable PTC...
Application of chaotic theory to parameter estimation
(no author listed)
2002-01-01
High precision parameter estimation is very important for control system design and compensation. This paper utilizes the properties of chaotic systems for parameter estimation. Theoretical analysis and experimental results indicate that this method has extremely high sensitivity and resolving power. The most important contribution of this paper is that it departs from the traditional engineering viewpoint and realizes parameter estimation based on unstable chaotic systems.
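The sensitivity claim above can be illustrated by recovering a logistic-map parameter from a short observed trajectory: because nearby parameter values produce exponentially diverging trajectories, a coarse-to-fine grid search on the trajectory mismatch resolves the parameter very sharply. A minimal sketch (the logistic map and all numbers are illustrative assumptions, not the paper's system):

```python
def logistic_traj(r, x0=0.2, n=30):
    """Iterate the chaotic logistic map x -> r x (1 - x)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

r_true = 3.9
observed = logistic_traj(r_true)

def mismatch(r):
    """Squared trajectory mismatch against the observed data."""
    return sum((a - b) ** 2 for a, b in zip(logistic_traj(r), observed))

# Coarse-to-fine grid search: chaos makes the mismatch rise steeply
# away from the true parameter, giving high resolving power.
best_r, lo, hi = None, 3.5, 4.0
for _ in range(6):
    grid = [lo + i * (hi - lo) / 200 for i in range(201)]
    best_r = min(grid, key=mismatch)
    span = (hi - lo) / 200
    lo, hi = best_r - span, best_r + span
print(round(best_r, 6))
```

Each refinement shrinks the search interval by two orders of magnitude; the exponential divergence of nearby trajectories is exactly what keeps the mismatch informative at such fine parameter separations.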
Almost Free Modules Set-Theoretic Methods
Eklof, PC
1990-01-01
This is an extended treatment of the set-theoretic techniques which have transformed the study of abelian group and module theory over the last 15 years. Part of the book is new work which does not appear elsewhere in any form. In addition, a large body of material which has appeared previously (in scattered and sometimes inaccessible journal articles) has been extensively reworked and in many cases given new and improved proofs. The set theory required is carefully developed with algebraists in mind, and the independence results are derived from explicitly stated axioms. The book contains exe
Theoretical study on spherical proton emission
ZHANG HongFei; WANG YongJia; DONG JianMin; LI JunQing
2009-01-01
The proton radioactivity half-lives of spherical proton emitters are investigated within a generalized liquid drop model (GLDM), including the proximity effects between nuclei in a neck and the mass and charge asymmetry. The penetrability is calculated in the WKB approximation and the assault frequency is estimated by the quantum mechanical method considering the structure of the parent nucleus. The spectroscopic factor, obtained by employing the relativistic mean field (RMF) theory, is taken into account in the half-life calculation. The half-lives within the GLDM are compared with the experimental data and other theoretical values. The results show that the GLDM works quite well for spherical proton emitters when the assault frequency is estimated by the quantum mechanical method and the spectroscopic factor is considered.
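The WKB penetrability mentioned above can be sketched for a bare Coulomb barrier; the GLDM barrier additionally includes nuclear proximity and asymmetry terms, so the numbers below (a hypothetical emitter with a Z = 66 daughter and Q = 1.2 MeV) are illustrative only:

```python
import math

HBARC = 197.327      # hbar*c in MeV fm
E2 = 1.440           # e^2 in MeV fm
MU = 938.272 * 0.99  # reduced mass ~ proton mass, MeV/c^2

def penetrability(z1, z2, q_mev, r_in):
    """WKB penetrability P = exp(-2 * Integral sqrt(2 mu (V - E)) / hbar dr)
    for a pure Coulomb barrier V(r) = z1 z2 e^2 / r, integrated from the
    inner radius r_in to the outer turning point r_out = z1 z2 e^2 / Q."""
    r_out = z1 * z2 * E2 / q_mev
    n = 2000
    h = (r_out - r_in) / n
    s = 0.0
    for i in range(1, n):  # rectangle rule over interior points
        r = r_in + i * h
        v = z1 * z2 * E2 / r
        if v > q_mev:      # integrand vanishes at the turning point
            s += math.sqrt(2.0 * MU * (v - q_mev)) / HBARC * h
    return math.exp(-2.0 * s)

# Hypothetical spherical emitter: proton (Z=1) leaving a Z=66 daughter
p = penetrability(1, 66, 1.2, 6.0)
print(p)
```

The result is a penetrability many orders of magnitude below unity, showing why half-lives are so exquisitely sensitive to the Q value and why the assault frequency and spectroscopic factor enter only as milder prefactors.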
Uncomplicated pregnancy and delivery after previous severe postpartum cerebral angiopathy.
Rémi, Jan; Pfefferkorn, Thomas; Fesl, Gunther; Rogenhofer, Nina; Straube, Andreas; Klein, Matthias
2011-09-01
Postpartum cerebral angiopathy (PCA) is a cerebral vasoconstriction syndrome developing shortly after delivery, without signs of preceding eclampsia. The risk for recurrence of PCA is unknown. Here, we report on a closely monitored, uneventful pregnancy of a woman with a previous severe episode of PCA. In summary, this case report demonstrates that PCA does not necessarily recur in following pregnancies, even after previous severe episodes.
Theoretical chemistry advances and perspectives
Eyring, Henry
1980-01-01
Theoretical Chemistry: Advances and Perspectives, Volume 5 covers articles concerning all aspects of theoretical chemistry. The book discusses the mean spherical approximation for simple electrolyte solutions; the representation of lattice sums as Mellin-transformed products of theta functions; and the evaluation of two-dimensional lattice sums by number theoretic means. The text also describes an application of contour integration; a lattice model of quantum fluid; as well as the computational aspects of chemical equilibrium in complex systems. Chemists and physicists will find the book usef
Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian;
2011-01-01
OUTCOME OF PREGNANCY IN WOMEN WITH PREVIOUS CAESAREAN SECTION
Bellad Girija
2016-06-01
Full Text Available BACKGROUND Carefully selected cases of Vaginal Birth after Caesarean Section (VBAC) are safe and successful. Even though the options of elective caesarean section or a trial of labour are given to women with a prior caesarean section, the risk is always present. In successful VBACs, morbidity is less compared to repeat caesarean section. That is why this study was conducted to determine the outcome of pregnancy in women with a previous CS. OBJECTIVES 1. To evaluate the clinical course of labour in cases with previous caesarean section. 2. To study the perinatal outcome in cases with previous caesarean section delivered either vaginally or by repeat caesarean section. 3. To study maternal morbidity in these cases. METHOD A retrospective analysis of medical records of 250 women with a previous caesarean section, who delivered in BIMS Hospital between May 2015 and July 2015, was carried out. Women with recurrent indications for caesarean section, those having non-recurrent indications with any complicating factors in the present pregnancy, and women with previous two caesarean sections were not given a trial of vaginal delivery. Women with a previous section for non-recurrent indications were given a trial of vaginal delivery. STATISTICAL ANALYSIS Was done by Chi-square test. RESULT Of the 250 cases, 132 were given a trial of vaginal delivery; of these, 61.3% delivered vaginally and 38% had a repeat section. There is an association between maternal morbidity and type of delivery. Birth weight was associated with the type of delivery. There is no association between neonatal outcome and type of delivery. CONCLUSION With careful patient selection, appropriate timing, and close supervision, a trial of vaginal delivery after one previous caesarean section is safe and successful. An individual approach seems to be the best.
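The Chi-square analysis named under STATISTICAL ANALYSIS can be sketched for a 2x2 table; the counts below are hypothetical illustrations, not data from the study:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]] and its
    df = 1 p-value, computed as p = erfc(sqrt(chi2 / 2))."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

# Hypothetical counts (NOT from the study): morbidity by delivery type
#                  morbidity   no morbidity
# vaginal (81):        9           72
# repeat CS (51):     16           35
chi2, p = chi2_2x2(9, 72, 16, 35)
print(round(chi2, 2), round(p, 4))
```

For one degree of freedom the chi-square statistic is the square of a standard normal deviate, which is why the tail probability reduces to the complementary error function used here.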
Secondary recurrent miscarriage is associated with previous male birth.
Ooi, Poh Veh
2011-01-01
Secondary recurrent miscarriage (RM) is defined as three or more consecutive pregnancy losses after delivery of a viable infant. Previous reports suggest that a firstborn male child is associated with less favourable subsequent reproductive potential, possibly due to maternal immunisation against male-specific minor histocompatibility antigens. In a retrospective cohort study of 85 cases of secondary RM we aimed to determine if secondary RM was associated with (i) gender of previous child, maternal age, or duration of miscarriage history, and (ii) increased risk of pregnancy complications. Fifty-three women (62.0%; 53/85) gave birth to a male child prior to RM compared to 32 (38.0%; 32/85) who gave birth to a female child (p=0.002). The majority (91.7%; 78/85) had uncomplicated, term deliveries and normal birth weight neonates, with one quarter of the women previously delivered by Caesarean section. All had routine RM investigations and 19.0% (16/85) had an abnormal result. Fifty-seven women conceived again and 33.3% (19/57) miscarried, but there was no significant difference in failure rates between those with a previous male or female child (13/32 vs. 6/25, p=0.2). When patients with abnormal results were excluded, or when women with only one previous child were considered, there was still no difference in these rates. A previous male birth may be associated with an increased risk of secondary RM but numbers preclude concluding whether this increases recurrence risk. The suggested association with previous male birth provides a basis for further investigations at a molecular level.
Adaptive Methods for Permeability Estimation and Smart Well Management
Lien, Martha Oekland
2005-04-01
The main focus of this thesis is on adaptive regularization methods. We consider two different applications, the inverse problem of absolute permeability estimation and the optimal control problem of estimating smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem. Hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts: Part I gives a theoretical background for a collection of research papers that have been written by the candidate in collaboration with others; these constitute the most important part of the thesis and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning calculations of derivatives are also discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement
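The need for regularization in this kind of ill-posed inversion can be sketched on a toy linear problem: with nearly collinear sensitivities, the unregularized normal equations amplify a tiny data perturbation into a wildly wrong parameter estimate, while a small Tikhonov (ridge) term stabilizes the solution. This is a generic illustration, not the adaptive zonation method of the thesis:

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Cramer's rule for a 2x2 linear system."""
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

# Toy "sensitivity" matrix G with nearly collinear columns; the data d
# is a ~1e-3 perturbation of the exact data [2.0, 2.001] for m = (1, 1).
G = [[1.0, 1.0], [1.0, 1.001]]
d = [2.0, 2.0]

results = {}
for lam in (0.0, 1e-3):
    # Tikhonov-regularized normal equations: (G^T G + lam I) m = G^T d
    a11 = G[0][0] ** 2 + G[1][0] ** 2 + lam
    a12 = G[0][0] * G[0][1] + G[1][0] * G[1][1]
    a22 = G[0][1] ** 2 + G[1][1] ** 2 + lam
    b1 = G[0][0] * d[0] + G[1][0] * d[1]
    b2 = G[0][1] * d[0] + G[1][1] * d[1]
    results[lam] = solve_2x2(a11, a12, a12, a22, b1, b2)

print(results[0.0])   # unregularized: far from (1, 1)
print(results[1e-3])  # regularized: close to (1, 1)
```

The same instability appears in permeability inversion whenever two zones affect the pressure data in nearly indistinguishable ways; adaptive zonation attacks it by coarsening the parameterization rather than by penalizing the norm, but the underlying ill-posedness is identical.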
Theoretical study on a water muffler
Du, T.; Chen, Y. W.; Miao, T. C.; Wu, D. Z.
2016-05-01
Theoretical computation on a previously studied water muffler is carried out in this article. The structure of the water muffler is composed of two main parts, namely, the Kevlar-reinforced rubber tube and the inner-noise-reduction structure. The rubber wall of the tube is assumed to function as a rigid wall lined with sound-absorbing material and is described by a complex radial wave number. Comparisons are made among the results obtained from theoretical computation, FEM (finite element method) simulation, and experiment, both for the rubber tube alone and for the water muffler. The theoretical results show good agreement in general tendency with the FEM-simulated and measured results. After that, a parametric study on the diameter of the inner structure and that of the rubber tube is conducted. Results show that the diameter of the left inner structure has the most significant effect on the SPL of the water muffler due to its location and its effect on the diameter ratio D2/D1.
Experimental and Theoretical Studies on Biologically Active Lanthanide (III) Complexes
Kostova, I.; Trendafilova, N.; Georgieva, I.; Rastogi, V. K.; Kiefer, W.
2008-11-01
The complexation ability and the binding mode of the ligand coumarin-3-carboxylic acid (HCCA) to La(III), Ce(III), Nd(III), Sm(III), Gd(III) and Dy(III) lanthanide ions (Ln(III)) are elucidated at experimental and theoretical level. The complexes were characterized using elemental analysis, DTA and TGA data as well as 1H NMR and 13C NMR spectra. FTIR and Raman spectroscopic techniques as well as DFT quantum chemical calculations were used for characterization of the binding mode and the structures of lanthanide(III) complexes of HCCA. The metal-ligand binding mode is predicted through molecular modeling and energy estimation of different Ln-CCA structures using the B3LYP/6-31G(d) method combined with a large quasi-relativistic effective core potential for the lanthanide ion. The energies obtained predict bidentate coordination of CCA- to Ln(III) ions through the carbonylic oxygen and the carboxylic oxygen. Detailed vibrational analysis of HCCA, CCA- and the Ln(III) complexes based on both calculated and experimental frequencies confirms the suggested metal-ligand binding mode. The natural bonding analysis predicts a strongly ionic character of the Ln(III)-CCA bonding in the complexes studied. With the relatively resistant tumor cell line K-562 we obtained very interesting in-vitro results which are in accordance with our previously published data concerning the activity of lanthanide(III) complexes with other coumarin derivatives.
Theoretical approaches to elections defining
Natalya V. Lebedeva
2011-01-01
Full Text Available Theoretical approaches to defining elections develop the nature, essence, and content of elections and help to determine their place and role as one of the major national law institutions in a democratic system.
Theoretical Foundations of Learning Communities
Jessup-Anger, Jody E.
2015-01-01
This chapter describes the historical and contemporary theoretical underpinnings of learning communities and argues that there is a need for more complex models in conceptualizing and assessing their effectiveness.
Theoretical Studies of Proton Radioactivity
Lídia S. Ferreira; Enrico Maglione
2016-01-01
In the paper, we will discuss the most recent theoretical approaches developed by our group, to understand the mechanisms of decay by one proton emission, and the structure and shape of exotic nuclei at the limits of stability.
Euclid's Number-Theoretical Work
Zhang, Shaohua
2009-01-01
The object of this paper is to affirm the number-theoretical role of Euclid and the historical significance of Euclid's algorithm. We give a brief introduction to Euclid's number-theoretical work. Our study is the first to show that Euclid's algorithm is essentially equivalent to the Division Algorithm, which is the basis of the Theory of Divisibility. Note also that Euclid's algorithm implies Euclid's first theorem and Euclid's second theorem. Thus, in the nature of things, Euclid's algorithm is the most important number-theoretical work of Euclid. For this reason, we further summarize briefly the influence of Euclid's algorithm. This leads to the conclusion that Euclid's algorithm is the greatest number-theoretical achievement of the age.
THEORETICAL APPROACHES IN INTERNATIONAL RELATIONS ...
understanding of the social dynamics of the world we live in. Theoretical approaches are also instrumental in shaping perceptions of what matters in international politics ... This implies that, as a technique of last resort, the military instrument.
Estimation of Fluid Properties and Phase Equilibria.
Herskowitz, M.
1985-01-01
Describes a course (given to junior/senior students with strong background in thermodynamics and transport phenomena) that covers the theoretical and practical aspects of properties estimation. An outline for the course is included. (JN)
Ivanchik, A V; Varshalovich, D A
1999-01-01
Endeavours of the unification of the four fundamental interactions have resulted in a development of theories having cosmological solutions in which low-energy limits of fundamental physical constants vary with time. The validity of such theoretical models should be checked by comparison of the theoretical predictions with observational and experimental bounds on possible time dependences of the fundamental constants. Based on high-resolution measurements of quasar spectra, we obtain direct limits on the average rate of the cosmological time variation of the fine-structure constant; |alpha-dot/alpha| < 3.1*10^-14 yr^-1 is the most conservative limit. Analogous estimates published previously, as well as other contemporary tests for possible variations of alpha (those based on the "Oklo phenomenon", on the primordial nucleosynthesis models, and others) are discussed and compared with the present upper limit. We argue that the present result is the most conservative one...
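Converting a bound on the fractional change Delta-alpha/alpha into the quoted rate is a single division by the lookback time to the absorbing systems. A minimal sketch (the input numbers below are back-solved, illustrative assumptions, not values taken from the paper):

```python
# A bound |Delta alpha / alpha| measured against absorbers at a given
# lookback time translates into an average rate bound by division.
# Assumed inputs (illustrative only): a 4e-4 fractional bound over a
# lookback time of ~1.3e10 yr.
delta_alpha_bound = 4.0e-4   # dimensionless |Delta alpha / alpha|
lookback_yr = 1.3e10         # yr

rate_bound = delta_alpha_bound / lookback_yr  # |alpha-dot/alpha|, yr^-1
print(f"{rate_bound:.1e} yr^-1")
```

With these assumed inputs the arithmetic lands on a rate of order 3e-14 per year, the same order as the limit quoted in the abstract; the real analysis of course derives both the fractional bound and the lookback time from the quasar spectra themselves.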
Primary Malignant Tumours of Bone Following Previous Malignancy
Patton, J. T.; Sommerville, S. M. M.; Grimer, R. J.
2008-01-01
Destructive bone lesions occurring in patients who have previously had a malignancy are generally assumed to be a metastasis from that malignancy. We reviewed 60 patients with a previous history of malignancy, who presented with a solitary bone lesion that was subsequently found to be a new and different primary sarcoma of bone. These second malignancies occurred in three distinct groups of patients: (1) patients with original tumours well known to be associated with second malignancies (5%); (2) patients whose second malignancies were likely to be due to the previous treatment of their primary malignancy (40%); (3) patients in whom there was no clearly defined association between malignancies (55%). The purpose of this study is to emphasise the necessity for caution in assuming the diagnosis of a metastasis when a solitary bone lesion is identified following a prior malignancy. Inappropriate biopsy and treatment of primary bone sarcomas compromises limb salvage surgery and can affect patient mortality. PMID:18414590
Erlotinib-induced rash spares previously irradiated skin
Lips, Irene M.; Vonk, Ernest J.A. [Radiotherapeutisch Instituut Stedendriehoek en Omstreken (RISO), Deventer (Netherlands). Dept. of Radiation Oncology; Koster, Mariska E.Y. [Deventer Hospital (Netherlands). Dept. of Lung Diseases; Houwing, Ronald H. [Deventer Hospital (Netherlands). Dept. of Dermatology
2011-08-15
Erlotinib is an epidermal growth factor receptor inhibitor prescribed to patients with locally advanced or metastasized non-small cell lung carcinoma after failure of at least one earlier chemotherapy treatment. Approximately 75% of the patients treated with erlotinib develop acneiform skin rashes. A patient treated with erlotinib 3 months after finishing concomitant treatment with chemotherapy and radiotherapy for non-small cell lung cancer is presented. Unexpectedly, the part of the skin that had been included in his previous radiotherapy field was completely spared from the erlotinib-induced acneiform skin rash. The exact mechanism of erlotinib-induced rash sparing in previously irradiated skin is unclear. The underlying mechanism of this phenomenon needs to be explored further, because the number of patients being treated with a combination of both therapeutic modalities is increasing. The therapeutic effect of erlotinib in the area of the previously irradiated lesion should be assessed. (orig.)
Asymptotic accuracy of Bayesian estimation for a single latent variable.
Yamazaki, Keisuke
2015-09-01
In data science and machine learning, hierarchical parametric models, such as mixture models, are often used. They contain two kinds of variables: observable variables, which represent the parts of the data that can be directly measured, and latent variables, which represent the underlying processes that generate the data. Although there has been an increase in research on the estimation accuracy for observable variables, the theoretical analysis of estimating latent variables has not been thoroughly investigated. In a previous study, we determined the accuracy of a Bayes estimation for the joint probability of the latent variables in a dataset, and we proved that the Bayes method is asymptotically more accurate than the maximum-likelihood method. However, the accuracy of the Bayes estimation for a single latent variable remains unknown. In the present paper, we derive the asymptotic expansions of the error functions, which are defined by the Kullback-Leibler divergence, for two types of single-variable estimations when the statistical regularity is satisfied. Our results indicate that the accuracies of the Bayes and maximum-likelihood methods are asymptotically equivalent and clarify that the Bayes method is only advantageous for multivariable estimations.
Evaluation of gravimetric techniques to estimate the microvascular filtration coefficient.
Dongaonkar, R M; Laine, G A; Stewart, R H; Quick, C M
2011-06-01
Microvascular permeability to water is characterized by the microvascular filtration coefficient (K(f)). Conventional gravimetric techniques to estimate K(f) rely on data obtained from either transient or steady-state increases in organ weight in response to increases in microvascular pressure. The two techniques yield considerably different estimates, and neither accounts for interstitial fluid storage and lymphatic return. We therefore developed a theoretical framework to evaluate K(f) estimation techniques by 1) comparing conventional techniques to a novel technique that includes the effects of interstitial fluid storage and lymphatic return, 2) evaluating the ability of conventional techniques to reproduce K(f) from simulated gravimetric data generated by a realistic interstitial fluid balance model, 3) analyzing new data collected from rat intestine, and 4) analyzing previously reported data. These approaches revealed that the steady-state gravimetric technique yields estimates that are not directly related to K(f) and are in some cases directly proportional to interstitial compliance. However, the transient gravimetric technique yields accurate estimates in some organs, because the typical experimental duration minimizes the effects of interstitial fluid storage and lymphatic return. Furthermore, our analytical framework reveals that the supposed requirement of tying off all draining lymphatic vessels for the transient technique is unnecessary. Finally, our numerical simulations indicate that our comprehensive technique accurately reproduces the value of K(f) in all organs, is not confounded by interstitial storage and lymphatic return, and corroborates the estimate from the transient technique.
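Why the transient and steady-state readings disagree can be seen in a minimal lumped model with interstitial storage (compliance) and lymphatic return. All parameter values below are hypothetical and the model is far simpler than the paper's framework; it only illustrates the qualitative point.

```python
import numpy as np

# Lumped model of organ weight gain after a step in microvascular pressure:
# dV/dt = Kf*(dPc - Pi) - L*Pi, with interstitial pressure Pi = V / C.
# All parameter values are hypothetical, chosen for illustration only.
Kf = 0.05      # microvascular filtration coefficient (ml/min/mmHg)
C = 2.0        # interstitial compliance (ml/mmHg)
L = 0.02       # lymphatic conductance (ml/min/mmHg)
dPc = 10.0     # step increase in microvascular pressure (mmHg)

dt, T = 0.01, 120.0
t = np.arange(0.0, T, dt)
V = np.zeros_like(t)                 # excess interstitial volume ~ weight gain
for i in range(1, len(t)):
    Pi = V[i - 1] / C                # interstitial pressure rises with storage
    Jv = Kf * (dPc - Pi)             # transcapillary filtration
    Jl = L * Pi                      # lymphatic return
    V[i] = V[i - 1] + dt * (Jv - Jl)

# The transient technique reads the initial slope, which recovers Kf ...
Kf_transient = (V[1] - V[0]) / dt / dPc
# ... whereas the steady-state weight gain is set by compliance and lymphatic
# conductance (V_ss = C * Kf * dPc / (Kf + L)), not by Kf alone.
V_ss = V[-1]
print(Kf_transient, V_ss)
```

In this toy model the early slope returns K(f) exactly, while the steady-state plateau mixes in compliance and lymphatic return, mirroring the paper's conclusion about the two gravimetric techniques.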
Yang, Ren-Qiang; Jabbari, Javad; Cheng, Xiao-Shu
2014-01-01
BACKGROUND: Marfan syndrome (MFS) is a rare autosomal dominantly inherited connective tissue disorder with an estimated prevalence of 1:5,000. More than 1000 variants have been previously reported to be associated with MFS. However, the disease-causing effect of these variants may be questionable...... with regard to disease stratification based on these previously reported MFS-associated variants....
WAYS HIERARCHY OF ACCOUNTING ESTIMATES
ŞERBAN CLAUDIU VALENTIN
2015-03-01
Based on the premise that an estimate is an approximate evaluation, together with the fact that the term "estimate" is increasingly common across a variety of both theoretical and practical fields, particularly in situations where we cannot decide with certainty, it must be said that we are in fact dealing with estimates, in our case with accounting estimates. Adding to this the phrase "estimated value", which implies a value obtained from an evaluation process whose magnitude is not exact but approximate, that is, close to the actual magnitude, it becomes obvious that the hierarchical relationship between evaluation and estimation must be delimited, while considering the context in which the evaluation activity is carried out at the entity level.
Mehlsen, Jesper; Wiinberg, Niels; Joergensen, Bjarne S
2010-01-01
The presence of peripheral arterial disease (PAD) in patients with other manifestations of cardiovascular disease identifies a population at increased risk of complications both during acute coronary events and on a long-term basis, and possibly a population in whom secondary prevention of cardiovascular events should be addressed aggressively. The present study was aimed at providing a valid estimate of the prevalence of PAD in patients attending their general practitioner who had previously suffered a cardio- or cerebrovascular event.
"Battered Women" and Previous Victimization: Is the Question Relevant?
Gudim, Laurie, Comp.; And Others
This report discusses battered women and the role of their previous victimization. After a literature review on family violence in general, these topics are discussed: (1) family violence and the patriarchy; (2) the historical background of family violence; (3) intergenerational cycle of violence; and (4) psychological literature's four ways…
Recovery of Previously Uncultured Bacterial Genera from Three Mediterranean Sponges
Versluis, Dennis; McPherson, Kyle; Passel, van Mark W.J.; Smidt, Hauke; Sipkema, Detmer
2017-01-01
Sponges often harbour a dense and diverse microbial community. Presently, a large discrepancy exists between the cultivable bacterial fraction from sponges and the community in its natural environment. Here, we aimed to acquire additional insights into cultivability of (previously uncultured)
49 CFR 173.23 - Previously authorized packaging.
2010-10-01
Title 49 (Transportation), Requirements for Shipments and Packagings, Preparation of Hazardous Materials for Transportation, § 173.23 Previously authorized packaging. Editorial Note: For Federal Register citations affecting § 173.23, see the List of CFR Sections Affected.
Haemophilus influenzae type f meningitis in a previously healthy boy
Ronit, Andreas; Berg, Ronan M G; Bruunsgaard, Helle;
2013-01-01
Non-serotype b strains of Haemophilus influenzae are extremely rare causes of acute bacterial meningitis in immunocompetent individuals. We report a case of acute bacterial meningitis in a 14-year-old boy, who was previously healthy and had been immunised against H influenzae serotype b (Hib...
Time to pregnancy after a previous miscarriage in subfertile couples
T. Cox; J.W. van der Steeg; P. Steures; P.G.A. Hompes; F. van der Veen; M.J.C. Eijkemans; J.H. Schagen van Leeuwen; C. Renckens; P.M.M. Bossuyt; B.W.J. Mol
2010-01-01
Objective: To assess the time to spontaneous ongoing pregnancy after a previous miscarriage in subfertile couples. Design: A prospective cohort study. Setting: The study was conducted in 38 fertility centers in the Netherlands. Patient(s): Subfertile couples who miscarried after completing their bas
Abiraterone in metastatic prostate cancer without previous chemotherapy
Ryan, C.J.; Smith, M.R.; Bono, J. De; Molina, A.; Logothetis, C.J.; Souza, P. de; Fizazi, K.; Mainwaring, P.; Piulats, J.M.; Ng, S.; Carles, J.; Mulders, P.F.A.; Basch, E.; Small, E.J.; Saad, F.; Schrijvers, D.; Poppel, H. van; Mukherjee, S.D.; Suttmann, H.; Gerritsen, W.R.; Flaig, T.W.; George, D.J.; Yu, E.Y.; Efstathiou, E.; Pantuck, A.; Winquist, E.; Higano, C.S.; Taplin, M.E.; Park, Y.; Kheoh, T.; Griffin, T.; Scher, H.I.; Rathkopf, D.E.
2013-01-01
BACKGROUND: Abiraterone acetate, an androgen biosynthesis inhibitor, improves overall survival in patients with metastatic castration-resistant prostate cancer after chemotherapy. We evaluated this agent in patients who had not received previous chemotherapy. METHODS: In this double-blind study, we
2 CFR 230.45 - Relationship to previous issuance.
2010-01-01
Cost Principles for Non-Profit Organizations (OMB Circular A-122), § 230.45 Relationship to previous issuance: paragraph (a) contains the information that was in Attachment C (non-profit organizations not subject to the Circular).
Study of cystic artery by arteriography. Importance of previous cholecystography
Machado, G.O.
Oral cholecystography was performed prior to celiac and mesenteric arteriography, in order to identify the cystic artery, in 42 patients with pancreatitis, using the Seldinger technique. The cystic artery was identified in all cases, the pattern being origin of the cystic artery from the right hepatic artery. An infusion pump and seriography were not used.
An experimental and theoretical investigation of the C(1D) + D2 reaction
Hickson, Kevin M
2016-01-01
In a previous joint experimental and theoretical study of the barrierless chemical reaction C(1D) + H2 at low temperatures (300-50 K) [K. M. Hickson, J.-C. Loison, H. Guo, Y. V. Suleimanov, J. Phys. Chem. Lett., 2015, 6, 4194], excellent agreement was found between experimental thermal rate constants and theoretical estimates based on ring polymer molecular dynamics (RPMD) over the two lowest singlet potential energy surfaces (PESs). Here, we extend this work to one of its deuterated counterparts, C(1D) + D2, over the same temperature range. Experimental and RPMD results are in very good agreement when contributions from both PESs to this chemical reaction are included in the RPMD simulations. The deviation between experiment and the RPMD calculations does not exceed 25% and both results exhibit a slight negative temperature dependence. The first excited 1A" PES plays a more important role than the ground 1A' PES as the temperature is decreased, similar to our previous studies of the C(1D) + H2 reaction but...
Widely Used Pesticides with Previously Unknown Endocrine Activity Revealed as in Vitro Antiandrogens
Orton, Frances; Rosivatz, Erika; Scholze, Martin; Kortenkamp, Andreas
2011-01-01
Background: Evidence suggests that there is widespread decline in male reproductive health and that antiandrogenic pollutants may play a significant role. There is also a clear disparity between pesticide exposure and data on endocrine disruption, with most of the published literature focused on pesticides that are no longer registered for use in developed countries. Objective: We used estimated human exposure data to select pesticides to test for antiandrogenic activity, focusing on the highest-use pesticides. Methods: We used European databases to select 134 candidate pesticides based on highest exposure, followed by a filtering step according to known or predicted receptor-mediated antiandrogenic potency, based on a previously published quantitative structure–activity relationship (QSAR) model. In total, 37 pesticides were tested for in vitro androgen receptor (AR) antagonism. Of these, 14 were previously reported to be AR antagonists ("active"), 4 were predicted AR antagonists using the QSAR, 6 were predicted to not be AR antagonists ("inactive"), and 13 had unknown activity, which were "out of domain" and therefore could not be classified with the QSAR ("unknown"). Results: All 14 pesticides with previous evidence of AR antagonism were confirmed as antiandrogenic in our assay, and 9 previously untested pesticides were identified as antiandrogenic (dimethomorph, fenhexamid, quinoxyfen, cyprodinil, λ-cyhalothrin, pyrimethanil, fludioxonil, azinphos-methyl, pirimiphos-methyl). In addition, we classified 7 compounds as androgenic. Conclusions: Due to estimated antiandrogenic potency, current use, estimated exposure, and lack of previous data, we strongly recommend that dimethomorph, fludioxonil, fenhexamid, imazalil, ortho-phenylphenol, and pirimiphos-methyl be tested for antiandrogenic effects in vivo. The lack of human biomonitoring data for environmentally relevant pesticides presents a barrier to current risk assessment of pesticides on humans.
In vitro culture of previously uncultured oral bacterial phylotypes.
Thompson, Hayley; Rybalka, Alexandra; Moazzez, Rebecca; Dewhirst, Floyd E; Wade, William G
2015-12-01
Around a third of oral bacteria cannot be grown using conventional bacteriological culture media. Community profiling targeting 16S rRNA and shotgun metagenomics methods have proved valuable in revealing the complexity of the oral bacterial community. Studies investigating the role of oral bacteria in health and disease require phenotypic characterizations that are possible only with live cultures. The aim of this study was to develop novel culture media and use an in vitro biofilm model to culture previously uncultured oral bacteria. Subgingival plaque samples collected from subjects with periodontitis were cultured on complex mucin-containing agar plates supplemented with proteose peptone (PPA), beef extract (BEA), or Gelysate (GA) as well as on fastidious anaerobe agar plus 5% horse blood (FAA). In vitro biofilms inoculated with the subgingival plaque samples and proteose peptone broth (PPB) as the growth medium were established using the Calgary biofilm device. Specific PCR primers were designed and validated for the previously uncultivated oral taxa Bacteroidetes bacteria HOT 365 and HOT 281, Lachnospiraceae bacteria HOT 100 and HOT 500, and Clostridiales bacterium HOT 093. All agar media were able to support the growth of 10 reference strains of oral bacteria. One previously uncultivated phylotype, Actinomyces sp. HOT 525, was cultivated on FAA. Of 93 previously uncultivated phylotypes found in the inocula, 26 were detected in in vitro-cultivated biofilms. Lachnospiraceae bacterium HOT 500 was successfully cultured from biofilm material harvested from PPA plates in coculture with Parvimonas micra or Veillonella dispar/parvula after colony hybridization-directed enrichment. The establishment of in vitro biofilms from oral inocula enables the cultivation of previously uncultured oral bacteria and provides source material for isolation in coculture.
The long-term consequences of previous hyperthyroidism
Hjelm Brandt Kristensen, Frans
2015-01-01
of hyperthyroidism and outcome as well as sporadic control for confounding such as co-morbidity. In addition, since hyperthyroidism and various morbidities (and mortality) as well as a number of environmental risk factors are under genetic influence, a possible co-inheritance could, at least theoretically, cause......-up (>10 years) as well as control for co-morbidity and genetic confounding. The results provide a number of interesting insights into the long-term consequences of hyperthyroidism. Firstly, hyperthyroidism is associated with an approximately 30% increased all-cause mortality. The all-cause mortality does...... with CVD, LD and DM both before and after the diagnosis of hyperthyroidism. Although the design used does not allow a stringent distinction between cause and effect, the findings indicate a possible direct association between hyperthyroidism and these morbidities, or vice versa....
Montserrat Yepes
2009-07-01
A sample of 46 men was evaluated with the DAPP (Questionnaire of Domestic Aggressor Psychological Profile). All were inmates convicted for various degrees of violence against their wives in different prisons. The sample was divided into three groups: homicides without previous violence against their wives (H; n=11), homicides with previous violence (VH; n=9), and domestic batterers without previous homicide attempts against their partners (B; n=26). The aim of the study was to analyze the possible existence of three different profiles and, more specifically, whether it is possible to obtain an independent profile for domestic homicides with previous episodes of violence against their wives. The results confirm the hypothesis neither as a whole nor for the violent homicides. However, differences between groups were obtained in the admission and description of the facts, in the risk of future violence, in some sociodemographic characteristics (i.e., level of education, social status), in the couple relationship, in the dissatisfaction concerning the unachieved ideal woman, in the use of extreme physical force during the aggression, the time of the first aggression, the use of verbal threats during the aggression, explanation of the events to the family, and the period of time between the beginning of the romantic relationship and the manifestation of violence. The implications of the results for the theoretical frameworks proposed and for future research are discussed.
马俊海; 张强
2013-01-01
The LIBOR market model plays an increasingly important role in pricing financial assets and managing risk, and FX structured deposits driven by LIBOR interest rates have seen increasingly wide application. It is therefore necessary to provide theoretical estimation and Monte Carlo simulation for the LIBOR interest rate process and for the pricing of FX structured deposits based on it. In this paper, firstly, building on existing improvements to the LIBOR market model, we combine Heston stochastic volatility with the standard market model to set up a new LIBOR market model. Secondly, using Black's inverse parameter calibration method and Markov chain Monte Carlo (MCMC) simulation, we calibrate and estimate the parameters of the new LIBOR market model. Thirdly, we use an improved least-squares Monte Carlo (LSM) method to price a callable FX structured deposit. Lastly, we present an empirical analysis. The research conclusions are: 1) a LIBOR market model with a stochastic volatility process describes the LIBOR rate very well and displays a finer accuracy; 2) the improved LSM yields much more precise results.
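A minimal single-rate sketch of the model class described above: one forward LIBOR rate evolved with lognormal dynamics under a Heston (CIR) variance process, used to price a caplet by Monte Carlo. All parameter values are hypothetical; the paper's model evolves the whole forward curve, is calibrated to market data, and prices a callable deposit with an improved LSM, none of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# One forward LIBOR rate with Heston stochastic variance (single-rate sketch)
L0, T, tau = 0.03, 1.0, 0.25                  # initial forward, expiry, accrual
v0, kappa, theta, xi = 0.04, 1.5, 0.04, 0.3   # Heston variance parameters
K = 0.03                                      # caplet strike (at the money)
n_paths, n_steps = 100_000, 100
dt = T / n_steps

L = np.full(n_paths, L0)
v = np.full(n_paths, v0)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rng.standard_normal(n_paths)
    # Log-Euler step for the forward rate (martingale under its own measure)
    L *= np.exp(-0.5 * v * dt + np.sqrt(np.maximum(v, 0.0) * dt) * z1)
    # Full-truncation Euler step for the CIR variance process
    v += kappa * (theta - v) * dt + xi * np.sqrt(np.maximum(v, 0.0) * dt) * z2
    v = np.maximum(v, 0.0)

caplet = tau * np.maximum(L - K, 0.0).mean()  # undiscounted caplet payoff
print(f"MC caplet value: {caplet:.6f}")
```

With a correlation between the two Brownian drivers and a full forward-curve rollout, the same scheme extends to the multi-rate model; the single rate here keeps the sketch short.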
A theoretical analysis of NADPH production and consumption in yeasts
Bruinenberg, P.M.; Van Dijken, J.P.; Scheffers, W.A.
1983-01-01
Theoretical calculations of the NADPH requirement for yeast biomass formation reveal that this parameter is strongly dependent on the carbon and nitrogen source. The data obtained have been used to estimate the carbon flow over the NADPH-producing pathways in these organisms, namely the hexose monop
Coefficients for tests from a decision theoretic point of view
van der Linden, Willem J.; Mellenbergh, Gideon J.
1978-01-01
From a decision theoretic point of view a general coefficient for tests, d, is derived. The coefficient is applied to three kinds of decision situations. First, the situation is considered in which a true score is estimated by a function of the observed score of a subject on a test (point
Stringent theoretical and experimental bounds on graviton mass
Ali, Ahmed Farag
2016-01-01
We show from theoretical considerations that if the graviton is massive, its mass is constrained to be about $10^{-32}~eV/c^2$. This estimate is consistent with those obtained from experiments, including the recent gravitational-wave detection by Advanced LIGO.
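As a rough consistency check (our own back-of-envelope argument, not necessarily the paper's derivation), requiring the graviton Compton wavelength to be at least of order the Hubble radius gives a bound of the same magnitude:

```latex
% Heuristic: graviton Compton wavelength no smaller than the Hubble radius
\lambda_g = \frac{\hbar}{m_g c} \gtrsim \frac{c}{H_0}
\quad\Longrightarrow\quad
m_g \lesssim \frac{\hbar H_0}{c^2} \sim 10^{-33}\,\mathrm{eV}/c^2 ,
```

within an order of magnitude of the quoted bound of about $10^{-32}~eV/c^2$.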
[Fatal amnioinfusion with previous choriocarcinoma in a parturient woman].
Hrgović, Z; Bukovic, D; Mrcela, M; Hrgović, I; Siebzehnrübl, E; Karelovic, D
2004-04-01
We describe the case of a 36-year-old woman, pregnant for the third time, who had developed choriocarcinoma in a previous pregnancy. During her first term labour the patient suffered cardiac arrest, so resuscitation and cesarean section were performed. A male newborn was delivered in good condition, but despite intensive therapy and resuscitation the parturient died with a picture of disseminated intravascular coagulopathy (DIC). At autopsy and on histology there was no sign of malignant disease, so it was not possible to connect the previous choriocarcinoma with the amniotic fluid embolism. The site of the choriocarcinoma may have been a "locus minoris resistentiae" that later resulted in a failure of placentation, although this was hard to prove. At autopsy we found pulmonary embolism with microthrombosis of the terminal circulation and punctate bleeding in the mucosa, consistent with DIC.
Previous studies underestimate BMAA concentrations in cycad flour.
Cheng, Ran; Banack, Sandra Anne
2009-01-01
The traditional diet of the Chamorro people of Guam has high concentrations of the neurotoxin BMAA, beta-methyl-amino-L-alanine, in cycad tortillas and from animals that feed on cycad seeds. We measured BMAA concentration in washed cycad flour and compared different extraction methods used by previous researchers in order to determine how much BMAA may have been unaccounted for in prior research. Samples were analyzed with AQC precolumn derivatization using HPLC-FD detection and verified with UPLC-UV, UPLC-MS, and triple quadrupole LC/MS/MS. Although previous workers had studied only the free amino acid component of BMAA in washed cycad flour, we detected significant levels of protein-associated BMAA in washed cycad flour. These data support a link between ALS/PDC and exposure to BMAA.
Cutaneous responses to vaccinia in individuals with previous smallpox vaccination.
Simpson, Eric L; Hercher, Michelle; Hammarlund, Erika K; Lewis, Matthew W; Slifka, Mark K; Hanifin, Jon M
2007-09-01
The durability of immune responses to smallpox vaccine is a subject of considerable debate. We compared cutaneous vaccinia responses in patients vaccinated in the distant past with vaccine-naïve individuals using serial close-up photographs. The previously vaccinated group had a significantly reduced time course and milder cutaneous reactions. Vaccinated individuals appear to maintain clinically detectable immunity against vaccinia for at least 20 years after smallpox vaccination.
Robotic Assisted Laparoscopic Prostatectomy Performed after Previous Suprapubic Prostatectomy
Tsui, Johnson F.; Feuerstein, Michael; Jazayeri, Seyed Behzad
2016-01-01
Operative management of prostate cancer in a patient who has undergone previous open suprapubic simple prostatectomy poses a unique surgical challenge. Herein, we describe a case of intermediate risk prostate cancer in a man who had undergone simple prostatectomy ten years prior to presentation. The patient was found to have Gleason 7 prostate cancer on MRI fusion biopsy of the prostate for elevated PSA and underwent an uncomplicated robot assisted laparoscopic radical prostatectomy. PMID:27882057
Antenatal diagnosis of Patau syndrome with previous anomalous baby
Keerthi Kocherla; Vasantha Kocherla
2014-01-01
Patau syndrome, first identified as a cytogenetic syndrome in 1960, is the least common and most severe of the viable autosomal trisomies, with a median survival of fewer than 3 days. Patau syndrome is caused by an extra copy of chromosome 13. In this case report, we present antenatal imaging findings and gross foetal specimen correlation of a foetus with Patau syndrome, confirmed by karyotyping, in a third gravida who had a significant previous obstetric history of gastroschisis in monochorionic and...
Fontan completion in a patient with previous liver transplantation.
Haida, Hirofumi; Aeba, Ryo; Hoshino, Ken; Morikawa, Yasuhide
2014-10-01
We present the first case of a successful Fontan completion in a patient with previous liver transplantation. An infant with polysplenia syndrome with a functional single ventricle and biliary atresia had been surgically managed by pulmonary artery banding, Kasai operation and living donor liver transplantation. Subsequently, the patient successfully underwent bidirectional cavopulmonary shunt and total cavopulmonary connection with extracardiac conduit at 3 and 5 years of age, respectively.
Pleuritis due to Brevundimonas diminuta in a previously healthy man.
Lu, Binghuai; Shi, Yanli; Zhu, Fengxia; Xu, Xiaolin
2013-03-01
Brevundimonas diminuta is rarely associated with invasive infections. We report the case of a previously healthy young man with pleural effusion, in which B. diminuta was recovered but incorrectly identified as Kingella kingae when it was freshly isolated. Consequently, the misidentification resulted in initial treatment failure. The correct identification was achieved through further incubation, sequencing of the 16S rRNA gene and MS.
Theoretical behaviorism meets embodied cognition : Two theoretical analyses of behavior
Keijzer, F.A.
2005-01-01
This paper aims to do three things: First, to provide a review of John Staddon's book Adaptive dynamics: The theoretical analysis of behavior. Second, to compare Staddon's behaviorist view with current ideas on embodied cognition. Third, to use this comparison to explicate some outlines for a theore
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
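The flavour of the approach can be sketched as follows: keep the sample eigenvectors, clip the sample eigenvalues into an interval [u, kappa_max*u], and choose u by maximizing the Gaussian log-likelihood. This is a simplified stand-in for the paper's constrained maximum-likelihood estimator (the actual solution and its path algorithm differ in detail); the crude grid search and all numbers below are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Large p, small n": the sample covariance is singular and ill-conditioned.
p, n = 50, 25
X = rng.standard_normal((n, p))        # true covariance = identity
S = X.T @ X / n                        # sample covariance (rank <= n < p)
lam, V = np.linalg.eigh(S)             # sample eigenvalues / eigenvectors

kappa_max = 10.0                       # target bound on the condition number

def clipped_loglik(u):
    """Gaussian log-likelihood (up to constants) when the sample eigenvalues
    are clipped to [u, kappa_max * u]; a simplified stand-in for the paper's
    constrained maximum-likelihood estimator."""
    d = np.clip(lam, u, kappa_max * u)
    return -0.5 * n * np.sum(np.log(d) + lam / d)

# Crude grid search over the lower clipping point u
grid = np.linspace(1e-3, lam.max(), 2000)
u_star = grid[np.argmax([clipped_loglik(u) for u in grid])]
d_star = np.clip(lam, u_star, kappa_max * u_star)
Sigma_hat = (V * d_star) @ V.T         # well-conditioned covariance estimate

print("condition number:", d_star.max() / d_star.min())
```

By construction the estimate is positive definite with condition number at most kappa_max, which is exactly the property the abstract says conventional regularization schemes fail to target directly.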
Subtalar Fusion Rate in Patients With Previous Ipsilateral Ankle Arthrodesis.
Zanolli, Diego H; Nunley, James A; Easley, Mark E
2015-09-01
Isolated subtalar arthrodesis is generally successful, with reported fusion rates of 84% to 100%. However, alteration of subtalar joint mechanics and talar body vasculature after ankle fusion may negatively influence subsequent ipsilateral subtalar joint fusion. Because there is very limited information on the subtalar fusion rate in patients with previous ipsilateral ankle fusion, the purpose of this study was to describe fusion rates in subtalar joint arthrodesis with and without preexisting ankle fusion in a large consecutive series of primary subtalar arthrodesis cases. All primary subtalar fusions performed between January 2000 and December 2010 were reviewed. Thirteen of 151 consecutive cases were in patients with existing ipsilateral ankle fusions. All patients were evaluated for clinical and radiographic evidence of nonunion at follow-up, and fusion rates in the groups with and without previous ipsilateral ankle fusion were compared. Five nonunions occurred in the 13 cases with prior ipsilateral ankle arthrodesis, a 61.5% fusion rate. Twelve nonunions were identified in the 138 cases without prior ankle arthrodesis, a significantly higher fusion rate of 91.3% (P = .007). In our series, the subtalar fusion rate in patients with previous ipsilateral ankle arthrodesis was significantly lower than that for subtalar arthrodesis in the absence of ipsilateral ankle arthrodesis. Level III, retrospective comparative study. © The Author(s) 2015.
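The headline comparison (5 nonunions in 13 cases with prior ipsilateral ankle fusion vs. 12 in 138 without) can be reproduced with a Fisher exact test. The sketch below assumes SciPy and simply re-tabulates the counts reported above; the abstract does not state which exact test the authors used.

```python
from scipy.stats import fisher_exact

# 2x2 table of fusion outcomes; rows = prior ipsilateral ankle fusion (yes/no)
fused_prior, nonunion_prior = 13 - 5, 5        # 8 of 13 fused (61.5%)
fused_none, nonunion_none = 138 - 12, 12       # 126 of 138 fused (91.3%)

table = [[fused_prior, nonunion_prior],
         [fused_none, nonunion_none]]
odds_ratio, p_value = fisher_exact(table)      # two-sided by default
print(f"fusion rates: {fused_prior/13:.1%} vs {fused_none/138:.1%}, "
      f"p = {p_value:.3f}")
```

The resulting p-value is of the same order as the P = .007 reported in the abstract, confirming that the difference in fusion rates is unlikely to be due to chance at this sample size.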
Optimized tuner selection for engine performance estimation
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
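The tuner-selection idea can be illustrated with a toy linear version: with more health parameters than sensors, choose which subset of parameters to estimate so that the theoretical mean-squared error over all parameters is smallest. Everything below (the sensitivity matrix H, the prior covariance P, the noise-free setting, and the exhaustive rather than iterative search) is a hypothetical simplification, not the methodology of the patent itself.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)

# Toy underdetermined problem: p health parameters, m < p sensors. H and P
# are hypothetical stand-ins for the engine model's sensitivities and the
# covariance of the true parameter deviations.
p, m, k = 6, 3, 3                  # estimate k = m tuners out of p parameters
H = rng.standard_normal((m, p))
P = np.diag([1.0, 0.8, 0.6, 0.5, 0.3, 0.2])

def theoretical_mse(idx):
    """Mean-squared error over all p parameters when only the tuners in idx
    are estimated by least squares and the others are held at zero."""
    cols = list(idx)
    G = np.zeros((p, m))
    G[cols, :] = np.linalg.pinv(H[:, cols])   # maps sensors -> tuner estimates
    E = G @ H - np.eye(p)                     # qhat = (G @ H) q, noise-free
    return float(np.trace(E @ P @ E.T))       # E[||qhat - q||^2]

# Exhaustive search (the patented method uses an iterative multi-variable
# search and allows general linear combinations as tuners, not just subsets)
best = min(combinations(range(p), k), key=theoretical_mse)
print("best tuner subset:", best, "MSE:", round(theoretical_mse(best), 4))
```

The key point mirrors the abstract: because the problem is underdetermined, some estimation error is unavoidable, and the tuner choice determines how that error is distributed over the parameters of interest.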
Mechanics lectures on theoretical physics
Sommerfeld, Arnold Johannes Wilhelm
1952-01-01
Mechanics: Lectures on Theoretical Physics, Volume I covers a general course on theoretical physics. The book discusses the mechanics of a particle; the mechanics of systems; the principle of virtual work; and d'Alembert's principle. The text also describes oscillation problems; the kinematics, statics, and dynamics of a rigid body; the theory of relative motion; and the integral variational principles of mechanics. Lagrange's equations for generalized coordinates and the theory of Hamilton are also considered. Physicists, mathematicians, and students taking physics courses will find the book
Theoretical Framework for Robustness Evaluation
Sørensen, John Dalsgaard
2011-01-01
This paper presents a theoretical framework for evaluation of robustness of structural systems, incl. bridges and buildings. Typically modern structural design codes require that ‘the consequence of damages to structures should not be disproportional to the causes of the damages’. However, although...... the importance of robustness for structural design is widely recognized the code requirements are not specified in detail, which makes the practical use difficult. This paper describes a theoretical and risk based framework to form the basis for quantification of robustness and for pre-normative guidelines...
Group theoretical approach to entanglement
Korbicz, J K
2006-01-01
We examine the potential relevance of methods of harmonic analysis for the study of quantum entanglement. By changing the mathematical object representing quantum states, we reformulate the separability problem in group-theoretical terms. We also translate the positivity of partial transpose (PPT) criterion and one of the necessary-and-sufficient criteria for pure states into the group-theoretical language. The formal relation of our formalism to local hidden variable models is briefly examined. We also remark on the connection between entanglement and a certain non-commutativity.
Social Impact, a Theoretical Model
Jenny Onyx
2014-01-01
This paper constructs a theoretical model of social impact as it applies to civil society organisations. It does so by drawing on the recent literature on the topic as well as recently completed empirical studies. First, the relationship between impact and evaluation is examined. This is followed by an exploration of the capitals, notably social, human, and cultural capital, and their interrelationships, as a theoretical base for the explication of social impact. A formal model of social impact is then identified, together with a set of basic principles that may be said to define social impact. Finally, the implications of the model are discussed for social policy and organisational management.
Theoretical mechanics for sixth forms
Plumpton, C
1971-01-01
Theoretical Mechanics for Sixth Forms, Second Edition is a 14-chapter book that begins by elucidating the nature of theoretical mechanics. The book then describes the statics of a particle in illustration of the techniques of handling vector quantities. Subsequent chapters focus on the principle of moments, parallel forces and centers of gravity; and the application of Newton's second law to the dynamics of a particle and the ideas of work and energy, impulse and momentum, and power. The concept of friction is also explained. This volume concludes with chapters concerning motion in a circle an
Theoretical chemistry advances and perspectives
Eyring, Henry
1977-01-01
Theoretical Chemistry: Advances and Perspectives, Volume 2 covers all aspects of theoretical chemistry. This book reviews the techniques that have been proven successful in the study of interatomic potentials in order to describe the interactions between complex molecules. The ground state properties of the interacting electron gas when a magnetic field is present are also elaborated, followed by a discussion on the Gell-Mann-Brueckner-Macke theory of the correlation energy that has applications in atomic and molecular systems. This volume considers the instability of the Hartree-Fock ground state
Dynamics in Higher Education Politics: A Theoretical Model
Kauko, Jaakko
2013-01-01
This article presents a model for analysing dynamics in higher education politics (DHEP). Theoretically the model draws on the conceptual history of political contingency, agenda-setting theories and previous research on higher education dynamics. According to the model, socio-historical complexity can best be analysed along two dimensions: the…
Dramaturgical and Music-Theoretical Approaches to Improvisation Pedagogy
Huovinen, Erkki; Tenkanen, Atte; Kuusinen, Vesa-Pekka
2011-01-01
The aim of this article is to assess the relative merits of two approaches to teaching musical improvisation: a music-theoretical approach, focusing on chords and scales, and a "dramaturgical" one, emphasizing questions of balance, variation and tension. Adult students of music pedagogy, with limited previous experience in improvisation,…
Previous prelabor or intrapartum cesarean delivery and risk of placenta previa.
Downes, Katheryne L; Hinkle, Stefanie N; Sjaarda, Lindsey A; Albert, Paul S; Grantz, Katherine L
2015-05-01
The purpose of this study was to examine the association between previous cesarean delivery and subsequent placenta previa while distinguishing cesarean delivery before the onset of labor from intrapartum cesarean delivery. We conducted a retrospective cohort study of electronic medical records from 20 Utah hospitals (2002-2010) with restriction to the first 2 singleton deliveries of nulliparous women at study entry (n=26,987). First pregnancy delivery mode was classified as (1) vaginal (reference), (2) cesarean delivery before labor onset (prelabor), or (3) cesarean delivery after labor onset (intrapartum). Risk of second delivery previa was estimated by previous delivery mode with the use of logistic regression and was adjusted for maternal age, insurance, smoking, comorbidities, previous pregnancy loss, and history of previa. Most first deliveries were vaginal (82%; n=22,142), followed by intrapartum cesarean delivery (14.6%; n=3931), or prelabor cesarean delivery (3.4%; n=914). Incidence of second delivery previa was 0.29% (n=78) and differed by previous delivery mode: vaginal, 0.24%; prelabor cesarean delivery, 0.98%; intrapartum cesarean delivery, 0.38% (P<.001). Compared with vaginal delivery, previous prelabor cesarean delivery was associated with an increased risk of second-delivery previa (adjusted odds ratio, 2.62; 95% confidence interval, 1.24-5.56). There was no significant association between previous intrapartum cesarean delivery and previa (adjusted odds ratio, 1.22; 95% confidence interval, 0.68-2.19). Previous prelabor cesarean delivery was associated with a >2-fold significantly increased risk of previa in the second delivery, although the approximately 20% increased risk of previa that was associated with previous intrapartum cesarean delivery was not significant. Although rare, the increased risk of placenta previa after previous prelabor cesarean delivery may be important when considering nonmedically indicated prelabor cesarean delivery. Published by Elsevier Inc.
Clinical Validation of Adjusted Corneal Power in Patients with Previous Myopic Lasik Surgery
Vicente J. Camps
2015-01-01
Purpose. To validate clinically a new method for estimating the corneal power (Pc) using a variable keratometric index (nkadj) in eyes with previous laser refractive surgery. Setting. University of Alicante and Medimar International Hospital (Oftalmar), Alicante, Spain. Design. Retrospective case series. Methods. This retrospective study comprised 62 eyes of 62 patients that had undergone myopic LASIK surgery. An algorithm for the calculation of nkadj was used for the estimation of the adjusted keratometric corneal power (Pkadj). This value was compared with the classical keratometric corneal power (Pk), the True Net Power (TNP), and the Gaussian corneal power (PcGauss). Likewise, Pkadj was compared with other previously described methods. Results. Differences between PcGauss and Pc values obtained with all methods evaluated were statistically significant (p<0.01). Differences between Pkadj and PcGauss were at the limit of clinical significance (p<0.01; LoA [−0.33, 0.60] D). Differences between Pkadj and TNP were not statistically or clinically significant (p=0.319; LoA [−0.50, 0.44] D). Differences between Pkadj and previously described methods were statistically significant (p<0.01), except with PcHaigisL (p=0.09; LoA [−0.37, 0.29] D). Conclusion. The use of the adjusted keratometric index (nkadj) is a valid method to estimate the central corneal power in corneas with previous myopic laser refractive surgery, providing results comparable to PcHaigisL.
Theoretical study of multiatomic vacancies in single-layer hexagonal boron nitride
Urasaki, Syu; Kageshima, Hiroyuki
2017-02-01
The physical properties of multiatomic vacancies are investigated by first-principles total-energy calculations. The formation energies of various vacancies as functions of chemical potential and charge states are calculated. The relationship between optimized atomic structures and charge states is analyzed. On the basis of the results, it is confirmed that the variations of formation energies and atomic structures are closely related to the changes in electronic states. In addition, the stabilities of generally large multiatomic vacancies are estimated on the basis of edges and corner energies. It is found that larger vacancies are not stable and have lower densities than smaller ones. The results are also compared with previous theoretical and experimental results.
HEART TRANSPLANTATION IN PATIENTS WITH PREVIOUS OPEN HEART SURGERY
R. Sh. Saitgareev
2016-01-01
Heart transplantation (HTx) to date remains the most effective and radical method of treatment of patients with end-stage heart failure. The deficit of donor hearts is forcing increasing resort to different long-term mechanical circulatory support systems, including as a «bridge» to the follow-up HTx. According to the ISHLT Registry, the number of recipients who had previously undergone cardiopulmonary bypass surgery increased from 40% in the period from 2004 to 2008 to 49.6% for the period from 2009 to 2015. HTx performed in such patients, on the one hand, involves considerable technical difficulties and high risks; on the other hand, there is often no alternative medical intervention to HTx, and if not dictated by absolute contraindications the denial of the surgery is equivalent to 100% mortality. This review summarizes the results of a number of published studies aimed at understanding the immediate and late results of HTx in patients who previously underwent open heart surgery. The effect of resternotomy during HTx, the specific features associated with its implementation in recipients previously operated on open heart, and its effects on the immediate and long-term survival were considered in this review. Results of studies analyzing the risk factors for perioperative complications in such recipients were also demonstrated. Separately, HTx risks after implantation of prolonged mechanical circulatory support systems were examined. The literature does not allow clearly defining the impact of earlier open heart surgery on the course of the perioperative period and on the prognosis of survival in recipients who underwent HTx. On the other hand, subject to a regular course of HTx and the perioperative period, the risks in this clinical situation are justified, as the long-term prognosis of recipients with previously conducted open heart surgery is comparable to that of patients who underwent primary HTx. Studies…
Clinical Analysis of Placenta Previa Complicated with Previous Caesarean Section
Liang-kun Ma; Na Han; Jian-qiu Yang; Xu-ming Bian; Jun-tao Liu
2012-01-01
Objective To investigate the clinical features and treatment of placenta previa complicated with previous caesarean section. Methods The clinical data of 29 patients with placenta previa complicated with a previous caesarean section (RCS group) admitted in Peking Union Medical College Hospital during a period from 2003 to 2011 were retrospectively reviewed and compared with those of 243 patients with placenta previa without a previous caesarean section (FCS group) during the same period. Results There was no difference in the mean age (28.9±3.6 vs. 28.1±4.5 years) and the average gravidity (2.35±1.48 vs. 2.21±1.53) between the RCS group and the FCS group (all P>0.05). The RCS group had more preterm births (24.1% vs. 13.2%), complete placenta previa (55.2% vs. 4.9%), placenta accreta (34.5% vs. 2.5%), more blood loss during caesarean section (1412±602 vs. 648±265 mL), blood transfusion (51.7% vs. 4.9%), disseminated intravascular coagulation (13.8% vs. 2.1%), and obstetric hysterectomy (13.8% vs. 0.8%) than the FCS group (all P<0.05). The preterm infant rate (30.0% vs. 13.0%), neonatal asphyxia rate (10.0% vs. 4.9%), and perinatal mortality rate (6.7% vs. 0.4%) of the RCS group were higher than those of the FCS group (all P<0.05). Conclusions More patients had complete placenta previa and placenta accreta, postpartum hemorrhage, transfusion, uterine packing, obstetric hysterectomy, and perinatal morbidity in the placenta previa patients with previous caesarean section. The patient should be informed of the risk, and unnecessary first cesarean sections should be avoided.
Antenatal diagnosis of Patau syndrome with previous anomalous baby
Keerthi Kocherla
2014-06-01
Patau syndrome, the least common and most severe of the viable autosomal trisomies, with a median survival of fewer than 3 days, was first identified as a cytogenetic syndrome in 1960. Patau syndrome is caused by an extra copy of chromosome 13. In this case report, we present antenatal imaging findings and gross foetal specimen correlation of a foetus with Patau syndrome confirmed by karyotyping in a third gravida who had a significant previous obstetric history of gastroschisis in monochorionic and monoamniotic twins who died at 14 weeks of gestation. [Int J Res Med Sci 2014; 2(3): 1172-1175]
Moyamoya disease in a child with previous acute necrotizing encephalopathy
Kim, Taik-Kun; Cha, Sang Hoon; Chung, Kyoo Byung; Kim, Jung Hyuck; Kim, Baek Hyun; Chung, Hwan Hoon [Department of Diagnostic Radiology, Korea University College of Medicine, Ansan Hospital, 516 Kojan-Dong, Ansan City, Kyungki-Do 425-020 (Korea); Eun, Baik-Lin [Department of Pediatrics, Korea University College of Medicine, Seoul (Korea)
2003-09-01
A previously healthy 24-day-old boy presented with a 2-day history of fever and had a convulsion on the day of admission. MRI showed abnormal signal in the thalami, caudate nuclei and central white matter. Acute necrotising encephalopathy was diagnosed, other causes having been excluded after biochemical and haematological analysis of blood, urine and CSF. He recovered, but with spastic quadriparesis. At the age of 28 months, he suffered sudden deterioration of consciousness and motor weakness of his right limbs. MRI was consistent with an acute cerebrovascular accident. Angiography showed bilateral middle cerebral artery stenosis or frank occlusion with numerous lenticulostriate collateral vessels consistent with moyamoya disease. (orig.)
Theoretical Calculation of MMF's Bandwidth
LI Xiao-fu; JIANG De-sheng; YU Hai-hu
2004-01-01
The difference between over-filled launch bandwidth (OFL BW) and restricted mode launch bandwidth (RML BW) is described. A theoretical model is established to calculate the OFL BW of graded-index multimode fiber (GI-MMF), and the result is useful in guiding modification of the manufacturing method.
Theoretical foundations for collaboration engineering
Kolfschoten, G.L.
2007-01-01
Collaboration is often presented as the solution to numerous problems in business and society. However, collaboration is challenging, and collaboration support is not an off-the-shelf-product. This research offers theoretical foundations for Collaboration Engineering. Collaboration Engineering is an
Lightning Talks 2015: Theoretical Division
Shlachter, Jack S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-11-25
This document is a compilation of slides from a number of student presentations given to LANL Theoretical Division members. The subjects cover the range of activities of the Division, including plasma physics, environmental issues, materials research, bacterial resistance to antibiotics, and computational methods.
Theoretical Advanced Study Institute: 2014
DeGrand, Thomas [Univ. of Colorado, Boulder, CO (United States)
2016-08-17
The Theoretical Advanced Study Institute was held at the University of Colorado, Boulder, during June 2-27, 2014. The topic was "Journeys through the Precision Frontier: Amplitudes for Colliders." The organizers were Professors Lance Dixon (SLAC) and Frank Petriello (Northwestern and Argonne). There were fifty-one students. Nineteen lecturers gave sixty 75-minute lectures. A Proceedings was published.
Theoretical Approaches to Political Communication.
Chesebro, James W.
Political communication appears to be emerging as a theoretical and methodological academic area of research within both speech-communication and political science. Five complementary approaches to political science (Machiavellian, iconic, ritualistic, confirmational, and dramatistic) may be viewed as a series of variations which emphasize the…
Data, Methods, and Theoretical Implications
Hannagan, Rebecca J.; Schneider, Monica C.; Greenlee, Jill S.
2012-01-01
Within the subfields of political psychology and the study of gender, the introduction of new data collection efforts, methodologies, and theoretical approaches are transforming our understandings of these two fields and the places at which they intersect. In this article we present an overview of the research that was presented at a National…
Mapping Neural Network Derived from the Parzen Window Estimator
Schiøler, Henrik; Hartmann, U.
1992-01-01
The article presents a general theoretical basis for the construction of mapping neural networks. The theory is based on the Parzen Window estimator for…
Electron microscopy and theoretical modeling of cochleates.
Nagarsekar, Kalpa; Ashtikar, Mukul; Thamm, Jana; Steiniger, Frank; Schacher, Felix; Fahr, Alfred; May, Sylvio
2014-11-11
Cochleates are self-assembled cylindrical condensates that consist of large rolled-up lipid bilayer sheets and represent a novel platform for oral and systemic delivery of therapeutically active medicinal agents. With few preceding investigations, the physical basis of cochleate formation has remained largely unexplored. We address the structure and stability of cochleates in a combined experimental/theoretical approach. Employing different electron microscopy methods, we provide evidence for cochleates consisting of phosphatidylserine and calcium to be hollow tubelike structures with a well-defined constant lamellar repeat distance and statistically varying inner and outer radii. To rationalize the relation between inner and outer radii, we propose a theoretical model. Based on the minimization of a phenomenological free energy expression containing a bending, adhesion, and frustration contribution, we predict the optimal tube dimensions of a cochleate and estimate ratios of material constants for cochleates consisting of phosphatidylserines with varied hydrocarbon chain structures. Knowing and understanding these ratios will ultimately benefit the successful formulation of cochleates for drug delivery applications.
Intravitreal ranibizumab for diabetic macular oedema in previously vitrectomized eyes
Laugesen, Caroline Schmidt; Ostri, Christoffer; Brynskov, Troels
2017-01-01
PURPOSE: There is little information about the efficacy of intravitreal vascular endothelial growth factor (VEGF) inhibition in vitrectomized eyes. This study aimed to evaluate the efficacy of anti-VEGF (ranibizumab) on diabetic macular oedema in previously vitrectomized eyes. METHODS: A nationwide retrospective review of medical records from 2010 to 2013. RESULTS: We identified 33 previously vitrectomized eyes in 28 patients treated with ranibizumab injections for diabetic macular oedema. Median follow-up was 323 days (interquartile range 72-1404 days). Baseline mean visual acuity was 0.57 logMAR (95% CI 0.13-1.01) before injections. After an average of 4.7 injections (range 1-15), mean visual acuity remained stable at 0.54 logMAR (95% CI 0.13-0.95) with a mean improvement of 0.03 (p = 0.45, 95% CI -0.12 to 0.06). In 12 eyes (36%), visual acuity improved 0.1 logMAR or more; in 12 eyes (36%), vision…
Examining Neurocognitive Function in Previously Concussed Interscholastic Female Soccer Players.
Forbes, Cameron R; Glutting, Joseph J; Kaminski, Thomas W
2016-01-01
Awareness of sport-related concussions in soccer has gained recent attention in the medical community. Interestingly, purposeful heading-a unique yet strategic and inherent part of soccer-involves repeated subconcussive blows to the head. We divided 210 female interscholastic soccer players into control (CON [never concussed]) and experimental (EXP [previously concussed]) groups. We assessed neurocognitive performance using the Automated Neuropsychological Assessment Metrics computer program before and after the players' competitive season. Headers were recorded at all sanctioned matches. Data were analyzed using a series of one-way analyses of covariance and t tests. Both groups essentially played in the same number of games (EXP = 16.1 vs. CON = 16.1) and had an equal number of total headers (EXP = 24.9 vs. CON = 24.3). Additionally, headers per game were surprisingly low in both groups (1.4 in EXP vs. 1.3 in CON). Unexpectedly, there were no significant differences between the EXP and CON groups across all dependent variables measured (p > .05). This study suggests that although previously concussed players involve themselves in purposeful heading (i.e., subconcussive insults) throughout a competitive season, there appear to be no negative consequences on neuropsychological test performance or concussion-related symptoms. Additional research is needed to determine what may result during the course of a playing career.
Relationship of deer and moose populations to previous winters' snow
Mech, L.D.; McRoberts, R.E.; Peterson, R.O.; Page, R.E.
1987-01-01
(1) Linear regression was used to relate snow accumulation during single and consecutive winters with white-tailed deer (Odocoileus virginianus) fawn:doe ratios, moose (Alces alces) twinning rates and calf:cow ratios, and annual changes in deer and moose populations. Significant relationships were found between snow accumulation during individual winters and these dependent variables during the following year. However, the strongest relationships were between the dependent variables and the sums of the snow accumulations over the previous three winters. The percentage of the variability explained was 36-51%. (2) Significant relationships were also found between winter vulnerability of moose calves and the sum of the snow accumulations in the current, and up to seven previous, winters, with about 49% of the variability explained. (3) No relationship was found between wolf numbers and the above dependent variables. (4) These relationships imply that winter influences on maternal nutrition can accumulate for several years and that this cumulative effect strongly determines fecundity and/or calf and fawn survivability. Although wolf (Canis lupus L.) predation is the main direct mortality agent on fawns and calves, wolf density itself appears to be secondary to winter weather in influencing the deer and moose populations.
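The regression in point (1) above can be sketched with ordinary least squares. The numbers below are entirely hypothetical illustrations (the study's data are not reproduced here): a fawn:doe ratio regressed on the sum of the previous three winters' snow accumulation, with the proportion of variability explained computed as R².

```python
import numpy as np

# hypothetical data: summed snow accumulation over the previous three
# winters (cm) vs. fawn:doe ratio observed the following year
snow3 = np.array([210.0, 260.0, 180.0, 320.0, 295.0, 240.0, 350.0, 200.0])
fawn_doe = np.array([0.62, 0.51, 0.70, 0.38, 0.44, 0.55, 0.33, 0.66])

# ordinary least squares: fawn_doe = b0 + b1 * snow3
X = np.column_stack([np.ones_like(snow3), snow3])
b, *_ = np.linalg.lstsq(X, fawn_doe, rcond=None)

# proportion of variability explained (R^2)
resid = fawn_doe - X @ b
r2 = 1.0 - (resid @ resid) / (((fawn_doe - fawn_doe.mean()) ** 2).sum())
print(b[1], r2)  # negative slope: deeper cumulative snow, lower fawn:doe ratio
```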
Identification of previously unrecognized FAP in children with Gardner fibroma.
Vieira, Joana; Pinto, Carla; Afonso, Mariana; do Bom Sucesso, Maria; Lopes, Paula; Pinheiro, Manuela; Veiga, Isabel; Henrique, Rui; Teixeira, Manuel R
2015-05-01
Fibromatous soft tissue lesions, namely desmoid-type fibromatosis and Gardner fibroma, may occur sporadically or as a result of inherited predisposition (as part of familial adenomatous polyposis, FAP). Whereas desmoid-type fibromatoses often present β-catenin overexpression (by activating CTNNB1 somatic variants or APC biallelic inactivation), the pathogenetic mechanisms in Gardner fibroma are unknown. We characterized in detail Gardner fibromas diagnosed in two infants to evaluate their role as sentinel lesions of previously unrecognized FAP. In the first infant we found a 5q deletion including APC in the tumor and the novel APC variant c.4687dup in constitutional DNA. In the second infant we found the c.5826_5829del and c.1678A>T APC variants in constitutional and tumor DNA, respectively. None of the constitutional APC variants occurred de novo and both tumors showed nuclear staining for β-catenin and no CTNNB1 variants. We present the first comprehensive characterization of the pathogenetic mechanisms of Gardner fibroma, which may be a sentinel lesion of previously unrecognized FAP families.
Melanoma diagnosed in lesions previously treated by laser therapy.
Delker, Sarah; Livingstone, Elisabeth; Schimming, Tobias; Schadendorf, Dirk; Griewank, Klaus G
2017-01-01
Laser therapy has become a routine procedure in dermatological practice and is frequently also used for pigmented lesions. Few reports exist of melanomas diagnosed in lesions previously treated by laser therapy. Between 2007 and 2014, we identified 11 patients who presented to our department with a melanoma diagnosed in a region previously treated by laser therapy. The course of events until the diagnosis of melanoma was assessed as well as patient outcome including treatment for disease progression. No histological assessment had been performed prior to laser therapy in nine of 11 (82%) cases. Benign melanocytic lesions had been diagnosed by biopsy prior to laser therapy in the other two cases. Time from laser therapy to diagnosis of melanoma ranged from less than 1 to 10 years. Stage of disease at diagnosis varied from stage IA to IIIC. Four patients progressed to stage IV disease, of whom at least one died of melanoma. We conclude that laser treatment of pigmented lesions can complicate the diagnosis of melanoma and lead to diagnosis delay with potentially fatal consequences. Considering this risk, we believe laser therapy for pigmented lesions should either be avoided entirely or at a minimum performed only after prior histological assessment. © 2016 Japanese Dermatological Association.
The Problems of Multiple Feedback Estimation.
Bulcock, Jeffrey W.
The use of two-stage least squares (2SLS) for the estimation of feedback linkages is inappropriate for nonorthogonal data sets because 2SLS is extremely sensitive to multicollinearity. It is argued that an estimating criterion other than least squares is needed. Theoretically the variance normalization criterion has…
Quantum turbulence: Theoretical and numerical problems
Nemirovskii, Sergey K.
2013-03-01
The term “quantum turbulence” (QT) unifies the wide class of phenomena where the chaotic set of one dimensional quantized vortex filaments (vortex tangles) appear in quantum fluids and greatly influence various physical features. Quantum turbulence displays itself differently depending on the physical situation, and ranges from quasi-classical turbulence in flowing fluids to a near equilibrium set of loops in phase transition. The statistical configurations of the vortex tangles are certainly different in, say, the cases of counterflowing helium and a rotating bulk, but in all the physical situations very similar theoretical and numerical problems arise. Furthermore, quite similar situations appear in other fields of physics, where a chaotic set of one dimensional topological defects, such as cosmic strings, or linear defects in solids, or lines of darkness in nonlinear light fields, appear in the system. There is an interpenetration of ideas and methods between these scientific topics which are far apart in other respects. The main purpose of this review is to bring together some of the most commonly discussed results on quantum turbulence, focusing on analytic and numerical studies. We set out a series of results on the general theory of quantum turbulence which aim to describe the properties of the chaotic vortex configuration, starting from vortex dynamics. In addition we insert a series of particular questions which are important both for the whole theory and for the various applications. We complete the article with a discussion of a topic that is undoubtedly mainstream in this field: the quasi-classical properties of quantum turbulence. We discuss this problem from the point of view of the theoretical results stated in the previous sections. We also include a section devoted to the experimental and numerical suggestions based on the discussed theoretical models.
An information theoretic approach to pedigree reconstruction.
Almudevar, Anthony
2016-02-01
Network structure is a dominant feature of many biological systems, both at the cellular level and within natural populations. Advances in genotype and gene expression screening made over the last few decades have permitted the reconstruction of these networks. However, resolution to a single model estimate will generally not be possible, leaving open the question of the appropriate method of formal statistical inference. The nonstandard structure of the problem precludes most traditional statistical methodologies. Alternatively, a Bayesian approach provides a natural methodology for formal inference. Construction of a posterior density on the space of network structures allows formal inference regarding features of network structure using specific marginal posterior distributions. An information theoretic approach to this problem will be described, based on the Minimum Description Length principle. This leads to a Bayesian inference model based on the information content of data rather than on more commonly used probabilistic models. The approach is applied to the problem of pedigree reconstruction based on genotypic data. Using this application, it is shown how the MDL approach is able to provide a truly objective control for model complexity. A two-cohort model is used for a simulation study. The MDL approach is compared to COLONY-2, a well known pedigree reconstruction application. The study highlights the problem of genotyping error modeling. COLONY-2 requires prior error rate estimates, and its accuracy proves to be highly sensitive to these estimates. In contrast, the MDL approach does not require prior error rate estimates, and is able to accurately adjust for genotyping error across the range of models considered. Copyright © 2015 Elsevier Inc. All rights reserved.
Measles Outbreak Among Previously Immunized Healthcare Workers, the Netherlands, 2014.
Hahné, Susan J M; Nic Lochlainn, Laura M; van Burgel, Nathalie D; Kerkhof, Jeroen; Sane, Jussi; Yap, Kioe Bing; van Binnendijk, Rob S
2016-12-15
We investigated a measles outbreak among healthcare workers (HCWs) by assessing laboratory characteristics, measles vaccine effectiveness, and serological correlates for protection. Cases were laboratory-confirmed measles in HCWs from hospital X during weeks 12-20 of 2014. We assessed cases' severity and infectiousness by using a questionnaire. We tested cases' sera for measles immunoglobulin M, immunoglobulin G, avidity, and plaque reduction neutralization (PRN). Throat swabs and oral fluid samples were tested by quantitative polymerase chain reaction. We calculated attack rates (ARs) by vaccination status and estimated measles vaccine effectiveness as 1 - [AR_vaccinated/AR_unvaccinated]. Eight HCWs were notified as measles cases; 6 were vaccinated with measles vaccine twice, 1 was vaccinated once, and 1 was unvaccinated. All 6 twice-vaccinated cases had high avidity and PRN titers. None reported severe measles or onward transmission. Two of 4 investigated twice-vaccinated cases had pre-illness PRN titers of >120 mIU/mL. Among 106 potentially exposed HCWs, the estimated effectiveness of 2 doses of measles vaccine was 52% (95% confidence interval [CI], -207%-93%). Measles occurred in 6 twice-vaccinated HCWs, despite 2 having adequate pre-exposure neutralizing antibodies. None of the twice-vaccinated cases had severe measles, and none had onward transmission, consistent with laboratory findings suggesting a secondary immune response. Improving 2-dose MMR coverage among HCWs would have likely reduced the size of this outbreak. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.
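The vaccine-effectiveness formula quoted in the abstract, VE = 1 - [AR_vaccinated/AR_unvaccinated], is straightforward to compute. The sketch below uses hypothetical case and exposure counts for illustration, not the outbreak's data (the abstract does not report the per-group denominators behind its 52% estimate).

```python
def attack_rate(cases, exposed):
    """Attack rate = cases / persons exposed."""
    return cases / exposed

def vaccine_effectiveness(cases_v, n_v, cases_u, n_u):
    """VE = 1 - AR_vaccinated / AR_unvaccinated."""
    return 1.0 - attack_rate(cases_v, n_v) / attack_rate(cases_u, n_u)

# hypothetical counts: 3 cases among 100 vaccinated, 5 among 80 unvaccinated
ve = vaccine_effectiveness(cases_v=3, n_v=100, cases_u=5, n_u=80)
print(round(ve, 2))  # 0.52
```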
Herpes zoster recurrences more frequent than previously reported.
Yawn, Barbara P; Wollan, Peter C; Kurland, Marge J; St Sauver, Jennifer L; Saddier, Patricia
2011-02-01
To present population-based estimates of herpes zoster (HZ) recurrence rates among adults. To identify recurrent cases of HZ, we reviewed the medical records (through December 31, 2007) of all Olmsted County, Minnesota, residents aged 22 years or older who had an incident case of HZ between January 1, 1996, and December 31, 2001. Kaplan-Meier curves and Cox regression models were used to describe recurrences by age, immune status, and presence of prolonged pain at the time of the incident HZ episode. Of the 1669 persons with a medically documented episode of HZ, 95 had 105 recurrences (8 persons with >1 recurrence) by December 31, 2007, an average follow-up of 7.3 years. The Kaplan-Meier estimate of the recurrence rate at 8 years was 6.2%. With a maximum follow-up of 12 years, the time between HZ episodes in the same person varied from 96 days to 10 years. Recurrences were significantly more likely in persons with zoster-associated pain of 30 days or longer at the initial episode (hazard ratio, 2.80; 95% confidence interval, 1.84-4.27; P<.001) and in immunocompromised individuals (hazard ratio, 2.35; 95% confidence interval, 1.35-4.08; P=.006). Women and anyone aged 50 years or older at the index episode also had a greater likelihood of recurrence. Rates of HZ recurrence appear to be comparable to rates of first HZ occurrence in immunocompetent individuals, suggesting that recurrence is sufficiently common to warrant investigation of vaccine prevention in this group.
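A minimal product-limit (Kaplan-Meier) estimator of the kind used for the 8-year recurrence rate can be sketched as follows; the toy recurrence and censoring times are invented for illustration.

```python
def kaplan_meier(times, events):
    """Product-limit estimate of survival S(t) at each distinct event time.
    times: follow-up times; events: 1 = recurrence observed, 0 = censored."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    surv, curve = 1.0, []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        d = n = 0                      # events and subjects leaving at time t
        while i < len(pairs) and pairs[i][0] == t:
            n += 1
            d += pairs[i][1]
            i += 1
        if d:
            surv *= 1.0 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= n
    return curve

# toy data: follow-up in years, 1 = recurrence at that time, 0 = censored
km = kaplan_meier([1, 2, 2, 3, 4, 5], [1, 0, 1, 0, 1, 0])
```

The recurrence rate at time t is then 1 - S(t); censored subjects contribute to the risk set until they drop out, which is what distinguishes this from a naive cumulative fraction.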
Theoretical Models of the Galactic Bulge
Shen, Juntai; Li, Zhao-Yu
Near infrared images from the COBE satellite presented the first clear evidence that our Milky Way galaxy contains a boxy shaped bulge. Recent years have witnessed a gradual paradigm shift in the formation and evolution of the Galactic bulge. Bulges were commonly believed to form in the dynamical violence of galaxy mergers. However, it has become increasingly clear that the main body of the Milky Way bulge is not a classical bulge made by previous major mergers; instead, it appears to be a bar seen somewhat end-on. The Milky Way bar can form naturally from a precursor disc and thicken vertically by the internal firehose/buckling instability, giving rise to the boxy appearance. This picture is supported by many lines of evidence, including the asymmetric parallelogram shape, the strong cylindrical rotation (i.e., nearly constant rotation regardless of the height above the disc plane), the existence of an intriguing X-shaped structure in the bulge, and perhaps the metallicity gradients. We review the major theoretical models and techniques to understand the Milky Way bulge. Despite recent theoretical progress, a complete bulge formation model that explains the full kinematics and metallicity distribution is still lacking. Upcoming large surveys are expected to shed new light on the formation history of the Galactic bulge.
Theoretical and practical significance of formal reasoning
Linn, Marcia C.
Piaget's theory has profoundly influenced science education research. Following Piaget, researchers have focused on content-free strategies, developmentally based mechanisms, and structural models of each stage of reasoning. In practice, factors besides those considered in Piaget's theory influence whether or not a theoretically available strategy is used. Piaget's focus has minimized the research attention placed on what could be called practical factors in reasoning. Practical factors are factors that influence application of a theoretically available strategy, for example, previous experience with the task content, familiarity with task instructions, or personality style of the student. Piagetian theory has minimized the importance of practical factors and discouraged investigation of (1) the role of factual knowledge in reasoning, (2) the diagnosis of specific, task-based errors in reasoning, (3) the influence of individual aptitudes on reasoning (e.g., field dependence-independence), and (4) the effect of educational interventions designed to change reasoning. This article calls for new emphasis on practical factors in reasoning and suggests why research on practical factors in reasoning will enhance our understanding of how scientific reasoning is acquired and of how science education programs can foster it.
Theoretical study of the NO beta system
Langhoff, Stephen R.; Partridge, Harry; Bauschlicher, Charles W., Jr.; Komornicki, Andrew
1991-01-01
A theoretical determination of the transition moment functions (TMFs) for the beta system of NO is presented. High levels of correlation treatment are required to capture the changing degree of Rydberg character in the B 2Pi state with decreasing r values. The state-averaged complete-active-space self-consistent-field multireference configuration-interaction method is used for the determination. Previous lifetime measurements made with laser-induced fluorescence, varying between 2 and 0.85 microseconds, are discussed in terms of the calculated lifetimes for v' values 0-6, which vary from 2.12 to 1.17 microseconds. When larger r values are used for the transition moment function, the calculated lifetimes correlate with experimental lifetimes. The Einstein coefficients agree with experimental results, although limitations in the calibration of the spectrometer can account for systematic differences. The correlation with earlier experimental results suggests that radiative lifetimes are in the range of 1-2 microseconds.
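The radiative lifetime of an upper vibrational level follows from the Einstein A coefficients as tau(v') = 1 / sum over v'' of A(v' -> v''). The coefficients below are made-up values chosen only to land in the microsecond range discussed above, not the paper's computed ones.

```python
def radiative_lifetime(a_coeffs_per_s):
    # tau(v') = 1 / sum_v'' A(v' -> v''), with A coefficients in s^-1
    return 1.0 / sum(a_coeffs_per_s)

# hypothetical A(v'=0 -> v'') values in s^-1 (illustrative only)
a_v0 = [2.0e5, 1.5e5, 1.2e5]
tau_us = radiative_lifetime(a_v0) * 1e6   # convert seconds to microseconds
```

This is why a TMF change that scales the A coefficients up shortens every calculated lifetime proportionally.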
Theoretical physics 6 quantum mechanics : basics
Nolting, Wolfgang
2017-01-01
This textbook offers a clear and comprehensive introduction to the basics of quantum mechanics, one of the core components of undergraduate physics courses. It follows on naturally from the previous volumes in this series, thus developing the physical understanding further on to quantized states. The first part of the book introduces wave equations while exploring the Schrödinger equation and the hydrogen atom. More complex themes are covered in the second part of the book, which describes the Dirac formulism of quantum mechanics. Ideally suited to undergraduate students with some grounding in classical mechanics and electrodynamics, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets. About the Theoretical Physics series Translated from the renowned and highly successful German editions, the eight volumes of this...
Property Testing via Set-Theoretic Operations
Chen, Victor; Xie, Ning
2010-01-01
Given two testable properties $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$, under what conditions are the union, intersection or set-difference of these two properties also testable? We initiate a systematic study of these basic set-theoretic operations in the context of property testing. As an application, we give a conceptually different proof that linearity is testable, albeit with much worse query complexity. Furthermore, for the problem of testing disjunction of linear functions, which was previously known to be one-sided testable with a super-polynomial query complexity, we give an improved analysis and show it has query complexity $O(1/\epsilon^2)$, where $\epsilon$ is the distance parameter.
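Linearity testing of Boolean functions is classically done with the Blum-Luby-Rubinfeld (BLR) test, which the abstract's result reproves by different means. A minimal sketch follows; the trial count and the example functions are arbitrary choices for illustration, not from the paper.

```python
import random

def blr_linearity_test(f, n_bits, trials=200, seed=0):
    """BLR test: accept iff f(x) XOR f(y) == f(x XOR y) on random pairs.
    A linear (parity) function always passes; a function far from linear
    is rejected with probability growing in its distance from linearity."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.getrandbits(n_bits)
        y = rng.getrandbits(n_bits)
        if f(x) ^ f(y) != f(x ^ y):
            return False
    return True

def parity(x):
    return bin(x).count("1") % 2          # linear over GF(2)

def majority(x):
    return int(bin(x).count("1") >= 5)    # far from linear on 8 bits

linear_ok = blr_linearity_test(parity, 8)
nonlinear_rejected = not blr_linearity_test(majority, 8)
```

Each trial costs three queries to f, so the query complexity is driven entirely by how many trials are needed for the desired distance parameter.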
Surgical management of intracranial aneurysms previously treated with endovascular therapy
Kumar Rajiv
2010-01-01
Endovascular treatment of cerebral aneurysms with coils is being increasingly used for definitive treatment. An increasing number of patients are coming for surgical intervention, either for recurrences, incomplete coil embolization, or its complications. Our objective was to assess the surgical management of such patients. This was a retrospective analysis of patients who were initially treated with endovascular embolization and later managed surgically with clipping, either for unsuccessful coiling, recurrence of aneurysm, or post-procedural complications, between 2003 and 2007. Anatomical results were excellent in all five patients, and all the aneurysms were totally excluded from the circulation. All patients had good recovery. None of the patients suffered any major intraoperative or postoperative complication. Neurosurgical management of intracranial aneurysms previously treated with endovascular therapy is an emerging challenge, but with proper patient selection and careful planning, this subset of aneurysms can be managed with good results.
Primary papillary thyroid carcinoma previously treated incompletely with radiofrequency ablation.
Kim, Hoon Yub; Ryu, Woo Sang; Woo, Sang Uk; Son, Gil Soo; Lee, Eun Sook; Lee, Jae Bok; Bae, Jeoung Won
2010-01-01
Radiofrequency ablation (RFA) recently has been applied to benign thyroid nodules, mainly for the cosmetic reasons, and limited cases of local recurrences or focal distant metastases of well-differentiated thyroid cancer, in the high-risk reoperative condition or for the palliative purpose. But no report has been made on the RFA for primary thyroid cancer to date. We report on a patient with primary papillary carcinoma of thyroid gland who had undergone RFA before the cytological diagnosis of malignancy, later referred and treated with robotic surgery successfully. We can learn the following lessons from our case; (1) the RFA for operable primary thyroid malignancy should be avoided, because of the possibility of remnant viable cancer and undetectable nodal metastasis, and (2) robotic or endoscopic thyroid surgery may be a feasible operative method for benign or malignant thyroid nodules previously treated with RFA.
Twelve previously unknown phage genera are ubiquitous in global oceans.
Holmfeldt, Karin; Solonenko, Natalie; Shah, Manesh; Corrier, Kristen; Riemann, Lasse; Verberkmoes, Nathan C; Sullivan, Matthew B
2013-07-30
Viruses are fundamental to ecosystems ranging from oceans to humans, yet our ability to study them is bottlenecked by the lack of ecologically relevant isolates, resulting in "unknowns" dominating culture-independent surveys. Here we present genomes from 31 phages infecting multiple strains of the aquatic bacterium Cellulophaga baltica (Bacteroidetes) to provide data for an underrepresented and environmentally abundant bacterial lineage. Comparative genomics delineated 12 phage groups that (i) each represent a new genus, and (ii) represent one novel and four well-known viral families. This diversity contrasts with the few well-studied marine phage systems, but parallels the diversity of phages infecting human-associated bacteria. Although all 12 Cellulophaga phages represent new genera, the podoviruses and icosahedral, nontailed ssDNA phages were exceptional, with genomes up to twice as large as those previously observed for each phage type. Structural novelty was also substantial, requiring experimental phage proteomics to identify 83% of the structural proteins. The presence of uncommon nucleotide metabolism genes in four genera likely underscores the importance of scavenging nutrient-rich molecules as previously seen for phages in marine environments. Metagenomic recruitment analyses suggest that these particular Cellulophaga phages are rare and may represent a first glimpse into the phage side of the rare biosphere. However, these analyses also revealed that these phage genera are widespread, occurring in 94% of 137 investigated metagenomes. Together, this diverse and novel collection of phages identifies a small but ubiquitous fraction of unknown marine viral diversity and provides numerous environmentally relevant phage-host systems for experimental hypothesis testing.
Single antiplatelet therapy for patients with previous gastrointestinal bleeds.
Gellatly, Rochelle M; Ackman, Margaret L
2008-06-01
To determine whether aspirin plus a proton pump inhibitor (PPI) is preferable, from a gastrointestinal bleed (GIB) risk perspective, to clopidogrel in patients who have experienced a GIB while on aspirin and who require single antiplatelet therapy for secondary prevention of cardiovascular disease. A literature search was conducted using EMBASE (1980-January 2008), PubMed (1966-January 2008), Google, and a manual search of the reference lists using the search terms gastrointestinal bleed, gastrointestinal hemorrhage, peptic ulcer hemorrhage, ASA, aspirin, Plavix, clopidogrel, and PPI. The search, limited to human and English studies, yielded 110 returns. Randomized trials that compared aspirin with clopidogrel, involved patients who had previously experienced a GIB, and provided detailed information on the type and dose of drugs used were included. Studies were required to provide information on the recurrence of GIB. Two randomized trials were reviewed to assess the safety of secondary prevention of cardiovascular disease with respect to previous GIB. These noninferiority trials compared aspirin plus a PPI with clopidogrel over 12 months following confirmed healing of an aspirin-induced ulcer. In both trials, the majority of the GIB recurrences were in the clopidogrel group (8.6% vs 0.7%; difference 7.9%; 95% CI 3.4 to 12.4; p = 0.001 and 13.6% vs 0%; difference 13.6%; 95% CI 6.3 to 20.9; p = 0.0019) and the difference in recurrence rates exceeded the a priori selected upper boundary. Findings reported in the limited literature available support that clopidogrel is not equivalent to the combination of aspirin plus a PPI in the patient population studied. Aspirin plus a PPI would be considered clinically superior and should be used in medically managed patients who require single antiplatelet therapy but have had a prior GIB while on aspirin. Further research regarding dual antiplatelet therapy and a PPI is required.
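The reported risk differences with 95% CIs can be reproduced with a standard Wald interval for a difference of two proportions. The counts below are hypothetical, shaped only to resemble the first trial's 8.6% vs 0.7% recurrence rates; they are not the trial's actual data.

```python
import math

def risk_difference_ci(e1, n1, e2, n2, z=1.96):
    """Difference in event proportions (group1 - group2) with a Wald CI."""
    p1, p2 = e1 / n1, e2 / n2
    d = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, d - z * se, d + z * se

# hypothetical counts: 14/163 recurrences on clopidogrel vs 1/157 on
# aspirin plus PPI (illustrative, approximating 8.6% vs 0.7%)
d, lo, hi = risk_difference_ci(14, 163, 1, 157)
```

In a noninferiority design, the conclusion hinges on whether this CI crosses the prespecified noninferiority margin, which is why the abstract emphasizes that the observed difference exceeded the a priori upper boundary.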
Surgical treatment of breast cancer in previously augmented patients.
Karanas, Yvonne L; Leong, Darren S; Da Lio, Andrew; Waldron, Kathleen; Watson, James P; Chang, Helena; Shaw, William W
2003-03-01
The incidence of breast cancer is increasing each year. Concomitantly, cosmetic breast augmentation has become the second most often performed cosmetic surgical procedure. As the augmented patient population ages, an increasing number of breast cancer cases among previously augmented women can be anticipated. The surgical treatment of these patients is controversial, with several questions remaining unanswered. Is breast conservation therapy feasible in this patient population and can these patients retain their implants? A retrospective review of all breast cancer patients with a history of previous augmentation mammaplasty who were treated at the Revlon/UCLA Breast Center between 1991 and 2001 was performed. During the study period, 58 patients were treated. Thirty patients (52 percent) were treated with a modified radical mastectomy with implant removal. Twenty-eight patients (48 percent) underwent breast conservation therapy, which consisted of lumpectomy, axillary lymph node dissection, and radiotherapy. Twenty-two of the patients who underwent breast conservation therapy initially retained their implants. Eleven of those 22 patients (50 percent) ultimately required completion mastectomies with implant removal because of implant complications (two patients), local recurrences (five patients), or the inability to obtain negative margins (four patients). Nine additional patients experienced complications resulting from their implants, including contracture, erosion, pain, and rupture. The data illustrate that breast conservation therapy with maintenance of the implant is not ideal for the majority of augmented patients. Breast conservation therapy with explantation and mastopexy might be appropriate for rare patients with large volumes of native breast tissue. Mastectomy with immediate reconstruction might be a more suitable choice for these patients.
FETOMATERNAL OUTCOME OF PREGNANCY WITH PREVIOUS CESAREAN SECTION
Nigamananda
2014-09-01
OBJECTIVE: The aim of the study was to assess the fetomaternal outcome of pregnancy with previous cesarean section. METHODS: This study was conducted in the department of OBGYN, BARC Hospital, Mumbai, from October 2011 to September 2012, a period of one year. All pregnant women with one previous cesarean section attending the ANC clinic for confinement were included in the study group after giving consent. RESULTS: Out of a total of 75 cases, 23 patients (30.67%) were given a trial of labor. Of these 23 patients, 12 (52.17%) had successful VBAC. The commonest indications for repeat cesarean section after unsuccessful trial of labor were non-progress of labor (54.55%) and failed IOL (36.67%). Of the 12 patients who had successful VBAC, 3 (25%) had complications such as episiotomy hematoma, perineal tear, and cervical tear. No patient had major complications. In the present study no baby had an Apgar score <7 at 1 min and 5 min in either the VBAC group or the elective LSCS group. CONCLUSION: The current study concludes that women with a prior cesarean are at increased risk for repeat cesarean section. Vigilance with respect to the indication at primary cesarean delivery, proper counseling for trial of labor, and proper antepartum and intrapartum monitoring of patients are key to reducing cesarean section rates. Antepartum, intrapartum, and postpartum complications are more frequent in repeat cesarean section cases. There is no doubt that a trial of labor is a relatively safe procedure, but it is not risk free. Therefore, patient evaluation prior to TOLAC, careful observation throughout labor in a well-equipped unit with around-the-clock services for emergency surgery, and availability of expertise are the backbone of successful VBAC.
Modelling Opinion Dynamics: Theoretical analysis and continuous approximation
Pinasco, Juan Pablo; Balenzuela, Pablo
2016-01-01
Frequently we revise our initial opinions after talking with other individuals because we become convinced. Argumentation is a verbal and social process aimed at convincing; it includes conversation and persuasion, and agreement is reached because the new arguments are incorporated. In this paper we deal with a simple model of opinion formation with such persuasion dynamics, and we find the exact analytical solutions for both long- and short-range interactions. A novel theoretical approach has been used in order to solve the master equations of the model with non-local kernels. Simulation results show excellent agreement with the theoretical estimates.
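A minimal compromise-style update of the kind such agent-based opinion models use can be simulated directly. This is a sketch, not the paper's exact master-equation kernel; the agent count, the convergence parameter mu = 0.5, and the number of interactions are illustrative assumptions.

```python
import random

def persuasion_step(opinions, mu, rng):
    """One interaction: a random pair compromises, each agent moving a
    fraction mu of the opinion gap toward the other."""
    i, j = rng.sample(range(len(opinions)), 2)
    oi, oj = opinions[i], opinions[j]
    opinions[i] = oi + mu * (oj - oi)
    opinions[j] = oj + mu * (oi - oj)

rng = random.Random(1)
ops = [rng.uniform(-1, 1) for _ in range(50)]   # initial opinions in [-1, 1]
mean0 = sum(ops) / len(ops)
for _ in range(5000):
    persuasion_step(ops, 0.5, rng)
spread = max(ops) - min(ops)
```

Because each pairwise update conserves the pair's mean opinion, the population mean is an invariant while the spread contracts toward consensus, a feature one can check against analytical solutions of this kind of model.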
Annual Gross Primary Production from Vegetation Indices: A Theoretically Sound Approach
María Amparo Gilabert
2017-02-01
A linear relationship between the annual gross primary production (GPP) and a PAR-weighted vegetation index is theoretically derived from the Monteith equation. A semi-empirical model is then proposed to estimate the annual GPP from commonly available vegetation index images and a representative PAR, which does not require actual meteorological data. A cross-validation procedure is used to calibrate and validate the model predictions against reference data. As the calibration/validation process depends on the reference GPP product, the higher the quality of the reference GPP, the better the performance of the semi-empirical model. The annual GPP has been estimated at 1-km scale from MODIS NDVI and EVI images for eight years. Two reference data sets have been used: an optimized GPP product for the study area previously obtained and the MOD17A3 product. Different statistics show a good agreement between the estimates and the reference GPP data, with correlation coefficients around 0.9 and relative RMSE around 20%. The annual GPP is overestimated in semiarid areas and slightly underestimated in dense forest areas. With the above limitations, the model provides an excellent compromise between simplicity and accuracy for the calculation of long time series of annual GPP.
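Operationally, the semi-empirical model reduces to fitting GPP ~ a * (PAR-weighted VI) + b and cross-validating the fit against a reference product. A leave-one-out sketch follows; the (VI, GPP) pairs are invented for illustration and are not the study's data.

```python
def fit_line(xs, ys):
    # ordinary least squares for GPP ~ a * (PAR-weighted VI) + b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def loo_rmse(xs, ys):
    # leave-one-out cross-validation: refit without each point, predict it
    errs = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        errs.append((a * xs[i] + b - ys[i]) ** 2)
    return (sum(errs) / len(errs)) ** 0.5

# hypothetical annual (PAR-weighted VI, GPP in gC m-2 yr-1) pairs
vi = [0.2, 0.35, 0.5, 0.6, 0.75, 0.9]
gpp = [400, 700, 1000, 1150, 1500, 1780]
a, b = fit_line(vi, gpp)
rmse = loo_rmse(vi, gpp)
```

Reporting the cross-validated RMSE relative to the mean GPP mirrors the ~20% relative RMSE the abstract quotes.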
Quantum mechanics the theoretical minimum
Susskind, Leonard
2014-01-01
From the bestselling author of The Theoretical Minimum, an accessible introduction to the math and science of quantum mechanics. Quantum Mechanics is a (second) book for anyone who wants to learn how to think like a physicist. In this follow-up to the bestselling The Theoretical Minimum, physicist Leonard Susskind and data engineer Art Friedman offer a first course in the theory and associated mathematics of the strange world of quantum mechanics. Quantum Mechanics presents Susskind and Friedman's crystal-clear explanations of the principles of quantum states, uncertainty and time dependence, entanglement, and particle and wave states, among other topics. An accessible but rigorous introduction to a famously difficult topic, Quantum Mechanics provides a tool kit for amateur scientists to learn physics at their own pace.
Theoretical Provision of Tax Transformation
Feofanova Iryna V.
2016-05-01
The article is aimed at defining the questions that must be answered to provide a scientific substantiation of the tax transformation in Ukraine. The article analyzes the structural-logical relationships of the theories that substantiate tax systems and their transformation. Various views on the level of the tax burden and on the distribution of the tax burden between big and small business have been systematized. The issues that require theoretical substantiation when choosing a model of the tax system have been identified. It is determined that the shares of both indirect and direct taxes, as well as their rates, can be substantiated by calculations on the basis of statistical data. The results of the presented research can be used to develop an algorithm for the theoretical substantiation of tax transformation.
Theoretical Advanced Study Institute: 2014
DeGrand, Thomas [Univ. of Colorado, Boulder, CO (United States)
2016-08-17
The Theoretical Advanced Study Institute (TASI) was held at the University of Colorado, Boulder, during June 2-27, 2014. The topic was "Journeys through the Precision Frontier: Amplitudes for Colliders." The organizers were Professors Lance Dixon (SLAC) and Frank Petriello (Northwestern and Argonne). There were fifty-one students. Nineteen lecturers gave sixty 75-minute lectures. A proceedings volume was published. This TASI was unique for its large emphasis on methods for calculating amplitudes. This was embedded in a program describing recent theoretical and phenomenological developments in particle physics. Topics included introductions to the Standard Model, to QCD (both in a collider context and on the lattice), effective field theories, Higgs physics, neutrino interactions, an introduction to experimental techniques, and cosmology.
Theoretical issues in Spheromak research
Cohen, R. H.; Hooper, E. B.; LoDestro, L. L.; Mattor, N.; Pearlstein, L. D.; Ryutov, D. D.
1997-04-01
This report summarizes the state of theoretical knowledge of several physics issues important to the spheromak. It was prepared as part of the preparation for the Sustained Spheromak Physics Experiment (SSPX), which addresses these goals: energy confinement and the physics which determines it; and the physics of the transition from a short-pulsed experiment, in which the equilibrium and stability are determined by a conducting wall ("flux conserver"), to one in which the equilibrium is supported by external coils. Physics is examined in this report in four important areas. The status of present theoretical understanding is reviewed, physics which needs to be addressed more fully is identified, and tools which are available or require more development are described. Specifically, the topics include: MHD equilibrium and design, review of MHD stability, the spheromak dynamo, and the edge plasma in spheromaks.
Theoretical Aspects of Cosmic Acceleration
Trodden, Mark
2016-01-01
Efforts to understand and map the possible explanations for the late time acceleration of the universe have led to a broad range of suggestions, ranging from the cosmological constant and straightforward dark energy, to exotically coupled models, to infrared modifications of General Relativity. If we are to uncover which, if any, of these approaches might provide a serious answer to the problem, it is crucial to understand the constraints that theoretical consistency places on the models, and on the regimes in which they make predictions. In this talk, delivered as an invited plenary lecture at the Dark Side of the Universe conference in Kyoto, Japan, I briefly describe some modern attempts to carry out this program and some of the more interesting ideas that have emerged. As an example, I use the Galileon model, discussing how the Vainshtein mechanism occurs, and how a number of these theoretical problems arise around such backgrounds.
A course in theoretical physics
Shepherd, P J
2013-01-01
This book is a comprehensive account of five extended modules covering the key branches of twentieth-century theoretical physics, taught by the author over a period of three decades to students on bachelor and master university degree courses in both physics and theoretical physics. The modules cover nonrelativistic quantum mechanics, thermal and statistical physics, many-body theory, classical field theory (including special relativity and electromagnetism), and, finally, relativistic quantum mechanics and gauge theories of quark and lepton interactions, all presented in a single, self-contained volume. In a number of universities, much of the material covered (for example, on Einstein’s general theory of relativity, on the BCS theory of superconductivity, and on the Standard Model, including the theory underlying the prediction of the Higgs boson) is taught in postgraduate courses to beginning PhD students. A distinctive feature of the book is that full, step-by-step mathematical proofs of all essentia...
Theoretical Prospects for B Physics
Fleischer, Robert
2015-01-01
The exploration of B-meson decays has reached an unprecedented level of sophistication, with a phase of even much higher precision ahead of us thanks to run 2 of the LHC and the future era of Belle II and the LHCb upgrade. For many processes, the theoretical challenge in the quest to reveal possible footprints of physics beyond the Standard Model will be the control of uncertainties from strong interactions. After a brief discussion of the global picture emerging from the LHC data, I will focus on the theoretical prospects and challenges for benchmark B decays to search for new sources of CP violation, and highlight future opportunities to probe the Standard Model with strongly suppressed rare B decays.
Obstetric Outcomes of Mothers Previously Exposed to Sexual Violence.
Agnes Gisladottir
There is a scarcity of data on the association of sexual violence and women's subsequent obstetric outcomes. Our aim was to investigate whether women exposed to sexual violence as teenagers (12-19 years of age) or adults present with different obstetric outcomes than women with no record of such violence. We linked detailed prospectively collected information on women attending a Rape Trauma Service (RTS) to the Icelandic Medical Birth Registry (IBR). Women who attended the RTS in 1993-2010 and delivered (on average 5.8 years later) at least one singleton infant in Iceland through 2012 formed our exposed cohort (n = 1068). For each exposed woman's delivery, nine deliveries by women with no RTS attendance were randomly selected from the IBR (n = 9126), matched on age, parity, and year and season of delivery. Information on smoking and body mass index (BMI) was available for a sub-sample (n = 792 exposed and n = 1416 non-exposed women). Poisson regression models were used to estimate relative risks (RR) with 95% confidence intervals (CI). Compared with non-exposed women, exposed women presented with increased risks of maternal distress during labor and delivery (RR 1.68, 95% CI 1.01-2.79), prolonged first stage of labor (RR 1.40, 95% CI 1.03-1.88), antepartum bleeding (RR 1.95, 95% CI 1.22-3.07) and emergency instrumental delivery (RR 1.16, 95% CI 1.00-1.34). Slightly higher risks were seen for women assaulted as teenagers. Overall, we did not observe differences between the groups regarding the risk of elective cesarean section (RR 0.86, 95% CI 0.61-1.21), except for a reduced risk among those assaulted as teenagers (RR 0.56, 95% CI 0.34-0.93). Adjusting for maternal smoking and BMI in a sub-sample did not substantially affect point estimates. Our prospective data suggest that women with a history of sexual assault, particularly as teenagers, are at increased risks of some adverse obstetric outcomes.
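Relative risks of the kind quoted above can be approximated from raw counts with a log-scale Wald interval, a common shortcut for what a Poisson or log-binomial regression reports. The counts below are hypothetical, chosen only to land near RR ~ 1.7; they are not the study's data.

```python
import math

def relative_risk_ci(a, n1, b, n2, z=1.96):
    """RR of exposed vs non-exposed with a log-scale Wald CI.
    a/n1: events/total among exposed; b/n2: events/total among non-exposed."""
    rr = (a / n1) / (b / n2)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    log_rr = math.log(rr)
    return rr, math.exp(log_rr - z * se_log), math.exp(log_rr + z * se_log)

# hypothetical counts: 30/1068 exposed vs 152/9126 non-exposed with an outcome
rr, lo, hi = relative_risk_ci(30, 1068, 152, 9126)
```

A CI whose lower bound sits just above 1, as with the abstract's RR 1.68 (95% CI 1.01-2.79), corresponds to a borderline-significant excess risk.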
Theoretical Studies of Nanocluster Formation
2016-05-26
For presentation at the AFOSR Molecular Dynamics and Theoretical Chemistry Program Review; Arlington, VA (25 May 2016). Approved for public release; distribution unlimited (PA Clearance No. 16215).
CONTRACT ASSIGNMENT – THEORETICAL ASPECTS
Bogdan NAZAT
2016-12-01
This project aims to study in detail the theoretical aspects of the contract assignment, as provided by the relevant regulation and by the doctrine corresponding to the old and current regulations. In this respect, the project aims to give the reader a comprehensive look at the institution in question; the regulation offered by the current Civil Code is reviewed taking into account the national and international doctrine.
ABOUT COMMON AND THEORETICAL INFORMATICS
Andrey A. Mayorov
2015-01-01
This article considers the integrative importance of informatics and information technologies, spanning both the sciences and the humanities. There are differences between the scientific grounds of the various information orientations, which include physical informatics, bioinformatics, and technical and social informatics. Creating a unified theoretical base for these orientations is very problematic. The methodologically important issue of the classification of the different informatics, as a part of general informatics, is considered here with an example.
Theoretical Studies of Silicon Chemistry
1990-02-01
Molecular and Electronic Structure of Silyl Nitrene, M.S. Gordon, Chem. Phys. Lett., 146, 148 (1988). 18. A Theoretical Study of the Three-Membered Rings ... phase and crystal structures. Of course, all three possibilities may contribute. B. The Electronic and Molecular Structure of Silyl Nitrene ... a silaimine. An interesting question regarding the primary process is whether the silyl nitrene, R3SiN, is formed as an intermediate. As a first step ...
Theoretical Studies of Reaction Surfaces
2007-11-02
Similar levels of agreement are being found in studies of water clusters, the Menshutkin reaction (an ion-separation reaction), a prototypical SN2...of both reactants and products. These analyses reveal that Berry pseudorotation occurs repeatedly during the side attack, whereas the SN2 reaction H...
Estimation of Correlation Functions by Random Decrement
Asmussen, J. C.; Brincker, Rune
This paper illustrates how correlation functions can be estimated by the random decrement technique. Several different formulations of the random decrement technique for estimating the correlation functions are considered. The speed and accuracy of the different formulations of the random decrement...and the length of the correlation functions. The accuracy of the estimates with respect to the theoretical correlation functions and the modal parameters are both investigated. The modal parameters are extracted from the correlation functions using the polyreference time domain technique.
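The random decrement idea can be sketched in a few lines: segments of the measured response that follow a chosen trigger condition are averaged, and under broad assumptions the average is proportional to the correlation function of the process. A minimal sketch with a level-crossing trigger (the function name and details are our own illustration, not the paper's specific formulations):

```python
import numpy as np

def random_decrement(x, trigger_level, n_lags):
    """Estimate the random decrement signature of signal x.

    Averages the n_lags-sample segments that start wherever the
    signal is at or above `trigger_level`; the result is, under
    broad assumptions, proportional to the correlation function.
    """
    idx = np.where(x[:-n_lags] >= trigger_level)[0]  # trigger points
    if len(idx) == 0:
        raise ValueError("no trigger points found")
    segments = np.stack([x[i:i + n_lags] for i in idx])
    return segments.mean(axis=0)
```

Other trigger conditions (positive point, zero crossing with positive slope, and so on) give the different formulations whose speed and accuracy the paper compares.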
Fliess, Michel; Sira-Ramirez, Hebertt
2007-01-01
Non-linear state estimation and some related topics, like parametric estimation, fault diagnosis, and perturbation attenuation, are tackled here via a new methodology in numerical differentiation. The corresponding basic system-theoretic definitions and properties are presented within the framework of differential algebra, which permits handling system variables and their derivatives of any order. Several academic examples and their computer simulations, with on-line estimations, illustrate our viewpoint.
Central diabetes insipidus: a previously unreported side effect of temozolomide.
Faje, Alexander T; Nachtigall, Lisa; Wexler, Deborah; Miller, Karen K; Klibanski, Anne; Makimura, Hideo
2013-10-01
Temozolomide (TMZ) is an alkylating agent primarily used to treat tumors of the central nervous system. We describe 2 patients with apparent TMZ-induced central diabetes insipidus. Using our institution's Research Patient Database Registry, we identified 3 additional potential cases of TMZ-induced diabetes insipidus among a group of 1545 patients treated with TMZ. A 53-year-old male with an oligoastrocytoma and a 38-year-old male with an oligodendroglioma each developed symptoms of polydipsia and polyuria approximately 2 months after the initiation of TMZ. Laboratory analyses demonstrated hypernatremia and urinary concentrating defects, consistent with the presence of diabetes insipidus, and the patients were successfully treated with desmopressin acetate. Desmopressin acetate was withdrawn after the discontinuation of TMZ, and diabetes insipidus did not recur. Magnetic resonance imaging of the pituitary and hypothalamus was unremarkable apart from the absence of a posterior pituitary bright spot in both of the cases. Anterior pituitary function tests were normal in both cases. Using the Research Patient Database Registry database, we identified the 2 index cases and 3 additional potential cases of diabetes insipidus for an estimated prevalence of 0.3% (5 cases of diabetes insipidus per 1545 patients prescribed TMZ). Central diabetes insipidus is a rare but reversible side effect of treatment with TMZ.
How to prevent type 2 diabetes in women with previous gestational diabetes?
Pedersen, Anne Louise Winkler; Terkildsen Maindal, Helle; Juul, Lise
2017-01-01
OBJECTIVES: Women with previous gestational diabetes (GDM) have a seven times higher risk of developing type 2 diabetes (T2DM) than women without. We aimed to review the evidence for effective behavioural interventions seeking to prevent T2DM in this high-risk group. METHODS: A systematic review of RCTs in several databases in March 2016. RESULTS: No specific intervention or intervention components were found superior. The pooled effect on diabetes incidence (four trials) was estimated at -5.02 per 100 (95% CI: -9.24; -0.80). CONCLUSIONS: This study indicates that intervention is superior to no intervention in the prevention of T2DM among women with previous GDM.
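The pooled effect quoted above is the kind of number produced by inverse-variance (fixed-effect) meta-analysis of per-trial effects. A minimal sketch with illustrative inputs, not the trial data from this review:

```python
import numpy as np

def pooled_effect(effects, std_errs):
    """Fixed-effect (inverse-variance) pooling of per-trial effects.

    Each trial is weighted by the inverse of its variance; returns
    the pooled effect and a 95% confidence interval.
    """
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(std_errs, dtype=float) ** 2
    est = np.sum(w * effects) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))
    return est, (est - 1.96 * se, est + 1.96 * se)
```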
Theoretical Study of the Compound Parabolic Trough Solar Collector
Dr. Subhi S. Mahammed
2012-06-01
A theoretical design of a compound parabolic trough solar collector (CPC) without tracking is presented in this work. The thermal efficiency is obtained by using a FORTRAN 90 program. The thermal efficiency is between 60-67% at mass flow rates between 0.02-0.03 kg/s and a concentration ratio of 3.8, without need of a tracking system. The total and diffuse radiation is calculated for Tikrit city by using theoretical equations. Good agreement is found between the present work and previous work.
Theoretical Compton profile of diamond, boron nitride and carbon nitride
Aguiar, Julio C.; Quevedo, Carlos R.; Gomez, José M.; Di Rocco, Héctor O.
2017-09-01
In the present study, we used the generalized gradient approximation method to determine the electron wave functions and theoretical Compton profiles of the following super-hard materials: diamond, boron nitride (h-BN), and carbon nitride in its two known phases: βC3N4 and gC3N4 . In the case of diamond and h-BN, we compared our theoretical results with available experimental data. In addition, we used the Compton profile results to determine cohesive energies and found acceptable agreement with previous experiments.
Zanol, Joana; Ruta, Christine
2015-09-18
The family Oenonidae consists of Eunicida species with prionognath jaws. Its Australian fauna had been reported to comprise six species belonging to Arabella, Drilonereis, and Oenone. This study provides descriptions of four new species, redescriptions of three species (two previously recorded and one new record, Drilonereis cf. logani), and diagnoses of the genera recorded from Australia. Currently, eleven species of oenonids, distributed in three genera, are known from the Australian coast. On Lizard Island, this family shows low abundance (19 specimens collected) and high richness (seven species). Our results suggest that, despite the increasing accumulation of information, the biodiversity of the family is still poorly estimated.
A Catalog of previously unstudied clusters of IC, King and Dias
Tadross, A L
2008-01-01
The main astrophysical properties of 14 previously unstudied open star clusters are probed with JHK near-IR (2MASS) photometry of Cutri et al. (2003) and proper-motion (NOMAD) astrometry of Zacharias et al. (2004). The fundamental parameters have been derived for IC (1023, 1434, 2156); King (17, 18, 23, 25, 26); and Dias (2, 3, 4, 7, 8, 11), for which no prior parameters are available in the literature. The clusters' center coordinates and angular diameters are re-determined, while ages, distances, and color excesses for these clusters are estimated here for the first time.
RESEARCH ON SOME THEORETICAL PROBLEMS OF MAP DATA HANDLING USING FRACTAL APPROACH
Anonymous
2000-01-01
Some theoretical problems of fractal map data handling in geography are discussed, and some new methods for introducing, developing, comparing, and estimating fractal dimensions are proposed in this paper.
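A standard way to estimate a fractal dimension from map data is box counting: count the occupied boxes N(s) at several box sizes s and fit the slope of log N(s) against log(1/s). A minimal sketch (our own illustration, not the methods proposed in the paper):

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the box-counting dimension of a 2-D point set.

    For each box size s, counts the distinct grid cells occupied by
    the points, then fits the slope of log N(s) vs. log(1/s).
    """
    points = np.asarray(points, dtype=float)
    counts = []
    for s in sizes:
        occupied = np.unique(np.floor(points / s), axis=0)
        counts.append(len(occupied))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

A straight line should come out with dimension near 1 and a filled region near 2; map curves such as coastlines typically fall in between.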
Glucomannan prevents postprandial hypoglycaemia in patients with previous gastric surgery.
Hopman, W P; Houben, P G; Speth, P A; Lamers, C B
1988-07-01
Glucomannan (Propol), a potent gel forming dietary fibre, was added to a carbohydrate rich breakfast in eight patients with previous gastric surgery suffering from postprandial hypoglycaemia. Addition of only 2.6 g and 5.2 g glucomannan to the meal dose dependently improved reactive hypoglycaemia from 2.3 (0.2) mmol/l to 3.3 (0.2) mmol/l (p less than 0.0005) after 2.6 g and 4.1 (0.2) mmol/l (p = 0.0005) after 5.2 g, and decreased postprandial rise in plasma insulin (p less than 0.05). Expiratory breath hydrogen excretion tended to decrease reflecting improvement of carbohydrate metabolism. Addition of glucomannan to an intraduodenal sucrose solution significantly raised plasma glucose nadirs, indicating glucomannan to be effective during the intestinal phase. It is concluded that small amounts of glucomannan may be beneficial to patients with reactive postprandial hypoglycaemia, without the disadvantage of unpalatability and carbohydrate malabsorption.
Influence of Previous Knowledge in Torrance Tests of Creative Thinking
María Aranguren
2015-07-01
The aim of this work is to analyze the influence of study field, expertise, and participation in recreational activities on Torrance Tests of Creative Thinking (TTCT, 1974) performance. Several hypotheses were postulated to explore the possible effects of previous knowledge on the TTCT verbal and TTCT figural outcomes of university students. Participants in this study included 418 students from five study fields: Psychology; Philosophy and Literature; Music; Engineering; and Journalism and Advertising (Communication Sciences). The results of this research seem to indicate that study field, expertise, and participation in recreational activities have no influence on either of the TTCT tests. Instead, the findings seem to suggest some kind of interaction between certain skills needed to succeed in specific study fields and performance on creativity tests such as the TTCT. These results imply that the TTCT is a useful and valid instrument to measure creativity and that some cognitive processes involved in innovative thinking can be promoted using different intervention programs in schools and universities, regardless of the students' study field.
Recurrent arthralgias in a patient with previous Mayaro fever infection.
Taylor, Shawn F; Patel, Paresh R; Herold, Thomas J S
2005-04-01
Mayaro fever is an acute, self-limited, febrile, mosquito-borne viral disease manifested by fever, chills, headache, myalgias, and arthralgias. The virus belongs to the family Togaviridae and the genus Alphavirus. Five other mosquito-borne viruses have been described as causing a similar dengue-like illness. The virus was first isolated in 1954, and the first epidemics were described in 1955 in Brazil and Bolivia. Other cases have been reported in Suriname, Brazil, Peru, French Guiana, and Trinidad. Up to 10 to 15% of febrile illnesses in endemic areas have been attributed to Mayaro virus. The exact pathogenesis and pathophysiology among humans is unknown. Animal models have demonstrated necrosis of skeletal muscle, periosteum, perichondrial tissues, and evidence of meningitis and encephalitis. All previous cases of Mayaro fever describe a self-limited illness. No reports of recurrent symptoms exist in the literature. This report describes a case of recurrent arthralgias in a military service member presenting to the emergency department.
Cerebral Metastasis from a Previously Undiagnosed Appendiceal Adenocarcinoma
Antonio Biroli
2012-01-01
Full Text Available Brain metastases arise in 10%–40% of all cancer patients. Up to one third of the patients do not have previous cancer history. We report a case of a 67-years-old male patient who presented with confusion, tremor, and apraxia. A brain MRI revealed an isolated right temporal lobe lesion. A thorax-abdomen-pelvis CT scan showed no primary lesion. The patient underwent a craniotomy with gross-total resection. Histopathology revealed an intestinal-type adenocarcinoma. A colonoscopy found no primary lesion, but a PET-CT scan showed elevated FDG uptake in the appendiceal nodule. A right hemicolectomy was performed, and the specimen showed a moderately differentiated mucinous appendiceal adenocarcinoma. Whole brain radiotherapy was administrated. A subsequent thorax-abdomen CT scan revealed multiple lung and hepatic metastasis. Seven months later, the patient died of disease progression. In cases of undiagnosed primary lesions, patients present in better general condition, but overall survival does not change. Eventual identification of the primary tumor does not affect survival. PET/CT might be a helpful tool in detecting lesions of the appendiceal region. To the best of our knowledge, such a case was never reported in the literature, and an appendiceal malignancy should be suspected in patients with brain metastasis from an undiagnosed primary tumor.
Previously unknown class of metalorganic compounds revealed in meteorites
Ruf, Alexander; Kanawati, Basem; Hertkorn, Norbert; Yin, Qing-Zhu; Moritz, Franco; Harir, Mourad; Lucio, Marianna; Michalke, Bernhard; Wimpenny, Joshua; Shilobreeva, Svetlana; Bronsky, Basil; Saraykin, Vladimir; Gabelica, Zelimir; Gougeon, Régis D.; Quirico, Eric; Ralew, Stefan; Jakubowski, Tomasz; Haack, Henning; Gonsior, Michael; Jenniskens, Peter; Hinman, Nancy W.; Schmitt-Kopplin, Philippe
2017-01-01
The rich diversity and complexity of organic matter found in meteorites is rapidly expanding our knowledge and understanding of extreme environments from which the early solar system emerged and evolved. Here, we report the discovery of a hitherto unknown chemical class, dihydroxymagnesium carboxylates [(OH)2MgO2CR]−, in meteoritic soluble organic matter. High collision energies, which are required for fragmentation, suggest substantial thermal stability of these Mg-metalorganics (CHOMg compounds). This was corroborated by their higher abundance in thermally processed meteorites. CHOMg compounds were found to be present in a set of 61 meteorites of diverse petrological classes. The appearance of this CHOMg chemical class extends the previously investigated, diverse set of CHNOS molecules. A connection between the evolution of organic compounds and minerals is made, as Mg released from minerals gets trapped into organic compounds. These CHOMg metalorganic compounds and their relation to thermal processing in meteorites might shed new light on our understanding of carbon speciation at a molecular level in meteorite parent bodies.
Global functional atlas of Escherichia coli encompassing previously uncharacterized proteins.
Pingzhao Hu
2009-04-01
One-third of the 4,225 protein-coding genes of Escherichia coli K-12 remain functionally unannotated (orphans). Many map to distant clades such as Archaea, suggesting involvement in basic prokaryotic traits, whereas others appear restricted to E. coli, including pathogenic strains. To elucidate the orphans' biological roles, we performed an extensive proteomic survey using affinity-tagged E. coli strains and generated comprehensive genomic context inferences to derive a high-confidence compendium for virtually the entire proteome consisting of 5,993 putative physical interactions and 74,776 putative functional associations, most of which are novel. Clustering of the respective probabilistic networks revealed putative orphan membership in discrete multiprotein complexes and functional modules together with annotated gene products, whereas a machine-learning strategy based on network integration implicated the orphans in specific biological processes. We provide additional experimental evidence supporting orphan participation in protein synthesis, amino acid metabolism, biofilm formation, motility, and assembly of the bacterial cell envelope. This resource provides a "systems-wide" functional blueprint of a model microbe, with insights into the biological and evolutionary significance of previously uncharacterized proteins.
Measles Outbreak among Previously Immunized Adult Healthcare Workers, China, 2015
Zhang, Zhengyi; Zhao, Yuan; Yang, Lili; Lu, Changhong; Meng, Ying; Guan, Xiaoli; An, Hongjin; Zhang, Meizhong; Guo, Wenqin; Shang, Bo; Yu, Jing
2016-01-01
Measles is caused by measles virus, belonging to the genus Morbillivirus of the family Paramyxoviridae. Vaccination has played a critical role in controlling measles infection worldwide. However, in recent years, outbreaks of measles infection still occur in many developing countries. Here, we report an outbreak of measles among healthcare workers: of the 60 measles-infected patients, 50 were healthcare workers, including doctors, nurses, staff, and medics. Fifty-one patients (85%) tested positive for IgM antibodies against the measles virus and 50 patients (83.3%) tested positive for measles virus RNA. Surprisingly, 73.3% of the infected individuals had previously been immunized against measles. Since there is no infection division in our hospital, the fever clinics are located in the Emergency Division. In addition, the fever and rash were not recognized as measles symptoms at the beginning of the outbreak. These factors resulted in delays in the isolation and early confirmation of suspected patients, and eventually a measles outbreak in the hospital. Our report highlights the importance of following a two-dose measles vaccine program in people, including healthcare workers. In addition, vigilant attention should be paid to medical staff with clinical fever and rash symptoms to avoid possible nosocomial transmission of measles infection.
Premorbid adjustment and previous personality in schizophrenic patients
José Juan Rodríguez Solano
2005-12-01
Psychosocial adjustment and premorbid personality are two factors that are frequently studied in order to elucidate the etiopathogenesis of schizophrenia. Premorbid adjustment alterations and personality disorders (principally those of the schizophrenia spectrum) have been considered vulnerability elements or have been linked with the early manifestations of a disease that is still developing (the neurodevelopment hypothesis). In this paper we review the literature. We also studied the relationship between premorbid adjustment (PAS scale) and previous personality disorders (SCID-II) in a sample of 40 patients with schizophrenia (DSM-III-R, DSM-IV, ICD-10), and statistically correlated them. The results show that premorbid adjustment correlates with avoidant, schizotypal, and schizoid personality disorders: the more personality pathology found, the poorer the premorbid psychosocial adjustment. Premorbid adjustment positively correlates with histrionic personality traits. The pathological traits of schizotypal and schizoid personalities account for up to 77% of the variance of the total premorbid adjustment in schizophrenic patients. Conclusion: the degree of premorbid adjustment in schizophrenia is related to the different premorbid personality disorders of schizophrenic patients, mainly those most genetically related to schizophrenia, that is, the schizophrenia spectrum.
High-Grade Leiomyosarcoma Arising in a Previously Replanted Limb
Tiffany J. Pan
2015-01-01
Sarcoma development has been associated with genetics, irradiation, viral infections, and immunodeficiency. Reports of sarcomas arising in the setting of prior trauma, as in burn scars or fracture sites, are rare. We report a case of a leiomyosarcoma arising in an arm that had previously been replanted at the level of the elbow joint following traumatic amputation when the patient was eight years old. He presented twenty-four years later with a 10.8 cm mass located on the volar forearm of the replanted arm. The tumor was completely resected, and pathology examination showed a high-grade, subfascial spindle cell sarcoma diagnosed as a grade 3 leiomyosarcoma, stage pT2bNxMx. The patient underwent treatment with brachytherapy, reconstruction with a free flap, and subsequently chemotherapy. To the best of our knowledge, this is the first case report of a leiomyosarcoma developing in a replanted extremity. Development of leiomyosarcoma in this case could be related to revascularization, scar formation, or chronic injury after replantation. The patient remains healthy without signs of recurrence at three-year follow-up.
Motivational activities based on previous knowledge of students
García, J. A.; Gómez-Robledo, L.; Huertas, R.; Perales, F. J.
2014-07-01
Academic results depend strongly on the individual circumstances of students: background, motivation, and aptitude. We think that academic activities conducted to increase motivation must be tuned to the particular situation of the students. The main goal of this work is to analyze the students in the first year of the Degree in Optics and Optometry at the University of Granada and the suitability of an activity designed for those students. Initial data were obtained from a survey inquiring about the reasons for choosing this degree, the students' knowledge of it, and their previous academic backgrounds. The results show that: 1) the group is quite heterogeneous, since the students have very different backgrounds; 2) the reasons for choosing the Degree in Optics and Optometry are also very different, and in many cases it was selected as a second option; 3) knowledge of, and motivation for, the Degree are in general quite low. To try to increase the students' motivation, we designed an academic activity in which we present different topics studied in the Degree. The results show that the students who took part in this activity are the most motivated and the most satisfied with their choice of degree.
Cryptobiosis: a new theoretical perspective.
Neuman, Yair
2006-10-01
The tardigrade is a microscopic creature that under environmental stress conditions undergoes cryptobiosis [Feofilova, E.P., 2003. Deceleration of vital activity as a universal biochemical mechanism ensuring adaptation of microorganisms to stress factors: A review. Appl. Biochem. Microbiol. 39, 1-18; Nelson, D.R., 2002. Current status of the Tardigrada: Evolution and ecology. Integrative Comp. Biol. 42, 652-659], a temporary metabolic depression, which is considered to be a third state between life and death [Clegg, J.S., 2001. Cryptobiosis, a peculiar state of biological organization. Comp. Biochem. Physiol. Part B 128, 613-624]. In contrast with death, cryptobiosis is a reversible state, and as soon as environmental conditions change, the tardigrade "returns to life." Cryptobiosis in general, and in the tardigrade in particular, is a phenomenon poorly understood [Guppy, M., 2004. The biochemistry of metabolic depression: a history of perceptions. Comp. Biochem. Physiol. Part B 139, 435-442; Schill, R.O., et al., 2004. Stress gene (hsp70) sequences and quantitative expression in Milnesium tardigradum (Tardigrada) during active and cryptobiotic stages. J. Exp. Biol. 207, 1607-1613; Watanabe, M., et al., 2002. Mechanism allowing an insect to survive complete dehydration and extreme temperatures. J. Exp. Biol. 205, 2799-2802; Wright, J.C., 2001. Cryptobiosis 300 years on from van Leeuwenhoek: what have we learned about tardigrades? Zool. Anz. 240, 563-582]. Moreover, the ability of the tardigrade to bootstrap itself and return to life seems paradoxical, like the legendary Baron von Munchausen who pulled himself out of the swamp by grabbing his own hair. Two theoretical obstacles prevent us from advancing our knowledge of cryptobiosis. First, we lack an appropriate theoretical understanding of reversible processes of biological computation in living systems. Second, we lack an appropriate theoretical understanding of bootstrapping in living systems. In this short opinion
Site characterization: a spatial estimation approach
Candy, J.V.; Mao, N.
1980-10-01
In this report the application of spatial estimation techniques, or kriging, to groundwater aquifers and geological borehole data is considered. The adequacy of these techniques for reliably developing contour maps from various data sets is investigated. The estimator is developed theoretically, in a simplified fashion, using vector-matrix calculus. The practice of spatial estimation is discussed, and the estimator is then applied to two groundwater aquifer systems and also used to investigate geological formations from borehole data. It is shown that the estimator can provide reasonable results when designed properly.
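At its core, the kriging estimator described here is the solution of a small linear system built from a variogram. A minimal ordinary-kriging sketch for a single estimation point (simplified; the report's vector-matrix development covers the general case):

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, variogram):
    """Minimal ordinary-kriging point estimate.

    Builds the standard (n+1)x(n+1) kriging system from a supplied
    variogram gamma(h), with a Lagrange row enforcing that the
    weights sum to one, and returns the weighted estimate at xy0.
    """
    xy = np.asarray(xy, dtype=float)
    z = np.asarray(z, dtype=float)
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = variogram(d)      # variogram between data points
    A[:n, n] = 1.0                # unbiasedness constraint
    A[n, :n] = 1.0
    A[n, n] = 0.0
    b = np.empty(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - np.asarray(xy0), axis=1))
    b[n] = 1.0
    w = np.linalg.solve(A, b)[:n]  # kriging weights
    return w @ z
```

Two defining properties follow directly: the estimator reproduces the data exactly at the data points, and, because the weights sum to one, it returns the constant for a constant field.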
Theoretical study of rock mass investigation efficiency
Holmen, Johan G.; Outters, Nils [Golder Associates, Uppsala (Sweden)
2002-05-01
The study concerns mathematical modelling of a fractured rock mass and its investigation by use of theoretical boreholes and rock surfaces, with the purpose of analysing the efficiency (precision) of such investigations and determining the amount of investigation necessary to obtain reliable estimates of the structural-geological parameters of the studied rock mass. The study is not about estimating suitable sample sizes to be used in site investigations. The purpose of the study is to analyse the amount of information necessary for deriving estimates of the geological parameters studied, within defined confidence intervals and at a defined confidence level; in other words, how the confidence in models of the rock mass (considering a selected number of parameters) will change with the amount of information collected from boreholes and surfaces. The study is limited to a selected number of geometrical structural-geological parameters: fracture orientation, i.e. mean direction and dispersion (Fisher kappa and SRI); different measures of fracture density (P10, P21 and P32); and fracture trace-length and strike distributions as seen on horizontal windows. A numerical discrete fracture network (DFN) was used for representation of a fractured rock mass. The DFN model was primarily based on the properties of an actual fracture network investigated at the Aespoe Hard Rock Laboratory. The rock mass studied (DFN model) contained three different fracture sets with different orientations and fracture densities. The rock unit studied was statistically homogeneous. The study includes a limited sensitivity analysis of the properties of the DFN model. The study is a theoretical and computer-based comparison between samples of fracture properties of a theoretical rock unit and the known true properties of the same unit. The samples are derived from numerically generated boreholes and surfaces that intersect the DFN network. Two different boreholes are analysed: a vertical borehole and a borehole that is
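The Fisher dispersion parameter mentioned above (Fisher kappa) is commonly estimated from a sample of orientation unit vectors via the resultant length R, using kappa ≈ (N-1)/(N-R). A sketch of sampling and estimation (our own illustration of the standard estimator, not the study's DFN code):

```python
import numpy as np

def sample_fisher(kappa, n, rng):
    """Draw n unit vectors from a Fisher distribution with mean
    direction +z and concentration kappa (standard inversion
    sampler for the polar angle, uniform azimuth)."""
    u = rng.random(n)
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    phi = 2.0 * np.pi * rng.random(n)
    s = np.sqrt(1.0 - w**2)
    return np.column_stack([s * np.cos(phi), s * np.sin(phi), w])

def estimate_kappa(vectors):
    """Moment estimate of the Fisher concentration: (N-1)/(N-R),
    where R is the length of the resultant of the unit vectors."""
    n = len(vectors)
    r = np.linalg.norm(vectors.sum(axis=0))
    return (n - 1) / (n - r)
```

Running such a loop over many synthetic borehole samples is one way to see how the confidence in the estimated dispersion grows with the amount of orientation data collected.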
A diagnostic model to estimate winds and small-scale drag from Mars Observer PMIRR data
Barnes, J. R.
1993-01-01
Theoretical and modeling studies indicate that small-scale drag due to breaking gravity waves is likely to be of considerable importance for the circulation in the middle atmospheric region (approximately 40-100 km altitude) on Mars. Recent Earth-based spectroscopic observations have provided evidence for the existence of circulation features, in particular a warm winter polar region, associated with gravity wave drag. Since the Mars Observer PMIRR experiment will obtain temperature profiles extending from the surface up to about 80 km altitude, it will be extensively sampling middle atmospheric regions in which gravity wave drag may play a dominant role. Estimating the drag then becomes crucial to the estimation of the atmospheric winds from the PMIRR-observed temperatures. An iterative diagnostic model, based upon one previously developed and tested with Earth satellite temperature data, will be applied to the PMIRR measurements to produce estimates of the small-scale zonal drag and three-dimensional wind fields in the Mars middle atmosphere. This model is based on the primitive equations and can allow for time dependence (the time tendencies used may be based upon those computed in a Fast Fourier Mapping procedure). The small-scale zonal drag is estimated as the residual in the zonal momentum equation, the horizontal winds having first been estimated from the meridional momentum equation and the continuity equation. The scheme estimates the vertical motions from the thermodynamic equation, and thus needs estimates of the diabatic heating based upon the observed temperatures; the latter will be generated using a radiative model. It is hoped that the diagnostic scheme will be able to produce good estimates of the zonal gravity wave drag in the Mars middle atmosphere, estimates that can then be used in other diagnostic or assimilation efforts, as well as more theoretical studies.
Order statistics & inference: estimation methods
Balakrishnan, N
1991-01-01
The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is a consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co
Sacrococcygeal pilonidal disease: analysis of previously proposed risk factors
Ali Harlak
2010-01-01
PURPOSE: Sacrococcygeal pilonidal disease is the source of one of the most common surgical problems among young adults. While male gender, obesity, occupations requiring sitting, deep natal clefts, excessive body hair, poor body hygiene and excessive sweating are described as the main risk factors for this disease, most of these need to be verified in a clinical trial. The present study aimed to evaluate the value and effect of these factors on pilonidal disease. METHOD: Previously proposed main risk factors were evaluated in a prospective case-control study that included 587 patients with pilonidal disease and 2,780 healthy control patients. RESULTS: Stiffness of body hair, number of baths and time spent seated per day were the three most predictive risk factors. Adjusted odds ratios were 9.23, 6.33 and 4.03, respectively (p<0.001). With an adjusted odds ratio of 1.3 (p<0.001), body mass index was another risk factor. Family history was not statistically different between the groups and there was no specific occupation associated with the disease. CONCLUSIONS: Hairy people who sit down for more than six hours a day and those who take a bath two or fewer times per week are at a 219-fold increased risk for sacrococcygeal pilonidal disease compared with those without these risk factors. People with a great deal of hair have a greater need to clean their intergluteal sulcus. People whose work requires sitting for long periods of time should choose more comfortable seats and should also try to stand whenever possible.
Gastrointestinal tolerability with ibandronate after previous weekly bisphosphonate treatment
Richard Derman
2009-09-01
Richard Derman (1), Joseph D Kohles (2), Ann Babbitt (3). (1) Department of Obstetrics and Gynecology, Christiana Hospital, Newark, DE, USA; (2) Roche, Nutley, NJ, USA; (3) Greater Portland Bone and Joint Specialists, Portland, ME, USA. Abstract: Data from two open-label trials (PRIOR and CURRENT) of women with postmenopausal osteoporosis or osteopenia were evaluated to assess whether monthly oral and quarterly intravenous (IV) ibandronate dosing improved self-reported gastrointestinal (GI) tolerability for patients who had previously experienced GI irritation with bisphosphonate (BP) use. In PRIOR, women who had discontinued daily or weekly BP treatment due to GI intolerance received monthly oral or quarterly IV ibandronate for 12 months. The CURRENT subanalysis included women receiving weekly BP treatment who switched to monthly oral ibandronate for six months. GI symptom severity and frequency were assessed using the Osteoporosis Patient Satisfaction Questionnaire™. In PRIOR, mean GI tolerability scores increased significantly at month 1 from screening for both treatment groups (oral: 79.3 versus 54.1; IV: 84.4 versus 51.0; p < 0.001 for both). Most patients reported improvement in GI symptom severity and frequency from baseline at all post-screening assessments (>90% at month 10). In the CURRENT subanalysis, >60% of patients reported improvements in heartburn or acid reflux and >70% indicated improvement in other stomach upset at month 6. Postmenopausal women with GI irritability on daily or weekly BPs experienced improvement in symptoms with extended-interval dosing of monthly or quarterly ibandronate compared with baseline. Keywords: ibandronate, osteoporosis, bisphosphonate, gastrointestinal
Previous Heart Disease Complications And Hypertension In Pregnancy
Nayer Pishnamaz
2013-05-01
Objective(s): Hypertensive disorders in pregnancy, with an incidence of 3.7%, are among the most severe complications. Cardiovascular diseases are apparent in 2% of pregnancies. Physiologic changes during pregnancy intensify the underlying disorders and the severity of this problem. Research indicates that pregnant women with heart disease frequently face unfavorable maternal and fetal outcomes, with increased risk of abortion, intrauterine fetal death, preterm labor and intrauterine growth retardation. The aim of this study was to examine the outcomes of pregnancy accompanied by cardiovascular disease and hypertension. Materials and Methods: This is a retrospective descriptive study in which the records of 2,500 pregnant women referred to Alzahra hospital from 2006 to 2008 were assessed. Data were gathered from medical files and analyzed with SPSS software. Results: In this study the incidence of moderate aortic stenosis (AS) was about 45.5% (10 cases), mitral valvuloplasty (MVP) about 22.73% (5 cases) and mitral stenosis (MS) 18.18% (4 cases); two patients with MR (mitral regurgitation) + MS had undergone valvuloplasty. 72.7% (16) of the mothers were hospitalized due to hypertension, 9.1% due to tachycardia and dyspnea, and 18.2% showed a mixed form of these complaints. They were using hydralazine and methyldopa as antihypertensive drugs. Only 10% of the patients had a history of anticoagulant use during pregnancy. We found only one patient with pulmonary stenosis (PS). Conclusion: Women with hypertension and previous heart disease showed many serious complications and high fetal mortality during pregnancy. Proper and timely care and support during pregnancy depend on accurate diagnosis of the heart disease; any health problem should be taken seriously.
Hippocampal NMDA receptors and the previous experience effect on memory.
Cercato, Magalí C; Colettis, Natalia; Snitcofsky, Marina; Aguirre, Alejandra I; Kornisiuk, Edgar E; Baez, María V; Jerusalinsky, Diana A
2014-01-01
N-methyl-D-aspartate receptors (NMDARs) are thought to be responsible for switching specific patterns of synaptic activity into long-term changes in synaptic function and structure, which would support learning and memory. Hippocampal NMDAR blockade impairs memory consolidation in rodents, while NMDAR stimulation improves it. Adult rats that explored an open field (OF) twice before a weak though over-threshold training in inhibitory avoidance (IA) expressed IA long-term memory in spite of the hippocampal administration of MK-801, which ordinarily leads to amnesia. Those processes would involve different NMDARs. The selective blockade of hippocampal GluN2B-containing NMDARs with ifenprodil after training promoted memory in an IA task when the training was weak, suggesting that this receptor negatively modulates consolidation. In vivo, 1 h after OF exposure (with habituation to the environment), there was an increase in GluN1 and GluN2A subunits in the rat hippocampus, without significant changes in GluN2B. Coincidentally, in vitro, in both rat hippocampal slices and neuron cultures there was an increase in GluN2A-NMDAR surface expression at 30 min; an increase in GluN1 and GluN2A levels at about 1 h after LTP induction was also shown. We hypothesize that those changes in NMDAR composition could be involved in the "anti-amnesic effect" of the previous OF. Over a certain time interval, an increase in GluN1 and GluN2A would lead to an increase in synaptic NMDARs, facilitating synaptic plasticity and memory; thereafter, an increase in the GluN2A/GluN2B ratio could protect the synapse and the already established plasticity, perhaps preserving the specific trace.
Estimation of the extragalactic background light using TeV observations of BL Lac objects
Sinha, Atreyee; Acharya, B. S. [Department of High Energy Physics, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400005 (India); Sahayanathan, S.; Godambe, S. [Astrophysical Sciences Division, Bhabha Atomic Research Center, Mumbai (India); Misra, R., E-mail: atreyee@tifr.res.in, E-mail: acharya@tifr.res.in, E-mail: sunder@barc.ernet.in, E-mail: gsagar@barc.ernet.in, E-mail: rmisra@iucaa.ernet.in [Inter-University Center for Astronomy and Astrophysics, Post Bag 4, Ganeshkhind, Pune 411007 (India)
2014-11-01
The very high-energy (VHE) gamma-ray spectral indices of high-energy-peaked blazars correlate strongly with their redshifts, whereas no such correlation is observed in the X-ray or GeV bands. We attribute this correlation to photon-photon absorption of TeV photons on the extragalactic background light (EBL), and utilizing this we compute the allowed flux range for the EBL, which is independent of previous estimates. The observed VHE spectrum of the sources in our sample can be well approximated by a power law, and if the de-absorbed spectrum is also assumed to be a power law, then we show that the spectral shape of the EBL will be εn(ε) ∼ k log(ε/ε_p). We estimate the range of values for the parameters defining the EBL spectrum, k and ε_p, such that the correlation of the intrinsic VHE spectrum with redshift is nullified. The estimated EBL depends only on the observed correlation and the assumption of a power-law source spectrum. Specifically, it does not depend on the spectral modeling or radiative mechanism of the sources or on any theoretical shape of the EBL spectrum obtained through cosmological calculations. The estimated EBL spectrum is consistent with the upper and lower limits imposed by different observations. Moreover, it also agrees closely with the theoretical estimates obtained through cosmological evolution models.
Carotid surgery following previous carotid endarterectomy is safe and effective.
Abou-Zamzam, Ahmed M; Moneta, Gregory L; Landry, Gregory J; Yeager, Richard A; Edwards, James M; McConnell, Donald B; Taylor, Lloyd M; Porter, John M
2002-01-01
With the perceived high risk of repeat carotid surgery, carotid angioplasty and stenting have been advocated recently as the preferred treatment of recurrent carotid disease following carotid endarterectomy. An experience with the operative treatment of recurrent carotid disease to document the risks and benefits of this procedure is presented. A review of a prospectively acquired vascular registry over a 10-year period (Jan. 1990-Jan. 2000) was undertaken to identify patients undergoing repeat carotid surgery following previous carotid endarterectomy. All patients were treated with repeat carotid endarterectomy, carotid interposition graft, or subclavian-carotid bypass. The perioperative stroke and death rate, operative complications, life-table freedom from stroke, and rates of recurrent stenosis were documented. During the study period 56 patients underwent repeat carotid surgery, comprising 6% of all carotid operations during this period. The indication for operation was symptomatic disease recurrence in 41 cases (73%) and asymptomatic recurrent stenosis ≥80% in 15 cases (27%). The average interval from the prior carotid endarterectomy to the repeat operation was 78 months (range 3 weeks-297 months). The operations performed included repeat carotid endarterectomy with patch angioplasty in 31 cases (55%), interposition grafts in 19 cases (34%), and subclavian-carotid bypass in 6 cases (11%). There were three perioperative strokes with one resulting in death for a perioperative stroke and death rate of 5.4%. One minor transient cranial nerve (CN IX) injury occurred. Mean follow-up was 29 months (range, 1-116 months). Life-table freedom from stroke was 95% at 1 year and 90% at 5 years. Recurrent stenosis (≥80%) developed in three patients (5.4%) during follow-up, including one internal carotid artery occlusion. Two patients (3.6%) underwent repeat surgery. Repeat surgery for recurrent cerebrovascular disease following carotid endarterectomy is safe and
THEORETICAL CONCEPTIONS OF GEOGRAPHY TEACHERS
Eloy Montes Galbán
2007-11-01
The main goal of this research was to determine the theoretical conceptions currently held by third-stage basic education geography teachers. A non-experimental descriptive study was carried out. Data were collected through a semi-structured questionnaire. The population comprised the teachers working at the national schools located in the Raul Leoni and Cacique Mara parishes of Maracaibo city, Zulia State. There is no clarity regarding the correct handling of the different geographic currents of thought, and the slight notion teachers do have leans towards a traditional, descriptive, retrospective, memory-based conception.
Theoretical studies of combustion dynamics
Bowman, J.M. [Emory Univ., Atlanta, GA (United States)
1993-12-01
The basic objectives of this research program are to develop and apply theoretical techniques to fundamental dynamical processes of importance in gas-phase combustion. There are two major areas currently supported by this grant. One is reactive scattering of diatom-diatom systems, and the other is the dynamics of complex formation and decay based on L² methods. In all of these studies, the authors focus on systems that are of interest experimentally, and for which potential energy surfaces based, at least in part, on ab initio calculations are available.
Interconnection policy: a theoretical survey
César Mattos
2003-01-01
This article surveys the theoretical foundations of interconnection policy. The need for an interconnection policy should not be taken for granted in all circumstances, even considering the issue of network externalities. On the other hand, when it is required, an encompassing interconnection policy is usually justified. We provide an overview of the theory of interconnection pricing, which results in several different prescriptions depending on which problem the regulator aims to address. We also present a survey of the literature on two-way interconnection.
Machine learning a theoretical approach
Natarajan, Balas K
2014-01-01
This is the first comprehensive introduction to computational learning theory. The author's uniform presentation of fundamental results and their applications offers AI researchers a theoretical perspective on the problems they study. The book presents tools for the analysis of probabilistic models of learning, tools that crisply classify what is and is not efficiently learnable. After a general introduction to Valiant's PAC paradigm and the important notion of the Vapnik-Chervonenkis dimension, the author explores specific topics such as finite automata and neural networks. The presentation
Stoustrup, Jakob; Niemann, H.
2002-01-01
This paper presents a range of optimization-based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem setup introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include: (1) fault diagnosis (fault estimation, FE) for systems with model uncertainties; (2) FE for systems with parametric faults; and (3) FE for a class of nonlinear systems.
Information-theoretic model selection applied to supernovae data
Biesiada, M
2007-01-01
There are several different theoretical ideas invoked to explain dark energy, with relatively little guidance as to which one of them might be right. Therefore the emphasis of ongoing and forthcoming research in this field is shifting from estimating specific parameters of a cosmological model to model selection. In this paper we apply an information-theoretic model selection approach based on the Akaike criterion as an estimator of the Kullback-Leibler entropy. In particular, we present the proper way of ranking the competing models based on Akaike weights (in Bayesian language, the posterior probabilities of the models). Out of many particular models of dark energy we focus on four: quintessence, quintessence with a time-varying equation of state, the brane-world model and the generalized Chaplygin gas model, and test them on Riess' Gold sample. As a result we obtain that the best model, in terms of the Akaike criterion, is the quintessence model. The odds suggest that although there exist differences in the support given to specific scenario...
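The Akaike-weight ranking described in this abstract can be sketched as follows (a minimal illustration with invented toy log-likelihoods, not the paper's supernovae fits; the function and variable names are ours):

```python
import math

def akaike_weights(log_likelihoods, n_params):
    """Rank models by Akaike weights: AIC_i = -2 ln L_i + 2 k_i,
    w_i = exp(-Delta_i / 2) / sum_j exp(-Delta_j / 2),
    where Delta_i = AIC_i - min_j AIC_j."""
    aic = [-2.0 * ll + 2.0 * k for ll, k in zip(log_likelihoods, n_params)]
    best = min(aic)
    rel = [math.exp(-(a - best) / 2.0) for a in aic]  # relative likelihoods
    total = sum(rel)
    return [r / total for r in rel]

# Toy example: four hypothetical dark-energy models with differing
# fit quality (log L) and numbers of free parameters (k)
weights = akaike_weights(
    log_likelihoods=[-90.1, -89.8, -91.5, -92.0],
    n_params=[1, 2, 2, 3],
)
```

The model with the largest weight is best supported by the data; ratios of weights give the "odds" between scenarios mentioned in the abstract.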
A combined crossed molecular beams and theoretical study of the reaction CN + C{sub 2}H{sub 4}
Balucani, Nadia, E-mail: nadia.balucani@unipg.it [Dipartimento di Chimica, Biologia e Biotecnologie, Università degli Studi di Perugia, Perugia (Italy); Leonori, Francesca; Petrucci, Raffaele [Dipartimento di Chimica, Biologia e Biotecnologie, Università degli Studi di Perugia, Perugia (Italy); Wang, Xingan [Dipartimento di Chimica, Biologia e Biotecnologie, Università degli Studi di Perugia, Perugia (Italy); Department of Chemical Physics, University of Science and Technology of China, Hefei 230026 (China); Casavecchia, Piergiorgio [Dipartimento di Chimica, Biologia e Biotecnologie, Università degli Studi di Perugia, Perugia (Italy); Skouteris, Dimitrios [Scuola Normale Superiore, Pisa (Italy); Albernaz, Alessandra F. [Instituto de Física, Universidade de Brasília, Brasília (Brazil); Gargano, Ricardo [Instituto de Física, Universidade de Brasília, Brasília (Brazil); Departments of Chemistry and Physics, University of Florida, Quantum Theory Project, Gainesville, FL 32611 (United States)
2015-03-01
Highlights: • The CN + C₂H₄ reaction was investigated in crossed beam experiments. • Electronic structure calculations of the potential energy surface were performed. • RRKM estimates qualitatively reproduce the experimental C₂H₃NC yield. - Abstract: The CN + C₂H₄ reaction has been investigated experimentally, in crossed molecular beam (CMB) experiments at a collision energy of 33.4 kJ/mol, and theoretically, by electronic structure calculations of the relevant potential energy surface and Rice-Ramsperger-Kassel-Marcus (RRKM) estimates of the product branching ratio. Unlike previous CMB experiments at lower collision energies, but similarly to a high-energy study, we have some indication that a second reaction channel is open at this collision energy, the characteristics of which are consistent with the channel leading to CH₂CHNC + H. The RRKM estimates using M06L electronic structure calculations qualitatively support the experimental observation of C₂H₃NC formation at this and at the higher collision energy of 42.7 kJ/mol of previous experiments.
3rd Joint Dutch-Brazil School on Theoretical Physics
2015-01-01
The Joint Dutch-Brazil School on Theoretical Physics is now in its third edition with previous schools in 2007 and 2011. This edition of the school will feature minicourses by Nima Arkani-Hamed (IAS Princeton), Jan de Boer (University of Amsterdam) and Cumrun Vafa (Harvard University), as well as student presentations. The school is jointly organized with the Dutch Research School of Theoretical Physics (DRSTP) and is intended for graduate students and researchers in the field of high-energy theoretical physics. There is no registration fee and limited funds are available for local and travel support of participants. This school in São Paulo will be preceded by the XVIII J. A. Swieca School in Campos de Jordão.
Health promotion with adolescents: examining theoretical perspectives to guide research.
Montgomery, Kristen S
2002-01-01
A guiding theoretical framework in research serves not only to guide a single research study, but also to link previous and future research that is guided by the same framework. Existing theoretical perspectives appropriate for use with adolescent health promotion research were reviewed. Instead of randomly selecting several theories for comparison, an intensive review of the literature was conducted to identify which theories were most commonly used with adolescent health promotion research. The results of this review revealed some interesting and noteworthy information regarding the state of theory use in adolescent health research for the last decade. Information is provided on theoretical perspectives by journal and year of publication. Trends are analyzed so that nurses can evaluate the current state of the science. Social cognitive theory (Bandura, 1986), the health belief model (Becker, 1978), and the health promotion model (Pender, 1996) emerged as the most significant theories for adolescent health promotion research and thus are discussed at the end of the article.
CO2 storage capacity estimation: Methodology and gaps
Bachu, S.; Bonijoly, D.; Bradshaw, J.; Burruss, R.; Holloway, S.; Christensen, N.P.; Mathiassen, O.M.
2007-01-01
Implementation of CO2 capture and geological storage (CCGS) technology at the scale needed to achieve a significant and meaningful reduction in CO2 emissions requires knowledge of the available CO2 storage capacity. CO2 storage capacity assessments may be conducted at various scales, in decreasing order of size and increasing order of resolution: country, basin, regional, local and site-specific. Estimation of the CO2 storage capacity in depleted oil and gas reservoirs is straightforward and is based on recoverable reserves, reservoir properties and in situ CO2 characteristics. In the case of CO2-EOR, the CO2 storage capacity can be roughly evaluated on the basis of worldwide field experience or more accurately through numerical simulations. Determination of the theoretical CO2 storage capacity in coal beds is based on coal thickness, CO2 adsorption isotherms, and recovery and completion factors. Evaluation of the CO2 storage capacity in deep saline aquifers is very complex because four trapping mechanisms that act at different rates are involved and, at times, all mechanisms may be operating simultaneously. The level of detail and resolution required in the data make reliable and accurate estimation of CO2 storage capacity in deep saline aquifers practical only at the local and site-specific scales. This paper follows a previous one on issues and development of standards for CO2 storage capacity estimation, and provides a clear set of definitions and methodologies for the assessment of CO2 storage capacity in geological media. Notwithstanding the defined methodologies suggested for estimating CO2 storage capacity, major challenges lie ahead because of the lack of data, particularly for coal beds and deep saline aquifers, the lack of knowledge about the coefficients that reduce storage capacity from theoretical to effective and to practical, and the lack of knowledge about the interplay between the various trapping mechanisms at work in deep saline aquifers.
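For the depleted-reservoir case, the "straightforward" estimate described above amounts to converting produced reserves back to reservoir-condition volume and filling that vacated volume with CO2 at in-situ density. A minimal sketch under that simplifying assumption (the function name and inputs are illustrative, not the paper's notation, and real assessments apply further reduction coefficients):

```python
def co2_capacity_depleted_oil_reservoir(recoverable_oil_m3, fvf, rho_co2_kg_per_m3):
    """Theoretical CO2 storage mass for a depleted oil reservoir:
    surface-condition recoverable oil volume is converted to
    reservoir conditions via the formation volume factor (FVF),
    then multiplied by the in-situ CO2 density."""
    reservoir_volume_m3 = recoverable_oil_m3 * fvf
    return reservoir_volume_m3 * rho_co2_kg_per_m3  # mass of CO2 in kg

# e.g. 1e8 m^3 of recoverable oil, FVF 1.2, CO2 density ~650 kg/m^3 at depth
mass_kg = co2_capacity_depleted_oil_reservoir(1e8, 1.2, 650.0)
```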
Efficiency of the estimate refinement method for polyhedral approximation of multidimensional balls
Kamenev, G. K.
2016-05-01
The estimate refinement method for the polyhedral approximation of convex compact bodies is analyzed. When applied to convex bodies with a smooth boundary, this method is known to generate polytopes with an optimal order of growth of the number of vertices and facets depending on the approximation error. In previous studies, for the approximation of a multidimensional ball, the convergence rates of the method were estimated in terms of the number of faces of all dimensions and the cardinality of the facial structure (the norm of the f-vector) of the constructed polytope was shown to have an optimal rate of growth. In this paper, the asymptotic convergence rate of the method with respect to faces of all dimensions is compared with the convergence rate of best approximation polytopes. Explicit expressions are obtained for the asymptotic efficiency, including the case of low dimensions. Theoretical estimates are compared with numerical results.
An estimation method for InSAR interferometric phase combined with image auto-coregistration
LI Hai; LI Zhenfang; LIAO Guisheng; BAO Zheng
2006-01-01
In this paper we propose a method to estimate the InSAR interferometric phase of steep terrain, based on a local-plane terrain model, using the joint subspace projection technique proposed in our previous paper. The method takes advantage of the coherence information of neighboring pixel pairs to auto-coregister the SAR images and employs the projection of the joint signal subspace onto the corresponding joint noise subspace to estimate the terrain interferometric phase. The method can auto-coregister the SAR images and reduce the interferometric phase noise simultaneously. Theoretical analysis and computer simulation results show that the method can provide an accurate estimate of the interferometric phase (interferogram) of very steep terrain even if the coregistration error reaches one pixel. The effectiveness of the method is verified with simulated and real data.
Bounds and self-consistent estimates of the elastic constants of polycrystals
Kube, Christopher M.; Arguelles, Andrea P.
2016-10-01
The Hashin-Shtrikman bounds on the elastic constants have been previously calculated for polycrystalline materials with crystallites having general elastic symmetry (triclinic crystallite symmetry). However, the calculation of tighter bounds and the self-consistent estimates of these elastic constants has remained unsolved. In this paper, a general theoretical expression for the self-consistent elastic constants is formulated. An iterative method is used to solve the expression for the self-consistent estimates. Each iteration of the solution gives the next tighter set of bounds including the well-known Voigt-Reuss and Hashin-Shtrikman bounds. Thus, all of the bounds on the elastic constants and the self-consistent estimates for any crystallite symmetry are obtained in a single, computationally efficient procedure. The bounds and self-consistent elastic constants are reported for several geophysical materials having crystallites of monoclinic and triclinic symmetries.
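For the cubic special case, the outermost (Voigt-Reuss) bounds mentioned in this abstract have simple closed forms, which bracket the self-consistent estimate. A sketch for an untextured polycrystal with cubic crystallites (our own helper function; the paper's iterative scheme for triclinic symmetry is not reproduced here):

```python
def voigt_reuss_cubic(c11, c12, c44):
    """Voigt (uniform-strain, upper) and Reuss (uniform-stress, lower)
    bounds on the effective shear modulus of a polycrystal with cubic
    crystallites; the bulk modulus is exact for cubic symmetry."""
    k = (c11 + 2.0 * c12) / 3.0
    g_voigt = (c11 - c12 + 3.0 * c44) / 5.0
    g_reuss = 5.0 * (c11 - c12) * c44 / (4.0 * c44 + 3.0 * (c11 - c12))
    return k, g_voigt, g_reuss

# Copper single-crystal constants in GPa: c11 = 168.4, c12 = 121.4, c44 = 75.4
k, g_v, g_r = voigt_reuss_cubic(168.4, 121.4, 75.4)
```

Any self-consistent estimate of the shear modulus must lie between g_r and g_v; the Hashin-Shtrikman bounds fall strictly inside this interval.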
Sandar Tin Tin
BACKGROUND: It is known that experience of a previous crash is related to the incidence of future crashes in a cohort of New Zealand cyclists. This paper investigated whether the strength of this association differed by crash involvement propensity and by the need for medical care in the previous crash. METHODS: The Taupo Bicycle Study involved 2,590 adult cyclists recruited in 2006 and followed over a median period of 4.6 years through linkage to four national databases. Crash involvement propensity was estimated using propensity scores based on the participants' demographic, cycling and residential characteristics. Cox regression modelling for repeated events was performed with multivariate and propensity score adjustments. Analyses were then stratified by quintiles of the propensity score. RESULTS: A total of 801 (31.0%) participants reported having experienced at least one bicycle crash in the twelve months prior to the baseline survey. They had a higher risk of experiencing crash events during follow-up (hazard ratio (HR): 1.43; 95% CI: 1.28, 1.60), but in the stratified analysis this association was significant only in the highest two quintiles of the propensity score, where the likelihood of having experienced a crash was more than 33%. The association was stronger for previous crashes that had received medical care (HR: 1.63; 95% CI: 1.41, 1.88) than for those that had not (HR: 1.30; 95% CI: 1.14, 1.49). CONCLUSIONS: Previous crash experience increased the risk of future crash involvement in high-risk cyclists, and the association was stronger for previous crashes attended medically. What distinguishes the high-risk group warrants closer investigation, and the findings also indicate that health service providers could play an important role in the prevention of bicycle crash injuries.
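The propensity-score stratification used in this study can be illustrated with a minimal quintile-cutting helper (the names are ours; the propensity model and the Cox regression themselves are not reproduced):

```python
from statistics import quantiles

def quintile_strata(propensity_scores):
    """Assign each subject to one of five strata (0-4) by the quintiles
    of the estimated propensity score, so that associations can be
    examined within strata of similar crash involvement propensity."""
    cuts = quantiles(propensity_scores, n=5)  # four interior cut points
    return [sum(score > c for c in cuts) for score in propensity_scores]

# Toy scores: 100 evenly spread values in (0, 1)
scores = [(i + 0.5) / 100 for i in range(100)]
strata = quintile_strata(scores)
```

Each stratum then receives its own comparison of previously crashed versus crash-free cyclists, which is how the study isolated the effect in the highest-propensity quintiles.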
The European Theoretical Spectroscopy Facility
Godby, Rex
2007-03-01
The ETSF (www.etsf.eu) is being created as a permanent output of the EU-funded Nanoquanta Network of Excellence (www.nanoquanta.eu, 2004-8), which joins 10 groups and over 100 researchers in research on the theory and simulation of spectroscopy of electrons in matter, and related excited-state electronic properties including quantum transport. The ETSF is intended to contribute significantly to nanoscience and nanotechnology through the development and application of theoretical spectroscopy, involving close collaboration between theorists (the existing Nanoquanta groups together with further theoretical groups) and a new community of experimental and industrial researchers who wish to apply modern theories of spectroscopy. In this talk I shall review some of the scientific output of the project so far, including the development of new ideas and techniques in many-body perturbation theory and time-dependent density-functional theory, and their application to a variety of prototype and actual systems including quantum transport in nanostructures, optical absorption in biological molecules and advanced materials, optical properties of nanoclusters and nanotubes, non-linear optical response, and spectroscopies of complex surfaces. I shall also briefly describe the network's integration activities, including code interoperability and modularity, training of internal and external researchers, and the legal, financial and organizational preparations for the ETSF.
Theoretical approaches to superionic conductivity
C S Sunandana; P Senthil Kumar
2004-02-01
Recent theoretical approaches to the understanding of superionic conductivity in polycrystalline, glassy and polymeric materials are briefly reviewed. Phase transitions to the superionic conducting state in the AgI family are apparently triggered by cluster formation and strong mobile-ion interaction within the clusters. Anomalous conductivity and related physical properties are explained in the cluster-induced distortion model. Ionic composites such as AgX : Al2O3 (X = Cl, Br and I) involve conducting and non-conducting phases and the all-important interface between the two, whose space charge enhances the conductivity and also triggers phase transitions to exotic polymorphic phases, for which the mechanisms are yet to be explored. Ion hopping dynamics controls the conductivity of superionic glasses. Mode coupling and jump relaxation theories account for the non-Debye relaxation observed in the a.c. conductivity of these glasses. The theory of conductivity in polymer electrolytes, still in its infancy, involves their complex structure and glass transition behaviour. Preparative and thermal history, composition and crystallinity control ionic conductivity. New approaches to the synthesis of optimal polymer electrolytes, such as rubbery electrolytes, crystalline polymers and nanocomposites, must be considered before a comprehensive theoretical understanding can be achieved.
Theoretical perspectives on narrative inquiry.
Emden, C
1998-04-01
Narrative inquiry is gaining momentum in the field of nursing. As a research approach it does not have any single heritage of methodology and its practitioners draw upon diverse sources of influence. Central to all narrative inquiry however, is attention to the potential of stories to give meaning to people's lives, and the treatment of data as stories. This is the first of two papers on the topic and addresses the theoretical influences upon a particular narrative inquiry into nursing scholars and scholarship. The second paper, Conducting a narrative analysis, describes the actual narrative analysis as it was conducted in this same study. Together, the papers provide sufficient detail for others wishing to pursue a similar approach to do so, or to develop the ideas and procedures according to their own way of thinking. Within this first theoretical paper, perspectives from Jerome Bruner (1987) and Wade Roof (1993) are outlined. These relate especially to the notion of stories as 'imaginative constructions' and as 'cultural narratives' and as such, highlight the profound importance of stories as being individually and culturally meaningful. As well, perspectives on narrative inquiry from nursing literature are highlighted. Narrative inquiry in this instance lies within the broader context of phenomenology.
Theoretical perspectives on strange physics
Ellis, J.
1983-04-01
Kaons are heavy enough to have an interesting range of decay modes available to them, and light enough to be produced in sufficient numbers to explore rare modes with satisfying statistics. Kaons and their decays have provided at least two major breakthroughs in our knowledge of fundamental physics. They have revealed to us CP violation, and their lack of flavor-changing neutral interactions warned us to expect charm. In addition, K0-anti-K0 mixing has provided us with one of our most elegant and sensitive laboratories for testing quantum mechanics. There is every reason to expect that future generations of kaon experiments with intense sources would add further to our knowledge of fundamental physics. This talk attempts to set future kaon experiments in a general theoretical context, and indicate how they may bear upon fundamental theoretical issues. A survey is given of different experiments which could be done with an Intense Medium Energy Source of Strangeness, including rare K decays, probes of the nature of CP violation, mu decays, hyperon decays and neutrino physics.
Sourcing quality-of-life weights obtained from previous studies: theory and reality in Korea.
Bae, SeungJin; Bae, Eun Young; Lim, Sang Hee
2014-01-01
The quality-of-life weights obtained in previous studies are frequently used in cost-utility analyses. The purpose of this study is to describe how the values obtained in previous studies are incorporated into the industry submissions requesting listing at the Korean National Health Insurance (NHI), focusing on the issues discussed in theoretical studies and national guidelines. The industry submissions requesting listing at the Korean NHI from January 2007 until December 2009 were evaluated by two independent researchers at the Health Insurance Review and Assessment Service (HIRA). Specifically, we observed the methods that were used to pool, predict joint health state utilities, and retain consistency within submissions in terms of the issues discussed in methodological research papers and recommendations from national guidelines. More than half of the submissions used QALY as an outcome measure, and most of these submissions were sourced from prior studies. Heterogeneous methodologies were frequently used within a submission, with the inconsistent use of upper and lower anchors being prevalent. Assumptions behind measuring joint health state utilities or pooling multiple values for single health states were omitted in all submissions. Most national guidelines were rather vague regarding how to predict joint health states, how to select the best available value, how to maintain consistency within a submission, and how to generalize values obtained from prior studies. Previously-generated values were commonly sourced, but this practice was frequently related to inconsistencies within and among submissions. Attention should be paid to the consistency and transparency of the value, especially if the value is sourced from prior studies.
Milky Way Past Was More Turbulent Than Previously Known
2004-04-01
Results of 1001 observing nights shed new light on our Galaxy [1] Summary A team of astronomers from Denmark, Switzerland and Sweden [2] has achieved a major breakthrough in our understanding of the Milky Way, the galaxy in which we live. After more than 1,000 nights of observations spread over 15 years, they have determined the spatial motions of more than 14,000 solar-like stars residing in the neighbourhood of the Sun. For the first time, the changing dynamics of the Milky Way since its birth can now be studied in detail and with a stellar sample sufficiently large to allow a sound analysis. The astronomers find that our home galaxy has led a much more turbulent and chaotic life than previously assumed. PR Photo 10a/04: Distribution on the sky of the observed stars. PR Photo 10b/04: Stars in the solar neighbourhood and the Milky Way galaxy (artist's view). PR Video Clip 04/04: The motions of the observed stars during the past 250 million years. Unknown history Home is the place we know best. But not so in the Milky Way - the galaxy in which we live. Our knowledge of our nearest stellar neighbours has long been seriously incomplete and - worse - skewed by prejudice concerning their behaviour. Stars were generally selected for observation because they were thought to be "interesting" in some sense, not because they were typical. This has resulted in a biased view of the evolution of our Galaxy. The Milky Way started out just after the Big Bang as one or more diffuse blobs of gas of almost pure hydrogen and helium. With time, it assembled into the flattened spiral galaxy which we inhabit today. Meanwhile, generation after generation of stars were formed, including our Sun some 4,700 million years ago. But how did all this really happen? Was it a rapid process? Was it violent or calm? When were all the heavier elements formed? How did the Milky Way change its composition and shape with time? Answers to these and many other questions are 'hot' topics for the
Knudsen, Torben
2014-01-01
Dynamic inflow is an effect which is normally not included in the models used for wind turbine control design. Therefore, potential improvement from including this effect exists. The objective in this project is to improve the methods previously developed for this and especially to verify...... the results using full-scale wind turbine data. The previously developed methods were based on extended Kalman filtering. This method has several drawbacks compared to unscented Kalman filtering, which has therefore been developed. The unscented Kalman filter was first tested on linear and non-linear test cases...... which was successful. Then estimation of a wind turbine state including dynamic inflow was tested on a simulated NREL 5MW turbine. This worked perfectly with wind speeds from low to nominal wind speed, as the output prediction errors were white. In high wind where the pitch actuator...
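The contrast drawn in this record between extended and unscented Kalman filtering comes down to how a Gaussian is pushed through a nonlinearity: the extended filter linearizes, while the unscented filter propagates sigma points. A minimal sketch of the scalar unscented transform (the function and numbers are purely illustrative, not the turbine model from this project):

```python
import numpy as np

def unscented_transform(f, mu, sigma, kappa=2.0):
    """Propagate a scalar Gaussian N(mu, sigma^2) through f
    using three sigma points (n = 1)."""
    n = 1
    spread = sigma * np.sqrt(n + kappa)
    points = np.array([mu, mu + spread, mu - spread])
    weights = np.array([kappa / (n + kappa),
                        0.5 / (n + kappa),
                        0.5 / (n + kappa)])
    return float(weights @ f(points))

mu, sigma = 0.5, 0.5

# Exact mean of sin(X) for X ~ N(mu, sigma^2): sin(mu) * exp(-sigma^2 / 2)
exact = np.sin(mu) * np.exp(-sigma**2 / 2)
ekf_style = np.sin(mu)                       # linearization ignores curvature
ukf_style = unscented_transform(np.sin, mu, sigma)

print(abs(ekf_style - exact), abs(ukf_style - exact))  # UT error is far smaller
```

The unscented estimate captures the curvature-induced shift of the mean that the linearized (EKF-style) propagation misses entirely, which is one of the drawbacks alluded to above.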
Generalized estimating equations
Hardin, James W
2013-01-01
Generalized Estimating Equations, Second Edition updates the best-selling previous edition, which has been the standard text on the subject since it was published a decade ago. Combining theory and application, the text provides readers with a comprehensive discussion of GEE and related models. Numerous examples are employed throughout the text, along with the software code used to create, run, and evaluate the models being examined. Stata is used as the primary software for running and displaying modeling output; associated R code is also given to allow R users to replicate the examples.
Electromechanical properties of smart aggregate: theoretical modeling and experimental validation
Wang, Jianjun; Kong, Qingzhao; Shi, Zhifei; Song, Gangbing
2016-09-01
Smart aggregate (SA), as a piezoceramic-based multi-functional device, is formed by sandwiching two lead zirconate titanate (PZT) patches with copper shielding between a pair of solid-machined cylindrical marble blocks with epoxy. Previous research has successfully demonstrated the capability and reliability of versatile SAs to monitor the structural health of concrete structures. However, previous works concentrated mainly on the applications of SAs in structural health monitoring; no adequate theoretical model of SAs had been proposed. In this paper, the electromechanical properties of SAs were investigated using a proposed theoretical model. Based on the one-dimensional linear theory of piezo-elasticity, the dynamic solutions of a SA subjected to an external harmonic voltage were solved. Further, the electric impedance of the SA was computed, and the resonance and anti-resonance frequencies were calculated based on the derived equations. Numerical analysis was conducted to discuss the effects of the thickness of the epoxy layer and the dimension of the PZT patch on the fundamental resonance and anti-resonance frequencies, as well as the corresponding electromechanical coupling factor. The dynamic solutions based on the proposed theoretical model were further verified experimentally with two SA samples. The fundamental resonance and anti-resonance frequencies of SAs show good agreement between the theoretical and experimental results. The presented analysis and results contribute to the overall understanding of SA properties and help to optimize the working frequencies of SAs in structural health monitoring of civil structures.
Theory of Distribution Estimation of Hyperparameters in Markov Random Field Models
Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato
2016-06-01
We investigated the performance of distribution estimation of hyperparameters in Markov random field models proposed by Nakanishi-Ohno et al. [J. Phys. A 47, 045001 (2014); http://doi.org/10.1088/1751-8113/47/4/045001] when used to evaluate the confidence of data. We analytically calculated the configurational average, with respect to data, of the negative logarithm of the posterior distribution, which is called free energy based on an analogy with statistical mechanics. This configurational average of free energy shrinks as the amount of data increases. Our results theoretically confirm the numerical results from that previous study.
Bauduin, Sophie; Clarisse, Lieven; Theunissen, Michael; George, Maya; Hurtmans, Daniel; Clerbaux, Cathy; Coheur, Pierre-François
2017-03-01
Separating concentrations of carbon monoxide (CO) in the boundary layer from the rest of the atmosphere with nadir satellite measurements is of particular importance to differentiate emission from transport. Although thermal infrared (TIR) satellite sounders are considered to have limited sensitivity to the composition of the near-surface atmosphere, previous studies show that they can provide information on CO close to the ground in case of high thermal contrast. In this work we investigate the capability of IASI (Infrared Atmospheric Sounding Interferometer) to retrieve near-surface CO concentrations, and we quantitatively assess the influence of thermal contrast on such retrievals. We present a 3-part analysis, which relies on both theoretical forward simulations and retrievals on real data, performed for a large range of negative and positive thermal contrast situations. First, we derive theoretically the IASI detection threshold of CO enhancement in the boundary layer, and we assess its dependence on thermal contrast. Then, using the optimal estimation formalism, we quantify the role of thermal contrast on the error budget and information content of near-surface CO retrievals. We demonstrate that, contrary to what is usually accepted, large negative thermal contrast values (ground cooler than air) lead to a better decorrelation between CO concentrations in the low and the high troposphere than large positive thermal contrast (ground warmer than the air). In the last part of the paper we use Mexico City and Barrow as test cases to contrast our theoretical predictions with real retrievals, and to assess the accuracy of IASI surface CO retrievals through comparisons to ground-based in-situ measurements.
Previous Land Use and Invasive Species Impacts on Long-term Afforestation Success
Joshua B. Nickelson
2015-09-01
The conversion of agricultural lands to forests has increased worldwide over the past few decades for multiple reasons, including increasing forest connectivity and wildlife habitat. However, previous land cover and competing vegetation often impede afforestation. We established 219 plots in 29 Quercus plantations on four previous land cover types (LCT): Clover, Soybeans, Woody Brush, and Herbaceous Weeds. Plantations were located in Illinois, USA and were sampled 15–18 years after planting. Sampling data for all trees (planted and volunteer) included species, diameter, and vine presence on the main bole of the tree. Free-to-grow status was recorded for all Quercus species, and estimated cover of two invasive species, Elaeagnus umbellata and Lonicera japonica, was documented on each plot. There was a strong relationship between total tree density and invasive species cover across all sites. Stocking success was lower and E. umbellata cover was higher on Woody Brush sites compared to Clover and Soybean cover types. Additionally, significantly more free-to-grow Quercus saplings occurred in Clover and Soybean cover types compared to the Woody Brush sites. The results indicate that previous land cover plays a critical role in afforestation success. Furthermore, while volunteer tree species were historically thought to be detrimental to the development of planted species, these results suggest that with the increasing prevalence of invasive species worldwide, the role of volunteer species in afforestation should be reconsidered and silvicultural protocols adjusted accordingly.
Theoretical Dipole Moment for the X²Π State of NO
Langhoff, Stephen R.; Bauschlicher, Charles W., Jr.; Partridge, Harry; Arnold, James O. (Technical Monitor)
1994-01-01
The dipole moment function for the X²Π state of NO is studied as a function of the completeness in both the one- and n-particle spaces. Einstein coefficients are presented that are significantly more accurate than previous tabulations for the higher vibrational levels. The theoretical values give considerable insight into the limitations of recently published ratios of Einstein coefficients measured by spectrally resolved infrared chemiluminescence.
Mauro Ceccanti
2011-06-01
Objective: To determine the population-based epidemiology of fetal alcohol syndrome (FAS) and other fetal alcohol spectrum disorders (FASD) in towns representative of the general population of central Italy. Methods: Slightly revised U.S. Institute of Medicine diagnostic methods were used among children in randomly-selected schools near Rome. Consented first grade children (n = 976) were screened in Tier I for height, weight, or head circumference and all children
THEORETICAL ASPECTS OF FILMMUSIC STUDY
Egorova Tatiana K.
2014-04-01
In this article, the author analyzes theoretical aspects of film music study in light of the modern realities of the world film process and attempts a scientific understanding of them. Innovation in this area is long overdue, because the existing literature on the topic no longer reflects the new aesthetic and practical achievements and innovations in film music of the XXI century. A number of new terms and concepts related to the phenomenon of music in the screen arts likewise require adjustment, as their range of application is not yet fully defined. The author offers her own version of their content-semantic interpretation (largely experimental), designed to promote new research methods for the study of film music.
Theoretical motions of hydrofoil systems
Imlay, Frederick H
1948-01-01
Results are presented of an investigation that has been undertaken to develop theoretical methods of treating the motions of hydrofoil systems and to determine some of the important parameters. Variations of parameters include three distributions of area between the hydrofoils, two rates of change of downwash angle with angle of attack, three depths of immersion, two dihedral angles, two rates of change of lift with immersion, three longitudinal hydrofoil spacings, two radii of gyration in pitching, and various horizontal and vertical locations of the center of gravity. Graphs are presented to show locations of the center of gravity for stable motion, values of the stability roots, and motions following the sudden application of a vertical force or a pitching moment to the hydrofoil system for numerous sets of values of the parameters.
Dental Photothermal Radiometry: Theoretical Analysis.
Matvienko, Anna; Jeon, Raymond; Mandelis, Andreas; Abrams, Stephen
2007-03-01
Dental enamel demineralization in its early stages is very difficult to detect with conventional x-rays or visual examination. High-resolution techniques, such as scanning electron microscopy, usually require destruction of the tooth. Photothermal Radiometry (PTR) was recently applied as a safe, non-destructive, and highly sensitive tool for the detection of early dental demineralization, artificially created on the enamel surface. The experiments showed very high sensitivity of the measured signal to incipient changes in the surface structure, emphasizing the clinical capabilities of the method. In order to analyze the biothermophotonic phenomena in a tooth sample during photothermal excitation, a theoretical model featuring coupled diffuse-photon-density-wave and thermal-wave fields was developed. Numerical simulations identified the effects on the PTR signal of changes in optical and thermal properties of enamel and dentin as a result of demineralization. The model predictions and experimental results will be compared and discussed.
Theoretical aspects of Chiral Dynamics
Leutwyler, H
2015-01-01
Many of the quantities of interest at the precision frontier in particle physics require a good understanding of the strong interaction at low energies. The present talk reviews the theoretical framework used in this context. In particular, I draw attention to the fact that applications of effective field theory methods in the low energy domain involve two different aspects: dependence of the quantities of interest on the quark masses and dependence on the momenta. While the lattice approach gives an excellent handle on the low energy constants that govern the quark mass dependence, the most efficient tool to pin down the momentum dependence is dispersion theory. At the same time, the dispersive analysis enlarges the energy range where the effective theory applies. In the meson sector, the interplay of the various sources of information has led to a coherent framework that describes the low energy structure at remarkably high resolution. The understanding of the low energy properties in the baryon sector is l...
Theoretical information reuse and integration
Rubin, Stuart
2016-01-01
Information Reuse and Integration addresses the efficient extension and creation of knowledge through the exploitation of Kolmogorov complexity in the extraction and application of domain symmetry. Knowledge which seems to be novel can more often than not be recast as the image of a sequence of transformations which yield symmetric knowledge. When the size of those transformations and/or the length of that sequence of transforms exceeds the size of the image, then that image is said to be novel or random. It may also be that the new knowledge is random in that no sequence of transforms producing it exists, or at least none is known. The nine chapters comprising this volume incorporate symmetry, reuse, and integration as overt operational procedures or as operations built into the formal representations of data and operators employed. Either way, the aforementioned theoretical underpinnings of information reuse and integration are supported.
Theoretical Models of Generalized Quasispecies.
Wagner, Nathaniel; Atsmon-Raz, Yoav; Ashkenasy, Gonen
2016-01-01
Theoretical modeling of quasispecies has progressed in several directions. In this chapter, we review the works of Emmanuel Tannenbaum, who, together with Eugene Shakhnovich at Harvard University and later with colleagues and students at Ben-Gurion University in Beersheva, implemented one of the more useful approaches, by progressively setting up various formulations for the quasispecies model and solving them analytically. Our review will focus on these papers that have explored new models, assumed the relevant mathematical approximations, and proceeded to analytically solve for the steady-state solutions and run stochastic simulations . When applicable, these models were related to real-life problems and situations, including changing environments, presence of chemical mutagens, evolution of cancer and tumor cells , mutations in Escherichia coli, stem cells , chromosomal instability (CIN), propagation of antibiotic drug resistance , dynamics of bacteria with plasmids , DNA proofreading mechanisms, and more.
Hash Functions and Information Theoretic Security
Bagheri, Nasour; Knudsen, Lars R.; Naderi, Majid; Thomsen, Søren S.
2009-01-01
Information theoretic security is an important security notion in cryptography as it provides a true lower bound for attack complexities. However, in practice attacks often have a higher cost than the information theoretic bound. In this paper we study the relationship between information theoretic attack costs and real costs. We show that in the information theoretic model, many well-known and commonly used hash functions such as MD5 and SHA-256 fail to be preimage resistant.
On the Performance of Principal Component Liu-Type Estimator under the Mean Square Error Criterion
Jibo Wu
2013-01-01
Wu (2013) proposed the principal component Liu-type estimator to overcome multicollinearity. This estimator is a general estimator which includes the ordinary least squares estimator, principal component regression estimator, ridge estimator, Liu estimator, Liu-type estimator, r-k class estimator, and r-d class estimator. In this paper, we first use a new method to derive the principal component Liu-type estimator; then we study the superiority of the new estimator under the scalar mean squared error criterion. Finally, we give a numerical example to illustrate the theoretical results.
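The Liu estimator nested in this family has the closed form b(d) = (X'X + I)^(-1)(X'X + dI) b_OLS, which reduces to OLS at d = 1 and shrinks the coefficients for d < 1. A hedged numpy sketch on simulated near-collinear data (the principal-component step of the full estimator is omitted here):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 4
# Nearly collinear design to mimic multicollinearity
z = rng.normal(size=(n, 1))
X = z + 0.05 * rng.normal(size=(n, p))
beta = np.array([1.0, 0.5, -0.5, 2.0])
y = X @ beta + rng.normal(size=n)

XtX = X.T @ X
beta_ols = np.linalg.solve(XtX, X.T @ y)

def liu(d: float) -> np.ndarray:
    """Liu estimator: (X'X + I)^-1 (X'X + d I) beta_OLS."""
    return np.linalg.solve(XtX + np.eye(p), (XtX + d * np.eye(p)) @ beta_ols)

# d = 1 recovers OLS exactly; d < 1 shrinks every eigencomponent
print(np.linalg.norm(liu(0.5)), np.linalg.norm(beta_ols))
```

In the eigenbasis of X'X each OLS component is scaled by (lambda + d)/(lambda + 1), which is below one for d < 1, so the shrinkage trades a little bias for a large variance reduction when X'X is ill-conditioned.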
Theoretical Foundations of Active Learning
2009-05-01
on the rate of convergence of the loss of an estimator, as a function of the number of labeled examples observed [e.g., Benedek and Itai, 1988...classes. In a personal communication, John Langford reported that he and Rui Castro determined such improvements are in fact achieved by A2 for the...1537, 2005. 2.8 G. Benedek and A. Itai. Learnability by fixed distributions. In Proc. of the First Workshop on Computational Learning Theory, pages 80
M-estimator for the 3D symmetric Helmert coordinate transformation
Chang, Guobin; Xu, Tianhe; Wang, Qianxin
2017-06-01
The M-estimator for the 3D symmetric Helmert coordinate transformation problem is developed. The small-angle rotation assumption is abandoned. The direction cosine matrix or the quaternion is used to represent the rotation. The 3 × 1 multiplicative error vector is defined to represent the rotation estimation error. An analytical solution can be employed to provide the initial approximation for the iteration, if the outliers are not large. The iteration is carried out using the iteratively reweighted least-squares scheme. In each iteration after the first one, the measurement equation is linearized using the available parameter estimates, the reweighting matrix is constructed using the residuals obtained in the previous iteration, and then the parameter estimates with their variance-covariance matrix are calculated. The influence functions of a single pseudo-measurement on the least-squares estimator and on the M-estimator are derived to theoretically show the robustness. In the solution process, the parameter is rescaled in order to improve the numerical stability. Monte Carlo experiments are conducted to check the developed method. Different cases are considered to investigate whether the assumed stochastic model is correct. The results with the simulated data slightly deviating from the true model are used to show the developed method's statistical efficacy at the assumed stochastic model, its robustness against deviations from the assumed stochastic model, and the validity of the estimated variance-covariance matrix whether or not the assumed stochastic model is correct.
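The iteratively reweighted least-squares scheme described above (weights rebuilt each pass from the previous iteration's residuals) can be sketched for a plain linear model with Huber weights; the rotation linearization and parameter rescaling of the actual Helmert problem are omitted, and all names and data are illustrative:

```python
import numpy as np

def irls_huber(A, b, k=1.345, iters=20):
    """Iteratively reweighted least squares with Huber weights.

    Each pass solves a weighted LS problem; the weights are rebuilt
    from the previous iteration's residuals, down-weighting outliers.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # plain LS as initial value
    for _ in range(iters):
        r = b - A @ x
        s = np.median(np.abs(r)) / 0.6745 + 1e-12       # robust scale estimate
        w = np.clip(k * s / (np.abs(r) + 1e-12), None, 1.0)  # Huber weights
        sw = np.sqrt(w)
        x = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]
    return x

rng = np.random.default_rng(2)
A = np.column_stack([np.ones(50), rng.uniform(0, 10, 50)])
b = A @ np.array([2.0, 3.0]) + rng.normal(scale=0.1, size=50)
b[:5] += 50.0  # gross outliers

print(irls_huber(A, b))  # close to the true [2.0, 3.0] despite outliers
```

As in the paper's scheme, observations whose residuals are small relative to the robust scale keep full weight, while gross outliers are progressively suppressed, which is what the influence-function analysis formalizes.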
A novel approach for absolute radar calibration: formulation and theoretical validation
C. Merker
2015-06-01
The theoretical framework of a novel approach for absolute radar calibration is presented and its potential analysed by means of synthetic data to lay out a solid basis for future practical application. The method presents the advantage of an absolute calibration with respect to the directly measured reflectivity, without needing a previously calibrated reference device. It requires a setup comprising three radars: two devices oriented towards each other, measuring reflectivity along the same horizontal beam and operating within a strongly attenuated frequency range (e.g. K or X band), and one vertical reflectivity and drop size distribution (DSD) profiler below this connecting line, which is to be calibrated. The absolute determination of the calibration factor is based on attenuation estimates. Using synthetic, smooth and geometrically idealised data, calibration is found to perform best using homogeneous precipitation events with rain rates high enough to ensure a distinct attenuation signal (reflectivity above ca. 30 dBZ). Furthermore, the choice of the interval width (in measuring range gates) around the vertically pointing radar, needed for attenuation estimation, is found to have an impact on the calibration results. Further analysis is done by means of synthetic data with realistic, inhomogeneous precipitation fields taken from measurements. A calibration factor is calculated for each considered case using the presented method. Based on the distribution of the calculated calibration factors, the most probable value is determined by estimating the mode of a fitted shifted logarithmic normal distribution function. After filtering the data set with respect to rain rate and inhomogeneity and choosing an appropriate length of the considered attenuation path, the estimated uncertainty of the calibration factor is of the order of 1 to 11 %, depending on the chosen interval width. Considering stability and accuracy of the method, an interval of
Estimation of probability densities using scale-free field theories.
Kinney, Justin B
2014-07-01
The question of how best to estimate a continuous probability density from finite data is an intriguing open problem at the interface of statistics and physics. Previous work has argued that this problem can be addressed in a natural way using methods from statistical field theory. Here I describe results that allow this field-theoretic approach to be rapidly and deterministically computed in low dimensions, making it practical for use in day-to-day data analysis. Importantly, this approach does not impose a privileged length scale for smoothness of the inferred probability density, but rather learns a natural length scale from the data due to the tradeoff between goodness of fit and an Occam factor. Open source software implementing this method in one and two dimensions is provided.
Phase shift estimation in interferograms with unknown phase step
Dalmau, Oscar; Rivera, Mariano; Gonzalez, Adonai
2016-08-01
We first present two closed formulas for computing the phase shift in interferograms with an unknown phase step. These formulas recover, in theory, the exact phase step in noise-free fringe patterns and require only the information at two pixels of the image. They also allow us to define a functional that yields an estimate of the phase step in interferograms corrupted by noise. In the experiments we use the standard least-squares formulation, which also yields a closed formula, although the general formulation admits a robust potential. We provide two possible implementations of our approach, one in which the sites can be randomly selected and the other in which we can scan the whole image. The experiments show that the proposed algorithm presents the best results compared with state-of-the-art algorithms.
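The paper's closed formulas are not reproduced here; as a hedged sketch of the underlying problem, a simple correlation-based estimate recovers an unknown phase step when the per-pixel phase is well spread, using the identity E[cos(phi) cos(phi + delta)] / E[cos^2(phi)] = cos(delta) for uniformly distributed phi:

```python
import numpy as np

rng = np.random.default_rng(3)
phi = rng.uniform(0.0, 2.0 * np.pi, 10000)   # unknown per-pixel phase
a, bmod, delta = 2.0, 1.0, 1.2               # background, modulation, true step

I1 = a + bmod * np.cos(phi)            # first interferogram
I2 = a + bmod * np.cos(phi + delta)    # second frame, shifted by the unknown step

# Remove the backgrounds; for uniformly distributed phase the ratio of
# cross- to auto-correlation converges to cos(delta)
t1 = I1 - I1.mean()
t2 = I2 - I2.mean()
ratio = (t1 * t2).sum() / (t1 * t1).sum()
delta_hat = np.arccos(np.clip(ratio, -1.0, 1.0))

print(delta_hat)  # close to the true step 1.2
```

This estimator only determines the step up to sign (arccos lands in [0, pi]) and degrades when the fringe phase does not cover the full cycle, limitations that the paper's two-pixel closed formulas and the least-squares functional are designed to address.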
Carlson, H. W.; Walkley, K. B.
1982-01-01
Numerical methods incorporated into a computer program to provide estimates of the subsonic aerodynamic performance of twisted and cambered wings of arbitrary planform with attainable thrust and vortex lift considerations are described. The computational system is based on a linearized theory lifting surface solution which provides a spanwise distribution of theoretical leading edge thrust in addition to the surface distribution of perturbation velocities. The approach used relies on a solution by iteration. The method also features a superposition of independent solutions for a cambered and twisted wing and a flat wing of the same planform to provide, at little additional expense, results for a large number of angles of attack or lift coefficients. A previously developed method is employed to assess the portion of the theoretical thrust actually attainable and the portion that is felt as a vortex normal force.
Resource exploitation and cross-region growth trajectories: nonparametric estimates for Chile.
Mainardi, Stefano
2007-10-01
As a sector of primary concern for national development strategies, mining keeps stimulating an intensive debate in Chile, regarding its role for long-term growth. Partly drawn on theoretical contributions to growth and mineral resource accounting, this analysis assesses patterns of economic growth across Chilean regions. The theoretical and methodological rationale for focusing on weak sustainability, by testing convergence across regions in a distribution dynamics perspective, is first discussed. This is followed by a brief review of policy issues and previous empirical findings of concern to Chile's mining and regional growth. Panel data over the period 1960-2001 are analysed, with growth measured in terms of both income per capita as such, and sustainable measures of this variable. Kernel density and quantile regression estimates indicate persistent bimodal (if not possibly trimodal) distribution of nationally standardised regional incomes per capita, whereby conditions for cross-region convergence are matched only within the inner range of this distribution.
Theoretical concept of credit risk management
Dragosavac Miloš
2014-01-01
Full Text Available With the development of the banking business and the economy, exposure to different types of risk becomes greater. Identifying all risks and taking adequate measures have become extremely important factors for business success in increasingly complex economic conditions. Over the last ten years, business risks have become a burning issue in debates among scientific experts. With the aim of stable development of their business and equal participation in a large competitive market, and primarily in order to protect their depositors and preserve system stability and liquidity, banks have to incorporate banking-risk strategies into their strategic goals. Credit risk carries great weight among the overall risks that accompany the business activity of banks, the economy, and other forms of business organization. Its nature and presence in all segments of business activity speak clearly to its importance and to the need for its management. The permanently growing trend of credit risk is a reality faced not only by banking organizations but also by subjects in the economic and non-economic sectors, which makes the issue of credit risk extremely important and relevant. The subject of this paper is a theoretical analysis of credit risk in the banking business. Banking operations are increasingly exposed to credit risk, which reflects the inability of banks to collect their claims based on previously approved loans, and this is the case in point for this specific research subject.
Theoretical Assessment of Norfloxacin Redox and Photochemistry
Musa, Klefah A. K.; Eriksson, Leif A.
2009-09-01
Norfloxacin, 1-ethyl-6-fluoro-1,4-dihydro-4-oxo-7-(1-piperazinyl)-3-quinolinecarboxylic acid (NOR), is an antibiotic drug from the fluoroquinolone family. The different protonation states of this drug formed across the pH range are studied by means of density functional theory (DFT), and the spectra of the NOR species are computed using time-dependent DFT. Details of their photochemistry are obtained by investigating the highest occupied and lowest unoccupied molecular orbitals. The predominant species at physiological pH, the zwitterion, is the most photolabile one, capable of producing singlet oxygen and/or superoxide radical anions from its triplet state. In addition, the main photodegradation step, defluorination, occurs more easily from this species than from the other forms. Defluorination from the excited triplet state requires passing a barrier of 16.3 kcal/mol in the case of the zwitterion. The neutral and cationic forms display higher transition barriers, whereas the reaction path of defluorination is completely endothermic for the anionic species. The theoretical results obtained herein are in line with previous experimental data.
Theoretical model of ``fuzz'' growth
Krasheninnikov, Sergei; Smirnov, Roman
2012-10-01
Recent, more detailed experiments on tungsten irradiation with low-energy helium plasma, relevant to the near-wall plasma conditions in a magnetic fusion reactor like ITER, demonstrated (e.g. see Ref. 1) a very dramatic change in both the surface morphology and the near-surface material structure of the samples. In particular, it was shown that long (mm-scale) and thin (nm-scale) fiber-like structures filled with nano-bubbles, so-called ``fuzz,'' start to grow. In this work a theoretical model of ``fuzz'' growth [2] describing the main features observed in experiments is presented. The model is based on the assumption that creep is enhanced in tungsten containing a significant fraction of helium atoms and clusters. The results of MD simulations [3] support this idea and demonstrate a strong reduction of the yield strength over the whole temperature range. They also show that the ``flow'' of tungsten strongly facilitates coagulation of helium clusters and the formation of nano-bubbles. [1] M. J. Baldwin, et al., J. Nucl. Mater. 390-391 (2009) 885; [2] S. I. Krasheninnikov, Physica Scripta T145 (2011) 014040; [3] R. D. Smirnov and S. I. Krasheninnikov, submitted to J. Nucl. Materials.
Climate Change: a Theoretical Review
Muhammad Ishaq-ur Rahman
2013-01-01
Full Text Available Climate Change has undoubtedly been the most prominent environmental issue since the late 20th century. But the discourse neither emerged only at that time, nor has it been problematized in the same way since its onset. The history of the Climate Change discourse reveals that it has turned from a purely scientific concern into a public agenda that is nowadays increasingly framed as a development problem. Each transformation has brought about a completely new paradigm. This article presents a theoretical analysis of the Climate Change discourse, capturing the underlying philosophy of the issue using Thomas Kuhn’s well-known thesis of ‘paradigm shift’. In particular, it discusses the crises that led the issue towards these transformations, and explores key perspectives around the crises and thus the representation of the issue in the environmental discourse over time. While this paper establishes that, with the beginning of the 21st century, the discourse entered a new paradigm and will reach a critical point by the end of 2012, it finally proposes some measures that the discourse might integrate with the existing ones to advance beyond that point.
Studies in theoretical particle physics
Kaplan, D.B.
1991-07-01
This proposal focuses on research on three distinct areas of particle physics: (1) Nonperturbative QCD. I intend to continue work on analytic modelling of nonperturbative effects in the strong interactions. I have been investigating the theoretical connection between the nonrelativistic quark model and QCD. The primary motivation has been to understand the experimental observation of nonzero matrix elements involving current strange quarks in ordinary matter -- which in the quark model has no strange quark component. This has led to my present work on understanding constituent (quark model) quarks as collective excitations of QCD degrees of freedom. (2) Weak Scale Baryogenesis. A continuation of work on baryogenesis in the early universe from weak interactions. In particular, an investigation of baryogenesis occurring during the weak phase transition through anomalous baryon violating processes in the standard model of weak interactions. (3) Flavor and Compositeness. Further investigation of a new mechanism that I recently discovered for dynamical mass generation for fermions, which naturally leads to a family hierarchy structure. A discussion of recent past work is found in the next section, followed by an outline of the proposed research. A recent publication from each of these three areas is attached to this proposal.
Research in Theoretical Particle Physics
Feldman, Hume A; Marfatia, Danny
2014-09-24
This document is the final report on activity supported under DOE Grant Number DE-FG02-13ER42024. The report covers the period July 15, 2013 – March 31, 2014. Faculty supported by the grant during the period were Danny Marfatia (1.0 FTE) and Hume Feldman (1% FTE). The grant partly supported University of Hawaii students, David Yaylali and Keita Fukushima, who are supervised by Jason Kumar. Both students are expected to graduate with Ph.D. degrees in 2014. Yaylali will be joining the University of Arizona theory group in Fall 2014 with a 3-year postdoctoral appointment under Keith Dienes. The group’s research covered topics subsumed under the Energy Frontier, the Intensity Frontier, and the Cosmic Frontier. Many theoretical results related to the Standard Model and models of new physics were published during the reporting period. The report contains brief project descriptions in Section 1. Sections 2 and 3 list published and submitted work, respectively. Sections 4 and 5 summarize group activity including conferences, workshops and professional presentations.
石建勋; 全淑琴; 李海英
2012-01-01
To better estimate the scale of RMB circulating outside China, this paper improves the traditional money-demand-gap model by adding a third money demand, M3, for stock market and land market transactions. By deducting the various domestic money demands and settled money (SM) from the actual money supply, the paper indirectly estimates the amount of RMB circulating abroad. On this basis, using data from the first quarter of 1996 to the second quarter of 2005, the improved model is applied to estimate the scale of RMB held outside China from the third quarter of 2005 to the second quarter of 2010, and corresponding policy suggestions are put forward based on the estimation results.
Ivan Ivanov
2015-12-01
Full Text Available Purpose: to substantiate a technique for improving special physical fitness through the development of coordination abilities at the stage of preliminary basic preparation. Material and Methods: theoretical analysis and synthesis of the scientific and methodical literature and of empirical research materials; pedagogical research methods; instrumental methods: a program for diagnosing the development of psychophysiological abilities (APC "Sports psychophysiologist"); methods of mathematical statistics. Results: a technique is developed which includes means of general physical preparation, specially developed complexes of dance-jump exercises, and dance combinations and means selected with regard to the leading motor and functional abilities and the special technical characteristics of movements; it also follows a strictly defined sequence of these means during each cycle of classes. Conclusions: applying the experimental technique for improving athletes' special physical fitness in the training process improved and enhanced their technical preparedness and increased the competitive productivity of their performances.
Christiansen, Mia N.; Andersson, Charlotte; Gislason, Gunnar H.
2017-01-01
BACKGROUND: The outcomes of emergent noncardiac, nonintracranial surgery in patients with previous stroke remain unknown. METHODS: All emergency surgeries performed in Denmark (2005 to 2011) were analyzed according to the time elapsed between previous ischemic stroke and surgery. The risks of 30-day mortality and major adverse cardiovascular events were estimated as odds ratios (ORs) and 95% CIs using adjusted logistic regression models in a priori defined groups (reference was no previous stroke). In patients undergoing surgery immediately (within 1 to 3 days) or early after stroke (within 4 to 14 days), propensity-score matching was performed. RESULTS: Of 146,694 nonvascular surgeries (composing 98% of all emergency surgeries), 5.3% had previous stroke (mean age, 75 yr [SD = 13]; 53% women, 50% major orthopedic surgery). Antithrombotic treatment and atrial fibrillation were more frequent…
Fliess, Michel; Join, Cédric; Sira-Ramirez, Hebertt
2008-01-01
Non-linear state estimation and some related topics, like parametric estimation, fault diagnosis, and perturbation attenuation, are tackled here via a new methodology in numerical differentiation. The corresponding basic system-theoretic definitions and properties are presented within the framework of differential algebra, which makes it possible to handle system variables and their derivatives of any order. Several academic examples and their computer simulations, with on-line ...
Zumla, A; McCloskey, B; Bin Saeed, A A; Dar, O; Al Otabi, B; Perlmann, S; Gautret, P; Roy, N; Blumberg, L; Azhar, E I; Barbeschi, M; Memish, Z; Petersen, E
2016-06-01
All previous experience from different mass gatherings shows that vaccine-preventable diseases such as influenza, hepatitis A, polio and meningitis are the most important infections. Three mass gatherings held in Africa during the Ebola outbreak accepted participants from West Africa and were able to handle the theoretical risk without any incident. Therefore we believe that the Olympic Games in Rio de Janeiro should not be canceled. The number of visitors to the Games is a tiny fraction (1%) of other visitors to Zika-endemic countries, and cancelling the Games would have no measurable effect on the risk of spreading Zika virus.
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise-bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependence of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
Theoretical mean-variance relationship of IP network traffic based on ON/OFF model
JIN Yi; ZHOU Gang; JIANG DongChen; YUAN Shuai; WANG LiLi; CAO JianTing
2009-01-01
Mean-variance relationship (MVR), nowadays agreed to take a power-law form, is an important function. It is currently used by traffic matrix estimation as a basic statistical assumption. Because all existing papers obtain the MVR only empirically, they cannot provide theoretical support for the power-law MVR or for the definition of its power exponent. Furthermore, because of this lack of a theoretical model, traffic matrix estimation methods based on the MVR have not been theoretically supported either. By observing both our laboratory and campus networks for more than one year, we find that such an empirical MVR is not sufficient to describe actual network traffic. In this paper, we derive a theoretical MVR from the ON/OFF model. We then prove that the current empirical power-law MVR is generally reasonable, since it is an approximate form of the theoretical MVR under a specific precondition, which theoretically supports those traffic matrix estimation algorithms that use the MVR. By verifying our MVR against actual observations and against the public DECPKT traces, we confirm that our theoretical MVR is valid and more capable of describing actual network traffic than the power-law MVR.
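The power-law MVR that the paper starts from can be checked numerically on a toy version of the ON/OFF model. The sketch below is a minimal illustration, not the paper's derivation; the source counts, ON probability, and rate are assumed values. It aggregates independent ON/OFF sources and fits the exponent of var = a·mean^b by log-log least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

def onoff_traffic(n_sources, n_slots=5000, p_on=0.3, rate=1.0):
    """Aggregate traffic from n_sources independent ON/OFF sources.
    Each source sends `rate` units per time slot while ON."""
    states = rng.random((n_slots, n_sources)) < p_on
    return rate * states.sum(axis=1)

# One (mean, variance) point per "link"; links differ in source count
means, variances = [], []
for n in [10, 20, 50, 100, 200, 500]:
    x = onoff_traffic(n)
    means.append(x.mean())
    variances.append(x.var())

# Fit the power-law MVR  var = a * mean**b  by log-log least squares
b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
print(f"fitted exponent b = {b:.2f}")
```

For independent sources aggregated this way, both the mean and the variance grow linearly with the source count, so the fitted exponent comes out near 1; the paper's point is precisely that real traffic need not follow such a simple empirical power law.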
Caro, R.; Francisco, J. L. de
1961-07-01
This report presents a theoretical study of some physical aspects prior to the design of an exponential facility. These are: fast and slow flux distribution in the multiplicative medium and in the thermal column, slowing down in the thermal column, geometrical distribution and minimum required intensity of sources, access channels, and perturbations produced by possible variations in their position and intensity. (Author) 4 refs.
Are merging black holes born from stellar collapse or previous mergers?
Gerosa, Davide; Berti, Emanuele
2017-06-01
Advanced LIGO detectors at Hanford and Livingston made two confirmed and one marginal detection of binary black holes during their first observing run. The first event, GW150914, was from the merger of two black holes much heavier than those whose masses have been estimated so far, indicating a formation scenario that might differ from "ordinary" stellar evolution. One possibility is that these heavy black holes resulted from a previous merger. When the progenitors of a black hole binary merger result from previous mergers, they should (on average) merge later, be more massive, and have spin magnitudes clustered around a dimensionless spin ~0.7. Here we ask the following question: can gravitational-wave observations determine whether merging black holes were born from the collapse of massive stars ("first generation"), rather than being the end product of earlier mergers ("second generation")? We construct simple, observationally motivated populations of black hole binaries, and we use Bayesian model selection to show that measurements of the masses, luminosity distance (or redshift), and "effective spin" of black hole binaries can indeed distinguish between these different formation scenarios.
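The kind of Bayesian model selection described above can be illustrated with a deliberately simplified toy: two hypothetical spin-magnitude populations (low spins for first-generation mergers, spins clustered near 0.7 for second-generation ones) compared via a log Bayes factor over mock observations. All distributions and numbers here are illustrative assumptions, not the paper's populations:

```python
import numpy as np

rng = np.random.default_rng(3)

def norm_pdf(x, mu, sd):
    """Gaussian probability density."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Hypothetical population models for the dimensionless spin magnitude chi:
# first generation: low spins; second generation: clustered near 0.7
def p_first(chi):
    return norm_pdf(chi, 0.1, 0.15)

def p_second(chi):
    return norm_pdf(chi, 0.7, 0.1)

# Mock "observed" spins, drawn here from the second-generation model
chi_obs = np.clip(rng.normal(0.7, 0.1, size=20), 0.0, 1.0)

# Log Bayes factor comparing the two population hypotheses
log_bayes_factor = np.sum(np.log(p_second(chi_obs)) - np.log(p_first(chi_obs)))
print("evidence favors second generation:", log_bayes_factor > 0)
```

With data generated from the second-generation population, the log Bayes factor comes out strongly positive, mirroring how the paper distinguishes formation scenarios (there with full mass, distance, and effective-spin information rather than spins alone).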
The evolution of avian wing shape and previously unrecognized trends in covert feathering.
Wang, Xia; Clarke, Julia A
2015-10-07
Avian wing shape has been related to flight performance, migration, foraging behaviour and display. Historically, linear measurements of the feathered aerofoil and skeletal proportions have been used to describe this shape. While the distribution of covert feathers, layered over the anterior wing, has long been assumed to contribute to aerofoil properties, to our knowledge no previous studies of trends in avian wing shape assessed their variation. Here, these trends are explored using a geometric-morphometric approach with landmarks describing the wing outline as well as the extent of dorsal and ventral covert feathers for 105 avian species. We find that most of the observed variation is explained by phylogeny and ecology but shows only a weak relationship with previously described flight style categories, wing loading and an investigated set of aerodynamic variables. Most of the recovered variation is in greater primary covert feather extent, followed by secondary feather length and the shape of the wing tip. Although often considered a plastic character strongly linked to flight style, the estimated ancestral wing morphology is found to be generally conservative among basal parts of most major avian lineages. The radiation of birds is characterized by successive diversification into largely distinct areas of morphospace. However, aquatic taxa show convergence in feathering despite differences in flight style, and songbirds move into a region of morphospace also occupied by basal taxa but at markedly different body sizes. These results have implications for the proposed inference of flight style in extinct taxa.
Theoretical investigations of phonon intrinsic mean-free path in zinc-blende and wurtzite AlN
Alshaikhi, A.; Srivastava, G. P.
2007-11-01
We present theoretical investigations of the anharmonic phonon mean-free path in cubic and hexagonal AlN. The cubic anharmonicity in crystal potential has been modeled within an anharmonic elastic continuum model. Numerical calculations have been carried out within the Fermi’s golden rule scheme and by using the phonon dispersion and group velocity results from a full lattice dynamical model. The calculated mean-free path results for both crystal phases are compared with estimates made previously by Watari [J. Mater. Res. 17, 2940 (2002)] for the hexagonal phase, and a discussion on the level of agreement is provided. Our work predicts that at room temperature and above, the average phonon mean-free path for the zinc-blende phase is approximately four times that for the wurtzite phase, suggesting that AlN will exhibit far better high thermal conductivity behavior in its cubic phase.
Theoretical considerations of plant gravisensing
Kondrachuk, A. V.
The mechanisms proposed to explain gravity sensing can be divided into two groups, "statolith" and "non-statolith" mechanisms. The traditional estimates of the plausibility of these mechanisms are based on the analysis of the signal-to-noise ratio. The existing data indicate that the problem of plant gravisensing may be related to the general problem of the detection of weak signals in mechanoreceptors. This paper reviews the known mechanisms of plant gravisensing as well as the latest nonlinear stochastic models of mechanoreception in which noise promotes detection and amplification of weak signals. These models based on nonlinear stochastic phenomena may be used to explain plant gravisensing, if the cell is considered a dynamic, spatially distributed system of active intracellular cytoskeletal networks and mechanosensitive proteins.
Velocity estimation using synthetic aperture imaging
Nikolov, Svetoslav; Jensen, Jørgen Arendt
2001-01-01
In a previous paper we have demonstrated that the velocity can be estimated for a plug flow using recursive ultrasound imaging [1]. The approach involved the estimation of the velocity at every emission and using the estimates for motion compensation. An error in the estimates, however, would lead to an error in the compensation, further increasing the error in the estimates. In this paper the approach is further developed such that no motion compensation is necessary. In recursive ultrasound imaging a new high resolution image is created after every emission. The velocity was estimated by cross… and significantly improves the velocity estimates. The approach is verified using simulations with the program Field II and measurements on a blood-mimicking phantom. The estimates from the simulations have a bias of -3.5% and a mean standard deviation less than 2.0% for a parabolic velocity profile. The estimates…
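The abstract is truncated, but the generic principle behind this family of methods, estimating displacement (and hence velocity) from the lag of a cross-correlation peak between successive acquisitions, can be sketched on synthetic data. The sample spacing and pulse repetition frequency below are assumed values for illustration, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "scatterer" signal and a copy displaced by a known shift
signal = rng.standard_normal(256)
true_shift = 5                      # displacement in samples between emissions
moved = np.roll(signal, true_shift)

# Cross-correlate and take the lag of the correlation peak
corr = np.correlate(moved, signal, mode="full")
lag = corr.argmax() - (len(signal) - 1)

dx = 0.1e-3   # assumed axial sample spacing [m]
prf = 5e3     # assumed pulse repetition frequency [Hz]
velocity = lag * dx * prf
print(f"estimated lag = {lag} samples, velocity = {velocity} m/s")
```

The correlation peak recovers the 5-sample shift, which converts to velocity through the sample spacing and the time between emissions.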
Cheek, Kim A.
2016-09-01
Ideas about temporal (and spatial) scale impact students' understanding across science disciplines. Learners have difficulty comprehending the long time periods associated with natural processes because they have no referent for the magnitudes involved. When people have a good "feel" for quantity, they estimate cardinal number magnitude linearly. Magnitude estimation errors can be explained by confusion about the structure of the decimal number system, particularly in terms of how powers of ten are related to one another. Indonesian children regularly use large currency units. This study investigated if they estimate long time periods accurately and if they estimate those time periods the same way they estimate analogous currency units. Thirty-nine children from a private International Baccalaureate school estimated temporal magnitudes up to 10,000,000,000 years in a two-part study. Artifacts children created were compared to theoretical model predictions previously used in number magnitude estimation studies as reported by Landy et al. (Cognitive Science 37:775-799, 2013). Over one third estimated the magnitude of time periods up to 10,000,000,000 years linearly, exceeding what would be expected based upon prior research with children this age who lack daily experience with large quantities. About half treated successive powers of ten as a count sequence instead of multiplicatively related when estimating magnitudes of time periods. Children generally estimated the magnitudes of long time periods and familiar, analogous currency units the same way. Implications for ways to improve the teaching and learning of this crosscutting concept/overarching idea are discussed.
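The confusion described above, treating successive powers of ten as a count sequence rather than as multiplicatively related, is easy to visualize. The toy sketch below is an illustration only, not the theoretical model of Landy et al.; it compares where each power of ten lands on a 0 to 10^9 number line under the two mental models:

```python
import numpy as np

powers = 10.0 ** np.arange(10)        # 1, 10, 100, ..., 10^9

# Linear model: position proportional to the true magnitude
linear = powers / powers[-1]

# "Count sequence" model: successive powers of ten equally spaced,
# as if 10^k were simply the k-th item in a list
count = np.arange(10) / 9

for k in (6, 8, 9):
    print(f"10^{k}: linear position {linear[k]:.3f} vs count position {count[k]:.3f}")
```

The contrast is stark: 10^8 sits a tenth of the way along the line under the linear model but nearly nine-tenths of the way along under the count-sequence model, which is exactly the kind of error pattern the study probes.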
Theoretical physics 2 analytical mechanics
Nolting, Wolfgang
2016-01-01
This textbook offers a clear and comprehensive introduction to analytical mechanics, one of the core components of undergraduate physics courses.It follows on naturally from the previous volumes in this series, thus expanding the knowledge in classical mechanics. The book starts with a thorough introduction into Lagrangian mechanics, detailing the d’Alembert principle, Hamilton’s principle and conservation laws. It continues with an in-depth explanation of Hamiltonian mechanics, illustrated by canonical and Legendre transformation, the generalization to quantum mechanics through Poisson brackets and all relevant variational principles. Finally, the Hamilton-Jacobi theory and the transition to wave mechanics are presented in detail. Ideally suited to undergraduate students with some grounding in classical mechanics, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by ...
Unbiased risk estimation method for covariance estimation
Lescornel, Hélène; Chabriac, Claudie
2011-01-01
We consider a model selection estimator of the covariance of a random process. Using the Unbiased Risk Estimation (URE) method, we build an estimator of the risk which allows one to select an estimator from a collection of models. We then present an oracle inequality which ensures that the risk of the selected estimator is close to the risk of the oracle. Simulations show the efficiency of this methodology.
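The URE idea can be sketched in the simplest Gaussian sequence setting, a standard textbook instance rather than the covariance setting of the paper: among projection estimators that keep the first k coordinates, select the k minimizing an unbiased estimate of the risk. All numbers below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

n, sigma = 100, 0.5
mu = np.zeros(n)
mu[:10] = 3.0                      # true signal lives on the first 10 coords
y = mu + sigma * rng.standard_normal(n)

# Candidate estimators: keep the first k coordinates of y, zero the rest.
# Risk E||mu_hat_k - mu||^2 = k*sigma^2 + sum_{i>k} mu_i^2, for which
#   sum_{i>k} y_i^2 + (2k - n)*sigma^2
# is an unbiased estimate (using E[y_i^2] = mu_i^2 + sigma^2).
def ure(k):
    return np.sum(y[k:] ** 2) + (2 * k - n) * sigma ** 2

k_hat = min(range(n + 1), key=ure)
print("selected model size:", k_hat)
```

The selected k lands at (or just above) the true support size 10: dropping a signal coordinate inflates the risk estimate by roughly mu_i^2, while keeping a pure-noise coordinate costs only about sigma^2 on average. The oracle inequality in the paper is the rigorous version of this "close to the best candidate" behavior.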
Adam M. Butz
2016-05-01
Full Text Available An estimate is provided of an innovative state-level measure of anti-immigrant sentiment for use in future policy and behavioral studies. State governments became increasingly active in adopting immigrant policies in the 2000s. Previous research highlights the role of public opinion, especially attitudes toward immigrants, in explaining policy priorities and outcomes. Unfortunately, most extant studies utilize political ideology or immigrant populations as rough proxies for public opinion. In this article, we estimate a reliable and valid measure of anti-immigrant sentiment at the state-level using survey aggregation with multilevel regression and post-stratification (MRP for the period 2004 to 2008. We compare our estimates of anti-immigrant sentiment to alternative measures of immigrant presence and political ideology in predicting multiple variations of state immigrant policies. Ultimately, we find theoretical and statistical advantages of using anti-immigrant sentiment over previous measures in predicting immigrant policies.
Structure and content of fencers’ theoretical training
Briskin Y.A.
2013-08-01
Full Text Available The structure and content of theoretical training in the system of long-term improvement of fencing athletes are identified. It was found that theoretical training is allotted 1.2-2.0% (initial training stage), 1.9-2.3% (preliminary basic training stage), 2.1% (specialized basic training stage) and 1.8% (stage of preparation for the highest achievements) of the total training load in the annual training cycle. It is established that there is a flawed approach to planning educational material for theoretical training: informational material (by category and block) is presented on the basis of the stage as a whole rather than of individual grade levels. Compensating directions for the individual components of athletes' theoretical training are recommended: refining the means and methods of theoretical training, structuring the information, and implementing control of theoretical training.
On image segmentation using information theoretic criteria
Aue, Alexander; 10.1214/11-AOS925
2012-01-01
Image segmentation is a long-studied and important problem in image processing. Different solutions have been proposed, many of which follow the information theoretic paradigm. While these information theoretic segmentation methods often produce excellent empirical results, their theoretical properties are still largely unknown. The main goal of this paper is to conduct a rigorous theoretical study into the statistical consistency properties of such methods. To be more specific, this paper investigates if these methods can accurately recover the true number of segments together with their true boundaries in the image as the number of pixels tends to infinity. Our theoretical results show that both the Bayesian information criterion (BIC) and the minimum description length (MDL) principle can be applied to derive statistically consistent segmentation methods, while the same is not true for the Akaike information criterion (AIC). Numerical experiments were conducted to illustrate and support our theoretical fin...
Rabosky, Daniel L; Mitchell, Jonathan S; Chang, Jonathan
2017-07-01
Bayesian analysis of macroevolutionary mixtures (BAMM) is a statistical framework that uses reversible jump Markov chain Monte Carlo to infer complex macroevolutionary dynamics of diversification and phenotypic evolution on phylogenetic trees. A recent article by Moore et al. (MEA) reported a number of theoretical and practical concerns with BAMM. Major claims from MEA are that (i) BAMM's likelihood function is incorrect, because it does not account for unobserved rate shifts; (ii) the posterior distribution on the number of rate shifts is overly sensitive to the prior; and (iii) diversification rate estimates from BAMM are unreliable. Here, we show that these and other conclusions from MEA are generally incorrect or unjustified. We first demonstrate that MEA's numerical assessment of the BAMM likelihood is compromised by their use of an invalid likelihood function. We then show that "unobserved rate shifts" appear to be irrelevant for biologically plausible parameterizations of the diversification process. We find that the purportedly extreme prior sensitivity reported by MEA cannot be replicated with standard usage of BAMM v2.5, or with any other version when conventional Bayesian model selection is performed. Finally, we demonstrate that BAMM performs very well at estimating diversification rate variation across the ~20% of simulated trees in MEA's data set for which it is theoretically possible to infer rate shifts with confidence. Due to ascertainment bias, the remaining 80% of their purportedly variable-rate phylogenies are statistically indistinguishable from those produced by a constant-rate birth-death process and were thus poorly suited for the summary statistics used in their performance assessment. We demonstrate that inferences about diversification rates have been accurate and consistent across all major previous releases of the BAMM software. We recognize an acute need to address the theoretical foundations of rate-shift models for
Toward a Theoretical Framework for Information Science
Amanda Spink
2000-01-01
Information Science is beginning to develop a theoretical framework for the modeling of users' interactions with information retrieval (IR) technologies within the more holistic context of human information behavior (Spink, 1998b). This paper addresses the following questions: (1) What is the nature of Information Science? and (2) What theoretical framework and model is most appropriate for Information Science? This paper proposes a theoretical framework for Information Science based on ...
Estimating preselected and postselected ensembles
Massar, Serge [Laboratoire d'Information Quantique, C.P. 225, Universite libre de Bruxelles (U.L.B.), Av. F. D. Roosevelt 50, B-1050 Bruxelles (Belgium); Popescu, Sandu [H. H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Hewlett-Packard Laboratories, Stoke Gifford, Bristol BS12 6QZ (United Kingdom)
2011-11-15
In analogy with the usual quantum state-estimation problem, we introduce the problem of state estimation for a pre- and postselected ensemble. The problem has fundamental physical significance since, as argued by Y. Aharonov and collaborators, pre- and postselected ensembles are the most basic quantum ensembles. Two new features are shown to appear: (1) information is flowing to the measuring device both from the past and from the future; (2) because of the postselection, certain measurement outcomes can be forced never to occur. Due to these features, state estimation in such ensembles is dramatically different from the case of ordinary, preselected-only ensembles. We develop a general theoretical framework for studying this problem and illustrate it through several examples. We also prove general theorems establishing that information flowing from the future is closely related to, and in some cases equivalent to, the complex conjugate information flowing from the past. Finally, we illustrate our approach on examples involving covariant measurements on spin-1/2 particles. We emphasize that all state-estimation problems can be extended to the pre- and postselected situation. The present work thus lays the foundations of a much more general theory of quantum state estimation.
Broadband DOA Estimation Based on Nested Arrays
Zhi-bo Shen
2015-01-01
Full Text Available Direction of arrival (DOA) estimation is a crucial problem in electronic reconnaissance. A novel broadband DOA estimation method utilizing nested arrays is devised in this paper, which is capable of estimating the frequencies and DOAs of multiple narrowband signals in broad bands, even though they may have different carrier frequencies. The proposed method converts the DOA estimation of multiple signals with different frequencies into spatial frequency estimation. Then, the DOAs and frequencies are pair-matched by sparse recovery. The nested arrays make it possible to significantly increase the degrees of freedom (DOF), so that the number of sources can exceed the number of sensors. In addition, the method achieves high estimation precision without a two-dimensional search over the frequency and angle domains. The validity of the proposed method is verified by theoretical analysis and simulation results.
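The degrees-of-freedom gain of nested arrays comes from their difference co-array, whose virtual aperture grows roughly as the square of the number of physical sensors. A minimal sketch of that counting argument, assuming the standard two-level nested geometry (the paper's broadband pairing by sparse recovery is not reproduced here):

```python
# Sketch: degrees of freedom from a two-level nested array's difference co-array.
# Illustrative only; the paper's frequency/DOA pairing step is not shown.
def nested_array_positions(n1, n2):
    """Sensor positions (in half-wavelength units) of a two-level nested array."""
    level1 = list(range(1, n1 + 1))                      # dense ULA: 1..N1
    level2 = [(n1 + 1) * k for k in range(1, n2 + 1)]    # sparse ULA: (N1+1)*k
    return level1 + level2

def difference_coarray(positions):
    """Distinct lags p - q over all sensor pairs (the virtual array positions)."""
    return sorted({p - q for p in positions for q in positions})

pos = nested_array_positions(3, 3)
lags = difference_coarray(pos)
print(len(pos), len(lags))   # 6 physical sensors -> 23 contiguous virtual lags
```

With N1 = N2 = 3, six physical sensors produce a hole-free virtual array of 23 lags (-11..11), which is why more sources than sensors can be resolved.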
Theoretical analysis of balanced truncation for linear switched systems
Petreczky, Mihaly; Wisniewski, Rafal; Leth, John-Josef
2012-01-01
In this paper we present theoretical analysis of model reduction of linear switched systems based on balanced truncation, presented in [1,2]. More precisely, (1) we provide a bound on the estimation error using L2 gain, (2) we provide a system theoretic interpretation of Gramians ... for showing this independence is realization theory of linear switched systems. [1] H. R. Shaker and R. Wisniewski, "Generalized gramian framework for model/controller order reduction of switched systems", International Journal of Systems Science, Vol. 42, Issue 8, 2011, 1277-1291. [2] H. R. Shaker and R. Wisniewski, "Switched Systems Reduction Framework Based on Convex Combination of Generalized Gramians", Journal of Control Science and Engineering, 2009.
Model selection and inference a practical information-theoretic approach
Burnham, Kenneth P
1998-01-01
This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions; these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
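The bias-corrected log-likelihood route to AIC can be illustrated with a toy least-squares comparison. The data and both models below are hypothetical; AIC = 2k - 2 ln L-hat, with a Gaussian error model whose variance is estimated by maximum likelihood:

```python
# Sketch: comparing two nested regression models with AIC on toy data.
import math

def gaussian_aic(residuals, k):
    """AIC for a least-squares fit; k counts parameters incl. the error variance."""
    n = len(residuals)
    rss = sum(r * r for r in residuals)
    loglik = -0.5 * n * (math.log(2 * math.pi * rss / n) + 1)  # Gaussian MLE
    return 2 * k - 2 * loglik

x = [0, 1, 2, 3, 4, 5]
y = [0.1, 1.1, 1.9, 3.2, 3.9, 5.1]          # nearly linear toy data

# Model 1: constant mean (k = 2: mean + variance)
mean_y = sum(y) / len(y)
aic1 = gaussian_aic([yi - mean_y for yi in y], k=2)

# Model 2: straight line by least squares (k = 3: slope + intercept + variance)
n, sx, sy = len(x), sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n
aic2 = gaussian_aic([yi - (a + b * xi) for xi, yi in zip(x, y)], k=3)

print(aic1 > aic2)   # True: the linear model wins despite its extra parameter
```

The penalty term 2k embodies the bias correction discussed in the book: adding a parameter must buy enough likelihood to pay for itself.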
Information-theoretical noninvasive damage detection in bridge structures
Sudu Ambegedara, Amila; Sun, Jie; Janoyan, Kerop; Bollt, Erik
2016-11-01
Damage detection of mechanical structures such as bridges is an important research problem in civil engineering. Using spatially distributed sensor time series data collected from a recent experiment on a local bridge in Upper State New York, we study noninvasive damage detection using information-theoretical methods. Several findings are in order. First, the time series data, which represent accelerations measured at the sensors, more closely follow Laplace distribution than normal distribution, allowing us to develop parameter estimators for various information-theoretic measures such as entropy and mutual information. Second, as damage is introduced by the removal of bolts of the first diaphragm connection, the interaction between spatially nearby sensors as measured by mutual information becomes weaker, suggesting that the bridge is "loosened." Finally, using a proposed optimal mutual information interaction procedure to prune away indirect interactions, we found that the primary direction of interaction or influence aligns with the traffic direction on the bridge even after damaging the bridge.
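Under a Laplace model the differential entropy has the closed form h = 1 + ln(2b), so a plug-in estimator needs only the maximum-likelihood scale (the mean absolute deviation from the median). A small sketch on synthetic data; the bridge accelerations themselves are not reproduced here, and this is one simple instance of the parameter-based estimators the paper describes:

```python
# Sketch: plug-in differential entropy estimate assuming Laplace-distributed data.
import math
import random
import statistics

def laplace_entropy(samples):
    """h = 1 + ln(2b), with MLE b = mean |x - median(x)|."""
    mu = statistics.median(samples)
    b = sum(abs(x - mu) for x in samples) / len(samples)
    return 1.0 + math.log(2.0 * b)

random.seed(0)
b_true = 0.5
# Synthetic Laplace(0, b_true) samples: exponential magnitude with a random sign
data = [random.choice([-1, 1]) * random.expovariate(1 / b_true)
        for _ in range(20000)]

est = laplace_entropy(data)
print(round(est, 2))   # close to the true value 1 + ln(2 * 0.5) = 1.0
```

The same plug-in idea extends to mutual information between sensor pairs, which is the quantity the paper tracks as bolts are removed.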
Earthquake probabilities: theoretical assessments and reality
Kossobokov, V. G.
2013-12-01
It is common knowledge that earthquakes are complex phenomena whose classification and sizing remain serious problems of contemporary seismology. In general, their frequency-magnitude distribution exhibits power-law scaling. This scaling differs significantly when different time and/or space domains are considered. At the scale of a particular earthquake rupture zone, the frequency of similar-size events is usually estimated to be about once in several hundred years. Evidently, contemporary seismology does not possess enough reported instrumental data for any reliable quantification of earthquake probability at a given place of expected event. Regretfully, most of the state-of-the-art theoretical approaches to assessing the probability of seismic events are based on trivial (e.g. Poisson, periodic, etc.) or, conversely, delicately designed (e.g. STEP, ETAS, etc.) models of earthquake sequences. Some of these models are evidently erroneous, some can be rejected by the existing statistics, and some are hardly testable in our lifetime. Nevertheless, such probabilistic counts, including seismic hazard assessment and earthquake forecasting, when used in practice eventually lead to scientifically groundless advice communicated to decision makers and to inappropriate decisions. As a result, the population of seismic regions continues facing unexpected risk and losses. The international project Global Earthquake Model (GEM) is on the wrong track if it continues to base seismic risk estimates on the standard, mainly probabilistic, methodology to assess seismic hazard. It is generally accepted that earthquakes are infrequent, low-probability events. However, they keep occurring at earthquake-prone areas with 100% certainty. Given the expectation of a seismic event once per hundred years, the daily probability of occurrence on a certain date may range from 0 to 100% depending on the choice of probability space (which is yet unknown and, therefore, made by a subjective lucky chance
Estimation of Correlation Functions by the Random Decrement Technique
Brincker, Rune; Krenk, S.; Jensen, Jakob Laigaard
1993-01-01
The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal...... from smaller estimation errors than the corresponding FFT estimates. However, in the case of estimating cross-correlations functions for stochastic processes with low mutual correlation, the FFT technique might be more accurate....
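The triggering idea behind the Random Decrement technique can be sketched on a synthetic AR(1) signal: averaging the segments that follow up-crossings of a trigger level produces a signature proportional to the auto-correlation function. This toy illustration assumes a simple level-crossing trigger and is not the authors' full estimator:

```python
# Sketch: Random Decrement signature of a synthetic zero-mean AR(1) process.
# For AR(1), E[x(t+tau) | trigger] decays like rho**tau, mirroring the
# auto-correlation shape that the RDD technique estimates in the time domain.
import random

random.seed(2)
rho, n = 0.9, 200000
x = [0.0]
for _ in range(n - 1):
    x.append(rho * x[-1] + random.gauss(0.0, 1.0))

std = (sum(xi * xi for xi in x) / n) ** 0.5
level, max_lag = std, 20

# Trigger: up-crossings of the level (x rises through it between t-1 and t)
triggers = [t for t in range(1, n - max_lag) if x[t] >= level > x[t - 1]]

# Random Decrement signature: average of the segments following each trigger
rdd = [sum(x[t + tau] for t in triggers) / len(triggers)
       for tau in range(max_lag)]

sig = [v / rdd[0] for v in rdd]       # normalize to the value at lag 0
print(round(sig[1], 2))               # close to rho = 0.9
```

Because only simple conditional averaging is involved, no FFT or windowing bias enters, which is the property the paper compares against FFT-based correlation estimates.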
Revisiting Cosmological parameter estimation
Prasad, Jayanti
2014-01-01
Constraining theoretical models by measuring their parameters from cosmic microwave background (CMB) anisotropy data is one of the most active areas in cosmology. WMAP, Planck and other recent experiments have shown that the six-parameter standard $\Lambda$CDM cosmological model still best fits the data. Bayesian methods based on Markov Chain Monte Carlo (MCMC) sampling have played a leading role in parameter estimation from CMB data. In a recent study \cite{2012PhRvD..85l3008P} we showed that particle swarm optimization (PSO), a population-based search procedure, can also be used effectively to find the cosmological parameters that best fit the WMAP seven-year data. In the present work we show that PSO not only can find the best-fit point, it can also sample the parameter space quite effectively, to the extent that we can use the same analysis pipeline to process PSO-sampled points that is used to process points sampled by Markov chains, and get consistent res...
Quasar bolometric corrections: theoretical considerations
Nemmen, Rodrigo S
2010-01-01
Bolometric corrections based on the optical-to-ultraviolet continuum spectrum of quasars are widely used to quantify their radiative output, although such estimates are affected by a myriad of uncertainties, such as the generally unknown line-of-sight angle to the central engine. In order to shed light on these issues, we investigate the state-of-the-art models of Hubeny et al. that describe the continuum spectrum of thin accretion discs and include relativistic effects. We explore the bolometric corrections as a function of mass accretion rates, black hole masses and viewing angles, restricted to the parameter space expected for type-1 quasars. We find that a nonlinear relationship log L_bol=A + B log(lambda L_lambda) with B<=0.9 is favoured by the models and becomes tighter as the wavelength decreases. We calculate from the model the bolometric corrections corresponding to the wavelengths lambda = 1450A, 3000A and 5100A. In particular, for lambda=3000A we find A=9.24 +- 0.77 and B=0.81 +- 0.02. We demons...
Jones, Rupert CM; Chung, Man C; Berger, Zoë; Campbell, John L.
2007-01-01
Reported prevalence of myocardial infarction-related post-traumatic stress disorder (PTSD) varies from 0 to 25%. PTSD after myocardial infarction may affect quality of life, cardiovascular outcomes, and health service usage. Of 164 patients with previous myocardial infarction, 111 participated in the study and 36 had PTSD, giving a prevalence of 32%; the lowest possible estimate being 22%. PTSD was associated with significantly worse general health than that of individuals without PTSD. Preva...
A Game-Theoretic History of the Cuban Missile Crisis
Frank C. Zagare
2014-01-01
Full Text Available This study surveys and evaluates previous attempts to use game theory to explain the strategic dynamic of the Cuban missile crisis, including, but not limited to, explanations developed in the style of Thomas Schelling, Nigel Howard and Steven Brams. All of the explanations were judged to be either incomplete or deficient in some way. Schelling’s explanation is both empirically and theoretically inconsistent with the consensus interpretation of the crisis; Howard’s with the contemporary understanding of rational strategic behavior; and Brams’ with the full sweep of the events that define the crisis. The broad outlines of a more general explanation that addresses all of the foundational questions associated with the crisis within the confines of a single, integrated, game-theoretic model with incomplete information are laid out.
The truncation of stellar discs A theoretical model
Battaner, E; Jiménez-Vicente, J
1998-01-01
The truncation of stellar discs is not abrupt but characterized by a continuous distancing from the exponential profile. There exists a truncation curve, $t(r)$, ending at a truncation radius, $r_t$. We present here a theoretical model in which it is assumed that the magnetic hypothesis explaining the flat rotation curve also explains the truncation. Once stars are born, the centripetal magnetic force previously acting on the progenitor gas cloud is suddenly interrupted, and stars must move to larger orbits or escape. The agreement between theoretical and observed truncation curves is very satisfactory. Parameters defining the disc gas rotation curve should therefore be related to those defining the truncation. It is predicted that rotation curves that quickly reach the asymptotic value $\\theta_0 = \\theta (r=\\infty)$ would have small truncation radii. On the contrary, $r_t$ and $\\theta_0$ itself, would be uncorrelated quantities.
Classification of theoretical and methodological foundations of training future coaches
Chopyk T.V.
2013-06-01
Full Text Available The problem of training future trainers and teachers in higher education institutions is considered. The problem of training future coaches is analyzed theoretically, and the content of the theoretical and methodological foundations of their training is systematized. It was revealed that the system of knowledge creation is based on logically related topics: the general characteristics of the professional activity of future coaches, the features of the content of their production functions, the justification of the typical tasks of future coaches, the definition and development of their professional competencies during training, and, in addition, the prospects for the development of sport as a future profession. The proposed classification differs from previous work by defining the future professional competence of trainers and teachers in the process of training, clarifying the sections of future coaches' professional performance, and outlining the prospects of the development of sports.
Interactions between DNA purinic bases and amodiaquine: A theoretical approach
Valdemar Lacerda Júnior
2010-06-01
Full Text Available We study theoretically the amodiaquine-adenine and amodiaquine-guanine adduct formation using Density Functional Theory (B3LYP) with the 6-31G(d) basis set for the geometry optimizations and 6-31+G(d,p) for the analysis of the global indexes: electrophilicity (ω), electronic chemical potential (μ), hardness (η) and softness (S), based on the Frontier Molecular Orbital Theory (FMO). Local softness for nucleophilic reaction sites (s_k+) over guanine was evaluated using the Fukui function (f_k). We also evaluated the guanine Electrostatic Potential (EP) values using the MSK charge scheme. The theoretical calculations demonstrated that amodiaquine has greater electronic affinity for guanine, with irreversible formation of the amodiaquine-guanine adduct, as reported in a previous experimental work.
Theoretical Constraint on Purely Kinetic k-Essence
YANG Rong-Jia; ZHANG Shuang-Nan
2008-01-01
Purely kinetic k-essence models, in which the Lagrangian contains only a kinetic factor and does not depend explicitly on the field itself, are considered, and a theoretical constraint is obtained: F_X = F_0 a^{-3}. Under this theoretical constraint, we discuss a kind of purely kinetic k-essence with the form F(X) = -(1+2X^n)^{1/2n}, which can be considered as the generalized tachyon field, and find that this kind of k-essence is not likely a candidate for dark energy to describe the present accelerated expansion of the Universe. This is contrary to a previous suggestion that k-essence with such a form may be used to describe phantom cosmologies.
Theoretical nuclear physics. Final report
NONE
1997-05-01
As the three-year period FY93-FY96 ended, there were six senior investigators on the grant full-time: Bulgac, Henley, Miller, Savage, van Kolck and Wilets. This represents an increase of two members from the previous three-year period, achieved with only a two percent increase over the budget for FY90-FY93. In addition, the permanent staff of the Institute for Nuclear Theory (George Bertsch, Wick Haxton, and David Kaplan) continued to be intimately associated with our physics research efforts. Aurel Bulgac joined the Group in September, 1993 as an assistant professor, with promotion requested by the Department and College of Arts and Sciences by September, 1997. Martin Savage, who was at Carnegie-Mellon University, joined the Physics Department in September, 1996. U. van Kolck continued as research assistant professor, and we were supporting one postdoctoral research associate, Vesteinn Thorssen, who joined us in September, 1995. Seven graduate students were being supported by the Grant (Chuan-Tsung Chan, Michael Fosmire, William Hazelton, Jon Karakowski, Jeffrey Thompson, James Walden and Mitchell Watrous).
Indexes of estimation of efficiency of the use of intellectual resources of industrial enterprises
Audzeichyk Olga
2015-12-01
Full Text Available The article investigates the theoretical and practical aspects of the estimation of intellectual resources of industrial enterprises and proposes a method for estimating the efficiency of the use of intellectual resources.
Angle of arrival estimation using spectral interferometry
Barber, Z.W.; Harrington, C.; Thiel, C.W.; Babbitt, W.R. [Spectrum Lab, Montana State University, Bozeman, MT 59717 (United States); Krishna Mohan, R., E-mail: krishna@spectrum.montana.ed [Spectrum Lab, Montana State University, Bozeman, MT 59717 (United States)
2010-09-15
We have developed a correlative signal processing concept based on a Mach-Zehnder interferometer and spatial-spectral (S2) materials that enables direct mapping of RF spectral phase as well as power spectral recording. This configuration can be used for precise frequency resolved time delay estimation between signals received by a phased antenna array system that in turn could be utilized to estimate the angle of arrival. We present an analytical theoretical model and a proof-of-principle demonstration of the concept of time difference of arrival estimation with a cryogenically cooled Tm:YAG crystal that operates on microwave signals modulated onto a stabilized optical carrier at 793 nm.
Bayes-Optimal Joint Channel-and-Data Estimation for Massive MIMO With Low-Precision ADCs
Wen, Chao-Kai; Wang, Chang-Jen; Jin, Shi; Wong, Kai-Kit; Ting, Pangan
2016-05-01
This paper considers a multiple-input multiple-output (MIMO) receiver with very low-precision analog-to-digital converters (ADCs), with the goal of developing massive MIMO antenna systems that require minimal cost and power. Previous studies demonstrated that the training duration should be relatively long to obtain acceptable channel state information. To address this requirement, we adopt a joint channel-and-data (JCD) estimation method based on Bayes-optimal inference. This method yields minimal mean square errors with respect to the channels and payload data. We develop a Bayes-optimal JCD estimator using a recent technique based on approximate message passing. We then present an analytical framework to study the theoretical performance of the estimator in the large-system limit. Simulation results confirm our analytical results, which allow the efficient evaluation of the performance of quantized massive MIMO systems and provide insights into effective system design.
Theoretical spectra of floppy molecules
Chen, Hua
2000-09-01
Detailed studies of the vibrational dynamics of floppy molecules are presented. Six-D bound-state calculations of the vibrations of rigid water dimer based on several anisotropic site potentials (ASP) are presented. A new sequential diagonalization truncation approach was used to diagonalize the angular part of the Hamiltonian. A symmetrized angular basis and a potential-optimized discrete variable representation for the intermonomer distance coordinate were used in the calculations. The converged results differ significantly from the results presented by Leforestier et al. [J. Chem. Phys. 106, 8527 (1997)]. It was demonstrated that the ASP-S potential yields more accurate tunneling splittings than the other ASP potentials used. Fully coupled 4D quantum mechanical calculations were performed for the carbon dioxide dimer using the potential energy surface given by Bukowski et al. [J. Chem. Phys. 110, 3785 (1999)]. The intermolecular vibrational frequencies and symmetry-adapted force constants were estimated and compared with experiments. The inter-conversion tunneling dynamics was studied using the calculated virtual tunneling splittings. Symmetrized Radau coordinates and the sequential diagonalization truncation approach were formulated for acetylene. A 6D calculation was performed with 5 DVR points for each stretch coordinate, and an angular basis capable of converging the angular part of the Hamiltonian to 30 cm-1 for internal energies up to 14000 cm-1. The probability at the vinylidene configuration was evaluated. It was found that the eigenstates begin to extend to the vinylidene configuration from about 10000 cm-1, and the r_a coordinate is closely related to the vibrational dynamics at high energy. Finally, a direct product DVR was defined for coupled angular momentum operators, and the SDT approach was formulated; these were applied in solving the angular part of the Hamiltonian for the carbon dioxide dimer problem. The results show the method is capable of giving very accurate
Derycke, A. C.; Dejaeger, M.; Salmer, G.; Debouard, A.
A theoretical and experimental study of eccentered radial pretuned modules is presented. The modules can be used with several active devices to make power oscillator modules in millimeter waves, or to make VCO modules. The model makes possible the calculation of load impedance by considering the particular mode spectrum of radial circuits. The theoretical model is sufficiently accurate to estimate the load impedance variation versus frequency and its dependence on geometrical dimensions.
Present theoretical uncertainties on charm hadroproduction in QCD and prompt neutrino fluxes
Garzelli M.V.
2016-01-01
Full Text Available Prompt neutrino fluxes are basic backgrounds in the search of high-energy neutrinos of astrophysical origin, performed by means of full-size neutrino telescopes located at Earth, under ice or under water. Predictions for these fluxes are provided on the basis of up-to-date theoretical results for charm hadroproduction in perturbative QCD, together with a comprehensive discussion of the various sources of theoretical uncertainty affecting their computation, and a quantitative estimate of each uncertainty contribution.
Applications of theoretical methods in atmospheric science
Johnson, Matthew Stanley; Goodsite, Michael E.
2008-01-01
Theoretical chemistry involves explaining chemical phenomena using natural laws. The primary tool of theoretical chemistry is quantum chemistry, and the field may be divided into electronic structure calculations, reaction dynamics and statistical mechanics. These three all play a role in addres...
Beauty baryon decays: a theoretical overview
Wang, Yu-Ming
2014-11-01
I overview the theoretical status and recent progress on the calculations of beauty baryon decays, focusing on the QCD aspects of the exclusive semileptonic Λb → p ℓ ν̄ decay at large recoil and the theoretical challenges of radiative and electroweak penguin decays Λb → Λγ, Λl+l-.
Theoretical derivation of heliostat tracking errors distribution
Badescu, Viorel [Candida Oancea Institute of Solar Energy, Faculty of Mechanical Engineering, Polytechnic University of Bucharest, Spl. Independentei 313, Bucharest 060042 (Romania)
2008-12-15
The tracking error probability distribution is derived on a pure theoretical basis. Methods of integral geometry and geometrical probabilities are used to this purpose. The distribution performs reasonably well when compared with measurements from a small database. The performance of the theoretical distribution is compared with that of other (empirical) probability distributions. The practical relevance of using the present approach is also explained. (author)
A Set Theoretical Approach to Maturity Models
Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann
2016-01-01
Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models ch...
Cosmological parameter estimation using Particle Swarm Optimization
Prasad, J.; Souradeep, T.
2014-03-01
Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and these make the problem of parameter estimation challenging. It is a common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we have demonstrated the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
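A minimal particle swarm sketch conveys the velocity/position update rule used in such parameter searches. The two-parameter "chi-square" surface below is hypothetical, standing in for a CMB likelihood; the swarm size, iteration count and coefficients are illustrative choices, not the authors' settings:

```python
# Sketch: minimal Particle Swarm Optimization on a toy chi-square surface.
import random

random.seed(3)

def chi2(p):
    """Hypothetical 2-parameter objective with its minimum at (0.3, 0.7)."""
    return (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2

n_part, n_iter, dim = 20, 200, 2
w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients

pos = [[random.uniform(0, 1) for _ in range(dim)] for _ in range(n_part)]
vel = [[0.0] * dim for _ in range(n_part)]
pbest = [p[:] for p in pos]        # each particle's best-seen position
gbest = min(pbest, key=chi2)       # swarm's best-seen position

for _ in range(n_iter):
    for i in range(n_part):
        for d in range(dim):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if chi2(pos[i]) < chi2(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=chi2)

print([round(v, 3) for v in gbest])   # converges near the true (0.3, 0.7)
```

The visited positions (here discarded) are the points the paper argues can double as samples of the parameter space when fed through the same analysis pipeline as MCMC chains.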
Information-theoretic analysis of electronic and printed document authentication
Voloshynovskiy, Sviatoslav; Koval, Oleksiy; Villan, Renato; Topak, Emre; Vila Forcén, José Emilio; Deguillaume, Frederic; Rytsar, Yuriy; Pun, Thierry
2006-02-01
In this paper we consider the problem of document authentication in electronic and printed forms. We formulate this problem from information-theoretic perspectives and present joint source-channel coding theorems showing the performance limits in such protocols. We analyze the security of document authentication methods and present the optimal attacking strategies with corresponding complexity estimates that, contrary to existing studies, crucially rely on the information leaked by the authentication protocol. Finally, we present the results of experimental validation of the developed concept, which justifies the practical efficiency of the elaborated framework.
Coherent Change Detection: Theoretical Description and Experimental Results
2006-08-01
[No abstract available; the extracted text is front matter of report DSTO-TR-1851. Recoverable section titles include "Correlation Coefficient Change Statistic", "Log Likelihood Change Statistic" and "PDF of Clairvoyant Log Likelihood", plus an appendix "Comparison of Theoretical PDFs and Histogram Estimates". The recoverable intensity model is the gamma density P(I_f|σ_f²) = Γ(N)⁻¹ (N/σ_f²)^N I_f^(N-1) exp(−N I_f/σ_f²), and analogously for I_g, where I_f and I_g are the spatially averaged intensities.]
The Padé approximant in theoretical physics
Baker, George Allen
1970-01-01
In this book, we study theoretical and practical aspects of computing methods for mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation; methods for low-rank mat
Harboe, T; Johansson, S G O; Florvaag, E; Oman, H
2007-12-01
Neuromuscular blocking agents (NMBAs) can cause anaphylaxis through immunoglobulin E (IgE) antibodies that bind quaternary ammonium ion epitopes. These epitopes are present in numerous common chemicals and drugs, exposure to which, theoretically, could be of importance in the development and maintenance of the IgE sensitization promoting allergic reactions. Pholcodine is one such drug, which in a recent pilot study was shown to induce a remarkable increase in serum IgE levels in two IgE-sensitized individuals. The present study explores the effect of pholcodine exposure on IgE in a population with previously diagnosed IgE-mediated anaphylaxis towards NMBAs. Seventeen patients were randomized to 1 week's exposure with cough syrup containing either pholcodine or guaifenesin. The primary variables serum IgE and IgE antibodies towards pholcodine, morphine and suxamethonium were measured before and 4 and 8 weeks after start of exposure. Patients exposed to pholcodine had a sharp rise in levels of IgE antibodies towards pholcodine, morphine and suxamethonium, the median proportional increases 4 weeks after exposure reaching 39.0, 38.6 and 93.0 times that of the base levels respectively. Median proportional increase of IgE was 19.0. No changes were observed in the guaifenesin group. Serum levels of IgE antibodies associated with allergy towards NMBAs increase significantly in sensitized patients after exposure to cough syrup containing pholcodine. Availability of pholcodine should be restricted by medical authorities because of the potential risk of future allergic reactions to muscle relaxants.
Winkelman, Tyler N A; Choi, HwaJung; Davis, Matthew M
2017-05-01
To estimate health insurance and health care utilization patterns among previously incarcerated men following implementation of the Affordable Care Act's (ACA's) Medicaid expansion and Marketplace plans in 2014. We performed serial cross-sectional analyses using data from the National Survey of Family Growth between 2008 and 2015. Our sample included men aged 18 to 44 years with (n = 3476) and without (n = 8702) a history of incarceration. Uninsurance declined significantly among previously incarcerated men after ACA implementation (-5.9 percentage points; 95% confidence interval [CI] = -11.5, -0.4), primarily because of an increase in private insurance (6.8 percentage points; 95% CI = 0.1, 13.3). Previously incarcerated men accounted for a large proportion of the remaining uninsured (38.6%) in 2014 to 2015. Following ACA implementation, previously incarcerated men continued to be significantly less likely to report a regular source of primary care and more likely to report emergency department use than were never-incarcerated peers. Health insurance coverage improved among previously incarcerated men following ACA implementation. However, these men account for a substantial proportion of the remaining uninsured. Previously incarcerated men continue to lack primary care and frequently utilize acute care services.
On Short-Time Estimation of Vocal Tract Length from Formant Frequencies.
Adam C Lammert
Vocal tract length is highly variable across speakers and determines many aspects of the acoustic speech signal, making it an essential parameter to consider for explaining behavioral variability. A method for accurate estimation of vocal tract length from formant frequencies would afford normalization of interspeaker variability and facilitate acoustic comparisons across speakers. A framework for considering estimation methods is developed from the basic principles of vocal tract acoustics, and an estimation method is proposed that follows naturally from this framework. The proposed method is evaluated using acoustic characteristics of simulated vocal tracts ranging from 14 to 19 cm in length, as well as real-time magnetic resonance imaging data with synchronous audio from five speakers whose vocal tracts range from 14.5 to 18.0 cm in length. Evaluations show improvements in accuracy over previously proposed methods, with 0.631 and 1.277 cm root mean square error on simulated and human speech data, respectively. Empirical results show that the effectiveness of the proposed method is based on emphasizing higher formant frequencies, which seem less affected by speech articulation. Theoretical predictions of formant sensitivity reinforce this empirical finding. Moreover, theoretical insights into the reasons for these differences in formant sensitivity are discussed.
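The basic idea behind formant-based length estimation can be sketched with the classic uniform-tube model, in which the n-th resonance of a tube of length L closed at the glottis is F_n = (2n-1)c/4L. The weighting toward higher formants below is our assumption, motivated by the paper's finding that higher formants are less affected by articulation; the paper's actual estimator is more sophisticated.

```python
# Sketch: vocal tract length (VTL) estimation from formant frequencies,
# assuming a uniform tube closed at the glottis: F_n = (2n - 1) * c / (4 * L).
# The linear weighting toward higher formants is an illustrative assumption.

C = 35000.0  # speed of sound in warm, moist air, cm/s

def vtl_estimate(formants_hz, weights=None):
    """Estimate VTL (cm) from measured formants F1..Fn (Hz)."""
    n = len(formants_hz)
    if weights is None:
        weights = [k + 1 for k in range(n)]  # emphasize higher formants
    # each formant gives an individual length estimate L_k = (2k - 1) c / (4 F_k)
    per_formant = [(2 * (k + 1) - 1) * C / (4.0 * f)
                   for k, f in enumerate(formants_hz)]
    return sum(w * L for w, L in zip(weights, per_formant)) / sum(weights)

# formants of an ideal 17 cm uniform tube
ideal = [(2 * n - 1) * C / (4 * 17.0) for n in range(1, 5)]
print(round(vtl_estimate(ideal), 2))  # -> 17.0
```

For the ideal tube every formant yields the same length, so any weighting recovers it exactly; the weighting only matters for real speech, where articulation perturbs the lower formants most.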
McWilliams, J Michael; Meara, Ellen; Zaslavsky, Alan M; Ayanian, John Z
2010-01-01
In contrast to a previous study we conducted and other evidence, a recent study found no significant effects of Medicare coverage after age 65 on overall health for previously uninsured adults and significant adverse effects on survival for some of these adults. We discuss explanations for these inconsistent findings, particularly the different ways in which deaths were handled, a key methodological challenge in longitudinal analyses of health. We demonstrate that analytic approaches suitable for examining effects of coverage on health measures may not be suitable for effects on mortality. Thus, estimates may be misleading when these different outcomes are jointly modeled. We also present new survival analyses that suggest Medicare coverage significantly attenuated the rising risk of death for previously uninsured adults. PMID:20337735
2013-06-17
... Corporation (Type Certificate Previously Held by Raytheon Aircraft Company) Airplanes AGENCY: Federal Aviation... directive (AD) for certain Hawker Beechcraft Corporation (Type Certificate Previously Held by Raytheon... (Type Certificate Previously Held by Raytheon Aircraft Company): Amendment 39-17476; Docket No....
2012-10-23
... Aerospace LP (Type Certificate Previously Held by Israel Aircraft Industries, Ltd.) Airplanes AGENCY... airworthiness directive (AD) for certain Gulfstream Aerospace LP (Type Certificate previously held by Israel... Certificate previously held by Israel Aircraft Industries, Ltd.) Model Galaxy and Gulfstream 200...
2012-11-05
... Corporation (Type Certificate Previously Held by Raytheon Aircraft Company; Beech Aircraft Corporation... (Type Certificate previously held by Raytheon Aircraft Company; Beech Aircraft Corporation) Model 400A... directive (AD): Hawker Beechcraft Corporation (Type Certificate Previously Held by Raytheon Aircraft...
2011-02-07
... Corporation (Type Certificate Previously Held by Raytheon Aircraft Company; Beech Aircraft Corporation) Model... (Type Certificate Previously Held by Raytheon Aircraft Company; Beech Aircraft Corporation): Amendment... Beechcraft Corporation (Type Certificate previously held by Raytheon Aircraft Company; Beech...
马俊海; 张强
2012-01-01
Building on many earlier improvements to the Libor market model, this paper first incorporates a Heston stochastic volatility process into the standard market model to construct a new Libor market model under a stochastic-volatility assumption. Second, the local volatility and the parameters of the stochastic volatility process are calibrated and estimated using Black's inverse-recursion calibration method and Markov chain Monte Carlo (MCMC) estimation. Finally, an empirical simulation is carried out. The study concludes that introducing a stochastic volatility process into a single-factor Libor market model greatly improves the model's explanatory power for interest rate dynamics.
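The Heston-augmented forward-rate dynamics described above can be illustrated with a minimal Euler-Maruyama simulation of a single forward Libor rate with square-root stochastic variance. All parameter values, and the lognormal form of the forward dynamics, are illustrative assumptions rather than the paper's calibrated model.

```python
import numpy as np

# Sketch: one forward Libor rate F with Heston-type variance v,
#   dF = sqrt(v) F dZ,   dv = kappa (theta - v) dt + xi sqrt(v) dW,
# discretized by Euler-Maruyama with "full truncation" to keep sqrt(v) real.
# Parameters are illustrative assumptions.

def simulate_libor_heston(F0=0.05, v0=0.04, kappa=1.5, theta=0.04, xi=0.3,
                          rho=-0.5, T=1.0, n_steps=252, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    F, v = F0, v0
    for _ in range(n_steps):
        z1, z2 = rng.standard_normal(2)
        dw_v = z1
        dw_f = rho * z1 + np.sqrt(1.0 - rho**2) * z2   # correlated driver
        v_pos = max(v, 0.0)                            # full truncation
        F += np.sqrt(v_pos) * F * np.sqrt(dt) * dw_f   # lognormal forward step
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos) * np.sqrt(dt) * dw_v
    return F, v

F_T, v_T = simulate_libor_heston()
```

In a full calibration exercise, paths like these would be generated inside an MCMC loop to match cap or swaption quotes; here the sketch only shows the forward simulation step.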
Bayesian integer frequency offset estimator for MIMO-OFDM systems
(no author listed)
2008-01-01
Carrier frequency offset (CFO) in MIMO-OFDM systems can be decoupled into two parts: the fractional frequency offset (FFO) and the integer frequency offset (IFO). The problem of IFO estimation is addressed and a new IFO estimator based on the Bayesian philosophy is proposed. It is also shown that the Bayesian IFO estimator is optimal among all IFO estimators. Furthermore, the Bayesian estimator can take advantage of oversampling, so that better performance can be obtained. Finally, numerical results demonstrate the optimality of the Bayesian estimator and validate the theoretical analysis.
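The core of IFO detection can be sketched as follows: an integer offset of m subcarriers cyclically shifts the frequency-domain pilot, so correlating the received pilot against shifted candidates and taking the argmax recovers m. Under a uniform prior and Gaussian noise this maximum-correlation rule coincides with the MAP decision; the paper's Bayesian estimator is more general (an assumption on our part), and the pilot here is hypothetical.

```python
import numpy as np

# Sketch: integer frequency offset (IFO) detection on one OFDM pilot symbol.
# An IFO of m subcarriers appears as a cyclic shift of the frequency-domain
# pilot, so a correlation over candidate shifts recovers m.

def detect_ifo(rx_freq, pilot, max_shift=8):
    shifts = list(range(-max_shift, max_shift + 1))
    # |<shifted local pilot, received pilot>| peaks at the true shift
    metric = [abs(np.vdot(np.roll(pilot, m), rx_freq)) for m in shifts]
    return shifts[int(np.argmax(metric))]

rng = np.random.default_rng(1)
N = 64
pilot = np.exp(1j * 2 * np.pi * rng.random(N))   # unit-modulus pilot (hypothetical)
true_ifo = 3
rx = np.roll(pilot, true_ifo) + 0.1 * (rng.standard_normal(N)
                                       + 1j * rng.standard_normal(N))
print(detect_ifo(rx, pilot))  # -> 3
```

At the correct shift the correlation magnitude is near N = 64, while mismatched shifts sum random phasors of typical magnitude near sqrt(N), which is why the detector is robust at moderate noise levels.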
Liu Estimator Based on An M Estimator
Hatice ŞAMKAR
2010-01-01
Objective: In multiple linear regression analysis, multicollinearity and outliers are two main problems. In the presence of multicollinearity, biased estimation methods such as ridge regression, the Stein estimator, principal component regression and the Liu estimator are used. On the other hand, when outliers exist in the data, robust estimators that reduce their influence are preferred. Material and Methods: In this study, to cope with the combined problem of multicollinearity and outliers, the Liu estimator based on an M estimator (Liu M estimator) is studied. In addition, the mean square error (MSE) criterion is used to compare the Liu M estimator with the Liu estimator based on the ordinary least squares (OLS) estimator. Results: OLS, Huber M, Liu and Liu M estimates, and the MSEs of these estimates, were calculated for a data set taken from a study of determinants of physical fitness. The Liu M estimator gave the best performance on this data set: MSE(β̂_LM) = 0.0078 < MSE(β̂_M) = 0.0508 and MSE(β̂_LM) = 0.0078 < MSE(β̂_L) = 0.0085. Conclusion: When both outliers and multicollinearity are present in a data set, using robust estimators reduces the effect of outliers but does not solve the multicollinearity problem, while using biased methods solves multicollinearity but leaves the estimates exposed to outliers. When both problems occur together, it is shown that combining the methods designed for each is better than using them individually.
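The combination described above can be sketched in two steps: compute a Huber M-estimate by iteratively reweighted least squares, then apply the Liu shrinkage β̂_d = (X'X + I)⁻¹(X'X + dI)β̂ to it. The data, the choice d = 0.5, and the IRLS details below are illustrative assumptions, not the paper's data set.

```python
import numpy as np

# Sketch of the Liu M estimator: a Huber M-estimate (via IRLS) plugged into
# the Liu shrinkage formula b_d = (X'X + I)^{-1} (X'X + d I) b.
# Data and the biasing parameter d are illustrative assumptions.

def huber_m(X, y, c=1.345, n_iter=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]            # start from OLS
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust scale
        u = r / s
        w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))   # Huber weights
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

def liu(X, beta, d):
    G = X.T @ X
    p = G.shape[0]
    return np.linalg.solve(G + np.eye(p), (G + d * np.eye(p)) @ beta)

rng = np.random.default_rng(0)
n = 50
x1 = rng.standard_normal(n)
x2 = x1 + 0.01 * rng.standard_normal(n)    # near-collinear second predictor
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.standard_normal(n)
y[:3] += 10.0                              # a few gross outliers
beta_liu_m = liu(X, huber_m(X, y), d=0.5)
```

The Huber step neutralizes the outliers, and the Liu step shrinks mainly the ill-determined direction x1 - x2 while leaving the well-determined direction x1 + x2 nearly untouched.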
Improving statistical reasoning theoretical models and practical implications
Sedlmeier, Peter
1999-01-01
This book focuses on how statistical reasoning works and on training programs that can exploit people's natural cognitive capabilities to improve their statistical reasoning. Training programs that take into account findings from evolutionary psychology and instructional theory are shown to have substantially larger effects that are more stable over time than previous training regimens. The theoretical implications are traced in a neural network model of human performance on statistical reasoning problems. This book appeals to judgment and decision making researchers and other cognitive scientists, as well as to teachers of statistics and probabilistic reasoning.
Typology of Digital News Media: Theoretical Bases for their Classification
Ramón SALAVERRÍA
2017-01-01
Since their beginnings in the 1990s, digital news media have undergone a process of settlement and diversification. As a result, the classification of online media has become increasingly rich and complex. Based on a review of media typologies, this article proposes some theoretical bases for distinguishing online media from previous media and, above all, for differentiating the various types of online media among them. For that purpose, nine typological criteria are proposed: (1) platform, (2) temporality, (3) topic, (4) reach, (5) ownership, (6) authorship, (7) focus, (8) economic purpose, and (9) dynamism.
Mesoscopic structure prediction of nanoparticle assembly and coassembly: Theoretical foundation
Hur, Kahyun
2010-01-01
In this work, we present a theoretical framework that unifies polymer field theory and density functional theory in order to efficiently predict ordered nanostructure formation of systems having considerable complexity in terms of molecular structures and interactions. We validate our approach by comparing its predictions with previous simulation results for model systems. We illustrate the flexibility of our approach by applying it to hybrid systems composed of block copolymers and ligand coated nanoparticles. We expect that our approach will enable the treatment of multicomponent self-assembly with a level of molecular complexity that approaches experimental systems. © 2010 American Institute of Physics.
Theoretical study of the thermochemistry of chlorine oxyfluorides
Sánchez, Hernán R.; Del Pla, Julián
2016-10-01
There is a lack of experimental thermochemical values for most chlorine oxyfluorides. Previous high-level theoretical results at the CCSD(T) level showed uncommonly large errors in the standard heats of formation calculated through the atomization method. We propose that the differences are due to unusually large contributions to the energy from higher excitations within the coupled cluster framework, and we tackle the problem by using a calculation scheme based on isodesmic reactions. Our suspicions are supported by the results of static correlation diagnostics. Our final recommended values are in better agreement with the available experimental data. Other thermodynamic properties are also calculated.
2012-03-19
... Aviation Concept Limited (Type Certificate Previously Held by Alpha Aviation Design Limited) Airplanes... Concept Limited (Type Certificate previously held by Alpha Aviation Design Limited): Docket No....
Information-Theoretic Methods for Identifying Relationships among Climate Variables
Knuth, Kevin H; Rossow, William B
2014-01-01
Information-theoretic quantities, such as entropy, are used to quantify the amount of information a given variable provides. Entropies can be used together to compute the mutual information, which quantifies the amount of information two variables share. However, accurately estimating these quantities from data is extremely challenging. We have developed a set of computational techniques that allow one to accurately compute marginal and joint entropies. These algorithms are probabilistic in nature and thus provide information on the uncertainty in our estimates, enabling us to establish the statistical significance of our findings. We demonstrate these methods by identifying relations between cloud data from the International Satellite Cloud Climatology Project (ISCCP) and data from other sources, such as equatorial Pacific sea surface temperatures (SSTs).
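The entropy-based identity underlying such analyses, I(X;Y) = H(X) + H(Y) - H(X,Y), can be sketched with a naive plug-in (histogram) estimator for discrete data. This is the simple counterpart of the uncertainty-aware estimators described in the abstract, not the authors' method; the data are illustrative.

```python
import numpy as np

# Sketch: plug-in estimates of entropy and mutual information for discrete
# data, using empirical frequencies. I(X;Y) = H(X) + H(Y) - H(X,Y).

def entropy_from_counts(counts):
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def entropy(x):
    return entropy_from_counts(np.unique(x, return_counts=True)[1])

def mutual_information(x, y):
    joint_counts = np.unique(np.column_stack([x, y]), axis=0,
                             return_counts=True)[1]
    return entropy(x) + entropy(y) - entropy_from_counts(joint_counts)

x = [0, 0, 1, 1, 2, 2, 0, 1]
print(mutual_information(x, x))             # equals H(x): X shares everything with itself
print(mutual_information(x, [0] * len(x)))  # -> 0.0: a constant shares nothing
```

Unlike the probabilistic estimators in the paper, this plug-in version gives no uncertainty on the estimate and is biased for small samples, which is precisely the difficulty the authors' techniques address.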
Shentu, Nanying; Zhang, Hongjian; Li, Qing; Zhou, Hongliang; Tong, Renyuan; Li, Xiong
2012-01-01
Deep displacement observation is one basic means of landslide dynamic study and early warning monitoring and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement, and a theoretical model called the equation-based equivalent loop approach (EELA) to describe its sensing characteristics. However, in many landslide and related geological engineering cases, both horizontal and vertical displacements vary appreciably and dynamically, so both may require monitoring. In this study, a II-type deep displacement sensor is designed by revising our I-type sensor to simultaneously monitor deep horizontal and vertical displacement variations at different depths within a sliding mass. Meanwhile, a new theoretical model called the numerical integration-based equivalent loop approach (NIELA) is proposed to quantitatively depict the II-type sensor's mutual inductance properties with respect to predicted horizontal and vertical displacements. After detailed examinations and comparative studies between the measured mutual inductance voltage, NIELA-based mutual inductance and EELA-based mutual inductance, NIELA has been verified to be an effective and quite accurate analytic model for characterization of II-type sensors. The NIELA model is widely applicable to II-type sensor monitoring of all kinds of landslides and other related geohazards, with satisfactory estimation accuracy and calculation efficiency.
(no author listed)
2008-01-01
On the basis of previous work, we develop a middle- and low-latitude theoretical ionospheric model in this paper, named the Theoretical Ionospheric Model of the Earth in the Institute of Geology and Geophysics, Chinese Academy of Sciences (TIME-IGGCAS). TIME-IGGCAS solves the equations of mass continuity, motion and energy for electrons and ions self-consistently and uses an eccentric dipole approximation to the Earth's magnetic field. We combine the Eulerian and Lagrangian approaches in the model and take account of the plasma E×B drift velocity. Calculation results reveal that the model is stable and credible and can reproduce most large-scale features of the ionosphere. Using TIME-IGGCAS, we carried out an observing system data assimilation experiment. Assimilation results show that the E×B drift velocity can be accurately estimated by ingesting the observed foF2 and hmF2 into the model and applying a nonlinear least-squares fitting method. We suggest that this work is of great significance for the development of ionospheric data assimilation models that give better nowcasts and forecasts of the ionosphere.
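The least-squares assimilation step described above can be sketched on a toy forward model: a scalar drift parameter is recovered by minimizing the misfit between model-predicted and "observed" foF2 and hmF2. The forward model, its linear drift dependence, and all numbers below are illustrative assumptions standing in for TIME-IGGCAS.

```python
import numpy as np

# Sketch: least-squares estimation of a drift parameter from (foF2, hmF2)
# pseudo-observations. model() is a toy stand-in for the full ionospheric
# model; its drift dependence is an illustrative assumption.

def model(v_drift, local_times):
    hmF2 = 300.0 + 2.0 * v_drift + 10.0 * np.sin(local_times)   # peak height, km
    foF2 = 8.0 - 0.05 * v_drift + 0.5 * np.cos(local_times)     # critical freq, MHz
    return np.concatenate([foF2, hmF2])

def fit_drift(obs, local_times, v_grid):
    # brute-force least squares over candidate drift velocities
    costs = [np.sum((model(v, local_times) - obs) ** 2) for v in v_grid]
    return v_grid[int(np.argmin(costs))]

lt = np.linspace(0.0, 2.0 * np.pi, 24)
truth = 15.0                                      # hypothetical "true" drift, m/s
rng = np.random.default_rng(2)
obs = model(truth, lt) + 0.1 * rng.standard_normal(2 * len(lt))
v_hat = fit_drift(obs, lt, np.linspace(0.0, 30.0, 301))
```

A production assimilation system would replace the grid search with a proper nonlinear least-squares solver and the toy model with the full physics code, but the structure of the inverse problem is the same.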
Is the Universe More Transparent to Gamma Rays than Previously Thought?
Stecker, Floyd W.; Scully, Sean T.
2009-01-01
The MAGIC collaboration has recently reported the detection of the strong gamma-ray blazar 3C279 during a 1-2 day flare. They have used their spectral observations to draw conclusions regarding upper limits on the opacity of the Universe to high energy gamma-rays and, by implication, upper limits on the extragalactic mid-infrared background radiation. In this paper we examine the effect of gamma-ray absorption by the extragalactic infrared radiation on intrinsic spectra for this blazar and compare our results with the observational data on 3C279. We find agreement with our previous results, contrary to the recent assertion of the MAGIC group that the Universe is more transparent to gamma-rays than our calculations indicate. Our analysis indicates that in the energy range between approx. 80 and approx. 500 GeV, 3C279 has a best-fit intrinsic spectrum with a spectral index of approx. 1.78 using our fast evolution model and approx. 2.19 using our baseline model. However, we also find that spectral indices in the range of 1.0 to 3.0 are almost as acceptable as the best-fit spectral indices. Assuming the same intrinsic spectral index for this flare as for the 1991 flare from 3C279 observed by EGRET, viz., 2.02, which lies between our best-fit indices, we estimate that the MAGIC flare was approx. 3 times brighter than the EGRET flare observed 15 years earlier.
Are all the previously reported genetic variants in limb girdle muscular dystrophy genes pathogenic?
Di Fruscio, Giuseppina; Garofalo, Arcomaria; Mutarelli, Margherita; Savarese, Marco; Nigro, Vincenzo
2016-01-01
Hundreds of variants in autosomal genes associated with the limb girdle muscular dystrophies (LGMDs) have been reported as being causative. However, in most cases the proof of pathogenicity derives from their non-occurrence in hundreds of healthy controls and/or from segregation studies in small families. The limited statistics on genetic variation in the general population may hamper a correct interpretation of the effect of variants on the protein. To clarify the meaning of low-frequency variants in LGMD genes, we selected all variants described as causative in the Leiden Open Variation Database and the Human Gene Mutation Database. We systematically searched for their frequency in the NHLBI GO Exome Sequencing Project (ESP) and in our internal database. Surprisingly, the ESP contains about 4% of the variants previously associated with a dominant inheritance and about 9% of those associated with a recessive inheritance. The putative disease alleles are much more frequent than would be estimated from the disease prevalence. In conclusion, we hypothesize that a number of disease-associated variants are non-pathogenic and that other variants are not fully penetrant, even if they affect protein function, suggesting more complex genetic mechanisms for such heterogeneous disorders.