WorldWideScience

Sample records for semi-empirical method oniom

  1. Application of particle-mesh Ewald summation to ONIOM theory

    International Nuclear Information System (INIS)

    Kobayashi, Osamu; Nanbu, Shinkoh

    2015-01-01

    Highlights: • Particle-mesh Ewald summation is extended to the ONIOM scheme. • Non-adiabatic MD simulation in solution is performed. • The behavior of the excited (Z)-penta-2,4-dieniminium cation in methanol is simulated. • The difference between gas phase and solution is predicted. - Abstract: We extended the particle-mesh Ewald (PME) summation method to the ONIOM (our Own N-layered Integrated molecular Orbitals and molecular Mechanics) scheme (PME-ONIOM) to validate the simulation in solution. This took the form of a nonadiabatic ab initio molecular dynamics (MD) simulation in which the Zhu-Nakamura trajectory surface hopping (ZN-TSH) method was applied to the photoisomerization of the (Z)-penta-2,4-dieniminium cation (protonated Schiff base, PSB3) electronically excited to the S1 state in methanol solution. We also performed a nonadiabatic ab initio MD simulation using only the minimum image convention (MI-ONIOM). The lifetime determined by PME-ONIOM-MD was 3.483 ps. The MI-ONIOM-MD lifetime of 0.4642 ps was much shorter than both the PME-ONIOM-MD value and the experimentally determined excited-state lifetime. The difference clearly illustrates the importance of an accurate treatment of the long-range solvation effect, which determines whether the electronically excited PSB3 remains in S1 on the picosecond rather than the femtosecond time scale.
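
The contrast the abstract draws between the minimum image convention (MI) and Ewald summation comes down to how periodic electrostatics are truncated. A minimal sketch (not the authors' code) of the minimum image convention in a cubic box shows the part MI-ONIOM keeps, and whose neglected long-range tail PME recovers:

```python
import numpy as np

def minimum_image_distance(r1, r2, box):
    """Distance between two particles under the minimum image convention
    in a cubic periodic box of edge length `box`. MI keeps only the
    nearest periodic image; Ewald/PME sums the full periodic lattice."""
    d = np.asarray(r1, float) - np.asarray(r2, float)
    d -= box * np.round(d / box)   # wrap each component into [-box/2, box/2)
    return float(np.linalg.norm(d))

# Two particles near opposite faces of a 10 A box are ~0.2 A apart
# through the periodic boundary, not 9.8 A.
d = minimum_image_distance([0.1, 0.0, 0.0], [9.9, 0.0, 0.0], 10.0)
```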

  2. An Efficient Method to Evaluate Intermolecular Interaction Energies in Large Systems Using Overlapping Multicenter ONIOM and the Fragment Molecular Orbital Method

    Science.gov (United States)

    Asada, Naoya; Fedorov, Dmitri G.; Kitaura, Kazuo; Nakanishi, Isao; Merz, Kenneth M.

    2012-01-01

    We propose an approach based on the overlapping multicenter ONIOM to evaluate intermolecular interaction energies in large systems and demonstrate its accuracy on several representative systems at the complete basis set (CBS) limit at the MP2 and CCSD(T) levels of theory. In the application to the intermolecular interaction energy between the insulin dimer and 4′-hydroxyacetanilide at the MP2/CBS level, we use the fragment molecular orbital method for the calculation of the entire complex, assigned to the lowest layer in three-layer ONIOM. The developed method is shown to be efficient and accurate in the evaluation of protein-ligand interaction energies. PMID:23050059
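
The fragment molecular orbital method mentioned here approximates the energy of the full complex from fragment (monomer) and fragment-pair (dimer) calculations. A schematic sketch of the standard two-body FMO2 energy assembly, with placeholder energies rather than real QM results:

```python
def fmo2_energy(monomer_E, dimer_E):
    """Two-body fragment molecular orbital (FMO2) total energy:
        E = sum_I E_I + sum_{I<J} (E_IJ - E_I - E_J)
    monomer_E[i] is the energy of fragment i; dimer_E[(i, j)] the energy
    of the fragment pair (i, j). In real FMO these are computed in the
    electrostatic field of the remaining fragments."""
    n = len(monomer_E)
    total = sum(monomer_E)
    for i in range(n):
        for j in range(i + 1, n):
            total += dimer_E[(i, j)] - monomer_E[i] - monomer_E[j]
    return total
```

With exactly additive pair energies the corrections vanish; any deviation of a dimer energy from the sum of its monomers enters as a pairwise interaction correction.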

  3. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program

    DEFF Research Database (Denmark)

    Svendsen, Casper Steinmann; Blædel, Kristoffer L.; Christensen, Anders Steen

    2013-01-01

    An interface between semi-empirical methods and the polarized continuum model (PCM) of solvation was successfully implemented into GAMESS following the approach by Chudinov et al. (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules such as ubiquitin a reasonable speedup (up to a factor of six) is observed for up to 16 cores. The SCF convergence is greatly improved by PCM for proteins compared to the gas phase.

  4. Semi-empirical Determination of Detection Efficiency for Voluminous Source by Effective Solid Angle Method

    Energy Technology Data Exchange (ETDEWEB)

    Kang, M. Y.; Kim, J. H.; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of); Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    In the field of γ-ray measurements, determining the full-energy (FE) absorption peak efficiency for a voluminous sample is difficult, because preparing a certified radiation source with the same chemical composition and geometry as the original voluminous sample is not easy. To overcome this inconvenience, simulation or semi-empirical methods are preferred in many cases. The Effective Solid Angle (ESA) code, which implements a semi-empirical approach, has been developed by the Applied Nuclear Physics Group at Seoul National University. In this study, we validated the ESA code using Marinelli-type voluminous KRISS (Korea Research Institute of Standards and Science) CRM (Certified Reference Material) sources and IAEA standard γ-ray point sources. The efficiency curve for a voluminous source, determined semi-empirically with the ESA code from the measured efficiency of a standard point source, is compared with the experimental values. We will carry out further validation of the ESA code by measuring various CRM volume sources with detectors of different efficiency.
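
The efficiency-transfer idea behind such codes scales a measured point-source efficiency by a ratio of (effective) solid angles. A geometry-only sketch, ignoring the attenuation and scattering corrections a real effective-solid-angle calculation includes; the on-axis point-source formula is the standard one for a circular detector face:

```python
import math

def disk_solid_angle(d, r):
    """Solid angle (sr) subtended by a circular detector face of radius r
    at an on-axis point source a distance d away:
        Omega = 2*pi * (1 - d / sqrt(d^2 + r^2))"""
    return 2.0 * math.pi * (1.0 - d / math.sqrt(d * d + r * r))

def transfer_efficiency(eff_point, omega_point, omega_sample):
    """Efficiency-transfer estimate: scale a measured point-source peak
    efficiency by the ratio of effective solid angles of the voluminous
    sample and the point source (geometry-only approximation)."""
    return eff_point * omega_sample / omega_point
```

For a far-away source the formula reduces to the familiar area/d² limit, which makes a convenient sanity check.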

  5. Aircraft directional stability and vertical tail design: A review of semi-empirical methods

    Science.gov (United States)

    Ciliberti, Danilo; Della Vecchia, Pierluigi; Nicolosi, Fabrizio; De Marco, Agostino

    2017-11-01

    Aircraft directional stability and control are related to vertical tail design. The safety, performance, and flight qualities of an aircraft also depend on a correct empennage sizing. Specifically, the vertical tail is responsible for the aircraft yaw stability and control. If these characteristics are not well balanced, the entire aircraft design may fail. Stability and control are often evaluated, especially in the preliminary design phase, with semi-empirical methods, which are based on the results of experimental investigations performed in the past decades, and occasionally are merged with data provided by theoretical assumptions. This paper reviews the standard semi-empirical methods usually applied in the estimation of airplane directional stability derivatives in preliminary design, highlighting the advantages and drawbacks of these approaches that were developed from wind tunnel tests performed mainly on fighter airplane configurations of the first decades of the past century, and discussing their applicability on current transport aircraft configurations. Recent investigations made by the authors have shown the limit of these methods, proving the existence of aerodynamic interference effects in sideslip conditions which are not adequately considered in classical formulations. The article continues with a concise review of the numerical methods for aerodynamics and their applicability in aircraft design, highlighting how Reynolds-Averaged Navier-Stokes (RANS) solvers are well-suited to attain reliable results in attached flow conditions, with reasonable computational times. From the results of RANS simulations on a modular model of a representative regional turboprop airplane layout, the authors have developed a modern method to evaluate the vertical tail and fuselage contributions to aircraft directional stability. The investigation on the modular model has permitted an effective analysis of the aerodynamic interference effects by moving, changing, and
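
As an illustration of the kind of formula such semi-empirical methods rest on, the classical single-term estimate of the vertical-tail contribution to the yaw-stiffness derivative can be sketched as follows (generic textbook form; the symbols and the lumped interference factor k are not taken from the paper):

```python
def cn_beta_vertical_tail(a_v, S_v, l_v, S, b, eta_v=0.9, k=1.0):
    """Classical single-term estimate of the vertical-tail contribution
    to the directional-stability derivative:
        Cn_beta_v = k * eta_v * a_v * (S_v * l_v) / (S * b)
    a_v   : tail lift-curve slope (1/rad)
    S_v   : vertical-tail area, l_v : tail moment arm
    S, b  : wing reference area and span
    eta_v : dynamic-pressure ratio at the tail
    k     : lumped sidewash/interference factor (the quantity the
            fuselage-tail interference studies above aim to refine)."""
    return k * eta_v * a_v * (S_v * l_v) / (S * b)
```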

  6. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program

    DEFF Research Database (Denmark)

    Svendsen, Casper Steinmann; Blædel, Kristoffer; Christensen, Anders S

    2013-01-01

    An interface between semi-empirical methods and the polarized continuum model (PCM) of solvation was successfully implemented into GAMESS following the approach by Chudinov et al. (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules such as ubiquitin a reasonable speedup (up to a factor of six) is observed for up to 16 cores. The SCF convergence is greatly improved by PCM for proteins compared to the gas phase.

  7. Comparison of a semi-empirical method with some model codes for gamma-ray spectrum calculation

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, Fan; Zhixiang, Zhao [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

    Gamma-ray spectra calculated by a semi-empirical method are compared with those calculated by the model codes such as GNASH, TNG, UNF and NDCP-1. The results of the calculations are discussed. (2 tabs., 3 figs.).

  8. Theoretical calculations of proton affinities of n-alkylamines using the ONIOM method

    Directory of Open Access Journals (Sweden)

    Braga Ataualpa A. C.

    2006-01-01

    The ONIOM method was used to calculate the proton affinities (PA) of n-alkylamines (CnH2n+1NH2, n = 3 to 6, 8, 10, 12, 14, 16 and 18). The calculations were carried out at several levels (HF, MP2, B3LYP, QCISD(T), ...) using Pople basis sets, and at the QCISD(T) level using basis sets developed by the generator coordinate method (GCM) and adapted to effective core potentials. PAs were also obtained through the GCM and high-level methods, such as ONIOM[QCISD(T)/6-31+G(2df,p):MP2/6-31+G(d,p)]//ONIOM[MP2/6-31+G(d,p):HF/6-31G]. The average error using the GCM, with respect to experimental data, was 3.4 kJ mol-1.

  9. Theoretical analysis of geometry and NMR isotope shift in hydrogen-bonding center of photoactive yellow protein by combination of multicomponent quantum mechanics and ONIOM scheme

    Energy Technology Data Exchange (ETDEWEB)

    Kanematsu, Yusuke; Tachikawa, Masanori [Quantum Chemistry Division, Yokohama City University, Seto 22-2, Kanazawa-ku, Yokohama 236-0027 (Japan)

    2014-11-14

    The multicomponent quantum mechanical (MC-QM) calculation has been extended with the ONIOM (our own N-layered integrated molecular orbital + molecular mechanics) scheme [ONIOM(MC-QM:MM)] to take account of both the nuclear quantum effect and the surrounding environment effect. The authors demonstrate the first implementation and application of the ONIOM(MC-QM:MM) method for the analysis of the geometry and the isotope shift in the hydrogen-bonding center of photoactive yellow protein. An ONIOM(MC-QM:MM) calculation for a model with deprotonated Arg52 reproduced the elongation of the O-H bond of Glu46 observed by neutron diffraction crystallography. Among the unique isotope shifts under different conditions, the model with protonated Arg52 including the solvent effect provided the best agreement with the corresponding experimental values from liquid NMR measurement. Our results imply the ability of ONIOM(MC-QM:MM) to distinguish the local environment around hydrogen bonds in a biomolecule.

  10. A semi-empirical method for measuring thickness of pipe-wall using gamma scattering technique

    International Nuclear Information System (INIS)

    Vo Hoang Nguyen; Hua Tuyet Le; Le Dinh Minh Quan; Hoang Duc Tam; Le Bao Tran; Tran Thien Thanh; Tran Nguyen Thuy Ngan; Chau Van Tao; VNUHCM-University of Science, Ho Chi Minh City; Huynh Dinh Chuong

    2016-01-01

    In this work, we propose a semi-empirical method for determining pipe-wall thickness by combining experimental and Monte Carlo simulation data. The test measurements show that this is an efficient method for measuring pipe-wall thickness. In addition, this work shows that a NaI(Tl) scintillation detector and a low-activity source can be used to measure pipe-wall thickness in a simple, quick and highly accurate way. (author)

  11. Prediction of Physicochemical Properties of Organic Molecules Using Semi-Empirical Methods

    International Nuclear Information System (INIS)

    Kim, Chan Kyung; Kim, Chang Kon; Kim, Miri; Lee, Hai Whang; Cho, Soo Gyeong

    2013-01-01

    Prediction of the physicochemical properties of organic molecules is an important process in chemistry and chemical engineering. The MSEP approach developed in our lab calculates the molecular surface electrostatic potential (ESP) on the van der Waals (vdW) surfaces of molecules. This approach includes geometry optimization and frequency calculation using hybrid density functional theory, B3LYP, with the 6-31G(d) basis set to find minima on the potential energy surface, and is known to give satisfactory QSPR results for various properties of organic molecules. However, the MSEP method is not suitable for screening large databases because geometry optimization and frequency calculation require considerable computing time. To develop a fast yet reliable approach, we have re-examined our previous work on organic molecules using two semi-empirical methods, AM1 and PM3. This new approach can be an efficient protocol for designing new molecules with improved properties.

  12. The performance of selected semi-empirical and DFT methods in studying C60 fullerene derivatives

    Science.gov (United States)

    Sikorska, Celina; Puzyn, Tomasz

    2015-11-01

    The capability of reproducing the open-circuit voltages (Voc) of 15 representative C60 fullerene derivatives was tested using selected quantum mechanical methods (B3LYP, PM6, and PM7) together with two one-electron basis sets. Certain theoretical treatments (e.g. PM6) were found to be satisfactory for preliminary estimates of the open-circuit voltages (Voc), whereas the B3LYP/6-31G(d) approach has been proven to give highly accurate results. We also examined the structural similarity of 19 fullerene derivatives by employing principal component analysis (PCA). To express the structural features of the studied compounds we used molecular descriptors calculated with semi-empirical (PM6 and PM7) and density functional (B3LYP/6-31G(d)) methods separately. In performing PCA, we noticed that the semi-empirical methods (i.e. PM6 and PM7) seem satisfactory for molecules in which one can distinguish the aromatic and the aliphatic parts in the cyclopropane ring of PCBM (phenyl-C61-butyric acid methyl ester), although they significantly overestimate the energy of the highest occupied molecular orbital (EHOMO). The use of the B3LYP functional, however, is recommended for studying methanofullerenes, which closely resemble the structure of PCBM, and for their modifications.
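
Principal component analysis of a descriptor matrix, as used here to compare semi-empirical and DFT descriptors, can be sketched in a few lines (synthetic data, not the authors' descriptor set):

```python
import numpy as np

def pca(X, n_components=2):
    """Principal component analysis via SVD of the mean-centered
    descriptor matrix X (rows = molecules, columns = descriptors).
    Returns (scores, explained_variance_ratio)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T          # projections on the PCs
    var = s**2 / (len(X) - 1)                  # variance per component
    return scores, var[:n_components] / var.sum()
```

For perfectly collinear descriptors the first component captures essentially all of the variance, which is the degenerate limit of the clustering behaviour PCA is used to reveal.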

  13. The performance of selected semi-empirical and DFT methods in studying C60 fullerene derivatives

    International Nuclear Information System (INIS)

    Sikorska, Celina; Puzyn, Tomasz

    2015-01-01

    The capability of reproducing the open-circuit voltages (Voc) of 15 representative C60 fullerene derivatives was tested using selected quantum mechanical methods (B3LYP, PM6, and PM7) together with two one-electron basis sets. Certain theoretical treatments (e.g. PM6) were found to be satisfactory for preliminary estimates of the open-circuit voltages (Voc), whereas the B3LYP/6-31G(d) approach has been proven to give highly accurate results. We also examined the structural similarity of 19 fullerene derivatives by employing principal component analysis (PCA). To express the structural features of the studied compounds we used molecular descriptors calculated with semi-empirical (PM6 and PM7) and density functional (B3LYP/6-31G(d)) methods separately. In performing PCA, we noticed that the semi-empirical methods (i.e. PM6 and PM7) seem satisfactory for molecules in which one can distinguish the aromatic and the aliphatic parts in the cyclopropane ring of PCBM (phenyl-C61-butyric acid methyl ester), although they significantly overestimate the energy of the highest occupied molecular orbital (EHOMO). The use of the B3LYP functional, however, is recommended for studying methanofullerenes, which closely resemble the structure of PCBM, and for their modifications. (paper)

  14. The performance of selected semi-empirical and DFT methods in studying C₆₀ fullerene derivatives.

    Science.gov (United States)

    Sikorska, Celina; Puzyn, Tomasz

    2015-11-13

    The capability of reproducing the open circuit voltages (V(oc)) of 15 representative C60 fullerene derivatives was tested using selected quantum mechanical methods (B3LYP, PM6, and PM7) together with two one-electron basis sets. Certain theoretical treatments (e.g. PM6) were found to be satisfactory for preliminary estimates of the open circuit voltages (V(oc)), whereas the B3LYP/6-31G(d) approach has been proven to give highly accurate results. We also examined the structural similarity of 19 fullerene derivatives by employing principal component analysis (PCA). To express the structural features of the studied compounds we used molecular descriptors calculated with semi-empirical (PM6 and PM7) and density functional (B3LYP/6-31G(d)) methods separately. In performing PCA, we noticed that the semi-empirical methods (i.e. PM6 and PM7) seem satisfactory for molecules in which one can distinguish the aromatic and the aliphatic parts in the cyclopropane ring of PCBM (phenyl-C61-butyric acid methyl ester), although they significantly overestimate the energy of the highest occupied molecular orbital (E(HOMO)). The use of the B3LYP functional, however, is recommended for studying methanofullerenes, which closely resemble the structure of PCBM, and for their modifications.

  15. Predicting acid dew point with a semi-empirical model

    International Nuclear Information System (INIS)

    Xiang, Baixiang; Tang, Bin; Wu, Yuxin; Yang, Hairui; Zhang, Man; Lu, Junfu

    2016-01-01

    Highlights: • The previous semi-empirical models are systematically studied. • An improved thermodynamic correlation is derived. • A semi-empirical prediction model is proposed. • The proposed semi-empirical model is validated. - Abstract: Decreasing the temperature of the exhaust flue gas in boilers is one of the most effective ways to improve the thermal efficiency and electrostatic precipitator efficiency and to decrease the water consumption of the desulfurization tower. However, when this temperature is below the acid dew point, fouling and corrosion occur on the heating surfaces in the second pass of boilers. Accurate prediction of the acid dew point is therefore essential. By investigating previous models of acid dew point prediction, an improved thermodynamic correlation between the acid dew point and its influencing factors is first derived. A semi-empirical prediction model is then proposed, which is validated against both field-test and experimental data and compared with the previous models.
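
For context, one widely quoted earlier correlation of the kind this paper improves upon is the Verhoff-Banchero form; a sketch with the coefficients as commonly reproduced in the flue-gas literature (this is illustrative, not the paper's new model):

```python
import math

def acid_dew_point_K(p_h2o_mmHg, p_so3_mmHg):
    """Sulfuric acid dew point of flue gas from the Verhoff-Banchero
    correlation (partial pressures in mmHg, result in kelvin):
        1000/Tdp = 2.276 - 0.0294*ln(pH2O) - 0.0858*ln(pSO3)
                   + 0.0062*ln(pH2O)*ln(pSO3)"""
    lnw = math.log(p_h2o_mmHg)
    lns = math.log(p_so3_mmHg)
    inv_T = (2.276 - 0.0294 * lnw - 0.0858 * lns + 0.0062 * lnw * lns) / 1000.0
    return 1.0 / inv_T
```

For a typical flue gas (roughly 10% H2O and 10 ppm SO3 at atmospheric pressure) this yields a dew point on the order of 130-140 °C, which is why exhaust temperatures cannot be lowered arbitrarily.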

  16. ONIOM Investigation of the Second-Order Nonlinear Optical Responses of Fluorescent Proteins.

    Science.gov (United States)

    de Wergifosse, Marc; Botek, Edith; De Meulenaere, Evelien; Clays, Koen; Champagne, Benoît

    2018-05-17

    The first hyperpolarizability (β) of six fluorescent proteins (FPs), namely, enhanced green fluorescent protein, enhanced yellow fluorescent protein, SHardonnay, ZsYellow, DsRed, and mCherry, has been calculated to unravel the structure-property relationships on their second-order nonlinear optical properties, owing to their potential for multidimensional biomedical imaging. The ONIOM scheme has been employed and several of its refinements have been addressed to incorporate efficiently the effects of the microenvironment on the nonlinear optical responses of the FP chromophore that is embedded in a protective β-barrel protein cage. In the ONIOM scheme, the system is decomposed into several layers (here two) treated at different levels of approximation (method1/method2), from the most elaborated method (method1) for its core (called the high layer) to the most approximate one (method2) for the outer surrounding (called the low layer). We observe that a small high layer can already account for the variations of β as a function of the nature of the FP, provided the low layer is treated at an ab initio level to describe properly the effects of key H-bonds. Then, for semiquantitative reproduction of the experimental values obtained from hyper-Rayleigh scattering experiments, it is necessary to incorporate electron correlation as described at the second-order Møller-Plesset perturbation theory (MP2) level as well as implicit solvent effects accounted for using the polarizable continuum model (PCM). This led us to define the MP2/6-31+G(d):HF/6-31+G(d)/IEFPCM scheme as an efficient ONIOM approach and the MP2/6-31+G(d):HF/6-31G(d)/IEFPCM as a better compromise between accuracy and computational needs. Using these methods, we demonstrate that many parameters play a role on the β response of FPs, including the length of the π-conjugated segment, the variation of the bond length alternation, and the presence of π-stacking interactions. Then, noticing the small diversity
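
The two-layer ONIOM decomposition described above reduces to a simple subtractive extrapolation; a minimal sketch with placeholder energies:

```python
def oniom2_energy(E_high_model, E_low_model, E_low_real):
    """Two-layer ONIOM extrapolated energy:
        E(ONIOM) = E_high(model) + E_low(real) - E_low(model)
    'model' is the high layer cut out of the system (capped with link
    atoms), 'real' the full system; 'high'/'low' are the two levels of
    theory (e.g. MP2 and HF in the scheme above). The same subtractive
    formula applies to properties such as the hyperpolarizability beta."""
    return E_high_model + E_low_real - E_low_model
```

The subtraction removes the low-level description of the model region so it is counted only once, at the high level.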

  17. Semi-empirical neutron tool calibration (one and two-group approximation)

    International Nuclear Information System (INIS)

    Czubek, J.A.

    1988-01-01

    The physical principles of a new method for calibrating neutron tools for rock porosity determination are given. A short description of the physics of neutron transport in matter is presented, together with some remarks on the elementary interactions of neutrons with nuclei (cross sections, group cross sections, etc.). The main integral parameters characterizing neutron transport in rock media are defined. The three main approaches to the calibration problem (empirical, theoretical and semi-empirical) are presented, with a more detailed description of the latter. The new semi-empirical approach is described. The method is based on the definition of the apparent slowing-down or migration length for neutrons sensed by the neutron tool in real borehole-rock conditions. To calculate these apparent slowing-down or migration lengths, the ratio of appropriate spatial moments of the neutron distribution along the borehole axis is used. Theoretical results are given for one- and two-group diffusion approximations in borehole-rock geometries with the tool in the sidewall position. The physical and chemical parameters of the calibration blocks of the Logging Company in Zielona Gora are given, and from these data the neutron parameters of the calibration blocks have been calculated. An example of how to determine the calibration curve for a dual-detector tool by applying the new method, using the neutron parameters mentioned above together with measurements performed in the calibration blocks, is given. The most important advantage of the new semi-empirical calibration method is the possibility of placing all experimental calibration data obtained with a given neutron tool, for different porosities, lithologies and borehole diameters, on a single calibration curve. 52 refs., 21 figs., 21 tabs. (author)

  18. Synthesis, characterization and biological application of four novel metal-Schiff base complexes derived from allylamine and their interactions with human serum albumin: Experimental, molecular docking and ONIOM computational study.

    Science.gov (United States)

    Kazemi, Zahra; Rudbari, Hadi Amiri; Sahihi, Mehdi; Mirkhani, Valiollah; Moghadam, Majid; Tangestaninejad, Shahram; Mohammadpoor-Baltork, Iraj; Gharaghani, Sajjad

    2016-09-01

    Novel metal-based drug candidates, VOL2, NiL2, CuL2 and PdL2, have been synthesized from the 2-hydroxy-1-allyliminomethyl-naphthalene ligand and characterized by means of elemental analysis (CHN) and FT-IR and UV-vis spectroscopies. In addition, (1)H and (13)C NMR techniques were employed for characterization of the PdL2 complex. The single-crystal X-ray diffraction technique was utilized to determine the structures of the complexes. The Cu(II), Ni(II) and Pd(II) complexes show a square-planar trans-coordination geometry, while in VOL2 the vanadium center has a distorted tetragonal-pyramidal N2O3 coordination sphere. HSA binding was also determined using fluorescence quenching, UV-vis spectroscopy and circular dichroism (CD) titration methods. The results revealed that the HSA binding affinity of the synthesized compounds follows the order PdL2 > CuL2 > VOL2 > NiL2, indicating the effect of the metal ion on the binding constant. The distance between these compounds and HSA was obtained based on Förster's theory of non-radiative energy transfer. Furthermore, computational methods including molecular docking and our Own N-layered Integrated molecular Orbital and molecular Mechanics (ONIOM) were employed to investigate the HSA binding of the compounds. Molecular docking calculations indicated the existence of hydrogen bonds between amino acid residues of HSA and all synthesized compounds; the formation of these hydrogen bonds stabilizes the HSA-compound systems. The ONIOM method was utilized to investigate HSA binding more precisely, with the molecular mechanics method (UFF) selected for the low layer and a semi-empirical method (PM6) for the high layer. The results show that the structural parameters of the compounds change upon binding to HSA, indicating a strong interaction between the compounds and HSA. The value of the binding constant depends on the extent of the resultant changes. This

  19. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program.

    Directory of Open Access Journals (Sweden)

    Casper Steinmann

    An interface between semi-empirical methods and the polarized continuum model (PCM) of solvation was successfully implemented into GAMESS following the approach by Chudinov et al. (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules such as ubiquitin a reasonable speedup (up to a factor of six) is observed for up to 16 cores. The SCF convergence is greatly improved by PCM for proteins compared to the gas phase.

  20. A simple semi-empirical approximation for bond energy

    International Nuclear Information System (INIS)

    Jorge, F.E.; Giambiagi, M.; Giambiagi, M.S. de.

    1985-01-01

    A simple semi-empirical expression for bond energy, related to a generalized bond index, is proposed and applied within the IEH framework. The correlation with experimental data is good for the intermolecular bond energies of base pairs of nucleic acids and other hydrogen-bonded systems. The intramolecular bond energies for a sample of molecules containing typical bonds, and for hydrides, are discussed. The results are compared with those obtained by other methods. (author)

  1. Semi-empirical corrosion model for Zircaloy-4 cladding

    International Nuclear Information System (INIS)

    Nadeem Elahi, Waseem; Atif Rana, Muhammad

    2015-01-01

    The Zircaloy-4 cladding tube in Pressurized Water Reactors (PWRs) undergoes corrosion due to fast neutron flux, coolant temperature and water chemistry. The thickness of the Zircaloy-4 cladding tube may decrease as corrosion penetration increases, which may affect the integrity of the fuel rod. The tin content and the intermetallic particle sizes have been found to significantly affect the magnitude of the oxide thickness. In the present study we have developed a semi-empirical corrosion model by modifying the Arrhenius equation for corrosion as a function of acceleration factors for tin content and accumulative annealing. The developed model has been incorporated into a fuel performance computer code. The cladding oxide thickness data obtained from the semi-empirical corrosion model have been compared with experimental results, i.e., numerous cases of measured cladding oxide thickness from UO2 fuel rods irradiated in various PWRs. The results of both studies lie within an error band of 20 μm, which confirms the validity of the developed semi-empirical corrosion model. Key words: corrosion, Zircaloy-4, tin content, accumulative annealing factor, semi-empirical, PWR. (author)
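
The modification described, an Arrhenius rate scaled by acceleration factors for tin content and accumulative annealing, can be sketched schematically (the rate constant and activation energy below are placeholders, not the paper's fitted values):

```python
import math

def oxide_growth_rate(T_K, A=6.3e9, Q_over_R=14080.0, f_Sn=1.0, f_anneal=1.0):
    """Arrhenius-type oxide growth rate for Zircaloy cladding, scaled by
    hypothetical acceleration factors for tin content (f_Sn) and
    accumulative annealing (f_anneal):
        rate = f_Sn * f_anneal * A * exp(-Q / (R * T))
    A and Q/R are illustrative placeholder values only."""
    return f_Sn * f_anneal * A * math.exp(-Q_over_R / T_K)
```

The acceleration factors enter multiplicatively, so a 20% tin-related acceleration raises the rate by 20% at every temperature while the activation energy is unchanged.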

  2. Verification of supersonic and hypersonic semi-empirical predictions using CFD

    International Nuclear Information System (INIS)

    McIlwain, S.; Khalid, M.

    2004-01-01

    CFD was used to verify the accuracy of the axial force, normal force, and pitching moment predictions of two semi-empirical codes. This analysis considered the flow around the forebody of four different aerodynamic shapes. These included geometries with equal-volume straight or tapered bodies, with either standard or double-angle nose cones. The flow was tested at freestream Mach numbers of M = 1.5, 4.0, and 7.0. The CFD results gave the expected flow pressure contours for each geometry. The geometries with straight bodies produced larger axial forces, smaller normal forces, and larger pitching moments compared to the geometries with tapered bodies. The double-angle nose cones introduced a shock into the flow, but affected the straight-body geometries more than the tapered-body geometries. Both semi-empirical codes predicted axial forces that were consistent with the CFD data. The agreement between the normal forces and pitching moments was not as good, particularly for the straight-body geometries. But even though the semi-empirical results were not exactly the same as the CFD data, the semi-empirical codes provided rough estimates of the aerodynamic parameters in a fraction of the time required to perform a CFD analysis. (author)

  3. Relationships between moment magnitude and fault parameters: theoretical and semi-empirical relationships

    Science.gov (United States)

    Wang, Haiyun; Tao, Xiaxin

    2003-12-01

    Fault parameters are important in earthquake hazard analysis. In this paper, theoretical relationships between moment magnitude and fault parameters, including subsurface rupture length, downdip rupture width, rupture area, and average slip over the fault surface, are deduced from seismological theory. These theoretical relationships are further simplified by applying similarity conditions, and a unique form is established. Combining the simplified theoretical relationships with the seismic source data selected in this study, a practical semi-empirical relationship is established. The selected seismic source data are also used to derive empirical relationships between moment magnitude and fault parameters by the ordinary least-squares regression method. Comparisons show that the semi-empirical relationships depict the distribution trends of the data better than the empirical ones. It is also observed that the downdip rupture widths of strike-slip faults saturate when the moment magnitude exceeds 7.0, whereas the downdip rupture widths of dip-slip faults do not saturate in the moment magnitude range of this study.
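
The ordinary least-squares step mentioned for the empirical relationships typically fits a log-linear scaling law; a sketch with synthetic data (not the paper's source catalogue):

```python
import numpy as np

def fit_magnitude_length(Mw, L_km):
    """Ordinary least-squares fit of the standard scaling form
        Mw = a + b * log10(L)
    used for empirical magnitude vs. rupture-length relations.
    Returns the intercept a and slope b."""
    X = np.column_stack([np.ones(len(L_km)), np.log10(np.asarray(L_km, float))])
    coef, *_ = np.linalg.lstsq(X, np.asarray(Mw, float), rcond=None)
    return float(coef[0]), float(coef[1])
```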

  4. Semi-empirical spectrophotometric (SESp) method for the indirect determination of the ratio of cationic micellar binding constants of counterions X⁻ and Br⁻(K(X)/K(Br)).

    Science.gov (United States)

    Khan, Mohammad Niyaz; Yusof, Nor Saadah Mohd; Razak, Norazizah Abdul

    2013-01-01

    The semi-empirical spectrophotometric (SESp) method for the indirect determination of ion-exchange constants (K(X)(Br)) of ion-exchange processes occurring between counterions (X⁻ and Br⁻) at the cationic micellar surface is described in this article. The method uses an anionic spectrophotometric probe molecule, the N-(2-methoxyphenyl)phthalamate ion (1⁻), and measures the effects of varying concentrations of an inert inorganic or organic salt (Na(v)X, v = 1, 2) on the absorbance (A(ob)) at 310 nm of samples containing constant concentrations of 1⁻, NaOH and cationic micelles. The observed data fit satisfactorily to an empirical equation which gives the values of two empirical constants. These empirical constants lead to the determination of K(X)(Br) (= K(X)/K(Br), with K(X) and K(Br) representing the cationic micellar binding constants of counterions X⁻ and Br⁻). The method gives values of K(X)(Br) for both moderately hydrophobic and hydrophilic X⁻. The values of K(X)(Br) obtained by this method are comparable with the corresponding values obtained by the semi-empirical kinetic (SEK) method for different moderately hydrophobic X⁻. The values of K(X)(Br) for X⁻ = Cl⁻ and 2,6-Cl₂C₆H₃CO₂⁻, obtained by the SESp and SEK methods, are similar to those obtained by other conventional methods.

  5. The semi-empirical low-level background statistics

    International Nuclear Information System (INIS)

    Tran Manh Toan; Nguyen Trieu Tu

    1992-01-01

    A semi-empirical low-level background statistics model is proposed. It can be applied to evaluate the sensitivity of low-background systems and to analyse the statistical error and the 'Rejection' and 'Accordance' criteria for processing low-level experimental data. (author). 5 refs, 1 fig

  6. Semi-empirical calculations for the ranges of fast ions in silicon

    Science.gov (United States)

    Belkova, Yu. A.; Teplova, Ya. A.

    2018-04-01

    A semi-empirical method is proposed to calculate ion ranges in the energy region E = 0.025-10 MeV/nucleon. The dependence of the ion ranges on the projectile nuclear charge, mass and velocity is analysed. The calculated ranges of ions with nuclear charges Z = 2-10 in silicon are compared with SRIM results and experimental data.

  7. Experimental and semi-empirical and DFT calculational studies on (E)-2-((2-morpholinoethyliminio)methyl)-4-nitrophenolate

    International Nuclear Information System (INIS)

    Alpaslan, Y. B.; Agar, E.; Ersahin, F.; Iskeleli, N. O.; Oeztekin, E.

    2010-01-01

    The molecular and crystal structure of the title compound, C13H17N3O4, has been determined by the X-ray single-crystal diffraction technique. The compound crystallizes in the triclinic space group P-1 with unit cell dimensions a=5.3520(4), b=10.9011(8), c=12.4537(9) Å, Mr=279.30, V=675.91(9) Å³, Z=2, R1=0.037 and wR2=0.097. The molecule adopts a zwitterionic form, stabilized by an intramolecular N⁺-H...O⁻ type ionic weak hydrogen bond. The molecules pack via intermolecular N-H...O hydrogen bonds, together with the intramolecular N⁺-H...O⁻ bond. Calculational studies were performed using the AM1 and PM3 semi-empirical methods and the DFT method. Geometry optimizations of the compound were carried out with the semi-empirical and DFT methods, and the bond lengths, bond angles and torsion angles of the title compound were determined. Atomic charge distributions were obtained from DFT. To assess the conformational flexibility of the molecule, the molecular energy profile of the title compound was obtained with respect to the selected torsion angle T(C2-C1-C7-N1), varied from -180° to +180° in steps of 10° via the PM3 semi-empirical method.

  8. Semi-empirical formulas for sputtering yield

    International Nuclear Information System (INIS)

    Yamamura, Yasumichi

    1994-01-01

    When charged particles, electrons, light and so on irradiate solid surfaces, material is lost from the surfaces; this phenomenon is called sputtering. To understand sputtering, one must know the bond energy of atoms on the surface, the energy deposited in the vicinity of the surface, and the process by which that energy is converted into the energy that releases atoms. The theories of sputtering and the semi-empirical formulas for evaluating the dependence of sputtering yield on incident energy are explained. The mechanisms of sputtering are collision cascades in the case of heavy-ion incidence and surface-atom recoil in the case of light-ion incidence. The formulas for the sputtering yield of low-energy heavy-ion sputtering, high-energy light-ion sputtering and the general case between these extremes, together with the Matsunami formula, are shown. At the stage of the publication of Atomic Data and Nuclear Data Tables in 1984, the data up to 1983 had been collected; about 30 papers published thereafter were added. Experimental data for low-Z materials, for example Be, B and C, and light-ion sputtering data were reported. The combinations of ions and target atoms in the collected sputtering data are shown. A new semi-empirical formula, obtained by slightly adjusting the Matsunami formula, was decided on. (K.I.)

  9. Electronic structure prediction via data-mining the empirical pseudopotential method

    Energy Technology Data Exchange (ETDEWEB)

    Zenasni, H; Aourag, H [LEPM, URMER, Departement of Physics, University Abou Bakr Belkaid, Tlemcen 13000 (Algeria); Broderick, S R; Rajan, K [Department of Materials Science and Engineering, Iowa State University, Ames, Iowa 50011-2230 (United States)

    2010-01-15

    We introduce a new approach for accelerating the calculation of the electronic structure of new materials by utilizing the empirical pseudopotential method combined with data mining tools. Combining data mining with the empirical pseudopotential method allows us to convert an empirical approach into a predictive approach. Here we consider tetrahedrally bonded III-V Bi semiconductors, and through the prediction of form factors based on basic elemental properties we can model the band structure and charge density for these semiconductors, for which limited results exist. This work represents a unique approach to modeling the electronic structure of a material which may be used to identify new promising semiconductors and is one of the few efforts utilizing data mining at an electronic level. (Abstract Copyright [2010], Wiley Periodicals, Inc.)

  10. ONIOM Studies of Esterification at Oxidized Carbon Nanotube Tips

    Energy Technology Data Exchange (ETDEWEB)

    Contreras-Torres, F F; Basiuk, V A [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, Circuito Exterior C.U., A. Postal 70-543, 04510 Mexico D. F. (Mexico)

    2007-03-15

    Esterification of oxidized carbon nanotubes (CNTs) can open a new route for the separation of zigzag and armchair nanotubes. We studied theoretically (using hybrid DFT within the ONIOM embedding protocol) the reactions of monocarboxy-substituted oxidized tips of zigzag and armchair single-walled CNTs (SWCNTs) with methanol. According to the calculated activation energies, Gibbs free-energy activation barriers, and enthalpies of formation for the SWCNT-(COOH)H5 models, the zigzag nanotube isomer is more reactive than its armchair counterpart. For other models we obtained variable results.

  11. A semi-empirical two phase model for rocks

    International Nuclear Information System (INIS)

    Fogel, M.B.

    1993-01-01

    This article presents data from an experiment simulating a spherically symmetric tamped nuclear explosion. A semi-empirical two-phase model of the measured response in tuff is presented. A comparison is made of the computed peak stress and velocity versus scaled range and that measured on several recent tuff events

  12. Investigation of the binding free energies of FDA approved drugs against subtype B and C-SA HIV PR: ONIOM approach.

    Science.gov (United States)

    Sanusi, Z K; Govender, T; Maguire, G E M; Maseko, S B; Lin, J; Kruger, H G; Honarparvar, B

    2017-09-01

    Human immunodeficiency virus subtype C is the most widely spread HIV subtype in sub-Saharan Africa and South Africa. Profound structural insight into finding potential lead compounds is therefore necessary for drug discovery. The focus of this study is to rationalize the nine Food and Drug Administration (FDA) approved HIV antiviral drugs complexed to subtype B and C-SA PR using the ONIOM approach. To achieve this, an integrated two-layered ONIOM model was used to optimize the geometries of the FDA-approved HIV-1 PR inhibitors for subtype B. In our hybrid ONIOM model, the HIV-1 PR inhibitors as well as the Asp25/25' catalytic active-site residues were treated at a high level of quantum mechanics (QM) theory using B3LYP/6-31G(d), and the remaining HIV PR residues were described with the AMBER force field. The experimental binding energies of the PR inhibitors were compared to the ONIOM calculated results. The theoretical binding free energies (ΔG bind) for subtype B follow a similar trend to the experimental results, with one exception. The computational model was less suitable for C-SA PR. Analysis of the results provided valuable information about the shortcomings of this approach. Future studies will focus on improving the computational model by considering explicit water molecules in the active pocket. We believe that this approach has the potential to provide much improved binding energies for complex enzyme-drug interactions. Copyright © 2017 Elsevier Inc. All rights reserved.
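For context, the subtractive two-layer ONIOM combination that underlies models like the one above can be written down directly: the low-level energy of the full ("real") system is corrected by the high-low difference on the small ("model") region. The energies below are hypothetical placeholders, not values from the study.

```python
def oniom2_energy(e_high_model, e_low_real, e_low_model):
    """Subtractive two-layer ONIOM extrapolation:
    E(ONIOM) = E_high(model) + E_low(real) - E_low(model)."""
    return e_high_model + e_low_real - e_low_model

# Hypothetical total energies (hartree) for the complex, the free enzyme
# and the free ligand; the binding energy is the usual supermolecular
# difference between ONIOM-combined energies.
e_complex = oniom2_energy(-350.20, -1200.55, -349.95)
e_enzyme = oniom2_energy(-115.10, -1050.40, -114.98)
e_ligand = oniom2_energy(-235.05, -150.10, -234.92)
binding_energy = e_complex - (e_enzyme + e_ligand)  # negative = favourable
```

In a QM/MM-style setup such as the one in the abstract, `e_high_model` would be the B3LYP energy of the inhibitor plus catalytic residues, and the low level would be the AMBER force field.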

  13. Intermolecular interactions in the condensed phase: Evaluation of semi-empirical quantum mechanical methods.

    Science.gov (United States)

    Christensen, Anders S; Kromann, Jimmy C; Jensen, Jan H; Cui, Qiang

    2017-10-28

    To facilitate further development of approximate quantum mechanical methods for condensed phase applications, we present a new benchmark dataset of intermolecular interaction energies in the solution phase for a set of 15 dimers, each containing one charged monomer. The reference interaction energy in solution is computed via a thermodynamic cycle that integrates dimer binding energy in the gas phase at the coupled cluster level and solute-solvent interaction with density functional theory; the estimated uncertainty of such calculated interaction energy is ±1.5 kcal/mol. The dataset is used to benchmark the performance of a set of semi-empirical quantum mechanical (SQM) methods that include DFTB3-D3, DFTB3/CPE-D3, OM2-D3, PM6-D3, PM6-D3H+, and PM7 as well as the HF-3c method. We find that while all tested SQM methods tend to underestimate binding energies in the gas phase with a root-mean-squared error (RMSE) of 2-5 kcal/mol, they overestimate binding energies in the solution phase with an RMSE of 3-4 kcal/mol, with the exception of DFTB3/CPE-D3 and OM2-D3, for which the systematic deviation is less pronounced. In addition, we find that HF-3c systematically overestimates binding energies in both gas and solution phases. As most approximate QM methods are parametrized and evaluated using data measured or calculated in the gas phase, the dataset represents an important first step toward calibrating QM based methods for application in the condensed phase where polarization and exchange repulsion need to be treated in a balanced fashion.

  15. Applicability of special quasi-random structure models in thermodynamic calculations using semi-empirical Debye–Grüneisen theory

    International Nuclear Information System (INIS)

    Kim, Jiwoong

    2015-01-01

    In theoretical calculations, expressing the random distribution of atoms in a certain crystal structure is still challenging. The special quasi-random structure (SQS) model is effective for depicting such random distributions. The SQS model has not previously been applied to semi-empirical thermodynamic calculations; here, Debye–Grüneisen theory (DGT), a semi-empirical method, was used for that purpose. Model reliability was assessed by comparing supercell models of various sizes. The results for chemical bonds, pair correlations, and elastic properties demonstrated the reliability of the SQS models. Thermodynamic calculations using density functional perturbation theory (DFPT) and DGT assessed the applicability of the SQS models. DGT and DFPT led to similar variations of the mixing and formation energies. This study provides guidelines for theoretical assessments that obtain reliable SQS models and calculate the thermodynamic properties of numerous materials with a random atomic distribution. - Highlights: • Various material properties are used to examine the reliability of special quasi-random structures. • SQS models are applied to thermodynamic calculations by semi-empirical methods. • Basic calculation guidelines for materials with random atomic distribution are given.

  16. A comparative study of semi-empirical interionic potentials for alkali halides - II

    International Nuclear Information System (INIS)

    Khwaja, F.A.; Naqvi, S.H.

    1985-08-01

    A comprehensive study of some semi-empirical interionic potentials is carried out through the calculation of the cohesive energy, relative stability and pressure induced solid-solid phase transformations in alkali halides. The theoretical values of these properties of the alkali halides are obtained using a new set of van der Waals coefficients and zero-point energy in the expression for interionic potential. From the comparison of the present calculations with some previous sophisticated ab-initio quantum-mechanical calculations and other semi-empirical approaches, it is concluded that the present calculations in the simplest central pairwise interaction description with the new values of the van der Waals coefficients and zero-point energy are in better agreement with the experimental data than the previous calculations. It is also concluded that in some cases the better choice of the interionic potential alone in the simplest semi-empirical picture of interaction gives an agreement of the theoretical predictions with the experimental data much superior to the ab-initio quantum mechanical approaches. (author)
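The central pairwise picture discussed above (Madelung attraction, short-range repulsion, van der Waals terms) can be sketched numerically. The parameters below are purely illustrative, not the paper's fitted values; the functional form is the common Born-Mayer type with C/r⁶ and D/r⁸ dispersion terms, and the zero-point energy term is omitted for brevity.

```python
import math

M_ALPHA = 1.7476  # Madelung constant, rock-salt structure
E2 = 14.3996      # e^2 / (4 pi eps0) in eV * angstrom

def lattice_energy(r, b=3500.0, rho=0.33, c6=100.0, d8=150.0):
    """Energy per ion pair (eV) at nearest-neighbour separation r (angstrom):
    Madelung Coulomb term + Born-Mayer repulsion + van der Waals terms.
    All parameters are illustrative placeholders."""
    coulomb = -M_ALPHA * E2 / r
    repulsion = b * math.exp(-r / rho)
    vdw = -c6 / r**6 - d8 / r**8
    return coulomb + repulsion + vdw

# Equilibrium separation: scan 2.0-4.0 angstrom for the energy minimum.
rs = [2.0 + 0.001 * i for i in range(2001)]
r0 = min(rs, key=lattice_energy)
```

The cohesive energy at the minimum and the equilibrium spacing are the quantities such semi-empirical potentials are fitted to reproduce.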

  17. Optical absorption spectra and g factor of MgO:Mn2+ explored by ab initio and semi empirical methods

    Science.gov (United States)

    Andreici Eftimie, E.-L.; Avram, C. N.; Brik, M. G.; Avram, N. M.

    2018-02-01

    In this paper we present a methodology for calculating the optical absorption spectra, ligand field parameters and g factor of Mn2+ (3d5) ions doped in an MgO host crystal. The proposed technique combines two methods: ab initio multireference (MR) calculations and the semi-empirical ligand field (LF) approach in the framework of the exchange charge model (ECM). Both methods are applied to the [MnO6]10- cluster embedded in an extended point-charge field of the host-matrix ligands, constructed using the Gellé-Lepetit procedure. The first step of the investigation was the full optimization of the cubic structure of the perfect MgO crystal, followed by structural optimization of the doped MgO:Mn2+ system, using periodic density functional theory (DFT). Ab initio MR wave-function approaches, namely complete active space self-consistent field (CASSCF), N-electron valence second-order perturbation theory (NEVPT2) and spectroscopy-oriented configuration interaction (SORCI), are used for the calculations. Scalar relativistic effects have also been taken into account through the second-order Douglas-Kroll-Hess (DKH2) procedure. Ab initio ligand field theory (AILFT) allows all LF parameters and the spin-orbit coupling constant to be extracted from such calculations. In addition, the ECM of ligand field theory (LFT) has been used for modelling the optical absorption spectra. Perturbation theory (PT) was employed for the g factor calculation in the semi-empirical LFT. The results of each of the aforementioned calculations are discussed, and comparisons with experimental results show reasonable agreement, which justifies this new methodology based on the simultaneous use of both methods. This study establishes fundamental principles for the further modelling of larger embedded-cluster models of doped metal oxides.

  18. Experimental and semi-empirical and DFT calculational studies on (e)-2-(1-(2-(4-methylphenylsulfonamido) ethyliminio) ethyl) phenolate

    International Nuclear Information System (INIS)

    Alpaslan, G.; Agar, E.; Ersahin, F.; Isik, S.; Erdoenmez, A.

    2010-01-01

    The molecular and crystal structure of the title compound, C17H20N2O3S, has been determined by the X-ray single-crystal diffraction technique. The compound crystallizes in the monoclinic space group P21/n with unit cell dimensions a=11.4472(6), b=11.1176(4), c=13.4873(7) Å, Mr=332.41, V=1639.36(13) Å³, Z=4, R1=0.034 and wR2=0.097. The molecule adopts a zwitterionic form, stabilized by an intramolecular N⁺-H...O⁻ type ionic weak hydrogen bond. The molecules pack via intermolecular N-H...O hydrogen bonds which, together with the intramolecular N⁺-H...O⁻ bond, form an S(6)R₂⁴(4)S(6) motif. Calculational studies were performed using the AM1 and PM3 semi-empirical methods and the DFT method. Geometry optimizations of the compound were carried out with the semi-empirical and DFT methods, and the bond lengths, bond angles and torsion angles of the title compound were determined. Atomic charge distributions were obtained from AM1, PM3 and DFT. To assess the conformational flexibility of the molecule, the molecular energy profile of the title compound was obtained with respect to the selected torsion angle T(N1-C9-C10-N2), varied from -180° to +180° in steps of 10° via the PM3 semi-empirical method.

  19. Theoretical Semi-Empirical AM1 studies of Schiff Bases

    International Nuclear Information System (INIS)

    Arora, K.; Burman, K.

    2005-01-01

    The present communication reports theoretical semi-empirical studies of Schiff bases of 2-aminopyridine, along with a comparison with their parent compounds. The theoretical studies reveal that it is the azomethine group in the Schiff bases under study that acts as the site for coordination to metals, as reported by many coordination chemists. (author)

  20. Semi-empirical calculations on the structure of the uronium ion

    NARCIS (Netherlands)

    Harkema, Sybolt

    1972-01-01

    Semi-empirical calculations (CNDO/2) on the structure of the uronium ion are presented. Assuming a planar ion with fixed bond lengths, the bond angles involving the heavy atoms can be calculated with fair accuracy. Changes in bond length and angles, which occur upon protonation of the urea molecule,

  1. Semi-empirical quantum evaluation of peptide - MHC class II binding

    Science.gov (United States)

    González, Ronald; Suárez, Carlos F.; Bohórquez, Hugo J.; Patarroyo, Manuel A.; Patarroyo, Manuel E.

    2017-01-01

    Peptide presentation by the major histocompatibility complex (MHC) is a key process for triggering a specific immune response. Studying peptide-MHC (pMHC) binding from a structural-based approach has potential for reducing the costs of investigation into vaccine development. This study involved using two semi-empirical quantum chemistry methods (PM7 and FMO-DFTB) for computing the binding energies of peptides bonded to HLA-DR1 and HLA-DR2. We found that key stabilising water molecules involved in the peptide binding mechanism were required for finding high correlation with IC50 experimental values. Our proposal is computationally non-intensive, and is a reliable alternative for studying pMHC binding interactions.

  2. An Improved Semi-Empirical Model for Radar Backscattering from Rough Sea Surfaces at X-Band

    Directory of Open Access Journals (Sweden)

    Taekyeong Jin

    2018-04-01

    We propose an improved semi-empirical scattering model for X-band radar backscattering from rough sea surfaces. The new model has a wider validity range of wind speeds than the existing semi-empirical sea spectrum (SESS) model. First, we retrieved the small-roughness parameters from sea surfaces that were numerically generated using the Pierson-Moskowitz spectrum and measurement datasets for various wind speeds. Then, we computed the backscattering coefficients of the small-roughness surfaces for various wind speeds using the integral equation method model. Finally, the large-roughness characteristics were taken into account by integrating the small-roughness backscattering coefficients, weighted by the surface-slope probability density function, over all possible surface slopes. The new model covers wind speeds below 3.46 m/s, which were not covered by the existing SESS model. The accuracy of the new model was verified with two measurement datasets for wind speeds from 0.5 m/s to 14 m/s.
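The final compositing step (small-roughness backscatter averaged over a distribution of large-scale surface slopes) can be sketched as a one-dimensional quadrature. The small-roughness function below is a toy placeholder, not the integral-equation-method computation used in the paper, and the Gaussian slope statistics are an assumption.

```python
import math

def sigma0_small(theta_rad):
    """Toy small-roughness backscatter (linear units) vs local incidence.
    Placeholder for a physical model such as the IEM."""
    return 0.5 * math.cos(theta_rad) ** 4

def slope_pdf(s, sigma_s):
    """Gaussian probability density of the large-scale surface slope s."""
    return math.exp(-s**2 / (2 * sigma_s**2)) / (sigma_s * math.sqrt(2 * math.pi))

def sigma0_composite(theta_deg, sigma_s=0.1, n=2001):
    """Average the small-roughness backscatter over all surface slopes,
    weighting each tilted facet by the slope PDF (trapezoid-free Riemann sum)."""
    theta = math.radians(theta_deg)
    smax = 5 * sigma_s
    ds = 2 * smax / (n - 1)
    total = 0.0
    for i in range(n):
        s = -smax + i * ds
        local = abs(theta - math.atan(s))  # local incidence on a tilted facet
        total += sigma0_small(local) * slope_pdf(s, sigma_s) * ds
    return total
```

A larger slope variance `sigma_s` (rougher large-scale sea) smears the backscatter over incidence angles, which is the qualitative effect the compositing step captures.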

  3. A semi-empirical approach to calculate gamma activities in environmental samples

    International Nuclear Information System (INIS)

    Palacios, D.; Barros, H.; Alfonso, J.; Perez, K.; Trujillo, M.; Losada, M.

    2006-01-01

    We propose a semi-empirical method to calculate radionuclide concentrations in environmental samples without the use of reference materials, avoiding the typical complexity of Monte Carlo codes. Total efficiencies were calculated from a relative efficiency curve (obtained from the gamma-spectrum data) and from the geometric (simulated by Monte Carlo), absorption, sample and intrinsic efficiencies at energies between 130 and 3000 keV. The absorption and sample efficiencies were determined from mass absorption coefficients obtained with the web program XCOM. Deviations between computed results and measured efficiencies for the RGTh-1 reference material are mostly within 10%. Radionuclide activities in marine sediment samples calculated by the proposed method and by the experimental relative method were in satisfactory agreement. The developed method can be used for routine environmental monitoring when efficiency uncertainties of 10% are acceptable. (Author)
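The efficiency factorisation this kind of method relies on can be sketched as a product of terms. The numbers below are illustrative placeholders (real mass attenuation coefficients would come from tables such as XCOM), and the slab-average self-absorption expression is a common textbook approximation, not necessarily the authors' exact formula.

```python
import math

def absorber_transmission(mu_rho, areal_density):
    """Fraction of photons surviving an absorbing layer.
    mu_rho in cm^2/g, areal_density in g/cm^2."""
    return math.exp(-mu_rho * areal_density)

def self_absorption(mu_rho, rho, thickness):
    """Average self-absorption factor for a uniform slab sample:
    (1 - exp(-x)) / x with x = mu_rho * rho * thickness."""
    x = mu_rho * rho * thickness
    return (1.0 - math.exp(-x)) / x

def peak_efficiency(eff_geometric, eff_intrinsic, mu_abs, t_abs, mu_s, rho_s, t_s):
    """Full-energy-peak efficiency as the product of geometric, absorber,
    sample self-absorption and intrinsic factors (all placeholders)."""
    return (eff_geometric * absorber_transmission(mu_abs, t_abs)
            * self_absorption(mu_s, rho_s, t_s) * eff_intrinsic)
```

Each factor depends on photon energy in practice, which is why the method anchors the product to a relative efficiency curve extracted from the measured spectrum.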

  4. Semi-empirical formula for large pore-size estimation from o-Ps annihilation lifetime

    International Nuclear Information System (INIS)

    Nguyen Duc Thanh; Tran Quoc Dung; Luu Anh Tuyen; Khuong Thanh Tuan

    2007-01-01

    The o-Ps annihilation rate in large pores was investigated via a semi-classical approach. A semi-empirical formula that simply correlates the pore size with the o-Ps lifetime is proposed. The calculated results agree well with experiment for pore sizes ranging from a few angstroms to several tens of nanometers. (author)
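For context, the classic Tao-Eldrup relation between o-Ps lifetime and pore radius, which large-pore formulas of this kind extend, can be written down directly. This is the standard small-pore (sub-nanometer) model, not the authors' proposed formula.

```python
import math

DELTA_R = 1.66  # angstrom; empirical thickness of the electron layer

def ops_lifetime_ns(radius):
    """Standard Tao-Eldrup o-Ps lifetime (ns) in a spherical pore of the
    given radius (angstrom). The 2 ns^-1 prefactor is the spin-averaged
    annihilation rate inside the electron layer."""
    r0 = radius + DELTA_R
    rate = 2.0 * (1.0 - radius / r0
                  + math.sin(2.0 * math.pi * radius / r0) / (2.0 * math.pi))
    return 1.0 / rate
```

The lifetime grows monotonically with pore radius, which is what makes o-Ps annihilation usable as a pore-size probe; for pores beyond ~1 nm this model saturates poorly, motivating extended formulas like the one in the abstract.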

  5. A semi-empirical model for predicting crown diameter of cedrela ...

    African Journals Online (AJOL)

    A semi-empirical model relating age and breast height has been developed to predict individual tree crown diameter for Cedrela odorata (L) plantation in the moist evergreen forest zones of Ghana. The model was based on field records of 269 trees, and could determine the crown cover dynamics, forecast time of canopy ...

  6. Semi-empirical and empirical L X-ray production cross sections for elements with 50 ≤ Z ≤ 92 for protons of 0.5-3.0 MeV

    International Nuclear Information System (INIS)

    Nekab, M.; Kahoul, A.

    2006-01-01

    In this contribution we present semi-empirical production cross sections of the main L X-ray lines Lα, Lβ and Lγ for elements from Sn to U and for protons with energies varying from 0.5 to 3.0 MeV. The theoretical X-ray production cross sections are first calculated from the theoretical ionization cross sections of the Li (i = 1, 2, 3) subshells within the ECPSSR theory. The semi-empirical Lα, Lβ and Lγ cross sections are then deduced by fitting the available experimental data, normalized to their corresponding theoretical values, and give a better representation of the experimental data in some cases. On the other hand, the experimental data are directly fitted to deduce empirical L X-ray production cross sections. A comparison is made between the semi-empirical cross sections and the empirical cross sections reported in this work, the empirical ones reported by Reis and Jesus [M.A. Reis, A.P. Jesus, Atom. Data Nucl. Data Tables 63 (1996) 1] and those of Strivay and Weber [Strivay, G. Weber, Nucl. Instr. and Meth. B 190 (2002) 112]

  7. An Insight into the Environmental Effects of the Pocket of the Active Site of the Enzyme. Ab initio ONIOM-Molecular Dynamics (MD) Study on Cytosine Deaminase

    International Nuclear Information System (INIS)

    Matsubara, Toshiaki; Dupuis, Michel; Aida, Misako

    2008-01-01

    We applied the ONIOM-molecular dynamics (MD) method to cytosine deaminase to examine the environmental effects of the amino acid residues in the pocket of the active site on the substrate, taking account of their thermal motion. The ab initio ONIOM-MD simulations show that the substrate uracil is strongly perturbed by the amino acid residue Ile33, which sandwiches the uracil with His62, through steric contact due to the thermal motion. As a result, the magnitude of the thermal oscillation of the potential energy and structure of the substrate uracil significantly increases. TM and MA were partly supported by grants from the Ministry of Education, Culture, Sports, Science and Technology of Japan. MD was supported by the Division of Chemical Sciences, Office of Basic Energy Sciences, and by the Office of Biological and Environmental Research of the U.S. Department of Energy (DOE). Battelle operates Pacific Northwest National Laboratory for DOE

  8. Full energy peak efficiency of NaI(Tl) gamma detectors and its analytical and semi-empirical representations

    International Nuclear Information System (INIS)

    Sudarshan, M.; Joseph, J.; Singh, R.

    1992-01-01

    The validity of various analytical functions and semi-empirical formulae proposed for representing the full energy peak efficiency (FEPE) curves of Ge(Li) and HPGe detectors has been tested for the FEPE of 7.6 cm x 7.6 cm and 5 cm x 5 cm NaI(Tl) detectors in the gamma energy range from 59.5 to 1408.03 keV. The functions proposed by East, and McNelles and Campbell, provide by far the best representations of the present data. The semi-empirical formula of Mowatt describes the present data very well. The present investigation shows that some of the analytical functions and semi-empirical formulae which represent the FEPE of Ge(Li) and HPGe detectors very well can be quite fruitfully used for NaI(Tl) detectors. (Author)
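A typical analytical FEPE representation of the kind tested in such studies is a polynomial in ln E fitted to ln ε. A minimal sketch, with synthetic efficiency values (not the paper's measurements):

```python
import numpy as np

# Synthetic (energy, efficiency) pairs at common calibration energies --
# illustrative placeholders, not measured NaI(Tl) data.
E = np.array([59.5, 121.8, 344.3, 661.7, 1173.2, 1408.0])  # keV
eff = np.array([1.2e-2, 1.0e-2, 5.1e-3, 3.0e-3, 1.9e-3, 1.6e-3])

# Analytical representation: ln(eff) = sum_i a_i * (ln E)^i, here quadratic.
coeffs = np.polyfit(np.log(E), np.log(eff), 2)

def fepe(energy_kev):
    """Full-energy-peak efficiency interpolated from the fitted curve."""
    return np.exp(np.polyval(coeffs, np.log(energy_kev)))
```

Such log-log polynomial forms are smooth and monotone over the fitted range, which is why the same functional families transfer between Ge(Li), HPGe and NaI(Tl) detectors.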

  9. Semi-empirical atom-atom interaction models and X-ray crystallography

    International Nuclear Information System (INIS)

    Braam, A.W.M.

    1981-01-01

    Several aspects of semi-empirical energy calculations in crystallography are considered. Solid modifications of ethane have been studied using energy calculations and a fast summation technique has been evaluated. The structure of tetramethylpyrazine has been determined at room temperature and at 100K and accurate structure factors have been derived from measured Bragg intensities. Finally electrostatic properties have been deduced from X-ray structure factors. (C.F.)

  10. Pharmacological Classification and Activity Evaluation of Furan and Thiophene Amide Derivatives Applying Semi-Empirical ab initio Molecular Modeling Methods

    Directory of Open Access Journals (Sweden)

    Leszek Bober

    2012-05-01

    Pharmacological and physicochemical classification of furan and thiophene amide derivatives by multiple regression analysis and partial least squares (PLS), based on semi-empirical ab initio molecular modeling studies and high-performance liquid chromatography (HPLC) retention data, is proposed. Structural parameters obtained from the PCM (Polarizable Continuum Model) method and literature values of biological activity (antiproliferative against A431 cells, expressed as LD50) of the examined furan and thiophene derivatives were used to search for relationships. It was tested how varying molecular modeling conditions, considered together with or without HPLC retention data, allow evaluation of the structural recognition of furan and thiophene derivatives with respect to their pharmacological properties.

  11. ONIOM DFT/PM3 calculations on the interaction between dapivirine and HIV-1 reverse transcriptase, a theoretical study.

    Science.gov (United States)

    Liang, Y H; Chen, F E

    2007-08-01

    Theoretical investigations of the interaction between dapivirine and the HIV-1 RT binding site have been performed by the ONIOM2 (B3LYP/6-31G (d,p): PM3) and B3LYP/6-31G (d,p) methods. The results derived from this study indicate that this inhibitor dapivirine forms two hydrogen bonds with Lys101 and exhibits strong π-π stacking or H…π interaction with Tyr181 and Tyr188. These interactions play a vital role in stabilizing the NNIBP/dapivirine complex. Additionally, the predicted binding energy of the BBF optimized structure for this complex system is -18.20 kcal/mol.

  12. Theoretical investigation on the bond dissociation enthalpies of phenolic compounds extracted from Artocarpus altilis using ONIOM(ROB3LYP/6-311++G(2df,2p):PM6) method

    Science.gov (United States)

    Thong, Nguyen Minh; Duong, Tran; Pham, Linh Thuy; Nam, Pham Cam

    2014-10-01

    Theoretical calculations have been performed to predict the antioxidant properties of phenolic compounds extracted from Artocarpus altilis. The O-H bond dissociation enthalpy (BDE), ionization energy (IE), and proton dissociation enthalpy (PDE) of the phenolic compounds have been computed. The ONIOM(ROB3LYP/6-311++G(2df,2p):PM6) method is able to provide a reliable evaluation of the BDE(O-H) in phenolic compounds. An important property of antioxidants is determined via the BDE(O-H) of the compounds extracted from A. altilis. Based on the BDE(O-H), compound 12 is considered a potential antioxidant, with an estimated BDE of 77.3 kcal/mol in the gas phase.
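The BDE bookkeeping behind such numbers is plain enthalpy arithmetic for the homolytic O-H cleavage ArO-H → ArO• + H•. A sketch with hypothetical enthalpies, chosen only to land near the ~77 kcal/mol scale quoted above:

```python
# Hydrogen-atom-transfer thermochemistry: BDE = H(ArO*) + H(H*) - H(ArOH).
# All enthalpies below are hypothetical placeholders in hartree, not
# values computed in the study.
HARTREE_TO_KCAL = 627.5095

H_PARENT = -745.8320   # ArO-H, the intact phenolic compound
H_RADICAL = -745.2085  # ArO* phenoxyl radical
H_H_ATOM = -0.5004     # H atom at the same level of theory

bde_kcal = (H_RADICAL + H_H_ATOM - H_PARENT) * HARTREE_TO_KCAL
```

IE and PDE are computed analogously as enthalpy differences along the single-electron-transfer pathway (cation radical formation followed by deprotonation).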

  13. Permeability-driven selection in a semi-empirical protocell model

    DEFF Research Database (Denmark)

    Piedrafita, Gabriel; Monnard, Pierre-Alain; Mavelli, Fabio

    2017-01-01

    to prebiotic systems evolution more intricate, but were surely essential for sustaining far-from-equilibrium chemical dynamics, given their functional relevance in all modern cells. Here we explore a protocellular scenario in which some of those additional constraints/mechanisms are addressed, demonstrating...... their 'system-level' implications. In particular, an experimental study on the permeability of prebiotic vesicle membranes composed of binary lipid mixtures allows us to construct a semi-empirical model where protocells are able to reproduce and undergo an evolutionary process based on their coupling...

  14. Assessment of semi-empirical potentials for the U-Si system

    Energy Technology Data Exchange (ETDEWEB)

    Baskes, Michael I. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Andersson, Anders David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-03

    Accident tolerant fuels (ATF) are being developed in response to the Fukushima Daiichi accident in Japan. One of the options being pursued is U-Si fuels, such as the U3Si2 and U3Si5 compounds, which benefit from high thermal conductivity (metallic) compared to the UO2 fuel (semi-conductor) used in current Light Water Reactors (LWRs). The U-Si fuels also have higher fissile density. In order to perform meaningful engineering scale nuclear fuel performance simulations, the material properties of the fuel, including the response to irradiation environments, must be known. Unfortunately, the data available for U-Si fuels are rather limited, in particular for the temperature range where LWRs would operate. The ATF HIP is using multi-scale modeling and simulations to address this knowledge gap. Even though Density Functional Theory (DFT) calculations can provide useful answers to a subset of problems, they are computationally too costly for many others, including properties governing microstructure evolution and irradiation effects. For the latter, semi-empirical potentials are typically used. Unfortunately, there is currently no potential for the U-Si system. In this brief report we present initial results from the development of a U-Si semi-empirical potential based on the Modified Embedded Atom Method (MEAM). The potential should reproduce relevant parts of the U-Si phase diagram as well as defect properties important in irradiation environments. This work also serves as an assessment of the general challenges associated with the U-Si system, which will be valuable for the efforts to develop a U-Si Tersoff potential undertaken by Idaho National Laboratory (also part of the ATF HIP). Going forward the main potential development activity will reside at INL and the work presented here is meant to provide input data and guidelines for that activity. The main focus of our work is on the U3Si2 and U3Si5

  15. Benchmarking DFT and semi-empirical methods for a reliable and cost-efficient computational screening of benzofulvene derivatives as donor materials for small-molecule organic solar cells.

    Science.gov (United States)

    Tortorella, Sara; Talamo, Maurizio Mastropasqua; Cardone, Antonio; Pastore, Mariachiara; De Angelis, Filippo

    2016-02-24

    A systematic computational investigation on the optical properties of a group of novel benzofulvene derivatives (Martinelli 2014 Org. Lett. 16 3424-7), proposed as possible donor materials in small molecule organic photovoltaic (smOPV) devices, is presented. A benchmark evaluation against experimental results on the accuracy of different exchange and correlation functionals and semi-empirical methods in predicting both reliable ground state equilibrium geometries and electronic absorption spectra is carried out. The benchmark of the geometry optimization level indicated that the best agreement with x-ray data is achieved by using the B3LYP functional. Concerning the optical gap prediction, we found that, among the employed functionals, MPW1K provides the most accurate excitation energies over the entire set of benzofulvenes. Similarly reliable results were also obtained for range-separated hybrid functionals (CAM-B3LYP and wB97XD) and for global hybrid methods incorporating a large amount of non-local exchange (M06-2X and M06-HF). Density functional theory (DFT) hybrids with a moderate (about 20-30%) extent of Hartree-Fock exchange (HFexc) (PBE0, B3LYP and M06) were also found to deliver HOMO-LUMO energy gaps which compare well with the experimental absorption maxima, thus representing a valuable alternative for a prompt and predictive estimation of the optical gap. The possibility of using completely semi-empirical approaches (AM1/ZINDO) is also discussed.

  16. Benchmarking DFT and semi-empirical methods for a reliable and cost-efficient computational screening of benzofulvene derivatives as donor materials for small-molecule organic solar cells

    International Nuclear Information System (INIS)

    Tortorella, Sara; Talamo, Maurizio Mastropasqua; Cardone, Antonio; Pastore, Mariachiara; De Angelis, Filippo

    2016-01-01

    A systematic computational investigation on the optical properties of a group of novel benzofulvene derivatives (Martinelli 2014 Org. Lett. 16 3424–7), proposed as possible donor materials in small molecule organic photovoltaic (smOPV) devices, is presented. A benchmark evaluation against experimental results on the accuracy of different exchange and correlation functionals and semi-empirical methods in predicting both reliable ground state equilibrium geometries and electronic absorption spectra is carried out. The benchmark of the geometry optimization level indicated that the best agreement with x-ray data is achieved by using the B3LYP functional. Concerning the optical gap prediction, we found that, among the employed functionals, MPW1K provides the most accurate excitation energies over the entire set of benzofulvenes. Similarly reliable results were also obtained for range-separated hybrid functionals (CAM-B3LYP and wB97XD) and for global hybrid methods incorporating a large amount of non-local exchange (M06-2X and M06-HF). Density functional theory (DFT) hybrids with a moderate (about 20–30%) extent of Hartree–Fock exchange (HFexc) (PBE0, B3LYP and M06) were also found to deliver HOMO–LUMO energy gaps which compare well with the experimental absorption maxima, thus representing a valuable alternative for a prompt and predictive estimation of the optical gap. The possibility of using completely semi-empirical approaches (AM1/ZINDO) is also discussed. (paper)

  17. Data mining of Ti-Al semi-empirical parameters for developing reduced order models

    Energy Technology Data Exchange (ETDEWEB)

    Broderick, Scott R [Department of Materials Science and Engineering and Institute for Combinatorial Discovery, Iowa State University, Ames, IA 50011 (United States); Aourag, Hafid [Department of Physics, University Abou Bakr Belkaid, Tlemcen 13000 (Algeria); Rajan, Krishna [Department of Materials Science and Engineering and Institute for Combinatorial Discovery, Iowa State University, Ames, IA 50011 (United States)

    2011-05-15

    A focus of materials design is determining the minimum amount of information necessary to fully describe a system, thus reducing the number of empirical results required and simplifying the data analysis. Screening descriptors calculated through a semi-empirical model, we demonstrate how an informatics-based analysis can be used to address this issue with no prior assumptions. We have developed a unique approach for identifying the minimum number of descriptors necessary to capture all the information of a system. Using Ti-Al alloys of varying compositions and crystal chemistries as the test bed, 5 of the 21 original descriptors from electronic structure calculations are found to capture all the information from the calculation, thereby reducing the structure-chemistry-property search space. Additionally, by combining electronic structure calculations with data mining, we classify the systems by chemistries and structures, based on the electronic structure inputs, and thereby rank the impact of change in chemistry and crystal structure on the electronic structure. -- Research Highlights: → We developed an informatics-based methodology to minimize the necessary information. → We applied this methodology to descriptors from semi-empirical calculations. → We developed a validation approach for maintaining information from screening. → We classified intermetallics and identified patterns of composition and structure.
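
    The descriptor-reduction idea above — finding how few of the 21 semi-empirical descriptors carry essentially all the variance — can be sketched with a plain principal-component analysis. This is a generic illustration, not the authors' actual informatics pipeline; the synthetic data and the 0.99 variance threshold are assumptions:

```python
import numpy as np

def min_descriptors(X, var_threshold=0.99):
    """Number of principal components needed to retain var_threshold
    of the total variance of descriptor matrix X
    (rows = systems, columns = descriptors)."""
    Xc = X - X.mean(axis=0)                 # center each descriptor
    s = np.linalg.svd(Xc, compute_uv=False)
    frac = np.cumsum(s**2) / np.sum(s**2)   # cumulative variance fraction
    return int(np.searchsorted(frac, var_threshold) + 1)

# toy example: 21 descriptors built from only 5 independent factors
rng = np.random.default_rng(0)
latent = rng.normal(size=(40, 5))
mix = np.tile(np.eye(5), (1, 5))[:, :21]    # each factor feeds ~4 descriptors
X = latent @ mix + 1e-8 * rng.normal(size=(40, 21))
n_min = min_descriptors(X)
```

    On data of true rank 5 with negligible noise, `min_descriptors` recovers 5, mirroring the 5-of-21 reduction reported in the abstract.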

  18. Data mining of Ti-Al semi-empirical parameters for developing reduced order models

    International Nuclear Information System (INIS)

    Broderick, Scott R.; Aourag, Hafid; Rajan, Krishna

    2011-01-01

    A focus of materials design is determining the minimum amount of information necessary to fully describe a system, thus reducing the number of empirical results required and simplifying the data analysis. Screening descriptors calculated through a semi-empirical model, we demonstrate how an informatics-based analysis can be used to address this issue with no prior assumptions. We have developed a unique approach for identifying the minimum number of descriptors necessary to capture all the information of a system. Using Ti-Al alloys of varying compositions and crystal chemistries as the test bed, 5 of the 21 original descriptors from electronic structure calculations are found to capture all the information from the calculation, thereby reducing the structure-chemistry-property search space. Additionally, by combining electronic structure calculations with data mining, we classify the systems by chemistries and structures, based on the electronic structure inputs, and thereby rank the impact of change in chemistry and crystal structure on the electronic structure. -- Research Highlights: → We developed an informatics-based methodology to minimize the necessary information. → We applied this methodology to descriptors from semi-empirical calculations. → We developed a validation approach for maintaining information from screening. → We classified intermetallics and identified patterns of composition and structure.

  19. Semi-empirical modelization of charge funneling in a NP diode

    International Nuclear Information System (INIS)

    Musseau, O.

    1991-01-01

    Heavy ion interaction with a semiconductor generates a high density of electrons and holes pairs along the trajectory and in a space charge zone the collected charge is considerably increased. The chronology of this charge funneling is described in a semi-empirical model. From initial conditions characterizing the incident ion and the studied structure, it is possible to evaluate directly the transient current, the collected charge and the length of funneling with a good agreement. The model can be extrapolated to more complex structures

  20. Holocene sea level, a semi-empirical contemplation

    Science.gov (United States)

    Bittermann, K.; Kemp, A.; Vermeer, M.; Rahmstorf, S.

    2017-12-01

    Holocene eustatic sea level from approximately 10,000 BCE to 1800 CE was characterized by a rise of about 60 m, with the rate progressively slowing until sea level almost stabilized between 500-1800 CE. Global and northern-hemisphere temperatures rose from the last glacial termination until the `Holocene Optimum'. From there, up to the start of the recent anthropogenic rise, they declined almost steadily. How are the sea-level and temperature evolutions linked? We investigate this with semi-empirical sea-level models. We found that, due to the nature of Milankovitch forcing, northern-hemisphere temperature (we used the Greenland temperature by Vinther et al., 2009) is a better model driver than global mean temperature, because the evolving mass of northern-hemisphere land ice was the dominant cause of Holocene global sea-level trends. The adjustment timescale for this contribution is 1200 years (900-1500 years; 90% confidence interval). To fit the observed sea-level history, the model requires a small additional constant rate (Bittermann 2016). This rate turns out to be of the same order of magnitude as reconstructions of the Antarctic sea-level contribution (Briggs et al. 2014, Golledge et al. 2014). In reality this contribution is unlikely to be constant but rather has a dominant timescale that is large compared to the time considered. We thus propose that Holocene sea level can be described by a linear combination of a temperature-driven rate, which becomes negative in the late Holocene (as Northern Hemisphere ice masses are diminished), and a positive, approximately constant term (possibly from Antarctica), which starts to dominate from the middle of the Holocene until the start of industrialization. Bibliography: Bittermann, K. 2016. Semi-empirical sea-level modelling. PhD Thesis, University of Potsdam. Briggs, R.D., et al. 2014. A data-constrained large ensemble analysis of Antarctic evolution since the Eemian. Quaternary Science Reviews, 103, 91
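
    The model structure described — a sea-level rate relaxing toward a temperature-set equilibrium on a roughly 1200-year timescale, plus a small constant term — can be sketched as a one-equation forward-Euler integration. All parameter values below are illustrative placeholders, not the paper's fitted values:

```python
import numpy as np

def sea_level(T, dt, tau=1200.0, a=20.0, c=0.0003):
    """Forward-Euler integration of dS/dt = (a*T - S)/tau + c.
    T: temperature anomaly series (K); dt: time step (yr);
    a: equilibrium sensitivity (m/K); tau: adjustment timescale (yr);
    c: small constant rate (m/yr). All values are illustrative."""
    S = np.zeros(len(T))
    for i in range(1, len(T)):
        S[i] = S[i-1] + dt * ((a * T[i-1] - S[i-1]) / tau + c)
    return S

# warm optimum followed by a slow cooling ramp: the temperature-driven
# term eventually turns negative while the constant term keeps adding
t = np.arange(0.0, 8000.0, 10.0)
T = np.where(t < 2000.0, 1.0, 1.0 - (t - 2000.0) / 6000.0)
S = sea_level(T, dt=10.0)
```

    Sea level first rises toward the warm-phase equilibrium, then slowly falls as the temperature-driven term flips sign — the qualitative late-Holocene behavior the abstract describes.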

  1. Rock models at Zielona Gora, Poland applied to the semi-empirical neutron tool calibration

    International Nuclear Information System (INIS)

    Czubek, J.A.; Ossowski, A.; Zorski, T.; Massalski, T.

    1995-01-01

    The semi-empirical calibration method applied to the neutron porosity tool is presented in this paper. It was used with the 70 mm diameter ODSN-102 tool, equipped with an Am-Be neutron source, at the calibration facility of Zielona Gora, Poland, inside natural and artificial rocks: four sandstone, four limestone and one dolomite block with borehole diameters of 143 and 216 mm, and three artificial ceramic blocks with borehole diameters of 90 and 180 mm. All blocks were saturated with fresh water, and fresh water was also inside all boreholes. In five blocks mineralized water (200,000 ppm NaCl) was introduced inside the boreholes. All neutron characteristics of the calibration blocks are given in this paper. The semi-empirical method of calibration correlates the tool readings observed experimentally with the general neutron parameter (GNP). This results in a general calibration curve, where the tool readings (TR) plotted against GNP fall on a single curve irrespective of their origin, i.e. of the formation lithology, borehole diameter, tool stand-off, brine salinity, etc. The n and m power coefficients are obtained experimentally during the calibration procedure. The apparent neutron parameters are defined as those sensed by a neutron tool situated inside the borehole under real environmental conditions. When they are known, the GNP can be computed analytically for the whole range of porosity for any borehole diameter, formation lithology (including variable rock matrix absorption cross-section and density), borehole and formation salinity, tool stand-off and drilling fluid physical parameters. With this approach all porosity corrections with respect to the standard (e.g. limestone) calibration curve can be generated. (author)

  2. Semi-empirical evaluation studies on PCMI for the Fugen fuel rod

    International Nuclear Information System (INIS)

    Domoto, Kazushige; Kaneko, Mitsunobu; Takeuchi, Kiyoshi.

    1980-03-01

    Fugen, a 165 MWe prototype heavy-water-moderated, boiling-water-cooled reactor, has operated well since March 1979. In order to establish PCIOMR for Fugen fuels, a semi-empirical evaluation code to analyze PCMI during power transients of the fuel rod has been developed. In this paper the following are described: 1) the general scope of the development work, 2) a description of the modelling, and 3) some results of analysis of out-of-pile and in-pile tests. (author)

  3. A semi-empirical analysis of strong-motion peaks in terms of seismic source, propagation path, and local site conditions

    Science.gov (United States)

    Kamiyama, M.; O'Rourke, M. J.; Flores-Berrones, R.

    1992-09-01

    A new type of semi-empirical expression for scaling strong-motion peaks in terms of seismic source, propagation path, and local site conditions is derived. Peak acceleration, peak velocity, and peak displacement are analyzed in a similar fashion because they are interrelated. However, emphasis is placed on the peak velocity which is a key ground motion parameter for lifeline earthquake engineering studies. With the help of seismic source theories, the semi-empirical model is derived using strong motions obtained in Japan. In the derivation, statistical considerations are used in the selection of the model itself and the model parameters. Earthquake magnitude M and hypocentral distance r are selected as independent variables and the dummy variables are introduced to identify the amplification factor due to individual local site conditions. The resulting semi-empirical expressions for the peak acceleration, velocity, and displacement are then compared with strong-motion data observed during three earthquakes in the U.S. and Mexico.
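
    The regression form described — peaks scaled by magnitude and hypocentral distance, with dummy variables identifying the amplification of individual sites — can be sketched as an ordinary least-squares fit. The coefficients and synthetic data below are invented for illustration, not the paper's results:

```python
import numpy as np

def fit_peak_model(M, r, site_id, y):
    """Least-squares fit of log10(peak) = a*M - b*log10(r) + c_site,
    one dummy intercept per site class. A generic illustration of the
    regression form, not the paper's actual coefficients."""
    D = np.eye(site_id.max() + 1)[site_id]          # site dummy variables
    A = np.column_stack([M, -np.log10(r), D])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef                                     # [a, b, c_0, c_1, ...]

# synthetic check: the fit recovers the coefficients used to build y
rng = np.random.default_rng(1)
M = rng.uniform(4.0, 8.0, 200)
r = rng.uniform(10.0, 200.0, 200)
site = rng.integers(0, 3, 200)
y = 0.6 * M - 1.5 * np.log10(r) + np.array([0.1, 0.3, -0.2])[site]
a, b, *c = fit_peak_model(M, r, site, y)
```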

  4. Newly developed semi-empirical formulas for (p, α) at 17.9 MeV and ...

    Indian Academy of Sciences (India)

    Pramana – Journal of Physics, Volume 74, Issue 6. Newly developed semi-empirical formulas for (p, α) at 17.9 MeV and (, ) at 22.3 MeV reaction cross-sections. Eyyup Tel; Abdullah Aydin; E Gamze Aydin; Abdullah Kaplan; Ömer Yavaş; İskender A Reyhancan. Research Articles, Volume 74, Issue 6, June ...

  5. Semi-empirical neural network models of controlled dynamical systems

    Directory of Open Access Journals (Sweden)

    Mihail V. Egorchev

    2017-12-01

    A simulation approach is discussed for maneuverable aircraft motion as a nonlinear controlled dynamical system under multiple and diverse uncertainties, including imperfect knowledge of the simulated plant and its environment exposure. The suggested approach is based on merging theoretical knowledge of the plant with training tools from the field of artificial neural networks. The efficiency of this approach is demonstrated using the example of motion modeling and identification of the aerodynamic characteristics of a maneuverable aircraft. A semi-empirical recurrent neural network based model learning algorithm is proposed for the multi-step-ahead prediction problem. This algorithm sequentially states and solves numerical optimization subproblems of increasing complexity, using each solution as the initial guess for the subsequent subproblem. We also consider a procedure for acquiring a representative training set that utilizes multisine control signals.
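
    The core training idea — a sequence of optimization subproblems of increasing prediction horizon, each warm-started from the previous solution — can be illustrated on a scalar linear toy model. Nothing below resembles the actual recurrent-network implementation; it only demonstrates the warm-started, growing-horizon scheme:

```python
import numpy as np

def kstep_loss(a, xs, k):
    """Mean squared k-step-ahead prediction error of the toy model
    x[t+1] = a * x[t]."""
    pred = xs[:-k] * a**k
    return float(np.mean((pred - xs[k:]) ** 2))

def fit_increasing_horizon(xs, horizons=(1, 2, 4, 8), lr=0.05, iters=200):
    """Solve a sequence of subproblems of growing prediction horizon,
    warm-starting each from the previous solution, using simple
    finite-difference gradient descent."""
    a = 0.5                                        # crude initial guess
    for k in horizons:
        for _ in range(iters):
            g = (kstep_loss(a + 1e-6, xs, k)
                 - kstep_loss(a - 1e-6, xs, k)) / 2e-6
            a -= lr * g
    return a

xs = 0.9 ** np.arange(30)                          # trajectory with a = 0.9
a_hat = fit_increasing_horizon(xs)
```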

  6. A one-dimensional semi-empirical model considering transition boiling effect for dispersed flow film boiling

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yu-Jou [Institute of Nuclear Engineering and Science, National Tsing Hua University, Hsinchu 30013, Taiwan, ROC (China); Pan, Chin, E-mail: cpan@ess.nthu.edu.tw [Institute of Nuclear Engineering and Science, National Tsing Hua University, Hsinchu 30013, Taiwan, ROC (China); Department of Engineering and System Science, National Tsing Hua University, Hsinchu 30013, Taiwan, ROC (China); Low Carbon Energy Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan, ROC (China)

    2017-05-15

    Highlights: • Seven heat transfer mechanisms are studied numerically by the model. • A semi-empirical method is proposed to account for the transition boiling effect. • The parametric effects on the heat transfer mechanisms are investigated. • The thermal non-equilibrium phenomenon between vapor and droplets is investigated. - Abstract: The objective of this paper is to develop a one-dimensional semi-empirical model for the dispersed flow film boiling considering transition boiling effects. The proposed model consists of conservation equations, i.e., vapor mass, vapor energy, droplet mass and droplet momentum conservation, and a set of closure relations to address the interactions among wall, vapor and droplets. The results show that the transition boiling effect is of vital importance in the dispersed flow film boiling regime, since the flowing situation in the downstream would be influenced by the conditions in the upstream. In addition, the present paper, through evaluating the vapor temperature and the amount of heat transferred to droplets, investigates the thermal non-equilibrium phenomenon under different flowing conditions. Comparison of the wall temperature predictions with the 1394 experimental data in the literature, the present model ranging from system pressure of 30–140 bar, heat flux of 204–1837 kW/m{sup 2} and mass flux of 380–5180 kg/m{sup 2} s, shows very good agreement with RMS of 8.80% and standard deviation of 8.81%. Moreover, the model well depicts the thermal non-equilibrium phenomenon for the dispersed flow film boiling.

  7. Estimating the octanol/water partition coefficient for aliphatic organic compounds using semi-empirical electrotopological index.

    Science.gov (United States)

    Souza, Erica Silva; Zaramello, Laize; Kuhnen, Carlos Alberto; Junkes, Berenice da Silva; Yunes, Rosendo Augusto; Heinzen, Vilma Edite Fonseca

    2011-01-01

    A new possibility for estimating the octanol/water partition coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (I(SET)). The predictability of four octanol/water partition coefficient (log P) calculation models was compared using a set of 131 aliphatic organic compounds from five different classes. Log P values were calculated employing atomic-contribution methods, as in the Ghose/Crippen approach and its later refinement, AlogP; using fragmental methods through the ClogP method; and employing an approach considering the whole molecule using topological indices with the MlogP method. The efficiency and applicability of the I(SET) in calculating log P were demonstrated through good statistical quality (r > 0.99; s < 0.18), high internal stability and good predictive ability for an external group of compounds, on a par with the widely used models based on the fragmental method, ClogP, and the atomic-contribution method, AlogP, which are among the most used methods of predicting log P.
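
    A single-descriptor model of this kind is just a univariate least-squares fit, with r and s as the quality metrics the abstract quotes. A minimal sketch on synthetic data (the I(SET) values and log P data here are placeholders, not the 131 compounds):

```python
import numpy as np

def fit_logp(iset, logp):
    """Fit log P = m * I_SET + b by least squares; return the slope m,
    intercept b, correlation coefficient r and standard error s."""
    m, b = np.polyfit(iset, logp, 1)
    resid = logp - (m * iset + b)
    r = np.corrcoef(iset, logp)[0, 1]
    s = np.sqrt(np.sum(resid**2) / (len(iset) - 2))
    return m, b, r, s

iset = np.linspace(1.0, 10.0, 20)
logp = 0.52 * iset - 0.8 + 0.02 * np.sin(np.arange(20))   # near-linear data
m, b, r, s = fit_logp(iset, logp)
```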

  8. Performance evaluation of 3D vision-based semi-autonomous control method for assistive robotic manipulator.

    Science.gov (United States)

    Ka, Hyun W; Chung, Cheng-Shiu; Ding, Dan; James, Khara; Cooper, Rory

    2018-02-01

    We developed a 3D vision-based semi-autonomous control interface for assistive robotic manipulators. It was implemented based on one of the most popular commercially available assistive robotic manipulators, combined with a low-cost depth-sensing camera mounted on the robot base. To perform a manipulation task with the 3D vision-based semi-autonomous control interface, a user starts operating with a manual control method available to him/her. When detecting objects within a set range, the control interface automatically stops the robot and provides the user with possible manipulation options through audible text output, based on the detected object characteristics. The system then waits until the user states a voice command. Once the user command is given, the control interface drives the robot autonomously until the given command is completed. In the empirical evaluations conducted with human subjects from two different groups, it was shown that semi-autonomous control can be used as an alternative control method enabling individuals with impaired motor control to operate the robot arms more efficiently by facilitating their fine motion control. The advantage of semi-autonomous control was not so obvious for the simple tasks, but for the relatively complex real-life tasks the 3D vision-based semi-autonomous control showed significantly faster performance. Implications for Rehabilitation: A 3D vision-based semi-autonomous control interface will improve clinical practice by providing an alternative control method that is less demanding physically as well as cognitively. A 3D vision-based semi-autonomous control provides the user with task-specific intelligent semi-autonomous manipulation assistance. A 3D vision-based semi-autonomous control gives the user the feeling that he or she is still in control at any moment. A 3D vision-based semi-autonomous control is compatible with different types of new and existing manual control methods for ARMs.

  9. Modelling of proton exchange membrane fuel cell performance based on semi-empirical equations

    Energy Technology Data Exchange (ETDEWEB)

    Al-Baghdadi, Maher A.R. Sadiq [Babylon Univ., Dept. of Mechanical Engineering, Babylon (Iraq)

    2005-08-01

    The use of semi-empirical equations for modeling a proton exchange membrane fuel cell is proposed, providing a tool for the design and analysis of total fuel cell systems. The focus of this study is to derive an empirical model including process variations to estimate the performance of a fuel cell without extensive calculations. The model takes into account not only the current density but also process variations, such as the gas pressure, temperature, humidity, and utilization, to cover operating processes, which are important factors in determining the real performance of a fuel cell. The modelling results compare well with known experimental results; the comparison shows good agreement between the modeling results and the experimental data. The model can be used to investigate the influence of process variables for design optimization of fuel cells, stacks, and complete fuel cell power systems. (Author)
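
    A common shape for such semi-empirical fuel-cell models is a polarization curve with logarithmic activation, linear ohmic and exponential concentration loss terms. The sketch below uses that generic form with illustrative parameter values — it is not the set of equations or coefficients fitted in the paper:

```python
import numpy as np

def cell_voltage(i, E0=1.1, b=0.05, R=2.4e-4, i0=1e-3, m=3e-5, n=8.0):
    """Generic semi-empirical polarization curve
        V = E0 - b*ln(i/i0) - R*i - m*exp(n*i)
    with activation, ohmic and concentration loss terms (i in A/cm^2).
    Parameter values are illustrative, not those fitted in the paper."""
    return E0 - b * np.log(i / i0) - R * i - m * np.exp(n * i)

i = np.linspace(0.05, 1.2, 50)     # current density sweep
V = cell_voltage(i)                # falling cell voltage along the sweep
```

    Process variations (pressure, temperature, humidity, utilization) would enter such a model by making E0, b, R, m and n functions of the operating conditions.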

  10. Semi-supervised clustering methods.

    Science.gov (United States)

    Bair, Eric

    2013-01-01

    Cluster analysis methods seek to partition a data set into homogeneous subgroups. It is useful in a wide variety of applications, including document processing and modern genetics. Conventional clustering methods are unsupervised, meaning that there is no outcome variable nor is anything known about the relationship between the observations in the data set. In many situations, however, information about the clusters is available in addition to the values of the features. For example, the cluster labels of some observations may be known, or certain observations may be known to belong to the same cluster. In other cases, one may wish to identify clusters that are associated with a particular outcome variable. This review describes several clustering algorithms (known as "semi-supervised clustering" methods) that can be applied in these situations. The majority of these methods are modifications of the popular k-means clustering method, and several of them will be described in detail. A brief description of some other semi-supervised clustering algorithms is also provided.
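
    One of the simplest schemes in this family, seeded k-means, modifies k-means exactly as the review describes: observations with known cluster labels fix the initial centroids and stay pinned to their clusters. A minimal sketch (the blob data are synthetic):

```python
import numpy as np

def seeded_kmeans(X, seed_idx, seed_labels, k, iters=20):
    """Seeded k-means: labeled seed points define the initial centroids
    and are pinned to their clusters; everything else follows standard
    k-means updates."""
    centers = np.array([X[seed_idx[seed_labels == j]].mean(axis=0)
                        for j in range(k)])
    for _ in range(iters):
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = dist.argmin(axis=1)
        assign[seed_idx] = seed_labels          # constraint: seeds stay put
        centers = np.array([X[assign == j].mean(axis=0) for j in range(k)])
    return assign, centers

# two well-separated blobs, one labeled seed in each
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(5.0, 0.3, (30, 2))])
assign, centers = seeded_kmeans(X, np.array([0, 30]), np.array([0, 1]), k=2)
```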

  11. A semi-empirical formula on the pre-neutron-emission fragment mass distribution in nuclear fission

    International Nuclear Information System (INIS)

    Wang Fucheng; Hu Jimin

    1988-03-01

    A five-Gaussian semi-empirical formula for the pre-neutron-emission fragment mass distribution is given. The absolute standard deviation and maximum departure between calculated values and experimental data for (n,f) and (n,n'f) fission reactions from 232Th to 245Cm are approximately 0.4% and 0.8%, respectively. The error increases if the formula is used at higher excitation energies.
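
    A five-Gaussian parameterization of a double-humped fragment mass yield can be written as one symmetric component plus two mirrored asymmetric pairs. The functional form and parameter values below are illustrative assumptions, not the fitted formula from the paper:

```python
import numpy as np

def yield_5gauss(A, A_sym, components):
    """Fragment mass yield modeled as five Gaussians: one symmetric
    component (offset D = 0) plus two asymmetric pairs mirrored about
    A_sym. Illustrative functional form and parameters only."""
    y = np.zeros_like(A, dtype=float)
    for w, D, sigma in components:              # weight, offset, width
        if D == 0:
            y += w * np.exp(-(A - A_sym)**2 / (2 * sigma**2))
        else:                                   # mirrored pair
            y += w / 2 * (np.exp(-(A - A_sym - D)**2 / (2 * sigma**2))
                          + np.exp(-(A - A_sym + D)**2 / (2 * sigma**2)))
    return y

A = np.arange(60, 180)
# symmetric component + two mirrored asymmetric pairs = 5 Gaussians total
Y = yield_5gauss(A, 118, [(1.0, 0, 8.0), (6.0, 21, 5.0), (2.0, 14, 6.0)])
```

    By construction the curve is symmetric about A_sym, with asymmetric peaks near A_sym ± 21 dominating the symmetric valley.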

  12. Semi-empirical master curve concept describing the rate capability of lithium insertion electrodes

    Science.gov (United States)

    Heubner, C.; Seeba, J.; Liebmann, T.; Nickol, A.; Börner, S.; Fritsch, M.; Nikolowski, K.; Wolter, M.; Schneider, M.; Michaelis, A.

    2018-03-01

    A simple semi-empirical master curve concept, describing the rate capability of porous insertion electrodes for lithium-ion batteries, is proposed. The model is based on the evaluation of the time constants of lithium diffusion in the liquid electrolyte and the solid active material. This theoretical approach is successfully verified by comprehensive experimental investigations of the rate capability of a large number of porous insertion electrodes with various active materials and design parameters. It turns out that the rate capability of all investigated electrodes follows a simple master curve governed by the time constant of the rate-limiting process. We demonstrate that the master curve concept can be used to determine optimum design criteria meeting specific requirements in terms of maximum gravimetric capacity for a desired rate capability. The model further reveals practical limits of the electrode design, confirming the empirically well-known and inevitable tradeoff between energy and power density.
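
    The two time constants the model compares are simple diffusion times of the form τ = L²/D; a toy version of the resulting master curve might look like the following. The real model also involves porosity, tortuosity and fitted constants, so treat this purely as a sketch:

```python
def limiting_time_constant(r_particle, D_solid, L_electrode, D_electrolyte):
    """Characteristic diffusion times tau = L**2 / D for the active
    particles and the electrolyte-filled pore network; the slower one
    governs rate capability in the master-curve picture (simplified)."""
    tau_solid = r_particle**2 / D_solid
    tau_liquid = L_electrode**2 / D_electrolyte
    return max(tau_solid, tau_liquid)

def capacity_retention(c_rate, tau):
    """Toy master curve: full capacity while the discharge time
    3600/c_rate exceeds the limiting time constant, diffusion-limited
    decay beyond that point."""
    return min(1.0, (3600.0 / c_rate) / tau)

# 5 um particles, slow solid diffusion; 70 um electrode, fast electrolyte
tau = limiting_time_constant(5e-6, 1e-14, 70e-6, 1e-10)   # seconds
```

    Here solid-state diffusion (2500 s) limits the electrode, so capacity begins to fall once the C-rate pushes the discharge time below that value.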

  13. Semi-supervised clustering methods

    Science.gov (United States)

    Bair, Eric

    2013-01-01

    Cluster analysis methods seek to partition a data set into homogeneous subgroups. It is useful in a wide variety of applications, including document processing and modern genetics. Conventional clustering methods are unsupervised, meaning that there is no outcome variable nor is anything known about the relationship between the observations in the data set. In many situations, however, information about the clusters is available in addition to the values of the features. For example, the cluster labels of some observations may be known, or certain observations may be known to belong to the same cluster. In other cases, one may wish to identify clusters that are associated with a particular outcome variable. This review describes several clustering algorithms (known as “semi-supervised clustering” methods) that can be applied in these situations. The majority of these methods are modifications of the popular k-means clustering method, and several of them will be described in detail. A brief description of some other semi-supervised clustering algorithms is also provided. PMID:24729830

  14. Semi-Lagrangian methods in air pollution models

    Directory of Open Access Journals (Sweden)

    A. B. Hansen

    2011-06-01

    Various semi-Lagrangian methods are tested with respect to advection in air pollution modeling. The aim is to find a method fulfilling as many of the desirable properties listed by Rasch and Williamson (1990) and Machenhauer et al. (2008) as possible. The focus in this study is on accuracy and local mass conservation.

    The methods tested are, first, classical semi-Lagrangian cubic interpolation, see e.g. Durran (1999); second, semi-Lagrangian cubic cascade interpolation, by Nair et al. (2002); third, semi-Lagrangian cubic interpolation with modified interpolation weights, Locally Mass Conserving Semi-Lagrangian (LMCSL), by Kaas (2008); and last, semi-Lagrangian cubic interpolation with a locally mass conserving monotonic filter by Kaas and Nielsen (2010).

    Semi-Lagrangian (SL interpolation is a classical method for atmospheric modeling, cascade interpolation is more efficient computationally, modified interpolation weights assure mass conservation and the locally mass conserving monotonic filter imposes monotonicity.

    All schemes are tested with advection alone or with advection and chemistry together, under both typical rural and urban conditions, using different temporal and spatial resolutions. The methods are compared with a current state-of-the-art scheme, Accurate Space Derivatives (ASD; see Frohn et al., 2002), presently used at the National Environmental Research Institute (NERI) in Denmark. To enable a consistent comparison only non-divergent flow configurations are tested.

    The test cases are based either on the traditional slotted cylinder or the rotating cone, where the schemes' ability to model both steep gradients and slopes is challenged.

    The tests showed that the locally mass conserving monotonic filter improved the results significantly for some of the test cases, though not for all. It was found that the semi-Lagrangian schemes, in almost every case, were not able to outperform the current ASD scheme.
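
    The classical semi-Lagrangian step that the comparison starts from — trace each grid point back along the flow and interpolate the field there with a cubic — is compact enough to sketch in 1D on a periodic grid:

```python
import numpy as np

def sl_step(c, u, dt, dx):
    """One semi-Lagrangian advection step on a periodic 1D grid:
    trace each grid point back by u*dt and evaluate the field at the
    departure point with cubic Lagrange interpolation."""
    n = len(c)
    x_dep = (np.arange(n) - u * dt / dx) % n      # departure points (grid units)
    j = np.floor(x_dep).astype(int)
    a = x_dep - j                                 # fractional offset in [0, 1)
    # cubic Lagrange weights on the stencil j-1, j, j+1, j+2
    w = [-a * (a - 1) * (a - 2) / 6,
         (a + 1) * (a - 1) * (a - 2) / 2,
         -a * (a + 1) * (a - 2) / 2,
         a * (a + 1) * (a - 1) / 6]
    out = np.zeros(n)
    for k, wk in zip((-1, 0, 1, 2), w):
        out += wk * c[(j + k) % n]
    return out

n = 64
x = np.arange(n)
c = np.sin(2 * np.pi * x / n)                     # smooth initial field
c_frac = sl_step(c, u=0.3, dt=1.0, dx=1.0)        # advect by 0.3 cells
c_shift = sl_step(c, u=1.0, dt=1.0, dx=1.0)       # exact one-cell shift
```

    This plain scheme is accurate for smooth fields but, as the abstract notes, conserves mass only approximately; the LMCSL weight modification exists precisely to repair that.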

  15. Semi-Empirical Calibration of the Integral Equation Model for Co-Polarized L-Band Backscattering

    Directory of Open Access Journals (Sweden)

    Nicolas Baghdadi

    2015-10-01

    The objective of this paper is to extend the semi-empirical calibration of the backscattering Integral Equation Model (IEM), initially proposed for Synthetic Aperture Radar (SAR) data at C- and X-bands, to SAR data at L-band. A large dataset of radar signals and in situ measurements (soil moisture and surface roughness) over bare soil surfaces was used. This dataset was collected over numerous agricultural study sites in France, Luxembourg, Belgium, Germany and Italy using various SAR sensors (AIRSAR, SIR-C, JERS-1, PALSAR-1, ESAR). Results showed slightly better simulations with the exponential autocorrelation function than with the Gaussian function, and with HH than with VV. Using the exponential autocorrelation function, the mean difference between experimental data and IEM simulations is +0.4 dB in HH and −1.2 dB in VV, with a Root Mean Square Error (RMSE) of about 3.5 dB. In order to improve the modeling results of the IEM for better use in the inversion of SAR data, a semi-empirical calibration of the IEM was performed at L-band by replacing the correlation length derived from field experiments with a fitting parameter. Better agreement was observed between the backscattering coefficient provided by the SAR and that simulated by the calibrated version of the IEM (RMSE of about 2.2 dB).
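
    The calibration strategy — replace the field-measured correlation length with whatever value minimizes the RMSE against the SAR observations — reduces to a one-parameter fit. The forward model below is a made-up smooth stand-in, not the IEM itself:

```python
import numpy as np

def calibrate_length(sigma_obs, forward, lengths):
    """Semi-empirical calibration as a one-parameter fit: choose the
    effective correlation length minimizing the RMSE between modelled
    and observed backscatter. `forward` is any model sigma0(L)."""
    rmse = [np.sqrt(np.mean((forward(L) - sigma_obs) ** 2)) for L in lengths]
    return lengths[int(np.argmin(rmse))]

forward = lambda L: -12.0 + 1.5 * np.log(L)     # stand-in forward model (dB)
sigma_obs = np.full(5, forward(6.0))            # observations consistent with L = 6
L_fit = calibrate_length(sigma_obs, forward, np.linspace(1.0, 15.0, 141))
```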

  16. Semi-empirical proton binding constants for natural organic matter

    Science.gov (United States)

    Matynia, Anthony; Lenoir, Thomas; Causse, Benjamin; Spadini, Lorenzo; Jacquet, Thierry; Manceau, Alain

    2010-03-01

    Average proton binding constants (KH,i) for structure models of humic (HA) and fulvic (FA) acids were estimated semi-empirically by breaking down the macromolecules into reactive structural units (RSUs), and calculating KH,i values of the RSUs using linear free energy relationships (LFER) of Hammett. Predicted log KH,COOH and log KH,Ph-OH are 3.73 ± 0.13 and 9.83 ± 0.23 for HA, and 3.80 ± 0.20 and 9.87 ± 0.31 for FA. The predicted constants for phenolic-type sites (Ph-OH) are generally higher than those derived from potentiometric titrations, but the difference may not be significant in view of the considerable uncertainty of the acidity constants determined from acid-base measurements at high pH. The predicted constants for carboxylic-type sites agree well with titration data analyzed with Model VI (4.10 ± 0.16 for HA, 3.20 ± 0.13 for FA; Tipping, 1998), the Impermeable Sphere model (3.50-4.50 for HA; Avena et al., 1999), and the Stockholm Humic Model (4.10 ± 0.20 for HA, 3.50 ± 0.40 for FA; Gustafsson, 2001), but differ by about one log unit from those obtained by Milne et al. (2001) with the NICA-Donnan model (3.09 ± 0.51 for HA, 2.65 ± 0.43 for FA) and used to derive recommended generic values. To clarify this ambiguity, 10 high-quality titration datasets from Milne et al. (2001) were re-analyzed with the new predicted equilibrium constants. The data are described equally well with the previous and new sets of values (R² ⩾ 0.98), not necessarily because the NICA-Donnan model is overparametrized, but because titration lacks the sensitivity needed to quantify the full binding properties of humic substances. Correlations between NICA-Donnan parameters are discussed, but general progress is impeded by the unknown number of independent parameters that can be varied during regression of a model fit to titration data. The high consistency between predicted and experimental KH,COOH values, excluding those of Milne et al. (2001), gives faith in the proposed

  17. Calculation of bulk etch rate’s semi-empirical equation for polymer track membranes in stationary and dynamic modes

    Directory of Open Access Journals (Sweden)

    A. Mashentseva

    2013-05-01

    Full Text Available One of the most pressing social problems in the area of environmental safety in Kazakhstan is providing the population of all regions of the country with quality drinking water. The development of filter elements based on nuclear track-etched membranes may be considered one of the best solutions to this problem. The values of the bulk etch rate and activation energy were calculated taking into account the effects of temperature and alkaline solution concentration, as well as the stirring effect. As a result of theoretical and experimental studies, a semi-empirical equation for the bulk etch rate, VB = 3.4·10^12·C^2.07·exp(-0.825/kT), was obtained for 12 micron PET film irradiated by 84Kr15+ ions (energy of 1.75 MeV/nucleon) at the heavy-ion accelerator DC-60 at the Astana branch of the INP NNC RK.
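The fitted expression can be evaluated directly. A minimal sketch, assuming the concentration C is in mol/L, the Boltzmann constant is taken in eV/K so that 0.825 is an activation energy in eV, and the resulting rate is in μm/h (the abstract does not state the units of the pre-factor, so these are working assumptions):

```python
import math

K_BOLTZMANN_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def bulk_etch_rate(conc, temp_k):
    """Semi-empirical bulk etch rate V_B = 3.4e12 * C^2.07 * exp(-0.825/kT)
    for 12 micron PET film (assumed units: C in mol/L, V_B in um/h)."""
    return 3.4e12 * conc**2.07 * math.exp(-0.825 / (K_BOLTZMANN_EV * temp_k))

# Example: 6 M NaOH at 60 C (333.15 K). The rate rises steeply with both
# concentration (power 2.07) and temperature (0.825 eV activation energy).
vb = bulk_etch_rate(6.0, 333.15)
```

The strong temperature sensitivity is typical of chemical etching: a 10 K increase roughly doubles the rate at this activation energy.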

  18. Comparison of physical and semi-empirical hydraulic models for flood inundation mapping

    Science.gov (United States)

    Tavakoly, A. A.; Afshari, S.; Omranian, E.; Feng, D.; Rajib, A.; Snow, A.; Cohen, S.; Merwade, V.; Fekete, B. M.; Sharif, H. O.; Beighley, E.

    2016-12-01

    Various hydraulic/GIS-based tools can be used for illustrating the spatial extent of flooding for first responders, policy makers and the general public. The objective of this study is to compare four flood inundation modeling tools: HEC-RAS-2D, Gridded Surface Subsurface Hydrologic Analysis (GSSHA), AutoRoute and Height Above the Nearest Drainage (HAND). There is a trade-off between accuracy, workability and computational demand in detailed, physics-based flood inundation models (e.g. HEC-RAS-2D and GSSHA) in contrast with semi-empirical, topography-based, computationally less expensive approaches (e.g. AutoRoute and HAND). The motivation for this study is to evaluate this trade-off and offer guidance for potential large-scale application in an operational prediction system. The models were assessed and contrasted via comparability analysis (e.g. overlapping statistics) using three case studies in the states of Alabama, Texas, and West Virginia. The sensitivity and accuracy of the physical and semi-empirical models in producing inundation extent were evaluated for the following attributes: geophysical characteristics (e.g. high topographic variability vs. flat natural terrain, urbanized vs. rural zones, effect of the surface roughness parameter value), influence of hydraulic structures such as dams and levees compared to unobstructed flow conditions, accuracy in large vs. small study domains, and effect of spatial resolution in topographic data (e.g. 10 m National Elevation Dataset vs. 0.3 m LiDAR). Preliminary results suggest that semi-empirical models tend to underestimate the inundation extent by around 40% compared to the physical models in flat, urbanized areas with controlled/managed river channels, regardless of topographic resolution. However, in places with topographic undulations, semi-empirical models attain a relatively higher level of accuracy than they do in flat non-urbanized terrain.

  19. Semi-empirical long-term cycle life model coupled with an electrolyte depletion function for large-format graphite/LiFePO4 lithium-ion batteries

    Science.gov (United States)

    Park, Joonam; Appiah, Williams Agyei; Byun, Seoungwoo; Jin, Dahee; Ryou, Myung-Hyun; Lee, Yong Min

    2017-10-01

    To overcome the limitation of simple empirical cycle life models based on only equivalent circuits, we attempt to couple a conventional empirical capacity loss model with Newman's porous composite electrode model, which contains both electrochemical reaction kinetics and material/charge balances. In addition, an electrolyte depletion function is newly introduced to simulate the sudden capacity drop at the end of cycling that is frequently observed in real lithium-ion batteries (LIBs). When simulated electrochemical properties are compared with experimental data obtained with 20 Ah-level graphite/LiFePO4 LIB cells, our semi-empirical model is sufficiently accurate to predict a voltage profile with a low standard deviation of 0.0035 V, even at 5C. Additionally, our model can provide broad cycle life color maps under different C-rate and depth-of-discharge operating conditions. Thus, this semi-empirical model with an electrolyte depletion function will be a promising platform to predict long-term cycle lives of large-format LIB cells under various operating conditions.
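As an illustration of the kind of empirical capacity-loss term that such models couple to an electrochemical model, here is a common Arrhenius-type power-law fade expression. The parameter values below are illustrative placeholders for a graphite/LiFePO4 chemistry, not the paper's fitted constants:

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def capacity_loss_pct(ah_throughput, temp_k, b=30000.0, ea=31500.0, z=0.55):
    """Arrhenius-type empirical capacity fade, Q_loss[%] = B*exp(-Ea/RT)*Ah^z.
    b (pre-factor), ea (activation energy, J/mol) and z (power-law exponent)
    are illustrative placeholder values, not fitted constants."""
    return b * math.exp(-ea / (R_GAS * temp_k)) * ah_throughput**z

# Fade grows with both Ah throughput and temperature.
loss_25c = capacity_loss_pct(1000.0, 298.15)
loss_45c = capacity_loss_pct(1000.0, 318.15)
```

In the paper's approach this kind of lumped fade law is coupled to a porous-electrode model rather than used standalone, and the depletion function handles the end-of-life capacity drop that the power law alone cannot reproduce.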

  20. Semi-empirical model for the calculation of flow friction factors in wire-wrapped rod bundles

    International Nuclear Information System (INIS)

    Carajilescov, P.; Fernandez y Fernandez, E.

    1981-08-01

    LMFBR fuel elements consist of wire-wrapped rod bundles, with triangular array, with the fluid flowing parallel to the rods. A semi-empirical model is developed in order to obtain the average bundle friction factor, as well as the friction factor for each subchannel. The model also calculates the flow distribution factors. The results are compared to experimental data for geometrical parameters in the range P/D = 1.063-1.417, H/D = 4-50, and are considered satisfactory. (Author) [pt

  1. Electron momentum density and Compton profile by a semi-empirical approach

    Science.gov (United States)

    Aguiar, Julio C.; Mitnik, Darío; Di Rocco, Héctor O.

    2015-08-01

    Here we propose a semi-empirical approach to describe with good accuracy the electron momentum densities and Compton profiles for a wide range of pure crystalline metals. In the present approach, we use an experimental Compton profile to fit an analytical expression for the momentum densities of the valence electrons. This expression is similar to a Fermi-Dirac distribution function with two parameters, one of which coincides with the ground state kinetic energy of the free-electron gas and the other resembles the electron-electron interaction energy. In the proposed scheme conduction electrons are neither completely free nor completely bound to the atomic nucleus. This procedure allows us to include correlation effects. We tested the approach for all metals with Z=3-50 and showed the results for three representative elements: Li, Be and Al from high-resolution experiments.
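A sketch of the described construction, assuming a Fermi-Dirac-like valence momentum density n(p) = 1/(exp((p²/2 − E_kin)/E_int) + 1) in atomic units and the impulse-approximation Compton profile J(q) = 2π ∫ p·n(p) dp. The function names and parameter values here are illustrative, not the authors' notation:

```python
import numpy as np

def momentum_density(p, e_kin, e_int):
    """Fermi-Dirac-like valence electron momentum density (atomic units);
    e_kin mimics the free-electron-gas ground-state kinetic energy and
    e_int the electron-electron interaction energy."""
    x = np.clip((0.5 * np.asarray(p) ** 2 - e_kin) / e_int, -50.0, 50.0)
    return 1.0 / (np.exp(x) + 1.0)

def compton_profile(q, e_kin, e_int, p_max=10.0, n=4000):
    """Impulse-approximation J(q) = 2*pi * int_{|q|}^{p_max} p n(p) dp,
    computed with the trapezoidal rule."""
    p = np.linspace(abs(q), p_max, n)
    f = p * momentum_density(p, e_kin, e_int)
    return 2.0 * np.pi * float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(p)))

# J(q) peaks at q = 0 and falls off once |q| exceeds the Fermi momentum.
j0 = compton_profile(0.0, 0.2, 0.02)
```

In the fit described above, the two parameters of the distribution would be adjusted so that J(q) reproduces a measured high-resolution Compton profile.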

  2. A semi-empirical molecular orbital model of silica, application to radiation compaction

    International Nuclear Information System (INIS)

    Tasker, P.W.

    1978-11-01

    Semi-empirical molecular-orbital theory is used to calculate the bonding in a cluster of two SiO4 tetrahedra, with the outer bonds saturated with pseudo-hydrogen atoms. The basic properties of the cluster, bond energies and band gap are calculated using a very simple parameterisation scheme. The resulting cluster is used to study the rebonding that occurs when an oxygen vacancy is created. It is suggested that a vacancy model is capable of producing the observed differences between quartz and vitreous silica, and the calculations show that the compaction effect observed in the glass is of a magnitude compatible with the relaxations around the vacancy. More detailed lattice models will be needed to examine this mechanism further. (author)

  3. Methods for Calculating Empires in Quasicrystals

    Directory of Open Access Journals (Sweden)

    Fang Fang

    2017-10-01

    Full Text Available This paper reviews the empire problem for quasiperiodic tilings and the existing methods for generating the empires of the vertex configurations in quasicrystals, while introducing a new and more efficient method based on the cut-and-project technique. Using Penrose tiling as an example, this method finds the forced tiles with the restrictions in the high dimensional lattice (the mother lattice that can be cut-and-projected into the lower dimensional quasicrystal. We compare our method to the two existing methods, namely one method that uses the algorithm of the Fibonacci chain to force the Ammann bars in order to find the forced tiles of an empire and the method that follows the work of N.G. de Bruijn on constructing a Penrose tiling as the dual to a pentagrid. This new method is not only conceptually simple and clear, but it also allows us to calculate the empires of the vertex configurations in a defected quasicrystal by reversing the configuration of the quasicrystal to its higher dimensional lattice, where we then apply the restrictions. These advantages may provide a key guiding principle for phason dynamics and an important tool for self error-correction in quasicrystal growth.

  4. Estimation of Aboveground Biomass in Alpine Forests: A Semi-Empirical Approach Considering Canopy Transparency Derived from Airborne LiDAR Data

    Directory of Open Access Journals (Sweden)

    Martin Rutzinger

    2010-12-01

    Full Text Available In this study, a semi-empirical model that was originally developed for stem volume estimation is used for aboveground biomass (AGB) estimation of a spruce dominated alpine forest. The reference AGB of the available sample plots is calculated from forest inventory data by means of biomass expansion factors. Furthermore, the semi-empirical model is extended by three different canopy transparency parameters derived from airborne LiDAR data. These parameters have not been considered for stem volume estimation until now and are introduced in order to investigate the behavior of the model concerning AGB estimation. The developed additional input parameters are based on the assumption that the transparency of vegetation can be measured by determining the penetration of the laser beams through the canopy. These parameters are calculated for every single point within the 3D point cloud in order to consider the varying properties of the vegetation in an appropriate way. Exploratory Data Analysis (EDA) is performed to evaluate the influence of the additional LiDAR-derived canopy transparency parameters on AGB estimation. The study is carried out in a 560 km2 alpine area in Austria, where reference forest inventory data and LiDAR data are available. The investigations show that the introduction of the canopy transparency parameters does not change the results significantly according to R2 (R2 = 0.70 to R2 = 0.71) in comparison to the results derived from the semi-empirical model, which was originally developed for stem volume estimation.

  5. Method and apparatus for semi-solid material processing

    Science.gov (United States)

    Han, Qingyou [Knoxville, TN; Jian, Xiaogang [Knoxville, TN; Xu, Hanbing [Knoxville, TN; Meek, Thomas T [Knoxville, TN

    2009-02-24

    A method of forming a material includes the steps of: vibrating a molten material at an ultrasonic frequency while cooling the material to a semi-solid state to form non-dendritic grains therein; forming the semi-solid material into a desired shape; and cooling the material to a solid state. The method makes semi-solid castings directly from molten materials (usually a metal), produces grain sizes usually smaller than 50 µm, and can be easily retrofitted into existing conventional forming machines.

  6. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

    This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms.

  7. Semi-empirical determination of the diffusion coefficient of the Fricke Xylenol Gel dosimeter through finite difference methods; Determinacao semi-empirica do coeficiente de difusao do dosimetro Fricke Xilenol Gel atraves do metodo de diferencas finitas

    Energy Technology Data Exchange (ETDEWEB)

    Nascimento, E.O.; Oliveira, L.N., E-mail: lucas@ifg.edu.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Goias (IFG), Goiania, GO (Brazil)

    2014-11-01

    Partial Differential Equations (PDEs) can model natural phenomena related to physics, chemistry and engineering. For these classes of equations, analytical solutions are difficult to obtain, so a computational approach is indicated. In this context, the Finite Difference Method (FDM) can provide useful tools for the field of Medical Physics. This study describes the implementation of a computational mesh to be used in determining the Diffusion Coefficient (DC) of the Fricke Xylenol Gel (FXG) dosimeter. The initial and boundary conditions, both determined by experimental factors, are modelled in the FDM, making this a semi-empirical determination of the DC. Together, the Reflection and Superposition Method (SRM) and the analysis of experimental data served as a first validation of the simulation. These methodologies produced concordant results, within an error of 3% in the concentration profiles at short times, when compared to the analytical solution. The result for the DC was 0.43 mm²/h. This value is in agreement with the parameter range found for polymer gel dosimeters: 0.3-2.0 mm²/h. Therefore, computer simulation supported by the FDM may be used to determine the diffusion coefficient of the FXG dosimeter. (author)
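The FDM machinery referred to above can be sketched as an explicit (FTCS) scheme for the 1D diffusion equation. The grid spacing, time step and boundary treatment here are illustrative assumptions; only the DC value of 0.43 mm²/h is taken from the abstract:

```python
import numpy as np

def diffuse_1d(c0, diff_coeff, dx, dt, steps):
    """Explicit FTCS finite differences for dC/dt = D d2C/dx2 with
    zero-flux (mirror) boundaries; stable for r = D*dt/dx^2 <= 0.5."""
    r = diff_coeff * dt / dx**2
    assert r <= 0.5, "FTCS stability condition violated"
    c = c0.astype(float).copy()
    for _ in range(steps):
        c[1:-1] += r * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        c[0], c[-1] = c[1], c[-2]  # mirror points enforce zero flux
    return c

# A ferric-ion concentration spike spreading with D = 0.43 mm^2/h;
# dx = 0.1 mm and dt = 0.01 h give r = 0.43 <= 0.5 (stable).
c0 = np.zeros(101)
c0[50] = 1.0
c = diffuse_1d(c0, 0.43, 0.1, 0.01, 100)
```

In a semi-empirical determination of D, one would run such a simulation for trial D values and select the one whose concentration profiles best match the measured ones.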

  8. Determination of the semi-empirical relationship among the physical density, the concentration, and the ratio between hydrogen and manganese atoms in a manganese sulfate solution; Determinacao da relacao semi-empirica entre a densidade fisica, concentracao e razao entre atomos de hidrogenio e manganes em uma solucao de sulfato de manganes

    Energy Technology Data Exchange (ETDEWEB)

    Bittencourt, Guilherme Rodrigues [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil). PIBIC; Castro, Leonardo Curvello de; Pereira, Walsan W.; Patrao, Karla C. de Souza; Fonseca, Evaldo S. da; Dantas, Maria Leticia [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. Nacional de Metrologia das Radiacoes Ionizantes (LNMRI). Lab. de Neutrons

    2009-07-01

    The manganese sulfate bath (BMS) is a system for the absolute standardization of neutron sources. This work establishes a functional relationship, based on semi-empirical methods, for the theoretical prediction of the physical density, the concentration, and the ratio between the hydrogen and manganese atoms present in the BMS solution.

  9. The semi-Lagrangian method on curvilinear grids

    Directory of Open Access Journals (Sweden)

    Hamiaz Adnane

    2016-09-01

    Full Text Available We study the semi-Lagrangian method on curvilinear grids. The classical backward semi-Lagrangian method [1] preserves constant states but is not mass conservative. A natural reconstruction of the field nevertheless yields at least first-order-in-time conservation of mass, even if the spatial error is large. Interpolation is performed with classical cubic splines and also with cubic Hermite interpolation with arbitrary reconstruction order of the derivatives. High odd-order reconstruction of the derivatives is shown to be a good substitute for cubic splines, which do not behave very well as the time step tends to zero. A conservative semi-Lagrangian scheme along the lines of [2] is then described; here conservation of mass is automatically satisfied and constant states are shown to be preserved up to first order in time.
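A minimal sketch of the classical backward semi-Lagrangian step with cubic-spline interpolation, on a uniform periodic 1D grid with constant advection velocity (the grid, velocity and test profile are illustrative, not from the paper):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def bsl_step(f, x, a, dt):
    """One backward semi-Lagrangian step for f_t + a f_x = 0 on a uniform
    periodic grid: trace the characteristic feet x - a*dt backward in time,
    then evaluate a periodic cubic spline of f at the feet."""
    dx = x[1] - x[0]
    period = x[-1] - x[0] + dx
    spline = CubicSpline(np.append(x, x[0] + period),
                         np.append(f, f[0]), bc_type='periodic')
    feet = x[0] + np.mod(x - a * dt - x[0], period)  # wrap departure points
    return spline(feet)

x = np.linspace(0.0, 1.0, 64, endpoint=False)
f = np.sin(2.0 * np.pi * x)
g = bsl_step(f, x, a=1.0, dt=0.01)  # profile advected by a*dt = 0.01
```

Because the new value is a pure interpolation of the old field, a constant state is reproduced exactly, but the scheme does not conserve the discrete integral of f, which is exactly the defect the conservative variant in the abstract addresses.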

  10. Flux form Semi-Lagrangian methods for parabolic problems

    Directory of Open Access Journals (Sweden)

    Bonaventura Luca

    2016-09-01

    Full Text Available A semi-Lagrangian method for parabolic problems is proposed, that extends previous work by the authors to achieve a fully conservative, flux-form discretization of linear and nonlinear diffusion equations. A basic consistency and stability analysis is proposed. Numerical examples validate the proposed method and display its potential for consistent semi-Lagrangian discretization of advection diffusion and nonlinear parabolic problems.

  11. Development of semi-empirical equations for In-water dose distribution using Co-60 beams

    International Nuclear Information System (INIS)

    Abdalla, Siddig Abdalla Talha

    2001-08-01

    Knowledge of the absorbed dose distribution is essential for the management of cancer using Co-60 teletherapy. Since direct measurement of dose in the patient is impossible, indirect assessments are always carried out. In this study, direct measurements in phantoms were taken for dose distribution data, concentrating mainly on central axis dose and isodose curve data, which are essential for treatment planning. We started by developing a semi-empirical method which uses a restricted number of measurements and graphical relations to derive the dose distribution. This method was based on the decrement lines method introduced by Orchard (1964) to develop isodose curves. First, the already developed percent depth dose (PDD) equation was modified and used to plot the PDD lines for randomly selected field sizes. Then the dose profiles at depths of 5, 10, 15 and 20 cm for randomly selected field sizes were plotted from the direct measurements. With the help of the PDD equation, an equation for the slope of the decrement lines was developed, and from this slope equation a relation giving the off-axis distance was found. Using these relations, the 80%, 50% and 20% isodose lines were plotted for the field sizes 6*6 cm², 10*10 cm² and 18*18 cm². Finally, these plotted lines were compared to the corresponding curves from the manufacturer and those used in the hospital (Rick). (Author)

  12. Semi-empirical model for prediction of unsteady forces on an airfoil with application to flutter

    Science.gov (United States)

    Mahajan, A. J.; Kaza, K. R. V.; Dowell, E. H.

    1993-01-01

    A semi-empirical model is described for predicting unsteady aerodynamic forces on arbitrary airfoils under mildly stalled and unstalled conditions. Aerodynamic forces are modeled using second order ordinary differential equations for lift and moment with airfoil motion as the input. This model is simultaneously integrated with structural dynamics equations to determine flutter characteristics for a two degrees-of-freedom system. Results for a number of cases are presented to demonstrate the suitability of this model to predict flutter. Comparison is made to the flutter characteristics determined by a Navier-Stokes solver and also the classical incompressible potential flow theory.

  13. Stationary semi-solid battery module and method of manufacture

    Science.gov (United States)

    Slocum, Alexander; Doherty, Tristan; Bazzarella, Ricardo; Cross, III, James C.; Limthongkul, Pimpa; Duduta, Mihai; Disko, Jeffry; Yang, Allen; Wilder, Throop; Carter, William Craig; Chiang, Yet-Ming

    2015-12-01

    A method of manufacturing an electrochemical cell includes transferring an anode semi-solid suspension to an anode compartment defined at least in part by an anode current collector and a separator spaced apart from the anode collector. The method also includes transferring a cathode semi-solid suspension to a cathode compartment defined at least in part by a cathode current collector and the separator spaced apart from the cathode collector. The transferring of the anode semi-solid suspension to the anode compartment and the cathode semi-solid suspension to the cathode compartment is such that the difference between the minimum and maximum distance between the anode current collector and the separator is maintained within a predetermined tolerance. The method includes sealing the anode compartment and the cathode compartment.

  14. Semi-coarsening multigrid methods for parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    Jones, J.E.

    1996-12-31

    Standard multigrid methods are not well suited for problems with anisotropic coefficients which can occur, for example, on grids that are stretched to resolve a boundary layer. There are several different modifications of the standard multigrid algorithm that yield efficient methods for anisotropic problems. In the paper, we investigate the parallel performance of these multigrid algorithms. Multigrid algorithms which work well for anisotropic problems are based on line relaxation and/or semi-coarsening. In semi-coarsening multigrid algorithms a grid is coarsened in only one of the coordinate directions unlike standard or full-coarsening multigrid algorithms where a grid is coarsened in each of the coordinate directions. When both semi-coarsening and line relaxation are used, the resulting multigrid algorithm is robust and automatic in that it requires no knowledge of the nature of the anisotropy. This is the basic multigrid algorithm whose parallel performance we investigate in the paper. The algorithm is currently being implemented on an IBM SP2 and its performance is being analyzed. In addition to looking at the parallel performance of the basic semi-coarsening algorithm, we present algorithmic modifications with potentially better parallel efficiency. One modification reduces the amount of computational work done in relaxation at the expense of using multiple coarse grids. This modification is also being implemented with the aim of comparing its performance to that of the basic semi-coarsening algorithm.
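The distinction between semi-coarsening and full coarsening can be illustrated by a grid-transfer operator that halves only one coordinate direction. This is a generic sketch of such a restriction, not the paper's implementation:

```python
import numpy as np

def semi_coarsen(u):
    """Full-weighting restriction in the first coordinate only: interior
    rows with even index are kept, each weighted with its two neighbours,
    while the second coordinate keeps its resolution. Full coarsening,
    by contrast, would halve both directions."""
    return 0.25 * u[:-2:2, :] + 0.5 * u[1:-1:2, :] + 0.25 * u[2::2, :]

u = np.ones((9, 5))       # fine grid; strong coupling assumed along axis 0
uc = semi_coarsen(u)      # coarse grid: axis 0 halved to 4 rows, axis 1 kept
```

Because the weights sum to one, constant grid functions are transferred exactly, a standard consistency requirement for multigrid restriction operators.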

  15. A semi empirical formula for the angular differential number albedo of low-energy photons

    Directory of Open Access Journals (Sweden)

    Marković Srpko

    2005-01-01

    Full Text Available Low-energy photon reflection from water, aluminum, and iron is simulated with the MCNP code and the results are compared with similar Monte Carlo calculations. For the energy range from 60 to 150 keV and for normal incidence of the initial photons, a universal shape of the normalized angular differential number albedo is observed and then fitted, by a curve-fitting procedure, in the form of a second-order polynomial over the polar angle. Finally, a one-parameter formula for the angular differential number albedo is developed and verified for water through comparison of the results with the semi-empirical formulae and Monte Carlo calculations of other authors.
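The fitting step described above amounts to a least-squares second-order polynomial in the polar angle. A generic sketch on a synthetic angular shape (the stand-in function and data are illustrative, not the paper's Monte Carlo results):

```python
import numpy as np

theta = np.linspace(0.0, np.pi / 2.0, 19)       # polar angle, radians
albedo = np.cos(theta) * (1.0 + 0.3 * theta)    # synthetic normalized shape
coeffs = np.polyfit(theta, albedo, 2)           # quadratic fit: [a2, a1, a0]
fitted = np.polyval(coeffs, theta)
residual = np.max(np.abs(fitted - albedo))      # quality of the quadratic fit
```

If the normalized shape is truly universal over the stated energy range, a single quadratic plus one energy-dependent normalization parameter suffices, which is what makes the one-parameter formula possible.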

  16. Normalization of time-series satellite reflectance data to a standard sun-target-sensor geometry using a semi-empirical model

    Science.gov (United States)

    Zhao, Yongguang; Li, Chuanrong; Ma, Lingling; Tang, Lingli; Wang, Ning; Zhou, Chuncheng; Qian, Yonggang

    2017-10-01

    Time series of satellite reflectance data have been widely used to characterize environmental phenomena, describe trends in vegetation dynamics and study climate change. However, several sensors with wide spatial coverage and high observation frequency are designed with a large field of view (FOV), which causes variations in the sun-target-sensor geometry in time-series reflectance data. In this study, on the basis of the semi-empirical kernel-driven BRDF model, a new semi-empirical model was proposed to normalize the sun-target-sensor geometry of remote sensing images. To evaluate the proposed model, bidirectional reflectances under different canopy growth conditions simulated by the Discrete Anisotropic Radiative Transfer (DART) model were used. The semi-empirical model was first fitted using all simulated bidirectional reflectances, and the experiments showed a good fit between the bidirectional reflectance estimated by the proposed model and the simulated values. Then, MODIS time-series reflectance data were normalized to a common sun-target-sensor geometry by the proposed model. The experimental results showed that the proposed model yielded good fits between the observed and estimated values, and that the noise-like fluctuations in the time-series reflectance data were reduced after the sun-target-sensor normalization.

  17. Semi-empirical fragmentation model of meteoroid motion and radiation during atmospheric penetration

    Science.gov (United States)

    Revelle, D. O.; Ceplecha, Z.

    2002-11-01

    A semi-empirical fragmentation model (FM) of meteoroid motion, ablation, and radiation including two types of fragmentation is outlined. The FM was applied to observational data (height as function of time and the light curve) of Lost City, Innisfree and Benešov bolides. For the Lost City bolide we were able to fit the FM to the observed height as function of time with ±13 m and to the observed light curve with ±0.17 magnitude. Corresponding numbers for Innisfree are ±25 m and ±0.14 magnitude, and for Benešov ±46 m and ±0.19 magnitude. We also define apparent and intrinsic values of σ, K, and τ. Using older results and our fit of FM to the Lost City bolide we derived corrections to intrinsic luminous efficiencies expressed as functions of velocity, mass, and normalized air density.

  18. The effect of electrodes on 11 acene molecular spin valve: Semi-empirical study

    Science.gov (United States)

    Aadhityan, A.; Preferencial Kala, C.; John Thiruvadigal, D.

    2017-10-01

    A new revolution in electronics is molecular spintronics, arising from the contemporary evolution of the two novel disciplines of spintronics and molecular electronics. The key element is the molecular spin valve, which consists of a diamagnetic molecule between two magnetic leads. In this paper, the non-equilibrium Green's function (NEGF) formalism combined with Extended Huckel Theory (EHT), a semi-empirical approach, is used to analyse the electron transport characteristics of an 11-acene molecular spin valve. We examine spin-dependent transport in the 11-acene molecular junction with various semi-infinite electrodes: iron, cobalt and nickel. To analyse the spin-dependent transport properties, the left and right electrodes are joined to the central region in parallel and anti-parallel configurations. We computed the spin-polarised device density of states, the projected device density of states of carbon and the electrode element, and the transmission of these devices. The results demonstrate that the choice of electrode modifies the spin-dependent behaviour of these systems in a controlled way. In both parallel and anti-parallel configurations, the separation of spin-up and spin-down states is larger for the iron electrode than for the nickel and cobalt electrodes, showing that iron is the best electrode for the 11-acene spin valve device. These theoretical results motivate further study of the transport properties of such molecular-sized contacts.

  19. Semi-empirical models for the estimation of clear sky solar global and direct normal irradiances in the tropics

    International Nuclear Information System (INIS)

    Janjai, S.; Sricharoen, K.; Pattarapanitchai, S.

    2011-01-01

    Highlights: → New semi-empirical models for predicting clear sky irradiance were developed. → The proposed models compare favorably with other empirical models. → Performance of the proposed models is comparable with that of widely used physical models. → The proposed models have an advantage over the physical models in terms of simplicity. -- Abstract: This paper presents semi-empirical models for estimating global and direct normal solar irradiances under clear sky conditions in the tropics. The models are based on a one-year period of clear sky global and direct normal irradiance data collected at three solar radiation monitoring stations in Thailand: Chiang Mai (18.78°N, 98.98°E) in the North of the country, Nakhon Pathom (13.82°N, 100.04°E) in the Centre and Songkhla (7.20°N, 100.60°E) in the South. The models describe global and direct normal irradiances as functions of the Angstrom turbidity coefficient, the Angstrom wavelength exponent, precipitable water and total column ozone. The Angstrom turbidity coefficient, wavelength exponent and precipitable water were obtained from AERONET sunphotometers, and column ozone was retrieved from the OMI/AURA satellite. Model validation was accomplished using data from these three stations for periods not included in the model formulation. The models were also validated against an independent data set collected at Ubon Ratchathani (15.25°N, 104.87°E) in the Northeast. The global and direct normal irradiances calculated from the models and those obtained from measurements are in good agreement, with a root mean square difference (RMSD) of 7.5% for both. The performance of the models compared favorably with that of other empirical models, and the accuracy of the irradiances predicted from the proposed models is comparable with that obtained from some

  20. Discriminative semi-supervised feature selection via manifold regularization.

    Science.gov (United States)

    Xu, Zenglin; King, Irwin; Lyu, Michael Rung-Tsong; Jin, Rong

    2010-07-01

    Feature selection has attracted a huge amount of interest in both research and application communities of data mining. We consider the problem of semi-supervised feature selection, where we are given a small amount of labeled examples and a large amount of unlabeled examples. Since a small number of labeled samples are usually insufficient for identifying the relevant features, the critical problem arising from semi-supervised feature selection is how to take advantage of the information underneath the unlabeled data. To address this problem, we propose a novel discriminative semi-supervised feature selection method based on the idea of manifold regularization. The proposed approach selects features through maximizing the classification margin between different classes and simultaneously exploiting the geometry of the probability distribution that generates both labeled and unlabeled data. In comparison with previous semi-supervised feature selection algorithms, our proposed semi-supervised feature selection method is an embedded feature selection method and is able to find more discriminative features. We formulate the proposed feature selection method into a convex-concave optimization problem, where the saddle point corresponds to the optimal solution. To find the optimal solution, the level method, a fairly recent optimization method, is employed. We also present a theoretic proof of the convergence rate for the application of the level method to our problem. Empirical evaluation on several benchmark data sets demonstrates the effectiveness of the proposed semi-supervised feature selection method.

  1. Semi-empirical model for retrieval of soil moisture using RISAT-1 C-Band SAR data over a sub-tropical semi-arid area of Rewari district, Haryana (India)

    Science.gov (United States)

    Rawat, Kishan Singh; Sehgal, Vinay Kumar; Pradhan, Sanatan; Ray, Shibendu S.

    2018-03-01

    We have estimated soil moisture (SM) using the circular horizontal polarization backscattering coefficient (σ°RH), the difference between the circular vertical and horizontal backscattering coefficients (σ°RV − σ°RH) from FRS-1 data of the Radar Imaging Satellite (RISAT-1), and surface roughness in terms of RMS height (RMSheight). We examined the performance of FRS-1 in retrieving SM under a wheat crop at the tillering stage. Results revealed that it is possible to develop a good semi-empirical model (SEM) to estimate the SM of the upper soil layer using RISAT-1 SAR data, rather than using an existing empirical model based on a single parameter, i.e., σ°. Near-surface SM measurements were related to σ°RH and σ°RV − σ°RH derived from the 5.35 GHz (C-band) image of RISAT-1, and to RMSheight. The roughness component, expressed as RMSheight, showed a good positive correlation with σ°RV − σ°RH (R2 = 0.65). Considering all the major influencing factors (σ°RH, σ°RV − σ°RH, and RMSheight), an SEM was developed in which the predicted volumetric SM depends on these three quantities. This SEM showed R2 = 0.87 (adjusted R2 = 0.85), multiple R = 0.94, and a standard error of 0.05 at the 95% confidence level. Validation of the SM derived from the SEM against observed measurements (SMObserved) gave root mean square error (RMSE) = 0.06, relative RMSE (R-RMSE) = 0.18, mean absolute error (MAE) = 0.04, normalized RMSE (NRMSE) = 0.17, Nash-Sutcliffe efficiency (NSE) = 0.91 (≈1), index of agreement (d) = 1, coefficient of determination (R2) = 0.87, mean bias error (MBE) = 0.04, standard error of estimate (SEE) = 0.10, volume error (VE) = 0.15, and variance of the distribution of differences (Sd2) = 0.004. The developed SEM performed better in estimating SM than the Topp empirical model, which is based only on σ°. Using the developed SEM, top soil SM can be estimated with low mean absolute
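The three-predictor SEM described above is, structurally, a multiple linear regression on σ°RH, σ°RV − σ°RH and RMSheight. A generic sketch with synthetic placeholder data (the values and coefficients are illustrative, not the RISAT-1 measurements or fitted model):

```python
import numpy as np

# Synthetic predictor values (dB, dB, cm) standing in for field measurements.
sigma_rh = np.array([-14.2, -12.8, -11.5, -13.1, -10.9, -12.0])
sigma_diff = np.array([3.1, 2.7, 2.2, 2.9, 2.0, 2.5])
rms_height = np.array([0.8, 1.1, 1.4, 0.9, 1.6, 1.2])

# Design matrix with intercept; "observations" are generated from known
# placeholder coefficients so the fitted values can be checked exactly.
X = np.column_stack([np.ones_like(sigma_rh), sigma_rh, sigma_diff, rms_height])
true_beta = np.array([0.05, -0.01, 0.02, 0.04])
sm_obs = X @ true_beta                              # volumetric SM, m3/m3

beta, *_ = np.linalg.lstsq(X, sm_obs, rcond=None)   # ordinary least squares
sm_pred = X @ beta
rmse = float(np.sqrt(np.mean((sm_pred - sm_obs) ** 2)))
```

With real data the residual would of course not vanish; the RMSE, NSE and related scores reported in the abstract are exactly the kind of statistics one would compute on sm_pred against held-out observations.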

  2. An empirical method for approximating stream baseflow time series using groundwater table fluctuations

    Science.gov (United States)

    Meshgi, Ali; Schmitter, Petra; Babovic, Vladan; Chui, Ting Fong May

    2014-11-01

    Developing reliable methods to estimate stream baseflow has been a subject of interest due to its importance in catchment response and sustainable watershed management. However, to date, in the absence of complex numerical models, baseflow is most commonly estimated using statistically derived empirical approaches that do not directly incorporate physically-meaningful information. On the other hand, Artificial Intelligence (AI) tools such as Genetic Programming (GP) offer unique capabilities to reduce the complexities of hydrological systems without losing relevant physical information. This study presents a simple-to-use empirical equation to estimate baseflow time series using GP so that minimal data is required and physical information is preserved. A groundwater numerical model was first adopted to simulate baseflow for a small semi-urban catchment (0.043 km2) located in Singapore. GP was then used to derive an empirical equation relating baseflow time series to time series of groundwater table fluctuations, which are relatively easily measured and are physically related to baseflow generation. The equation was then generalized for approximating baseflow in other catchments and validated for a larger vegetation-dominated basin located in the US (24 km2). Overall, this study used GP to propose a simple-to-use equation to predict baseflow time series based on only three parameters: minimum daily baseflow of the entire period, area of the catchment and groundwater table fluctuations. It serves as an alternative approach for baseflow estimation in un-gauged systems when only groundwater table and soil information is available, and is thus complementary to other methods that require discharge measurements.

  3. A Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    Science.gov (United States)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

Topographic correction of surface reflectance in rugged terrain is a prerequisite for quantitative remote sensing applications in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve slope-surface reflectance from high-quality satellite imagery such as Landsat 8 OLI. However, as image data become available from more and more sensors, the accurate sensor calibration parameters and atmospheric conditions required by physics-based topographic correction models are sometimes unobtainable. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images that does not require accurate calibration parameters. Based on this model, topographically corrected surface reflectance can be obtained directly from DN data; we tested and verified the model with imagery from the Chinese satellites HJ and GF. The results show that, for HJ, the correlation factor was reduced by almost 85 % for the near-infrared bands and the overall classification accuracy increased by 14 % after correction. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.
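For reference, a widely used member of this semi-empirical family is the C-correction, which regresses band reflectance on the cosine of the local solar incidence angle and uses the fitted intercept/slope ratio as an empirical parameter; a minimal sketch of that generic method (not necessarily the exact model proposed in the paper):

```python
import numpy as np

def c_correction(refl, cos_i, cos_sza):
    """Semi-empirical C-correction. Fit refl = a + b*cos_i over the image,
    set c = a/b, and rescale each pixel:
        corrected = refl * (cos_sza + c) / (cos_i + c)
    refl: per-pixel reflectance (or DN), cos_i: cosine of the local solar
    incidence angle, cos_sza: cosine of the solar zenith angle (scalar)."""
    b, a = np.polyfit(cos_i, refl, 1)   # slope, intercept of the empirical fit
    c = a / b
    return refl * (cos_sza + c) / (cos_i + c)
```

If reflectance were exactly linear in cos_i, the correction maps every pixel to the value a flat surface would have, removing the illumination gradient entirely.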

  4. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.

    2010-02-16

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived. It is shown that one of the proposed methods is locally efficient among a class of within-subject variance-covariance matrices. A simulation study is conducted to investigate the finite sample properties of the proposed methods and compare them with the block empirical likelihood method by You et al. (2006) and the normal approximation with a correctly estimated variance-covariance. The results suggest that the proposed methods are generally more efficient than existing methods which ignore the correlation structure, and better in coverage compared to the normal approximation with correctly specified within-subject correlation. An application illustrating our methods and supporting the simulation study results is also presented.

  5. New semi-empirical formula for α-decay half-lives of the heavy and superheavy nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Manjunatha, H.C. [Government College for Women, Department of Physics, Kolar, Karnataka (India); Sridhar, K.N. [Government First Grade College, Department of Physics, Kolar, Karnataka (India)

    2017-07-15

We have successfully formulated a semi-empirical formula for the α-decay half-lives of heavy and superheavy nuclei for isotopes across the wide atomic-number range 94 < Z < 136. We considered 2627 isotopes of heavy and superheavy nuclei in the fitting. The values produced by the present formula are compared with experiment and with eleven other models, i.e. the ImSahu, Sahu, Royer10, VS2, UNIV2, SemFIS2, WKB, Sahu16, Densov, VSS and Royer formulas. This formula is exclusively for heavy and superheavy nuclei. α-decay is one of the dominant decay modes of superheavy nuclei, and superheavy nuclei can be detected by identifying their α-decay. This formula helps in predicting the α-decay chains of superheavy nuclei. (orig.)
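The abstract does not give the new formula's coefficients, but semi-empirical formulas of this family generally follow Viola-Seaborg systematics, log₁₀ T₁/₂ = (aZ + b)/√Qα + cZ + d; a sketch using the commonly quoted Sobiczewski-Patyk-Ćwiok parameter set (an assumption for illustration, not the authors' fit):

```python
import math

def log10_half_life_vs(Z, Q_alpha,
                       a=1.66175, b=-8.5166, c=-0.20228, d=-33.9069):
    """Viola-Seaborg estimate of log10(T_1/2 / s) for even-even alpha emitters.
    Z: proton number of the parent nucleus; Q_alpha: decay energy in MeV.
    Default coefficients are the widely used Sobiczewski et al. adjustment
    (illustrative; the paper's own fitted formula differs)."""
    return (a * Z + b) / math.sqrt(Q_alpha) + c * Z + d
```

The 1/√Q dependence encodes the Geiger-Nuttall law: a modest increase in Qα shortens the predicted half-life by orders of magnitude, which is why accurate Q values dominate half-life predictions along superheavy α-decay chains.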

  6. Empirical Bayes Estimation of Semi-parametric Hierarchical Mixture Models for Unbiased Characterization of Polygenic Disease Architectures

    Directory of Open Access Journals (Sweden)

    Jo Nishino

    2018-04-01

Genome-wide association studies (GWAS) suggest that the genetic architecture of complex diseases consists of unexpectedly numerous variants with small effect sizes. However, the polygenic architectures of many diseases have not been well characterized due to the lack of simple and fast methods for unbiased estimation of the underlying proportion of disease-associated variants and their effect-size distribution. Applying empirical Bayes estimation of semi-parametric hierarchical mixture models to GWAS summary statistics, we confirmed that schizophrenia is extremely polygenic [~40% of independent genome-wide SNPs are risk variants, most with odds ratios (OR) up to 1.03], whereas rheumatoid arthritis is less polygenic (~4 to 8% risk variants, with a significant portion reaching OR = 1.05 to 1.1). For rheumatoid arthritis, stratified estimation revealed that expression quantitative trait loci in blood explained a large share of the genetic variance, and that low- and high-frequency derived alleles were prone to be risk and protective, respectively, suggesting a predominance of deleterious-risk and advantageous-protective mutations. Despite their genetic correlation, the effect-size distributions of schizophrenia and bipolar disorder differed across allele frequencies. These analyses distinguished disease polygenic architectures and provided clues to etiological differences among complex diseases.

  7. A semi-empirical formula for total cross sections of electron scattering from diatomic molecules

    International Nuclear Information System (INIS)

    Liu Yufang; Sun Jinfeng; Henan Normal Univ., Xinxiang

    1996-01-01

A fitting formula based on the Born approximation is used to fit the total cross sections for electron scattering by diatomic molecules (CO, N₂, NO, O₂ and HCl) in the intermediate- and high-energy range. By analyzing the fitted parameters and the total cross sections, we found that the internuclear distance of the constituent atoms plays an important role in the electron-diatomic-molecule collision process. A new semi-empirical formula has thus been obtained. There is no free parameter in the formula, and the dependence of the total cross sections on the internuclear distance is clearly reflected. The total cross sections for electron scattering by CO, N₂, NO, O₂ and HCl have been calculated over an incident energy range of 10-4000 eV. The results agree well with other available experimental and calculated data. (orig.)

  8. Semi-Empirical Predictions on the Structure and Properties of ent-Kaurenoic Acid and Derivatives

    Directory of Open Access Journals (Sweden)

    Jose Isagani B. Janairo

    2011-01-01

The physicochemical properties of ent-kaurenoic acid model derivatives, which may influence its therapeutic application, were calculated. The results revealed that the molecule possesses favourable attributes for consideration as a drug lead, except that its very hydrophobic nature can result in poor bioavailability, low absorption and poor systemic circulation. In silico simulations revealed that this setback can be overcome by introducing a hydroxyl group at the tertiary carbon of ent-kaurenoic acid via m-CPBA-catalyzed hydroxylation, thus unleashing its full drug potency. Moreover, molecular similarity analyses derived from semi-empirical calculations between ent-kaurenoic acid and a set of kaurane diterpenoids showed differences in hydrophobic complementarity, size and electronic properties despite nearly identical molecular frameworks, thus arriving at a generalization for their observed mechanistic differences in acting on different targets.

  9. Basis material decomposition in spectral CT using a semi-empirical, polychromatic adaption of the Beer-Lambert model

    Science.gov (United States)

    Ehn, S.; Sellerer, T.; Mechlem, K.; Fehringer, A.; Epple, M.; Herzen, J.; Pfeiffer, F.; Noël, P. B.

    2017-01-01

    Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects which can be termed as a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. Using a Monte-Carlo simulation of a typical clinical CT acquisition, the performance of the proposed estimator was evaluated. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using less than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility for fast re-calibration in the clinical routine which is considered an advantage of the proposed method over other implementations reported in the literature.
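The forward model described here takes the form N_i = Σ_E S_i(E) exp(−Σ_j a_j μ_j(E)) for each energy bin i, with the basis-material line integrals a_j recovered by maximum likelihood under Poisson counting statistics. A sketch of that estimator; the effective spectra and attenuation curves below are synthetic placeholders, not calibrated detector data:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic placeholders: two energy bins, two basis materials.
E = np.linspace(20.0, 120.0, 50)                        # keV grid
S = np.vstack([np.exp(-(E - 50.0)**2 / 300.0),          # effective spectrum, bin 1
               np.exp(-(E - 90.0)**2 / 300.0)]) * 1e5   # effective spectrum, bin 2
mu = np.vstack([3.0 * (E / 50.0)**-3,                   # photoelectric-like material
                0.02 + 0.15 * (E / 50.0)**-3])          # water-like material

def expected_counts(a):
    """Polychromatic Beer-Lambert model: N_i = sum_E S_i(E) exp(-sum_j a_j mu_j(E))."""
    return S @ np.exp(-(mu.T @ np.asarray(a, float)))

def mle_decompose(counts, a0=(0.1, 0.1)):
    """Poisson maximum-likelihood estimate of the basis-material line integrals a_j."""
    def nll(a):  # negative log-likelihood, dropping the a-independent log(N!) term
        lam = expected_counts(a)
        return np.sum(lam - counts * np.log(lam))
    return minimize(nll, a0, method="Nelder-Mead",
                    options={"xatol": 1e-10, "fatol": 1e-10, "maxiter": 5000}).x
```

On noise-free counts the likelihood is maximized exactly at the true line integrals, which is a convenient sanity check before adding Poisson noise.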

  10. A simple semi-empirical way of accounting for the contribution of pair production process to the efficiency of Ge detectors

    International Nuclear Information System (INIS)

    Sudarshan, M.; Singh, R.

    1991-01-01

By considering the data for a 38 cm³ Ge(Li) detector from Eγ = 319.80 to 2598.80 keV, and for a 68 cm³ HPGe detector from Eγ = 223.430 to 3253.610 keV, it has been demonstrated that the contribution of the pair production process to the full energy peak efficiency (FEPE) of germanium detectors can be quite adequately accounted for in a semi-empirical way. (author)

  11. Soil Moisture Estimate under Forest using a Semi-empirical Model at P-Band

    Science.gov (United States)

    Truong-Loi, M.; Saatchi, S.; Jaruwatanadilok, S.

    2013-12-01

In this paper we show the potential of a semi-empirical algorithm to retrieve soil moisture under forests using P-band polarimetric SAR data. In recent decades, several remote sensing techniques have been developed to estimate surface soil moisture. In most studies of radar sensing of soil moisture, the proposed algorithms focus on bare or sparsely vegetated surfaces where the effect of vegetation can be ignored. At long wavelengths such as L-band, empirical or physical models such as the Small Perturbation Model (SPM) provide reasonable estimates of surface soil moisture at depths of 0-5 cm. For densely vegetated surfaces such as forests, however, the problem becomes more challenging because the vegetation canopy is a complex scattering environment. For this reason, only a few studies in the literature have focused on retrieving soil moisture under a vegetation canopy. Moghaddam et al. developed an algorithm to estimate soil moisture under a boreal forest using L- and P-band SAR data. For their study area, double-bounce scattering between trunks and ground appeared to be the most important mechanism, so they implemented parametric models of the double-bounce radar backscatter using simulations of a numerical forest scattering model. Hajnsek et al. showed the potential of estimating soil moisture under agricultural vegetation using L-band polarimetric SAR data and polarimetric-decomposition techniques to remove the vegetation layer. Here we use an approach based on a physical formulation of the dominant scattering mechanisms and three parameters that integrate the vegetation and soil effects at long wavelengths. The algorithm is a simplification of a 3-D coherent model of the forest canopy based on the Distorted Born Approximation (DBA). The simplified model has three equations and three unknowns, preserving the three dominant scattering mechanisms of volume, double-bounce and surface for three polarized backscattering

  12. Semi-convergence properties of Kaczmarz’s method

    International Nuclear Information System (INIS)

    Elfving, Tommy; Hansen, Per Christian; Nikazad, Touraj

    2014-01-01

    Kaczmarz’s method—sometimes referred to as the algebraic reconstruction technique—is an iterative method that is widely used in tomographic imaging due to its favorable semi-convergence properties. Specifically, when applied to a problem with noisy data, during the early iterations it converges very quickly toward a good approximation of the exact solution, and thus produces a regularized solution. While this property is generally accepted and utilized, there is surprisingly little theoretical justification for it. The purpose of this paper is to present insight into the semi-convergence of Kaczmarz’s method as well as its projected counterpart (and their block versions). To do this we study how the data errors propagate into the iteration vectors and we derive upper bounds for this noise propagation. Our bounds are compared with numerical results obtained from tomographic imaging. (paper)
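As a concrete illustration, a minimal cyclic Kaczmarz sweep for a system Ax = b; with noisy b, tracking the error against the exact solution per sweep exhibits the semi-convergence described above (early decrease, later increase):

```python
import numpy as np

def kaczmarz(A, b, sweeps=10, x0=None):
    """Cyclic Kaczmarz (ART): successively project the iterate onto the
    hyperplane of each row equation a_i . x = b_i."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.asarray(x0, float).copy()
    row_norms = np.einsum("ij,ij->i", A, A)  # ||a_i||^2 for each row
    for _ in range(sweeps):
        for i in range(m):
            x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

For mutually orthogonal rows the method converges in a single sweep; in general, convergence is linear at a rate governed by the angles between the row hyperplanes.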

  13. SemiBoost: boosting for semi-supervised learning.

    Science.gov (United States)

    Mallapragada, Pavan Kumar; Jin, Rong; Jain, Anil K; Liu, Yi

    2009-11-01

Semi-supervised learning has attracted a significant amount of attention in pattern recognition and machine learning. Most previous studies have focused on designing special algorithms to effectively exploit unlabeled data in conjunction with labeled data. Our goal is to improve the classification accuracy of any given supervised learning algorithm by using the available unlabeled examples. We call this the semi-supervised improvement problem, to distinguish the proposed approach from existing approaches. We design a meta-semi-supervised learning algorithm that wraps around the underlying supervised algorithm and improves its performance using unlabeled data. This problem is particularly important when we need to train a supervised learning algorithm with a limited number of labeled examples and a multitude of unlabeled examples. We present a boosting framework for semi-supervised learning, termed SemiBoost. The key advantages of the proposed semi-supervised learning approach are: 1) performance improvement of any supervised learning algorithm given a multitude of unlabeled data, 2) efficient computation via the iterative boosting algorithm, and 3) exploitation of both the manifold and cluster assumptions in training classification models. An empirical study on 16 different data sets and text categorization demonstrates that the proposed framework improves the performance of several commonly used supervised learning algorithms given a large number of unlabeled examples. We also show that the performance of the proposed algorithm, SemiBoost, is comparable to state-of-the-art semi-supervised learning algorithms.
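SemiBoost itself relies on pairwise similarities and boosting-style example weighting; the wrapper idea alone, i.e. improving an arbitrary supervised learner with unlabeled data, can be sketched with a simple self-training loop (a deliberately simplified stand-in, not the SemiBoost algorithm):

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def self_training(base, X_lab, y_lab, X_unlab, rounds=5, conf=0.9):
    """Wrap a supervised learner: each round, pseudo-label the unlabeled points
    predicted with high confidence and retrain on the enlarged training set."""
    X, y = np.asarray(X_lab, float), np.asarray(y_lab)
    pool = np.asarray(X_unlab, float)
    model = clone(base).fit(X, y)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        proba = model.predict_proba(pool)
        sure = proba.max(axis=1) >= conf      # confidently predicted points
        if not sure.any():
            break
        X = np.vstack([X, pool[sure]])
        y = np.concatenate([y, model.classes_[proba[sure].argmax(axis=1)]])
        pool = pool[~sure]
        model = clone(base).fit(X, y)
    return model
```

Like SemiBoost, this treats the supervised learner as a black box; unlike SemiBoost, it ignores the similarity structure among unlabeled points, which is what gives the original algorithm its manifold/cluster guarantees.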

  14. Design of semi-rigid type of flexible pavements

    Directory of Open Access Journals (Sweden)

    Pranshoo Solanki

    2017-03-01

The primary objective of the study presented in this paper is to develop design curves for performance prediction of stabilized layers and to compare semi-rigid flexible pavement designs between the empirical AASHTO 1993 and mechanistic-empirical pavement design methodologies. Specifically, comparisons were made for a range of sections consisting of cementitious layers stabilized with different types and percentages of additives. It was found that the design thickness is influenced by the type of soil, the additive, the selection of material properties and the design method. Cost comparisons of sections stabilized with different percentages and types of additives showed that CKD stabilization provides lower-cost sections than lime- and CFA-stabilized sections. Knowledge gained from the parametric analysis of different sections using AASHTO 1993 and the MEPDG is expected to be useful to pavement designers and others in implementing the new MEPDG for future pavement design. Keywords: Semi-rigid, Mechanistic, Resilient modulus, Fatigue life, Reliability, Traffic

  15. Advanced Semi-Implicit Method (ASIM) for hyperbolic two-fluid model

    International Nuclear Information System (INIS)

    Lee, Sung Jae; Chung, Moon Sun

    2003-01-01

Introducing interfacial pressure jump terms based on surface tension into the momentum equations of the two-phase two-fluid model turns the system of governing equations mathematically into a hyperbolic system. The eigenvalues of the equation system become always real, representing the void-wave and pressure-wave propagation speeds, as shown in a previous manuscript. To solve the interfacial pressure jump terms with void fraction gradients implicitly, the conventional semi-implicit method is modified with an intermediate iteration for the void fraction at a fractional time step. This Advanced Semi-Implicit Method (ASIM) is then stable without conventional additive terms. As a consequence, with the interfacial pressure jump terms included, the advanced semi-implicit method yields numerical solutions of typical two-phase problems that are more stable and sound than those calculated by relying exclusively on other terms such as virtual mass or artificial viscosity

  16. Semi-empirical modelling of radiation exposure of humans to naturally occurring radioactive materials in a goldmine in Ghana

    International Nuclear Information System (INIS)

    Darko, E. O.; Tetteh, G.K.; Akaho, E.H.K.

    2005-01-01

    A semi-empirical analytical model has been developed and used to assess the radiation doses to workers in a gold mine in Ghana. The gamma dose rates from naturally occurring radioactive materials (uranium-thorium series, potassium-40 and radon concentrations) were related to the annual effective doses for surface and underground mining operations. The calculated effective doses were verified by comparison with field measurements and correlation ratios of 0.94 and 0.93 were obtained, respectively, between calculated and measured data of surface and underground mining. The results agreed with the approved international levels for normal radiation exposure in the mining environment. (au)

  17. Semi-automated potentiometric titration method for uranium characterization

    Energy Technology Data Exchange (ETDEWEB)

    Cristiano, B.F.G., E-mail: barbara@ird.gov.br [Comissao Nacional de Energia Nuclear (CNEN), Instituto de Radioprotecao e Dosimetria (IRD), Avenida Salvador Allende s/n Recreio dos Bandeirantes, PO Box 37750, Rio de Janeiro, 22780-160 RJ (Brazil); Delgado, J.U.; Silva, J.W.S. da; Barros, P.D. de; Araujo, R.M.S. de [Comissao Nacional de Energia Nuclear (CNEN), Instituto de Radioprotecao e Dosimetria (IRD), Avenida Salvador Allende s/n Recreio dos Bandeirantes, PO Box 37750, Rio de Janeiro, 22780-160 RJ (Brazil); Lopes, R.T. [Programa de Engenharia Nuclear (PEN/COPPE), Universidade Federal do Rio de Janeiro (UFRJ), Ilha do Fundao, PO Box 68509, Rio de Janeiro, 21945-970 RJ (Brazil)

    2012-07-15

The manual version of the potentiometric titration method has been used for certification and characterization of uranium compounds. In order to reduce the analysis time and the influence of the analyst, a semi-automatic version of the method was developed at the Brazilian Nuclear Energy Commission. The method was applied with traceability assured by using a potassium dichromate primary standard. The combined standard uncertainty in determining the total concentration of uranium was around 0.01%, which is suitable for uranium characterization. - Highlights: • We developed a semi-automatic version of the potentiometric titration method. • The method is used for certification and characterization of uranium compounds. • The traceability of the method was assured by a K₂Cr₂O₇ primary standard. • The results for the U₃O₈ reference material analyzed were consistent with the certified value. • The uncertainty obtained, near 0.01%, is useful for characterization purposes.

  18. Development of a Reparametrized Semi-Empirical Force Field to Compute the Rovibrational Structure of Large PAHs

    Science.gov (United States)

    Fortenberry, Ryan

    energy surface. QFFs can regularly predict fundamental vibrational frequencies to within 5 cm⁻¹ of experimentally measured values. This level of accuracy represents a reduction in discrepancies by an order of magnitude compared with harmonic frequencies calculated with density functional theory (DFT). The major limitation of the QFF strategy is that the level of electronic-structure theory required to develop a predictive force field is prohibitively time consuming for molecular systems larger than 5 atoms. Recent advances in QFF techniques utilizing informed DFT approaches have pushed the size of the systems studied up to 24 heavy atoms, but relevant PAHs can have up to hundreds of atoms. We have developed alternative electronic-structure methods that maintain the accuracy of coupled-cluster calculations extrapolated to the complete basis set limit with relativistic and core-correlation corrections applied: the CcCR QFF. These alternative methods are based on simplifications of Hartree-Fock theory in which the computationally intensive two-electron integrals are approximated using empirical parameters. These methods reduce the computational time to orders of magnitude less than the CcCR calculations. We have derived a set of optimized empirical parameters that minimizes the energy differences for molecular ions of astrochemical significance, and we have shown that it is possible to derive a set of empirical parameters that produces RMS energy differences of less than 2 cm⁻¹ for our test systems. We propose to adopt this reparameterization strategy and some of the lessons learned from the informed DFT studies to create a semi-empirical method whose tremendous speed will allow us to study the rovibrational structure of large PAHs with up to hundreds of carbon atoms.

  19. A control-oriented real-time semi-empirical model for the prediction of NOx emissions in diesel engines

    International Nuclear Information System (INIS)

    D’Ambrosio, Stefano; Finesso, Roberto; Fu, Lezhong; Mittica, Antonio; Spessa, Ezio

    2014-01-01

    Highlights: • New semi-empirical correlation to predict NOx emissions in diesel engines. • Based on a real-time three-zone diagnostic combustion model. • The model is of fast application, and is therefore suitable for control-oriented applications. - Abstract: The present work describes the development of a fast control-oriented semi-empirical model that is capable of predicting NOx emissions in diesel engines under steady state and transient conditions. The model takes into account the maximum in-cylinder burned gas temperature of the main injection, the ambient gas-to-fuel ratio, the mass of injected fuel, the engine speed and the injection pressure. The evaluation of the temperature of the burned gas is based on a three-zone real-time diagnostic thermodynamic model that has recently been developed by the authors. Two correlations have also been developed in the present study, in order to evaluate the maximum burned gas temperature during the main combustion phase (derived from the three-zone diagnostic model) on the basis of significant engine parameters. The model has been tuned and applied to two diesel engines that feature different injection systems of the indirect acting piezoelectric, direct acting piezoelectric and solenoid type, respectively, over a wide range of steady-state operating conditions. The model has also been validated in transient operation conditions, over the urban and extra-urban phases of an NEDC. It has been shown that the proposed approach is capable of improving the predictive capability of NOx emissions, compared to previous approaches, and is characterized by a very low computational effort, as it is based on a single-equation correlation. It is therefore suitable for real-time applications, and could also be integrated in the engine control unit for closed-loop or feed-forward control tasks

  20. A semi-empirical approach to analyze the activities of cylindrical radioactive samples using gamma energies from 185 to 1764 keV.

    Science.gov (United States)

    Huy, Ngo Quang; Binh, Do Quang

    2014-12-01

This work suggests a method for determining the activities of cylindrical radioactive samples. A self-attenuation factor was applied to provide the self-absorption correction of gamma rays in the sample material. The experimental measurement of a ²³⁸U reference sample and calculations using the MCNP5 code allow semi-empirical formulae for the detection efficiencies to be obtained for gamma energies ranging from 185 to 1764 keV. These formulae were used to determine the activities of the ²³⁸U, ²²⁶Ra, ²³²Th, ¹³⁷Cs and ⁴⁰K nuclides in the IAEA RGU-1, IAEA-434, IAEA RGTh-1, IAEA-152 and IAEA RGK-1 radioactive standards. Coincidence summing corrections for gamma rays in the ²³⁸U and ²³²Th series were applied. The activities obtained in this work were in good agreement with the reference values. Copyright © 2014 Elsevier Ltd. All rights reserved.
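Semi-empirical full-energy-peak-efficiency curves for Ge detectors are very often parameterized as a polynomial in log-log space, ln ε(E) = Σᵢ aᵢ (ln E)ⁱ, fitted to a handful of calibration points; a minimal sketch of that generic form (the paper's exact formulae are not given in the abstract):

```python
import numpy as np

def fit_fepe(energies_keV, efficiencies, degree=3):
    """Fit ln(efficiency) as a polynomial in ln(E), a common semi-empirical
    full-energy-peak-efficiency parameterization for Ge detectors.
    Returns a callable evaluating the fitted efficiency at any energy."""
    coeffs = np.polyfit(np.log(energies_keV), np.log(efficiencies), degree)
    return lambda E: np.exp(np.polyval(coeffs, np.log(E)))
```

Once fitted, the returned curve interpolates the efficiency at any peak energy within the calibrated range, e.g. for activity determination of the nuclides listed above.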

  1. A simple semi-empirical technique for apportioning the impact of roadways on air quality in an urban neighbourhood

    Science.gov (United States)

    Elangasinghe, M. A.; Dirks, K. N.; Singhal, N.; Costello, S. B.; Longley, I.; Salmond, J. A.

    2014-02-01

Air pollution from the transport sector has a marked effect on human health, so isolating the pollutant contribution from a roadway is important in understanding its impact on the local neighbourhood. This paper proposes a novel technique based on a semi-empirical air pollution model to quantify the impact from a roadway on the air quality of a local neighbourhood using ambient records of a single air pollution monitor. We demonstrate the proposed technique using a case study, in which we quantify the contribution from a major highway with respect to the local background concentration in Auckland, New Zealand. Comparing the diurnal variation of the model-separated background contribution with real measurements from a site upwind of the highway shows that the model estimates are reliable. Amongst all of the pollutants considered, the best estimates of the background were achieved for nitrogen oxides. Although the multi-pronged approach worked well for predominantly vehicle-related pollutants, it could not be used effectively to isolate emissions of PM10 due to the complex and less predictable influence of natural sources (such as marine aerosols). The proposed approach is useful in situations where ambient records from an upwind background station are not available (as required by other techniques) and is potentially transferable to situations such as intersections and arterial roads. Applying this technique to longer time series could help to understand the changes in pollutant concentrations from the road and background sources for different emission scenarios, for different years or seasons. Modelling results also show the potential of such hybrid semi-empirical models to contribute to our understanding of the physical parameters determining air quality and to validate emissions inventory data.

  2. Evaluation by fluorescence, STD-NMR, docking and semi-empirical calculations of the o-NBA photo-acid interaction with BSA

    Science.gov (United States)

    Chaves, Otávio A.; Jesus, Catarina S. H.; Cruz, Pedro F.; Sant'Anna, Carlos M. R.; Brito, Rui M. M.; Serpa, Carlos

    2016-12-01

Serum albumins exhibit reversible pH-dependent conformational transitions. A sudden laser-induced pH jump is a methodology that can provide new insights into localized protein (un)folding processes that occur on the nanosecond to microsecond time scale. To generate the fast pH jump needed to trigger a protein conformational event, a photo-triggered acid generator such as o-nitrobenzaldehyde (o-NBA) can be conveniently used. In order to detect potential specific or nonspecific interactions between o-NBA and BSA, we have performed ligand-binding studies using fluorescence spectroscopy, saturation transfer difference (STD) NMR, molecular docking and semi-empirical calculations. Fluorescence quenching indicates the formation of a non-fluorescent ground-state complex between the fluorophore and the quencher, but o-NBA does not bind very effectively to the protein (Ka = 4.34 × 10³ M⁻¹) and can thus be considered a relatively weak binder. The corresponding thermodynamic parameters ΔG°, ΔS° and ΔH° showed that the binding process is spontaneous and entropy driven. The ¹H STD-NMR results confirm that the photo-acid and BSA interact, and the relative intensities of the signals in the STD spectra show that all o-NBA protons are equally involved in the binding process, consistent with a nonspecific interaction. Molecular docking and semi-empirical calculations suggest that o-NBA binds preferentially to the Trp-212-containing site of BSA (FA7), interacting via hydrogen bonds with the Arg-217 and Tyr-149 residues.
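The "spontaneous" conclusion follows directly from the reported association constant via ΔG° = −RT ln Ka; plugging in Ka = 4.34 × 10³ M⁻¹ near 298 K:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def binding_free_energy(Ka, T=298.15):
    """Standard binding free energy (J/mol) from an association constant:
    dG = -RT ln Ka. Assumes the standard-state convention Ka in M^-1."""
    return -R * T * math.log(Ka)

dG = binding_free_energy(4.34e3)  # about -20.8 kJ/mol: spontaneous but weak binding
```

A Ka of order 10³-10⁴ M⁻¹ corresponds to roughly −20 kJ/mol, an order of magnitude weaker (in Ka) than typical high-affinity drug-albumin complexes, consistent with the "relatively weak binder" assessment above.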

  3. Semi-empirical model for the threshold voltage of a double implanted MOSFET and its temperature dependence

    Energy Technology Data Exchange (ETDEWEB)

    Arora, N D

    1987-05-01

    A simple and accurate semi-empirical model for the threshold voltage of a small geometry double implanted enhancement type MOSFET, especially useful in a circuit simulation program like SPICE, has been developed. The effect of short channel length and narrow width on the threshold voltage has been taken into account through a geometrical approximation, which involves parameters whose values can be determined from the curve fitting experimental data. A model for the temperature dependence of the threshold voltage for the implanted devices has also been presented. The temperature coefficient of the threshold voltage was found to change with decreasing channel length and width. Experimental results from various device sizes, both short and narrow, show very good agreement with the model. The model has been implemented in SPICE as part of the complete dc model.
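For context, the baseline that such semi-empirical SPICE models extend is the textbook body-effect expression VT = VT0 + γ(√(2φF + VSB) − √(2φF)), with a roughly linear negative temperature coefficient; a sketch with illustrative parameter values (these defaults are assumptions, not the paper's fitted model):

```python
import math

def threshold_voltage(VSB, VT0=0.7, gamma=0.4, phi_F=0.35, T=300.0, dVT_dT=-2e-3):
    """Long-channel MOSFET threshold voltage with body effect and a linear
    temperature term. VSB: source-bulk bias (V); VT0: zero-bias threshold (V);
    gamma: body-effect coefficient (V^0.5); phi_F: Fermi potential (V);
    dVT_dT: temperature coefficient (V/K). All defaults are illustrative."""
    vt = VT0 + gamma * (math.sqrt(2.0 * phi_F + VSB) - math.sqrt(2.0 * phi_F))
    return vt + dVT_dT * (T - 300.0)
```

The small-geometry corrections described in the abstract would enter as geometry-dependent adjustments to VT0 and γ, fitted to measured data, as would a channel-length-dependent temperature coefficient.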

  4. Empirical pillar design methods review report: Final report

    International Nuclear Information System (INIS)

    1988-02-01

This report summarizes and evaluates empirical pillar design methods that may be of use during the conceptual design of a high-level nuclear waste repository in salt. The methods are discussed by category (i.e., main, submain, and panel pillars; barrier pillars; and shaft pillars). Of the 21 methods identified for main, submain, and panel pillars, one, the Confined Core Method, is evaluated as most appropriate for conceptual design; five methods are considered potentially applicable. Of the six methods identified for barrier pillars, one method based on the Load Transfer Distance concept is considered most appropriate for design. Based on the evaluation of 25 methods identified for shaft pillars, an approximate sizing criterion is proposed for use in conceptual design. Aspects of pillar performance relating to creep, ground deformation, interaction with roof and floor rock, and response to high-temperature environments are not adequately addressed by existing empirical design methods. 152 refs., 22 figs., 14 tabs.

  5. Semi-empirical Algorithm for the Retrieval of Ecology-Relevant Water Constituents in Various Aquatic Environments

    Directory of Open Access Journals (Sweden)

    Robert Shuchman

    2009-03-01

    Full Text Available An advanced operational semi-empirical algorithm for processing satellite remote sensing data in the visible region is described. Based on the Levenberg-Marquardt multivariate optimization procedure, the algorithm is developed for retrieving major water colour producing agents: chlorophyll-a, suspended minerals and dissolved organics. Two assurance units incorporated by the algorithm are intended to flag pixels with inaccurate atmospheric correction and specific hydro-optical properties not covered by the applied hydro-optical model. The hydro-optical model is a set of spectral cross-sections of absorption and backscattering of the colour producing agents. The combination of the optimization procedure and a replaceable hydro-optical model makes the developed algorithm not specific to a particular satellite sensor or a water body. The algorithm performance efficiency is amply illustrated for SeaWiFS, MODIS and MERIS images over a variety of water bodies.
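The core of such a retrieval can be sketched with a toy Levenberg-Marquardt inversion. The two-constituent "hydro-optical model" below (invented Gaussian and exponential cross-sections) and the reflectance formula R = 0.33 bb/(a + bb) are simplified stand-ins, not the authors' operational model or the SeaWiFS/MODIS/MERIS processing chain:

```python
import numpy as np

# Toy Levenberg-Marquardt retrieval of two colour-producing agents from a
# synthetic reflectance spectrum (all cross-sections are invented).
wl = np.linspace(400, 700, 31)                      # wavelengths, nm
a_chl = 0.06 * np.exp(-((wl - 440) / 40.0) ** 2)    # absorption cross-sections
a_sm  = 0.03 * np.exp(-(wl - 400) / 120.0)
bb_chl = 0.002 * np.ones_like(wl)                   # backscattering cross-sections
bb_sm  = 0.01 * (550.0 / wl)
a_w = 0.02 * np.ones_like(wl)                       # pure-water absorption

def reflectance(c):
    a  = a_w + c[0] * a_chl + c[1] * a_sm
    bb = c[0] * bb_chl + c[1] * bb_sm
    return 0.33 * bb / (a + bb)

c_true = np.array([2.0, 5.0])                       # "unknown" concentrations
R_obs = reflectance(c_true)

def jacobian(c, eps=1e-6):
    J = np.empty((wl.size, c.size))
    for j in range(c.size):
        dc = np.zeros_like(c); dc[j] = eps
        J[:, j] = (reflectance(c + dc) - reflectance(c - dc)) / (2 * eps)
    return J

c, lam = np.array([1.0, 1.0]), 1e-3                 # initial guess, damping
for _ in range(50):
    r = R_obs - reflectance(c)
    J = jacobian(c)
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    if np.sum((R_obs - reflectance(c + step)) ** 2) < np.sum(r ** 2):
        c, lam = c + step, lam * 0.5                # accept step, relax damping
    else:
        lam *= 10.0                                 # reject step, damp harder
print(c)   # recovers approximately [2.0, 5.0]
```

The operational algorithm additionally carries the quality-assurance flags for atmospheric correction and out-of-model optical properties described above.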

  6. Semi-empirical simulation of thermoluminescent response under different filter geometries

    International Nuclear Information System (INIS)

    Shammas, Gabriel Issa Jabra

    2006-01-01

Many thermoluminescent materials have been developed and used for photon personal dosimetry, but none has all the desired characteristics on its own. These characteristics include robustness, high sensitivity, energy independence of the photon response, a large range of detectable photon energies, good reproducibility, small fading, and a simple glow curve with peaks above 150 deg C. The dysprosium-doped calcium sulfate (CaSO4:Dy) phosphor thermoluminescent dosimeter (TLD) has been used by many laboratories, mainly in Brazil and India. Another interesting phosphor is calcium fluoride (CaF2). The advantages of these phosphors are increasingly demanded, and their disadvantages have become more apparent, in an ever more competitive global market. These phosphors are used in environmental and area monitoring, since they are more sensitive than other phosphors such as LiF:Mg. Their main disadvantage is a strong energy dependence of the response, which must be corrected for field applications, where the photon radiation is unknown a priori. An interesting way to make this correction for orthogonal incidence of the radiation on the phosphor is to interpose a flat filter between the beam and the phosphor. In order to reduce the energy dependence at any incidence angle, thereby also reducing the uncertainty of field dose measurements, this work presents a simulation study of spherical filter geometries. Photon irradiations with gamma rays of 60Co and X-rays of 33, 48 and 118 keV were simulated at incidence angles from zero to ninety degrees. These semi-empirical computational simulations, using finite differences in three dimensions, were done in spherical coordinates. The results pointed out the best filter thicknesses and widths for optimizing the correction of the energy dependence. (author)

  7. Filtration of human EEG recordings from physiological artifacts with empirical mode method

    Science.gov (United States)

    Grubov, Vadim V.; Runnova, Anastasiya E.; Khramova, Marina V.

    2017-03-01

In the paper we propose a new method for dealing with noise and physiological artifacts in experimental human EEG recordings. The method is based on the analysis of EEG signals with empirical mode decomposition (the Hilbert-Huang transform). We consider noise and physiological artifacts on the EEG as specific oscillatory patterns that cause problems during EEG analysis and can be detected with additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). The algorithm of the method comprises the following steps: empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method by filtering experimental human EEG signals from eye-movement artifacts and show the high efficiency of the method.
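The mode-selection step of this algorithm can be sketched as follows. A real decomposition (for example with the PyEMD package) would extract the intrinsic mode functions from the EEG itself; here three synthetic "modes" stand in so the example is self-contained, and a mode is flagged as an artifact when it correlates strongly with the simultaneously recorded EOG:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 1000)
modes = np.array([
    np.sin(2 * np.pi * 10 * t),           # alpha-band brain activity
    np.sin(2 * np.pi * 3 * t),            # slow oscillation mimicking an eye-movement artifact
    0.2 * rng.standard_normal(t.size),    # broadband noise
])
eeg = modes.sum(axis=0)                   # the recorded signal an EMD would decompose
eog = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.standard_normal(t.size)  # simultaneous EOG

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

# keep modes that do NOT correlate with the reference (artifact) channel
keep = [m for m in modes if abs(corr(m, eog)) < 0.5]
cleaned = np.sum(keep, axis=0)            # reconstruction without the artifact
print(len(keep))                          # 2: the 3 Hz artifact mode was removed
```

The 0.5 correlation threshold is an arbitrary choice for this illustration; the paper's criterion for choosing artifact modes may differ.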

  8. Active semi-supervised learning method with hybrid deep belief networks.

    Science.gov (United States)

    Zhou, Shusen; Chen, Qingcai; Wang, Xiaolong

    2014-01-01

In this paper, we develop a novel semi-supervised learning algorithm called active hybrid deep belief networks (AHD) to address the semi-supervised sentiment classification problem with deep learning. First, we construct the first several hidden layers using restricted Boltzmann machines (RBM), which can quickly reduce the dimension and abstract the information of the reviews. Second, we construct the subsequent hidden layers using convolutional restricted Boltzmann machines (CRBM), which can abstract the information of reviews effectively. Third, the constructed deep architecture is fine-tuned by gradient-descent-based supervised learning with an exponential loss function. Finally, an active learning method is combined with the proposed deep architecture. We performed several experiments on five sentiment classification datasets, and show that AHD is competitive with previous semi-supervised learning algorithms. Experiments are also conducted to verify the effectiveness of the proposed method with different numbers of labeled and unlabeled reviews.

  9. Semi-empirical Calculation of Detection Efficiency for Voluminous Source Based on Effective Solid Angle Concept

    Energy Technology Data Exchange (ETDEWEB)

Kang, M. Y.; Kim, J. H.; Choi, H. D.; Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]

    2014-10-15

To calculate the full energy (FE) absorption peak efficiency for arbitrary volume samples, we developed and verified the Effective Solid Angle (ESA) code. The procedure for semi-empirical determination of the FE efficiency for arbitrary volume sources, together with the calculation principles and processes of the ESA code, is described in previous studies, in which the code was validated with an HPGe detector (relative efficiency 32%, n-type). In this study, we use HPGe detectors of different types and efficiencies in order to verify the performance of the ESA code for various detectors. We calculated the efficiency curve of a voluminous source and compared it with experimental data. We will carry out additional validation by measuring CRM volume sources of various media, volumes and shapes with detectors of different efficiencies and types. We will also include the effect of the dead layer of the p-type HPGe detector and a coincidence summing correction technique in the near future.
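The effective-solid-angle idea can be illustrated with a crude Monte Carlo average: sample positions through the source volume and average the solid angle subtended by the detector face. The geometry below is invented, only on-axis points are treated exactly, and the attenuation and detector-response effects that the ESA code handles are ignored:

```python
import numpy as np

rng = np.random.default_rng(5)
R_det = 3.0                          # detector face radius (cm) - invented
H_src, gap = 4.0, 1.0                # source height, source-to-detector gap (cm)

# Sample axial positions uniformly through the source and average the exact
# on-axis solid angle of the detector disk; off-axis positions would need the
# full off-axis expression that a real effective-solid-angle code integrates.
z = gap + H_src * rng.random(200_000)
omega = 2 * np.pi * (1.0 - z / np.sqrt(z * z + R_det ** 2))
eff = omega.mean() / (4 * np.pi)     # geometric efficiency estimate
print(eff)
```

For a point source this average collapses to the single-distance solid angle; the volume averaging is what makes the efficiency "effective" for extended samples.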

  10. Semi-convergence properties of Kaczmarz’s method

    DEFF Research Database (Denmark)

    Elfving, Tommy; Hansen, Per Christian; Nikazad, Touraj

    2014-01-01

    Kaczmarz’s method—sometimes referred to as the algebraic reconstruction technique—is an iterative method that is widely used in tomographic imaging due to its favorable semi-convergence properties. Specifically, when applied to a problem with noisy data, during the early iterations it converges......-convergence of Kaczmarz’s method as well as its projected counterpart (and their block versions). To do this we study how the data errors propagate into the iteration vectors and we derive upper bounds for this noise propagation. Our bounds are compared with numerical results obtained from tomographic imaging....
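The basic iteration is compact enough to state in full. This is a sketch of the plain cyclic method, not the block or projected variants analyzed in the paper:

```python
import numpy as np

def kaczmarz(A, b, sweeps=500):
    """Cyclic Kaczmarz: project the iterate onto each row's hyperplane in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
b = A @ x_true                       # consistent (noise-free) data
# With noisy b the early sweeps approach x_true before the noise takes over --
# the semi-convergence behaviour discussed in the abstract.
print(np.linalg.norm(kaczmarz(A, b) - x_true))
```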

  11. An Empirical Study of Atmospheric Correction Procedures for Regional Infrasound Amplitudes with Ground Truth.

    Science.gov (United States)

    Howard, J. E.

    2014-12-01

This study focuses on improving methods of accounting for atmospheric effects on infrasound amplitudes observed on arrays at regional distances in the southwestern United States. Recordings at ranges of 150 to nearly 300 km from a repeating ground-truth source of small HE explosions are used. The explosions range in actual weight from approximately 2000-4000 lbs. and are detonated year-round, which provides signals for a wide range of atmospheric conditions. Three methods of correcting the observed amplitudes for atmospheric effects are investigated with the data set. The first corrects amplitudes for upper stratospheric wind, as developed by Mutschlecner and Whitaker (1999), and uses the average wind speed between 45 and 55 km altitude in the direction of propagation to derive an empirical correction formula. This approach was developed using large chemical and nuclear explosions and is tested here with smaller explosions, for which shorter wavelengths cause the energy to be scattered by the smaller-scale structure of the atmosphere. The second approach is a semi-empirical method using ray tracing to determine wind speed at ray turning heights, where these wind estimates replace the wind values in the existing formula. Finally, parabolic equation (PE) modeling is used to predict the amplitudes at the arrays at 1 Hz. The PE amplitudes are compared to the observed amplitudes with a narrow-band filter centered at 1 Hz. An analysis is performed of the conditions under which the empirical and semi-empirical methods fail and full wave methods must be used.

  12. On the Reliability of Source Time Functions Estimated Using Empirical Green's Function Methods

    Science.gov (United States)

    Gallegos, A. C.; Xie, J.; Suarez Salas, L.

    2017-12-01

The Empirical Green's Function (EGF) method (Hartzell, 1978) has been widely used to extract source time functions (STFs). In this method, seismograms generated by collocated events of different magnitudes are deconvolved. Under the fundamental assumption that the STF of the small event is a delta function, the deconvolved Relative Source Time Function (RSTF) yields the large event's STF. While this assumption can be empirically justified by examining differences in event size and in the frequency content of the seismograms, a rigorous justification is often lacking. In practice, the small event may itself have a finite duration, so the retrieved RSTF is a biased estimate of the large event's STF. In this study, we rigorously analyze this bias using synthetic waveforms generated by convolving a realistic Green's function waveform with pairs of finite-duration triangular or parabolic STFs. The RSTFs are found using a time-domain matrix deconvolution. We find that when the STFs of the smaller events are finite, the RSTFs are a series of narrow non-physical spikes. Interpreting these RSTFs as a series of high-frequency source radiations would be very misleading. The only reliable and unambiguous information we can retrieve from these RSTFs is the difference in durations and the moment ratio of the two STFs. We can apply Tikhonov smoothing to obtain a single-pulse RSTF, but its duration depends on the choice of weighting, which may be subjective. We then test the Multi-Channel Deconvolution (MCD) method (Plourde & Bostock, 2017), which assumes that both STFs have finite durations to be solved for. A concern with the MCD method is that the number of unknown parameters is larger, which would tend to make the problem rank-deficient. Because the kernel matrix depends on the STFs to be solved for under a positivity constraint, we can only estimate the rank-deficiency with a semi-empirical approach. 
Based on the results so far, we find that the
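The time-domain matrix deconvolution can be sketched on synthetic data: the large event's record is the EGF convolved with the STF, so the RSTF follows from least squares on a convolution matrix built from the small-event waveform. Here the small event's STF is an ideal delta, the assumption under which the RSTF equals the large event's STF:

```python
import numpy as np

rng = np.random.default_rng(2)
g = rng.standard_normal(200) * np.exp(-np.arange(200) / 40.0)  # synthetic EGF

n = 20
stf_true = np.concatenate([np.linspace(0, 1, n // 2),
                           np.linspace(1, 0, n // 2)])         # triangular STF
big = np.convolve(g, stf_true)          # large-event seismogram = G * s

M = np.zeros((big.size, n))             # convolution (Toeplitz-like) matrix
for j in range(n):
    M[j:j + g.size, j] = g              # column j = EGF delayed by j samples

rstf, *_ = np.linalg.lstsq(M, big, rcond=None)
print(np.allclose(rstf, stf_true, atol=1e-8))   # exact recovery, noise-free case
```

When the small event's STF is itself a finite pulse rather than a delta, the same solve returns the spiky, non-physical RSTFs the abstract warns about.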

  13. Theoretical Research Progress in High-Velocity/Hypervelocity Impact on Semi-Infinite Targets

    Directory of Open Access Journals (Sweden)

    Yunhou Sun

    2015-01-01

Full Text Available With the hypervelocity kinetic weapon and hypersonic cruise missile research projects being carried out, the damage mechanism of high-velocity/hypervelocity projectile impact on semi-infinite targets has become a keystone of research in impact dynamics. Theoretical research progress in high-velocity/hypervelocity impact on semi-infinite targets is reviewed in this paper. The evaluation methods for the critical velocity of high-velocity and hypervelocity impact are summarized. The crater shape, crater scaling laws and empirical formulae, and simplified analysis models of crater parameters for spherical projectiles impacting semi-infinite targets are reviewed, as are the differentiation of long-rod penetration states and penetration-depth calculation models for semi-fluid and deformed long-rod projectiles. Finally, some proposals are given for further study.

  14. Semi-automated potentiometric titration method for uranium characterization.

    Science.gov (United States)

    Cristiano, B F G; Delgado, J U; da Silva, J W S; de Barros, P D; de Araújo, R M S; Lopes, R T

    2012-07-01

The manual version of the potentiometric titration method has been used for certification and characterization of uranium compounds. In order to reduce the analysis time and the influence of the analyst, a semi-automatic version of the method was developed at the Brazilian Nuclear Energy Commission. The method was applied with traceability assured by the use of a potassium dichromate primary standard. The combined standard uncertainty in determining the total concentration of uranium was around 0.01%, which is suitable for uranium characterization.

  15. A qualitative semi-classical treatment of an isolated semi-polar quantum dot

    International Nuclear Information System (INIS)

    Young, Toby D

    2011-01-01

To qualitatively determine the behaviour of micro-macro properties of a quantum dot grown in a non-polar direction, we propose a simple semi-classical model based on well-established ideas. We take into account the following empirical phenomena: (i) The displacement and induced strain at heterojunctions; (ii) The electrostatic potential arising from piezoelectric and spontaneous polarisation; and (iii) The localisation of excitons (particle-hole pairs) arising from quantum confinement. After some algebraic manipulation used to cast the formalism into an arbitrarily rotated frame, a numerical model is developed for the case of a semi-polar wurtzite GaN quantum dot buried in a wurtzite AlN matrix. This scheme is found to provide a satisfactory qualitative description of an isolated semi-polar quantum dot in a way that is accessible to further physical interpretation and quantification.

  16. Comparison of ITER performance predicted by semi-empirical and theory-based transport models

    International Nuclear Information System (INIS)

    Mukhovatov, V.; Shimomura, Y.; Polevoi, A.

    2003-01-01

The values of Q = (fusion power)/(auxiliary heating power) predicted for ITER by three different methods, i.e., a transport model based on empirical confinement scaling, a dimensionless scaling technique, and theory-based transport models, are compared. The energy confinement time given by the ITERH-98(y,2) scaling for an inductive scenario with a plasma current of 15 MA and a plasma density 15% below the Greenwald value is 3.6 s, with one technical standard deviation of ±14%. These data translate into a Q interval of [7, 13] at an auxiliary heating power P_aux = 40 MW and [7, 28] at the minimum heating power that sustains a good-confinement ELMy H-mode. Predictions of dimensionless scalings and theory-based transport models such as Weiland, MMM and IFS/PPPL overlap with the empirical scaling predictions within the margins of uncertainty. (author)

  17. Semi-empirical model for the generation of dose distributions produced by a scanning electron beam

    International Nuclear Information System (INIS)

    Nath, R.; Gignac, C.E.; Agostinelli, A.G.; Rothberg, S.; Schulz, R.J.

    1980-01-01

There are linear accelerators (Sagittaire and Saturne accelerators produced by Compagnie Generale de Radiologie (CGR/MeV) Corporation) which produce broad, flat electron fields by magnetically scanning the relatively narrow electron beam as it emerges from the accelerator vacuum system. A semi-empirical model, which mimics the scanning action of this type of accelerator, was developed for the generation of dose distributions in homogeneous media. The model employs the dose distributions of the scanning electron beams. These were measured with photographic film in a polystyrene phantom by turning off the magnetic scanning system. The mean deviation of calculated from measured dose distributions is about 0.2%; a few points have deviations as large as 2 to 4% inside the 50% isodose curve, but less than 8% outside the 50% isodose curve. The model has been used to generate the electron beam library required by a modified version of a commercially-available computerized treatment-planning system. (The RAD-8 treatment planning system was purchased from the Digital Equipment Corporation. It is currently available from Electronic Music Industries

  18. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.; Qian, L.; Carroll, R. J.

    2010-01-01

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks

  19. Semi-implicit method for three-dimensional compressible MHD simulation

    International Nuclear Information System (INIS)

    Harned, D.S.; Kerner, W.

    1984-03-01

    A semi-implicit method for solving the full compressible MHD equations in three dimensions is presented. The method is unconditionally stable with respect to the fast compressional modes. The time step is instead limited by the slower shear Alfven motion. The computing time required for one time step is essentially the same as for explicit methods. Linear stability limits are derived and verified by three-dimensional tests on linear waves in slab geometry. (orig.)
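The principle behind a semi-implicit scheme can be illustrated on a scalar toy problem (this is not the paper's MHD discretization): the fast, stiff term is advanced implicitly, so the step size is limited only by the slow dynamics, just as the fast compressional modes are treated implicitly above:

```python
import numpy as np

# Model problem: dy/dt = -k_fast * (y - sin(t)) + cos(t).
# y = sin(t) is the exact solution; the -k_fast term is stiff.
def step(y, t, dt, k_fast=1000.0):
    # fast term implicit (backward Euler), slow cos(t) term explicit:
    # y_new * (1 + dt*k_fast) = y + dt*(k_fast*sin(t+dt) + cos(t))
    return (y + dt * (k_fast * np.sin(t + dt) + np.cos(t))) / (1.0 + dt * k_fast)

dt, y, t = 0.05, 0.0, 0.0        # dt is 25x the explicit limit 2/k_fast = 0.002
for _ in range(200):
    y = step(y, t, dt)
    t += dt
print(abs(y - np.sin(t)))        # small: the scheme is stable and tracks sin(t)
```

A fully explicit Euler step would blow up at this time step; the implicit treatment of the fast term gives unconditional stability with respect to it, mirroring the role of the fast compressional modes in the abstract.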

  20. An unconditionally stable fully conservative semi-Lagrangian method

    KAUST Repository

    Lentine, Michael

    2011-04-01

    Semi-Lagrangian methods have been around for some time, dating back at least to [3]. Researchers have worked to increase their accuracy, and these schemes have gained newfound interest with the recent widespread use of adaptive grids where the CFL-based time step restriction of the smallest cell can be overwhelming. Since these schemes are based on characteristic tracing and interpolation, they do not readily lend themselves to a fully conservative implementation. However, we propose a novel technique that applies a conservative limiter to the typical semi-Lagrangian interpolation step in order to guarantee that the amount of the conservative quantity does not increase during this advection. In addition, we propose a new second step that forward advects any of the conserved quantity that was not accounted for in the typical semi-Lagrangian advection. We show that this new scheme can be used to conserve both mass and momentum for incompressible flows. For incompressible flows, we further explore properly conserving kinetic energy during the advection step, but note that the divergence free projection results in a velocity field which is inconsistent with conservation of kinetic energy (even for inviscid flows where it should be conserved). For compressible flows, we rely on a recently proposed splitting technique that eliminates the acoustic CFL time step restriction via an incompressible-style pressure solve. Then our new method can be applied to conservatively advect mass, momentum and total energy in order to exactly conserve these quantities, and remove the remaining time step restriction based on fluid velocity that the original scheme still had. © 2011 Elsevier Inc.
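A one-dimensional sketch of a semi-Lagrangian step with a clamping limiter (periodic domain, constant velocity, linear interpolation; the paper's conservative forward-advection of the unaccounted-for quantity is omitted):

```python
import numpy as np

def sl_step(q, u, dt, dx):
    """One semi-Lagrangian step: trace back, interpolate, clamp (limiter)."""
    n = q.size
    x_dep = (np.arange(n) - u * dt / dx) % n        # departure points, grid units
    i0 = np.floor(x_dep).astype(int)
    w = x_dep - i0
    i1 = (i0 + 1) % n
    q_new = (1.0 - w) * q[i0] + w * q[i1]           # linear interpolation
    # Clamp to the bracketing values. For linear interpolation this is
    # inactive, but it is the kind of limiter that matters once higher-order
    # interpolants can overshoot.
    return np.clip(q_new, np.minimum(q[i0], q[i1]), np.maximum(q[i0], q[i1]))

n, dx, u, dt = 100, 1.0, 1.0, 1.5                   # CFL = 1.5: no problem for SL
q = np.exp(-0.5 * ((np.arange(n) - 30.0) / 5.0) ** 2)
total0 = q.sum()
for _ in range(40):
    q = sl_step(q, u, dt, dx)
print(abs(q.sum() - total0) < 1e-8, np.argmax(q))   # mass preserved; peak near 90
```

For this constant-velocity periodic case linear interpolation happens to conserve the total exactly; in general it does not, which is precisely the gap the conservative scheme of the abstract closes.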

  1. White matter hyperintensities segmentation: a new semi-automated method

    Directory of Open Access Journals (Sweden)

    Mariangela eIorio

    2013-12-01

Full Text Available White matter hyperintensities (WMH) are brain areas of increased signal on T2-weighted or fluid-attenuated inversion recovery (FLAIR) magnetic resonance imaging (MRI) scans. In this study we present a new semi-automated method to measure WMH load that is based on the segmentation of the intensity histogram of FLAIR images. Thirty patients with Mild Cognitive Impairment and variable WMH load were enrolled. The semi-automated WMH segmentation included: removal of non-brain tissue, spatial normalization, removal of cerebellum and brain stem, spatial filtering, thresholding to segment probable WMH, manual editing for correction of false positives and negatives, generation of a WMH map, and volumetric estimation of the WMH load. Accuracy was quantitatively evaluated by comparing semi-automated and manual WMH segmentations performed by two independent raters. Differences between the two procedures were assessed using Student's t-tests and similarity was evaluated using a linear regression model and the Dice Similarity Coefficient (DSC). The volumes of the manual and semi-automated segmentations did not differ statistically (t = -1.79, df = 29, p = 0.839 for rater 1; t = 1.113, df = 29, p = 0.2749 for rater 2) and were highly correlated (R² = 0.921, F(1,29) = 155.54, p

  2. A Semi-empirical Model of the Stratosphere in the Climate System

    Science.gov (United States)

    Sodergren, A. H.; Bodeker, G. E.; Kremser, S.; Meinshausen, M.; McDonald, A.

    2014-12-01

Chemistry climate models (CCMs) currently used to project changes in Antarctic ozone are extremely computationally demanding. CCM projections are uncertain due to lack of knowledge of future emissions of greenhouse gases (GHGs) and ozone depleting substances (ODSs), as well as parameterizations within the CCMs that have weakly constrained tuning parameters. While projections should be based on an ensemble of simulations, this is not currently possible due to the complexity of the CCMs. An inexpensive but realistic approach to simulate changes in stratospheric ozone, and its coupling to the climate system, is needed as a complement to CCMs. A simple climate model (SCM) can be used as a fast emulator of complex atmospheric-ocean climate models. If such an SCM includes a representation of stratospheric ozone, the evolution of the global ozone layer can be simulated for a wide range of GHG and ODS emissions scenarios. MAGICC is an SCM used in previous IPCC reports. In the current version of the MAGICC SCM, stratospheric ozone changes depend only on equivalent effective stratospheric chlorine (EESC). In this work, MAGICC is extended to include an interactive stratospheric ozone layer using a semi-empirical model of ozone responses to CO2 and EESC, with changes in ozone affecting the radiative forcing in the SCM. To demonstrate the ability of our new, extended SCM to generate projections of global changes in ozone, tuning parameters from 19 coupled atmosphere-ocean general circulation models (AOGCMs) and 10 carbon cycle models (to create an ensemble of 190 simulations) have been used to generate probability density functions of the dates of return of stratospheric column ozone to 1960 and 1980 levels for different latitudes.

  3. Elasto-plastic strain analysis by a semi-analytical method

    Indian Academy of Sciences (India)

    deformation problems following a semi-analytical method, incorporating the com- ..... The set of equations in (8) are non-linear in nature, which is solved by direct ...... Here, [K] and [M] are stiffness matrix and mass matrix which are of the form ...

  4. Bayesian non- and semi-parametric methods and applications

    CERN Document Server

    Rossi, Peter

    2014-01-01

    This book reviews and develops Bayesian non-parametric and semi-parametric methods for applications in microeconometrics and quantitative marketing. Most econometric models used in microeconomics and marketing applications involve arbitrary distributional assumptions. As more data becomes available, a natural desire to provide methods that relax these assumptions arises. Peter Rossi advocates a Bayesian approach in which specific distributional assumptions are replaced with more flexible distributions based on mixtures of normals. The Bayesian approach can use either a large but fixed number

  5. Dealing with noise and physiological artifacts in human EEG recordings: empirical mode methods

    Science.gov (United States)

    Runnova, Anastasiya E.; Grubov, Vadim V.; Khramova, Marina V.; Hramov, Alexander E.

    2017-04-01

In the paper we propose a new method for removing noise and physiological artifacts in human EEG recordings based on empirical mode decomposition (the Hilbert-Huang transform). As physiological artifacts we consider specific oscillatory patterns that cause problems during EEG analysis and can be detected with additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). The algorithm of the proposed method includes the following steps: empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We show the efficiency of the method on the example of filtering a human EEG signal from eye-movement artifacts.

  6. DREAM: a method for semi-quantitative dermal exposure assessment

    NARCIS (Netherlands)

    Wendel de Joode, B. van; Brouwer, D.H.; Kromhout, H.; Hemmen, J.J. van

    2003-01-01

    This paper describes a new method (DREAM) for structured, semi-quantitative dermal exposure assessment for chemical or biological agents that can be used in occupational hygiene or epidemiology. It is anticipated that DREAM could serve as an initial assessment of dermal exposure, amongst others,

  7. A semi-automated method for bone age assessment using cervical vertebral maturation.

    Science.gov (United States)

    Baptista, Roberto S; Quaglio, Camila L; Mourad, Laila M E H; Hummel, Anderson D; Caetano, Cesar Augusto C; Ortolani, Cristina Lúcia F; Pisa, Ivan T

    2012-07-01

To propose a semi-automated pattern-classification method to predict an individual's stage of growth based on the morphologic characteristics described in the modified cervical vertebral maturation (CVM) method of Baccetti et al. A total of 188 lateral cephalograms were collected, digitized, evaluated manually, and grouped into cervical stages by two expert examiners. Landmarks were located on each image and measured. Three pattern classifiers based on the Naïve Bayes algorithm were built and assessed using a software program. The classifier with the greatest accuracy according to the weighted kappa test was considered best. The best classifier showed a weighted kappa coefficient of 0.861 ± 0.020. If an adjacent estimated pre-stage or post-stage value was taken to be acceptable, the classifier would show a weighted kappa coefficient of 0.992 ± 0.019. Results from this study show that the proposed semi-automated pattern classification method can help orthodontists identify the stage of CVM. However, additional studies are needed before this semi-automated classification method for CVM assessment can be implemented in clinical practice.
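A Gaussian Naïve Bayes classifier of the kind used above can be written in a few lines. The two "landmark measurements" and three mock stages below are entirely synthetic, not the study's cephalometric data:

```python
import numpy as np

rng = np.random.default_rng(3)
means = np.array([[0.0, 0.0], [3.0, 3.0], [6.0, 0.0]])   # per-stage feature means
X = np.vstack([m + 0.5 * rng.standard_normal((50, 2)) for m in means])
y = np.repeat([0, 1, 2], 50)

def fit(X, y):
    """Per-class feature means, variances and priors."""
    classes = np.unique(y)
    mu = np.array([X[y == c].mean(axis=0) for c in classes])
    var = np.array([X[y == c].var(axis=0) for c in classes])
    prior = np.array([(y == c).mean() for c in classes])
    return mu, var, prior

def predict(X, mu, var, prior):
    # log p(c|x) up to a constant: log prior + sum of Gaussian log-likelihoods
    ll = -0.5 * (((X[:, None, :] - mu) ** 2) / var + np.log(2 * np.pi * var)).sum(-1)
    return np.argmax(ll + np.log(prior), axis=1)

mu, var, prior = fit(X, y)
acc = (predict(X, mu, var, prior) == y).mean()
print(acc)   # well above chance on this separable toy data
```

The "naïve" part is the per-feature independence assumption inside each class, visible in the per-feature sum of log-likelihoods.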

  8. Application of the step-wise regression procedure to the semi-empirical formulae of the nuclear binding energy

    International Nuclear Information System (INIS)

    Eissa, E.A.; Ayad, M.; Gashier, F.A.B.

    1984-01-01

Most of the semi-empirical binding-energy terms used by P.A. Seeger, without the deformation corrections, are arranged in a multiple linear regression form. The stepwise regression procedure, with 95% confidence levels for acceptance and rejection of variables, is applied to seek a model for calculating the binding energies of even-even (E-E) nuclei through significance testing of each basic term. Partial F-values are taken as estimates of the significance of each term. The residual standard deviation and the overall F-value are used to select the best linear regression model. The (E-E) nuclei are grouped into sets lying between two successive proton and neutron magic numbers. The present work is in favour of the magic number 126 followed by 164 for the neutrons, and indecisive in supporting the recently predicted proton magic number 114 rather than the previous one, 126. (author)
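Forward stepwise selection with a partial-F entry test, the kind of procedure applied above, can be sketched on synthetic data in which only two of five candidate terms truly belong in the model:

```python
import numpy as np

# Synthetic regression: five candidate terms, but only terms 1 and 3 enter y.
# The abstract's 95% entry level corresponds to F of roughly 4; a stricter
# threshold of 10 is used here to keep noise terms out of this illustration.
rng = np.random.default_rng(4)
n, p = 200, 5
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 1] - 1.5 * X[:, 3] + 0.1 * rng.standard_normal(n)

def rss(cols):
    """Residual sum of squares and parameter count for an intercept + cols fit."""
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ beta) ** 2), A.shape[1]

F_IN = 10.0
selected = []
while True:
    rss0, _ = rss(selected)
    best = None
    for j in sorted(set(range(p)) - set(selected)):
        rss1, k1 = rss(selected + [j])
        F = (rss0 - rss1) / (rss1 / (n - k1))   # partial F for adding term j
        if F > F_IN and (best is None or F > best[0]):
            best = (F, j)
    if best is None:
        break
    selected.append(best[1])
print(sorted(selected))   # includes the two true terms
```

A full stepwise procedure would also re-test already-included terms for removal after each entry; only the forward half is shown here.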

  9. Vinayaka : A Semi-Supervised Projected Clustering Method Using Differential Evolution

    OpenAIRE

    Satish Gajawada; Durga Toshniwal

    2012-01-01

Differential Evolution (DE) is an algorithm for evolutionary optimization. Clustering problems have been solved by using DE-based clustering methods, but these methods may fail to find clusters hidden in subspaces of high-dimensional datasets. Subspace and projected clustering methods have been proposed in the literature to find subspace clusters that are present in subspaces of a dataset. In this paper we propose VINAYAKA, a semi-supervised projected clustering method based on DE. In this method DE opt...

  10. Human semi-supervised learning.

    Science.gov (United States)

    Gibson, Bryan R; Rogers, Timothy T; Zhu, Xiaojin

    2013-01-01

Most empirical work in human categorization has studied learning in either fully supervised or fully unsupervised scenarios. Most real-world learning scenarios, however, are semi-supervised: Learners receive a great deal of unlabeled information from the world, coupled with occasional experiences in which items are directly labeled by a knowledgeable source. A large body of work in machine learning has investigated how learning can exploit both labeled and unlabeled data provided to a learner. Using equivalences between models found in human categorization and machine learning research, we explain how these semi-supervised techniques can be applied to human learning. A series of experiments are described which show that semi-supervised learning models prove useful for explaining human behavior when exposed to both labeled and unlabeled data. We then discuss some machine learning models that do not have familiar human categorization counterparts. Finally, we discuss some challenges yet to be addressed in the use of semi-supervised models for modeling human categorization.

  11. Assessment of radiological parameters and patient dose audit using semi-empirical model

    International Nuclear Information System (INIS)

    Olowookere, C.J.; Onabiyi, B.; Ajumobi, S. A.; Obed, R.I.; Babalola, I. A.; Bamidele, L.

    2011-01-01

Risk is associated with all human activities, and medical imaging is no exception. The risk in medical imaging is quantified using effective dose. However, measurement of effective dose is rather difficult and time consuming; therefore, energy imparted and entrance surface dose are obtained and converted into effective dose using the appropriate conversion factors. In this study, data on exposure parameters and patient characteristics were obtained during routine diagnostic examinations for four common types of X-ray procedures. A semi-empirical model involving the computer software Xcomp5 was used to determine the energy imparted per unit exposure-area product, entrance skin exposure (ESE) and incident air kerma, which are radiation dose indices. The value of energy imparted per unit exposure-area product ranges between 0.60 × 10⁻³ and 1.21 × 10⁻³ J R⁻¹ cm⁻², the entrance skin exposure ranges from 5.07 ± 1.25 to 36.62 ± 27.79 mR, and the incident air kerma ranges between 43.93 μGy and 265.5 μGy. The filtrations of two of the three machines investigated were lower than the standard requirement of the CEC for machines used in conventional radiography. The values of energy imparted and ESE obtained in the study were relatively low compared to published data, indicating that patients irradiated during the routine examinations in this study are at lower health risk. The energy imparted per unit exposure-area product could be used to determine the energy delivered to the patient during diagnostic examinations, and it is an approximate indicator of patient risk.

  12. Semi-empiric model of an air cooled cabinet air conditioner for the dynamic analysis of the building and acclimation systems integrated behaviour; Modelo semi-empirico de condicionador de gabinete resfriado a ar para analise dinamica do comportamento integrado de edificacoes e sistemas de climatizacao

    Energy Technology Data Exchange (ETDEWEB)

    Correa, Jorge E. [Para Univ., Belem (Brazil). Dept. de Engenharia Mecanica]. E-mail: jecorrea@amazon.com.br; Melo, Claudio. E-mail: melo@nrva.ufsc.br; Negrao, Cezar O. R. E-mail: negrao@energia.damec.cefetpr.br

    2000-07-01

    This work presents a semi-empirical model of an air-cooled cabinet air conditioner. The model is to be inserted into the ESP-r program (Environmental Systems Performance - research version), allowing dynamic analysis of the integrated behaviour of buildings and acclimation systems using this equipment. Results obtained from simulations under operating conditions typical of Brazil are analysed.

  13. Semi-definite Programming: methods and algorithms for energy management

    International Nuclear Information System (INIS)

    Gorge, Agnes

    2013-01-01

    The present thesis explores the potential of a powerful optimization technique, semi-definite programming (SDP), for addressing some difficult problems of energy management. We pursue two main objectives. The first is to use SDP to provide tight relaxations of combinatorial and quadratic problems. A first relaxation, called 'standard', can be derived in a generic way, but it is generally desirable to reinforce it, by means of tailor-made tools or in a systematic fashion. These two approaches are implemented on different models of the Nuclear Outages Scheduling Problem, a famous combinatorial problem. We conclude this topic by experimenting with Lasserre's hierarchy on this problem, leading to a sequence of semi-definite relaxations whose optimal values tend to the optimal value of the initial problem. The second objective deals with the use of SDP for the treatment of uncertainty. We investigate an original approach called 'distributionally robust optimization', which can be seen as a compromise between stochastic and robust optimization and admits approximations in the form of an SDP. We compare the benefits of this method with respect to classical approaches on a demand/supply equilibrium problem. Finally, we propose a scheme for deriving SDP relaxations of MISOCPs and report promising computational results indicating that the semi-definite relaxation improves significantly on the continuous relaxation, while requiring a reasonable computational effort. SDP thus proves to be a promising optimization method that offers great opportunities for innovation in energy management. (author)

  14. A Semi-Empirical SNR Model for Soil Moisture Retrieval Using GNSS SNR Data

    Directory of Open Access Journals (Sweden)

    Mutian Han

    2018-02-01

    The Global Navigation Satellite System-Interferometry and Reflectometry (GNSS-IR) technique for soil moisture remote sensing was studied. A semi-empirical Signal-to-Noise Ratio (SNR) model was proposed as a curve-fitting model for SNR data routinely collected by a GNSS receiver. The model aims at reconstructing the direct and reflected signals from SNR data while extracting the frequency and phase information that is affected by soil moisture, as proposed by K. M. Larson et al. This is achieved empirically by approximating the direct and reflected signals by a second-order and a fourth-order polynomial, respectively, based on the well-established SNR model. Compared with other models (K. M. Larson et al., T. Yang et al.), this model improves the Quality of Fit (QoF) with little prior knowledge needed and allows soil permittivity to be estimated from the reconstructed signals. In developing this model, we showed through simulations, under the bare-soil assumption, how noise affects the receiver SNR estimation and thus the model performance. Results showed that the reconstructed signals with a grazing angle of 5°-15° were better for soil moisture retrieval. The QoF was improved by around 45%, which resulted in better estimation of the frequency and phase information; however, the improvement in phase estimation was negligible. Experimental data collected at Lamasquère, France, were also used to validate the proposed model. The results were compared with the simulation and previous works. It was found that the model ensures good fitting quality even in the case of irregular SNR variation. Additionally, the soil moisture calculated from the reconstructed signals was about 15% closer to the ground-truth measurements. A deeper insight into the Larson model and the proposed model is given, which forms a possible explanation of this fact. Furthermore, frequency and phase information
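
The detrend-then-estimate idea behind such SNR models can be illustrated with a toy computation: fit a low-order polynomial to the SNR series to approximate the slowly varying direct signal (the abstract uses second- and fourth-order terms), subtract it, and read the interference frequency off the residual oscillation. The signal parameters and the frequency grid below are invented for the demo; this is a simplified analogue, not the paper's model.

```python
import math

t = [i / 100.0 for i in range(400)]
f_true = 3.0                                   # oscillation frequency (arbitrary units)
snr = [5.0 + 2.0 * x - 0.5 * x * x + 0.8 * math.sin(2 * math.pi * f_true * x)
       for x in t]

# Least-squares quadratic fit via the 3x3 normal equations (Gaussian elimination).
def polyfit2(xs, ys):
    a = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):                        # forward elimination with pivoting
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            k = a[r][col] / a[col][col]
            a[r] = [u - k * v for u, v in zip(a[r], a[col])]
            b[r] -= k * b[col]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                         # back substitution
        c[i] = (b[i] - sum(a[i][j] * c[j] for j in range(i + 1, 3))) / a[i][i]
    return c

c = polyfit2(t, snr)
resid = [y - (c[0] + c[1] * x + c[2] * x * x) for x, y in zip(t, snr)]

# Grid-search the residual's dominant frequency by correlating with sinusoids.
def power(f):
    cs = sum(r * math.cos(2 * math.pi * f * x) for x, r in zip(t, resid))
    sn = sum(r * math.sin(2 * math.pi * f * x) for x, r in zip(t, resid))
    return cs * cs + sn * sn

f_est = max((f / 10.0 for f in range(5, 100)), key=power)
```

The quadratic absorbs the trend while barely touching the oscillation, so the grid search recovers the interference frequency.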

  15. What Happened to Remote Usability Testing? An Empirical Study of Three Methods

    DEFF Research Database (Denmark)

    Stage, Jan; Andreasen, M. S.; Nielsen, H. V.

    2007-01-01

    The idea of conducting usability tests remotely emerged ten years ago. Since then, it has been studied empirically, and some software organizations employ remote methods. Yet there are still few comparisons involving more than one remote method. This paper presents results from a systematic empirical comparison of three methods for remote usability testing and a conventional laboratory-based think-aloud method. The three remote methods are a remote synchronous condition, where testing is conducted in real time but the test monitor is separated spatially from the test subjects, and two remote...

  16. A semi-empirical concept for the calculation of electron-impact ionization cross sections of neutral and ionized fullerenes

    International Nuclear Information System (INIS)

    Deutsch, H.; Scheier, P.; Maerk, T.D.; Becker, K.

    2002-01-01

    A semi-empirical approach was developed to the calculation of cross section functions (absolute value and energy dependence) for the electron-impact ionization of several neutral and ionized fullerenes C60^n+ (n = 0-3), for which reliable experimental data have been reported. In particular, a modification is proposed of the simplistic assumption that the ionization cross section of a cluster/fullerene is given as the product of the monomer ionization cross section and a factor m^a, where m is the number of monomers in the ensemble and a is a constant. A comparison between these calculations and the available experimental data reveals good agreement for n = 0, 1, and 3. In the case of ionization of C60^2+ (n = 2), the calculation lies significantly below the measured cross section, which was interpreted as an indication that additional indirect ionization processes are present for this charge state.

  17. Stability of numerical method for semi-linear stochastic pantograph differential equations

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2016-01-01

    As a particular expression of stochastic delay differential equations, stochastic pantograph differential equations have been widely used in nonlinear dynamics, quantum mechanics, and electrodynamics. In this paper, we mainly study the stability of analytical and numerical solutions of semi-linear stochastic pantograph differential equations. Some suitable conditions for the mean-square stability of the analytical solution are obtained. We then prove the general mean-square stability of the exponential Euler method for the numerical solution of semi-linear stochastic pantograph differential equations; that is, if the analytical solution is stable, then the exponential Euler method applied to the system is mean-square stable for arbitrary step-size h > 0. Numerical examples further illustrate the obtained theoretical results.
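
A hedged numerical sketch of the setting: the pantograph equation dX(t) = a·X(t) dt + b·X(qt) dW(t) has a delayed diffusion term evaluated at the proportionally scaled time qt. The coefficients, the grid-lookup approximation of X(qt), and this particular form of an exponential Euler step (exact integration of the linear drift between steps) are illustrative choices, not the paper's exact scheme; the Monte-Carlo estimate of E|X(T)|² simply exhibits the mean-square decay the stability result is about.

```python
import math, random

random.seed(1)
a, b, q = -2.0, 0.2, 0.5           # stable drift, small delayed diffusion, pantograph ratio
h, steps, paths = 0.01, 200, 500   # step-size h > 0, horizon T = 2
growth = math.exp(a * h)           # exact integration of the linear drift part

acc = 0.0
for _ in range(paths):
    x = [1.0]                                    # X(0) = 1, stored on the time grid
    for n in range(steps):
        dw = random.gauss(0.0, math.sqrt(h))     # Brownian increment, variance h
        x_delay = x[int(q * n)]                  # grid value just below q * t_n
        x.append(growth * (x[-1] + b * x_delay * dw))
    acc += x[-1] ** 2

mean_square_T = acc / paths        # Monte-Carlo estimate of E|X(T)|^2
```

With a stable drift (a < 0) and small b, the mean-square norm at T = 2 sits far below its initial value of 1.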

  18. Semi-supervised eigenvectors for large-scale locally-biased learning

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Mahoney, Michael W.

    2014-01-01

    In many applications, one has side information, e.g., labels that are provided in a semi-supervised manner, about a specific target region of a large data set, and one wants to perform machine learning and data analysis tasks nearby that prespecified target region. For example, one might …-based machine learning and data analysis tools. At root, the reason is that eigenvectors are inherently global quantities, thus limiting the applicability of eigenvector-based methods in situations where one is interested in very local properties of the data. In this paper, we address this issue by providing … improved scaling properties. We provide several empirical examples demonstrating how these semi-supervised eigenvectors can be used to perform locally-biased learning; and we discuss the relationship between our results and recent machine learning algorithms that use global eigenvectors of the graph…

  19. A semi-automated method for measuring thickness and white matter ...

    African Journals Online (AJOL)

    A semi-automated method for measuring thickness and white matter integrity of the corpus callosum. ... and interhemispheric differences. Future research will determine normal values for age and compare CC thickness with peripheral white matter volume loss in large groups of patients, using the semi-automated technique.

  20. Application of semi-empirical modeling and non-linear regression to unfolding fast neutron spectra from integral reaction rate data

    International Nuclear Information System (INIS)

    Harker, Y.D.

    1976-01-01

    A semi-empirical analytical expression representing a fast reactor neutron spectrum has been developed. This expression was used in a non-linear regression computer routine to obtain, from measured multiple-foil integral reaction data, the neutron spectrum inside the Coupled Fast Reactivity Measurement Facility. In this application, six parameters in the analytical expression for the neutron spectrum were adjusted in the non-linear fitting process to maximize consistency between calculated and measured integral reaction rates for a set of 15 dosimetry detector foils. In two-thirds of the observations the calculated integral agreed with its respective measured value to within the experimental standard deviation, and in all but one case agreement within two standard deviations was obtained. Based on this quality of fit, the estimated 70 to 75 percent confidence intervals for the derived spectrum are 10 to 20 percent for the energy range 100 eV to 1 MeV, 10 to 50 percent for 1 MeV to 10 MeV, and 50 to 90 percent for 10 MeV to 18 MeV. The analytical model has demonstrated the flexibility to describe salient features of neutron spectra of the fast reactor type, and the use of regression analysis with this model has produced a stable method to derive neutron spectra from a limited amount of integral data.
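
The unfolding idea can be made concrete with a toy computation: assume a parametric spectrum shape, compute the integral reaction rates a set of threshold detectors would see, then recover the parameters by minimizing the misfit between calculated and "measured" rates. The spectrum form, cross sections and parameter values here are all invented, and a crude grid search stands in for the paper's non-linear regression routine.

```python
import math

E = [0.1 * k for k in range(1, 101)]                   # energy grid (arbitrary units)
thresholds = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]           # one per hypothetical "foil"

def spectrum(e, p, T):
    return e ** p * math.exp(-e / T)                   # assumed two-parameter shape

def rates(p, T):
    # Integral reaction rate per foil: sum_i sigma(E_i) * phi(E_i) * dE, with a
    # smooth threshold-type cross section standing in for real dosimetry data.
    return [sum(1.0 / (1.0 + math.exp(-(e - th))) * spectrum(e, p, T) * 0.1
                for e in E) for th in thresholds]

measured = rates(0.5, 2.0)                             # synthetic "measurements"

def misfit(p, T):
    return sum((r - m) ** 2 for r, m in zip(rates(p, T), measured))

best = min(((p * 0.05, T * 0.05) for p in range(0, 21) for T in range(20, 61)),
           key=lambda pt: misfit(*pt))
```

With noiseless synthetic rates the search lands back on the generating parameters, the analogue of "maximizing consistency" between calculated and measured integrals.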

  1. An Empirical Method for Particle Damping Design

    Directory of Open Access Journals (Sweden)

    Zhi Wei Xu

    2004-01-01

    Particle damping is an effective vibration suppression method. The purpose of this paper is to develop an empirical method for particle damping design based on extensive experiments on three structural objects: a steel beam, a bond arm and a bond head stand. The relationships among several key parameters of the structure/particles are obtained, and procedures for the use of particle damping are proposed to provide guidelines for practical applications. It is believed that the results presented in this paper will be helpful in effectively implementing particle damping in various structural systems for the purpose of vibration suppression.

  2. Numerical simulation of 2D ablation profile in CCI-2 experiment by moving particle semi-implicit method

    Energy Technology Data Exchange (ETDEWEB)

    Chai, Penghui, E-mail: phchai@vis.t.u-tokyo.ac.jp; Kondo, Masahiro; Erkan, Nejdet; Okamoto, Koji

    2016-05-15

    Highlights: • Multiphysics models were developed based on Moving Particle Semi-implicit method. • Mixing process, chemical reaction can be simulated in MCCI calculation. • CCI-2 experiment was simulated to validate the models. • Simulation and experimental results for sidewall ablation agree well. • Simulation results confirm the rapid erosion phenomenon observed in the experiment. - Abstract: Numerous experiments have been performed to explore the mechanisms of molten core-concrete interaction (MCCI) phenomena since the 1980s. However, previous experimental results show that uncertainties pertaining to several aspects such as the mixing process and crust behavior remain. To explore the mechanism governing such aspects, as well as to predict MCCI behavior in real severe accident events, a number of simulation codes have been developed for process calculations. However, uncertainties exist among the codes because of the use of different empirical models. In this study, a new computational code is developed using multiphysics models to simulate MCCI phenomena based on the moving particle semi-implicit (MPS) method. Momentum and energy equations are used to solve the velocity and temperature fields, and multiphysics models are developed on the basis of the basic MPS method. The CCI-2 experiment is simulated by applying the developed code. With respect to sidewall ablation, good agreement is observed between the simulation and experimental results. However, axial ablation is slower in the simulation, which is probably due to the underestimation of the enhancement effect of heat transfer provided by the moving bubbles at the bottom. In addition, the simulation results confirm the rapid erosion phenomenon observed in the experiment, which in the numerical simulation is explained by solutal convection provided by the liquid concrete at the corium/concrete interface. The results of the comparison of different model combinations show the effect of each

  3. Improvement of spatial discretization error on the semi-analytic nodal method using the scattered source subtraction method

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Tatsumi, Masahiro

    2006-01-01

    In this paper, the scattered source subtraction (SSS) method is newly proposed to improve the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. In the SSS method, the scattered source is subtracted from both sides of the diffusion or transport equation to make the spatial variation of the source term small. The same neutron balance equation is still used in the SSS method. Since the SSS method just modifies the coefficients of the node coupling equations (those used to evaluate the response of partial currents), its implementation is easy. The validity of the present method is verified through test calculations carried out in PWR multi-assembly configurations. The calculation results show that the SSS method can significantly improve the spatial discretization error. Since the SSS method has no negative impact on execution time, convergence behavior or memory requirements, it will be useful for reducing the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. (author)

  4. Calibration strategy for semi-quantitative direct gas analysis using inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Gerdes, Kirk; Carter, Kimberly E.

    2011-01-01

    A process is described by which an ICP-MS equipped with an Octopole Reaction System (ORS) is calibrated using liquid-phase standards to facilitate direct analysis of gas-phase samples. The instrument response to liquid-phase standards is analyzed to produce empirical factors relating ion generation and transmission efficiencies to standard operating parameters. The empirical factors generated for liquid-phase samples are then used to produce semi-quantitative analyses of both mixed liquid/gas samples and pure gas samples. The method developed is similar to the semi-quantitative analysis algorithms in the commercial software, which have here been expanded to include gas-phase elements such as Xe and Kr. Equations for prediction of relative ionization efficiencies and isotopic transmission are developed for several combinations of plasma operating conditions, which allows adjustment of limited parameters between liquid and gas injection modes. In particular, the plasma temperature and electron density are calculated by comparing experimental results to the predictions of the Saha equation. Comparisons between operating configurations are made to determine the robustness of the analysis to plasma conditions and instrument operating parameters. Using the methods described in this research, the elemental concentrations in a liquid standard containing 45 analytes and treated as an unknown sample were quantified accurately to ±50% for most elements using ¹³³Cs as a single internal reference. The method predicts liquid-phase mercury within 12% of the actual concentration and gas-phase mercury within 28% of the actual concentration. The results verify that the calibration method facilitates accurate semi-quantitative gas-phase analysis of metal species, with sufficient sensitivity to quantify metal concentrations below 1 ppb for many metallic analytes.
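
The Saha equation mentioned above relates ionization balance to plasma temperature and electron density, which is why fitting it against measured responses yields those two quantities. A back-of-the-envelope sketch of the ratio n_ion/n_neutral = (2 g_i/g_0)(2π m_e k_B T/h²)^{3/2} exp(−χ/k_B T)/n_e follows; the ionization energy and statistical weights are for atomic hydrogen, and the electron density is an assumed plasma value, not one from the paper.

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
m_e = 9.1093837015e-31    # electron mass, kg
h   = 6.62607015e-34      # Planck constant, J s

def saha_ratio(T, n_e, chi_eV, g_ratio=1.0):
    """n_ion/n_neutral at temperature T [K] and electron density n_e [m^-3]."""
    chi = chi_eV * 1.602176634e-19                         # ionization energy in J
    A = 2.0 * g_ratio * (2.0 * math.pi * m_e * k_B * T / h ** 2) ** 1.5
    return A / n_e * math.exp(-chi / (k_B * T))

r_cool = saha_ratio(6000.0, 1e21, 13.6)   # assumed ICP-like electron density
r_hot  = saha_ratio(9000.0, 1e21, 13.6)
```

The strong temperature dependence of the exponential term is what makes the fit to experimental responses informative about the plasma temperature.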

  5. Semi top-down method combined with earth-bank, an effective method for basement construction.

    Science.gov (United States)

    Tuan, B. Q.; Tam, Ng M.

    2018-04-01

    Choosing an appropriate method of deep excavation plays a decisive role not only in the technical success but also in the economics of a construction project. At present, two key methods predominate: "bottom-up" and "top-down" construction. This paper presents another method of construction, "semi top-down combined with earth-bank", intended to retain the advantages and limit the weaknesses of the above methods. The bottom-up method is improved by using an earth-bank to stabilize the retaining walls instead of bracing steel struts; the top-down method is improved by using open-cut excavation for half of the earthwork quantities.

  6. Characteristic vibration patterns of odor compounds from bread-baking volatiles upon protein binding: density functional and ONIOM study and principal component analysis.

    Science.gov (United States)

    Treesuwan, Witcha; Hirao, Hajime; Morokuma, Keiji; Hannongbua, Supa

    2012-05-01

    As the mechanism underlying the sense of smell is unclear, different models have been used to rationalize structure-odor relationships. To gain insight into odorant molecules from bread baking, binding energies and vibration spectra in the gas phase and in the protein environment [7-transmembrane helices (7TMHs) of rhodopsin] were calculated using density functional theory [B3LYP/6-311++G(d,p)] and ONIOM [B3LYP/6-311++G(d,p):PM3] methods. It was found that acetaldehyde ("acid" category) binds strongly in the large cavity inside the receptor, whereas 2-ethyl-3-methylpyrazine ("roasted") binds weakly. Lys296, Tyr268, Thr118 and Ala117 were identified as key residues in the binding site. More emphasis was placed on how vibrational frequencies are shifted and intensities modified in the receptor protein environment. Principal component analysis (PCA) suggested that the frequency shifts of the C-C stretching, CH3 umbrella, C=O stretching and CH3 stretching modes have a significant effect on odor quality. In fact, the frequency shifts of the C-C stretching and C=O stretching modes, as well as the CH3 umbrella and CH3 symmetric stretching modes, exhibit different behaviors in the PCA loadings plot. A large frequency shift in the CH3 symmetric stretching mode is associated with the sweet-roasted odor category and separates it from the acid odor category. A large frequency shift of the C-C stretching mode describes the roasted and oily-popcorn odor categories, and separates these from the buttery and acid odor categories.

  7. A QM/MM–Based Computational Investigation on the Catalytic Mechanism of Saccharopine Reductase

    Directory of Open Access Journals (Sweden)

    James W. Gauld

    2011-10-01

    Saccharopine reductase from Magnaporthe grisea, an NADPH-containing enzyme in the α-aminoadipate pathway, catalyses the formation of saccharopine, a precursor to L-lysine, from the substrates glutamate and α-aminoadipate-δ-semialdehyde. Its catalytic mechanism has been investigated using quantum mechanics/molecular mechanics (QM/MM) ONIOM-based approaches. In particular, the overall catalytic pathway has been elucidated, and the effects of electron correlation and the anisotropic polar protein environment have been examined via the ONIOM(HF/6-31G(d):AMBER94) and ONIOM(MP2/6-31G(d)//HF/6-31G(d):AMBER94) methods within the mechanical embedding formalism, and the ONIOM(MP2/6-31G(d)//HF/6-31G(d):AMBER94) and ONIOM(MP2/6-311G(d,p)//HF/6-31G(d):AMBER94) methods within the electronic embedding formalism. The results of the present study suggest that saccharopine reductase utilises a substrate-assisted catalytic pathway in which acid/base groups within the cosubstrates themselves facilitate the mechanistically required proton transfers. Thus, the enzyme appears to act most likely by binding the three required reactant molecules, glutamate, α-aminoadipate-δ-semialdehyde and NADPH, in a manner and polar environment conducive to reaction.

  8. MR Imaging-based Semi-quantitative Methods for Knee Osteoarthritis

    Science.gov (United States)

    JARRAYA, Mohamed; HAYASHI, Daichi; ROEMER, Frank Wolfgang; GUERMAZI, Ali

    2016-01-01

    Magnetic resonance imaging (MRI)-based semi-quantitative (SQ) methods applied to knee osteoarthritis (OA) were introduced during the last decade and have since fundamentally changed our understanding of knee OA pathology. Several epidemiological studies and clinical trials have used MRI-based SQ methods to evaluate different outcome measures, and interest in MRI-based SQ scoring systems has led to their continuous update and refinement. This article reviews the different SQ approaches for MRI-based whole-organ assessment of knee OA and also discusses practical aspects of whole-joint assessment. PMID:26632537

  9. Application of a semi-automatic cartilage segmentation method for biomechanical modeling of the knee joint.

    Science.gov (United States)

    Liukkonen, Mimmi K; Mononen, Mika E; Tanska, Petri; Saarakkala, Simo; Nieminen, Miika T; Korhonen, Rami K

    2017-10-01

    Manual segmentation of articular cartilage from knee joint 3D magnetic resonance images (MRI) is a time consuming and laborious task. Thus, automatic methods are needed for faster and reproducible segmentations. In the present study, we developed a semi-automatic segmentation method based on radial intensity profiles to generate 3D geometries of knee joint cartilage which were then used in computational biomechanical models of the knee joint. Six healthy volunteers were imaged with a 3T MRI device and their knee cartilages were segmented both manually and semi-automatically. The values of cartilage thicknesses and volumes produced by these two methods were compared. Furthermore, the influences of possible geometrical differences on cartilage stresses and strains in the knee were evaluated with finite element modeling. The semi-automatic segmentation and 3D geometry construction of one knee joint (menisci, femoral and tibial cartilages) was approximately two times faster than with manual segmentation. Differences in cartilage thicknesses, volumes, contact pressures, stresses, and strains between segmentation methods in femoral and tibial cartilage were mostly insignificant (p > 0.05) and random, i.e. there were no systematic differences between the methods. In conclusion, the devised semi-automatic segmentation method is a quick and accurate way to determine cartilage geometries; it may become a valuable tool for biomechanical modeling applications with large patient groups.
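
The edge-detection step implied by segmentation from radial intensity profiles can be sketched simply: sample intensities outward from a seed point and place a tissue boundary at the strongest intensity jump. The synthetic profile below (a bright cartilage-like band over a darker background) and its sample positions are invented for illustration; the paper's actual method is more elaborate.

```python
def boundary_index(profile):
    """Index of the sample just before the largest absolute intensity jump."""
    grads = [abs(profile[i + 1] - profile[i]) for i in range(len(profile) - 1)]
    return max(range(len(grads)), key=lambda i: grads[i])

# Synthetic radial profile: background (~10), bright plateau (~100) spanning
# samples 12-19, then background again.
profile = [10] * 12 + [100] * 8 + [12] * 10
edge_in = boundary_index(profile)               # inner surface of the bright band
edge_out = 12 + boundary_index(profile[12:])    # outer surface, past the first edge
```

Repeating this along many radial lines yields a closed boundary that can be meshed into the 3D geometry a finite element model needs.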

  10. Reliability of a semi-quantitative method for dermal exposure assessment (DREAM)

    NARCIS (Netherlands)

    Wendel de Joode, B. van; Hemmen, J.J. van; Meijster, T.; Major, V.; London, L.; Kromhout, H.

    2005-01-01

    Valid and reliable semi-quantitative dermal exposure assessment methods for epidemiological research and for occupational hygiene practice, applicable for different chemical agents, are practically nonexistent. The aim of this study was to assess the reliability of a recently developed

  11. An empirical method to estimate bulk particulate refractive index for ocean satellite applications

    Digital Repository Service at National Institute of Oceanography (India)

    Suresh, T.; Desa, E.; Mascarenhas, A.A.M.Q.; Matondkar, S.G.P.; Naik, P.; Nayak, S.R.

    An empirical method is presented here to estimate the bulk particulate refractive index using the measured inherent and apparent optical properties from the various water types of the Arabian Sea. The empirical model, where the bulk refractive index...

  12. An alternative method for determination of oscillator strengths: The example of Sc II

    International Nuclear Information System (INIS)

    Ruczkowski, J.; Elantkowska, M.; Dembczyński, J.

    2014-01-01

    We describe our method for determining oscillator strengths and hyperfine structure splittings that is an alternative to the commonly used, purely theoretical calculations, or to the semi-empirical approach combined with theoretically calculated transition integrals. We have developed our own computer programs that allow us to determine all attributes of the structure of complex atoms starting from the measured frequencies emitted by the atoms. As an example, we present the results of the calculation of the structure, electric dipole transitions, and hyperfine splittings of Sc II. The angular coefficients of the transition matrix in pure SL coupling were found from straightforward Racah algebra. The transition matrix was transformed into the actual intermediate coupling by the fine structure eigenvectors obtained from the semi-empirical approach. The transition integrals were treated as free parameters in the least squares fit to experimental gf values. For most transitions, the experimental and the calculated gf-values are consistent with the accuracy claimed in the NIST compilation. - Highlights: • The method of simultaneous determination of all the attributes of atomic structure. • The semi-empirical method of parameterization of oscillator strengths. • Illustration of the method application for the example of Sc II data

  13. Variational principles for Ginzburg-Landau equation by He's semi-inverse method

    International Nuclear Information System (INIS)

    Liu, W.Y.; Yu, Y.J.; Chen, L.D.

    2007-01-01

    Via the semi-inverse method of establishing variational principles proposed by He, a generalized variational principle is established for Ginzburg-Landau equation. The present theory provides a quite straightforward tool to the search for various variational principles for physical problems. This paper aims at providing a more complete theoretical basis for applications using finite element and other direct variational methods

  14. Moving Particle Semi-implicit method: a numerical method for thermal hydraulic analysis with topological deformations

    International Nuclear Information System (INIS)

    Koshizuka, S.; Oka, Y.

    1997-01-01

    The Moving Particle Semi-implicit (MPS) method is presented. Partial differential operators in the governing equations, such as the gradient and the Laplacian, are modeled as particle interactions without grids. A semi-implicit algorithm is used for incompressible flow analysis. In the present study, calculation models of moving solids, thin structures and phase change between liquid and gas are developed. Interaction between breaking waves and a floating solid is simulated using the model of moving solids. Calculations of collapsing water with a thin vertical plate show that water spills out over the plate, which is largely deformed. Impingement of water jets on a molten metal pool is analyzed to investigate fundamental processes of vapor explosions. Water, vapor and molten metal are simultaneously calculated with evaporation. This calculation reveals that filaments of the molten metal emerge as the fragmentation process of vapor explosions. The MPS method is useful for complex problems involving moving interfaces even if topological deformations occur. (author)
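
The grid-free discretization at the heart of MPS can be shown in a few lines: each particle carries a number density computed from a kernel weight over its neighbours, and the gradient and Laplacian models are then built from the same weights. Below is the standard weight w(r) = r_e/r − 1 and the particle number density evaluated on a small 2D block of particles; the spacing and effective radius are arbitrary demo values.

```python
def weight(r, r_e):
    """Standard MPS kernel: large for close neighbours, zero beyond r_e."""
    return r_e / r - 1.0 if 0.0 < r < r_e else 0.0

def number_density(i, pts, r_e):
    xi, yi = pts[i]
    return sum(weight(((x - xi) ** 2 + (y - yi) ** 2) ** 0.5, r_e)
               for j, (x, y) in enumerate(pts) if j != i)

d = 0.1                                     # particle spacing (demo value)
pts = [(ix * d, iy * d) for ix in range(9) for iy in range(9)]
r_e = 2.1 * d                               # typical effective radius ~2.1 spacings

n_center = number_density(4 * 9 + 4, pts, r_e)   # particle inside the block
n_corner = number_density(0, pts, r_e)           # particle at a corner
```

The density deficit at the corner is exactly how MPS detects free surfaces: particles whose number density falls below a threshold of the interior value are flagged as surface particles, which is what lets the method track breaking waves and fragmenting melt without a grid.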

  15. Semi-classical quantization non-manifestly using the method of harmonic balance

    International Nuclear Information System (INIS)

    Stepanov, S.S.; Tutik, R.S.; Yaroshenko, A.P.; Schlippe, W. von.

    1990-01-01

    Based on the ideas of the harmonic balance method and h-expansion a semi-classical procedure for deriving approximations to the energy levels of one-dimensional quantum systems is developed. The procedure is applied to treat the perturbed oscillator potentials. 12 refs.; 2 tabs
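
A numerical illustration of semi-classical quantization of the Bohr-Sommerfeld type: pick the energy E_n at which the action integral over one period equals 2π(n + 1/2), in units with ħ = m = 1. For the harmonic oscillator V(x) = x²/2 this reproduces E_n = n + 1/2 exactly, which makes it a convenient check; note the paper's own procedure is built on harmonic balance, not on this textbook integral.

```python
import math

def action(E):
    # S(E) = 2 * integral_{-x0}^{x0} sqrt(2(E - V(x))) dx with V(x) = x^2/2,
    # evaluated by the midpoint rule after the substitution x = x0*sin(theta),
    # which removes the square-root singularity at the turning points.
    x0 = math.sqrt(2.0 * E)
    m = 1000
    s = 0.0
    for k in range(m):
        th = -math.pi / 2 + (k + 0.5) * math.pi / m
        x = x0 * math.sin(th)
        p = math.sqrt(max(2.0 * (E - 0.5 * x * x), 0.0))
        s += p * x0 * math.cos(th) * (math.pi / m)
    return 2.0 * s

def quantized_energy(n, lo=1e-6, hi=50.0):
    target = 2.0 * math.pi * (n + 0.5)
    for _ in range(60):                       # bisection: S(E) increases with E
        mid = 0.5 * (lo + hi)
        if action(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

levels = [quantized_energy(n) for n in range(4)]
```

Swapping in an anharmonic V(x) only changes the `action` integrand, which is what makes quantization conditions of this kind useful for perturbed oscillator potentials.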

  16. Evaluation of three semi-empirical approaches to estimate the net radiation over a drip-irrigated olive orchard

    Directory of Open Access Journals (Sweden)

    Rafael López-Olivari

    2015-09-01

    The use of actual evapotranspiration (ETa) models requires an appropriate parameterization of the available energy, of which net radiation (Rn) is the most important component. Thus, a study was carried out to calibrate and evaluate three semi-empirical approaches to estimate net radiation (Rn) over a drip-irrigated olive (Olea europaea L. 'Arbequina') orchard during the 2009/2010 and 2010/2011 seasons. The orchard was planted in 2005 at high density in the Pencahue Valley, Maule Region, Chile. The evaluated models were calculated using the balance between long- and short-wave radiation. To achieve this objective it was assumed that Ts = Ta for Model 1, Ts = Tv for Model 2 and Ts = Tr for Model 3 (where Ts is the surface temperature, Ta is the air temperature, Tv is the temperature inside the tree canopy, and Tr is the radiometric temperature). For the three models, Brutsaert's empirical coefficient (Φ) was calibrated using the incoming long-wave radiation equation with the database of the 2009/2010 season; the calibration indicated that Φ was equal to 1.75. Using the database from the 2010/2011 season, the validation indicated that the three models were able to predict Rn at a 30-min interval with errors lower than 6%, root mean square error (RMSE) between 26 and 39 W m-2 and mean absolute error (MAE) between 20 and 31 W m-2. On daily time intervals, the validation indicated that the models presented errors between 2% and 3%, RMSE between 1.22 and 1.54 MJ m-2 d-1, and MAE between 1.04 and 1.35 MJ m-2 d-1. The three Rn models could thus be evaluated and used in other Mediterranean conditions, according to the availability of data, to estimate net radiation over a drip-irrigated olive orchard planted at high density.
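
The long-wave/short-wave balance behind such Rn models can be sketched directly: net radiation is the absorbed short-wave plus incoming minus outgoing long-wave, with the atmospheric emissivity taken from Brutsaert's formula ε_a = Φ(e_a/T_a)^{1/7}. The coefficient 1.24 used below is Brutsaert's classical clear-sky value (the study above calibrates its own Φ for its site and formulation), and the albedo, temperatures, vapour pressure and surface emissivity are invented sample values.

```python
# Sketch of a semi-empirical net radiation estimate (not the paper's exact models).
SIGMA = 5.670374419e-8        # Stefan-Boltzmann constant, W m-2 K-4

def net_radiation(rs_in, albedo, t_air_k, t_surf_k, e_a_hpa, phi=1.24):
    """Rn [W m-2] = absorbed short-wave + incoming long-wave - outgoing long-wave."""
    eps_a = phi * (e_a_hpa / t_air_k) ** (1.0 / 7.0)   # Brutsaert clear-sky emissivity
    lw_in = eps_a * SIGMA * t_air_k ** 4
    lw_out = 0.98 * SIGMA * t_surf_k ** 4              # assumed surface emissivity 0.98
    return (1.0 - albedo) * rs_in + lw_in - lw_out

# Midday sample values: 700 W m-2 solar input, mild canopy-air temperature contrast.
rn = net_radiation(rs_in=700.0, albedo=0.18, t_air_k=298.0,
                   t_surf_k=302.0, e_a_hpa=14.0)
```

Swapping the surface temperature argument between air, canopy and radiometric temperature is precisely what distinguishes the three models compared in the study.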

  17. Semi-Local DFT Functionals with Exact-Exchange-Like Features: Beyond the AK13

    Science.gov (United States)

    Armiento, Rickard

    The Armiento-Kümmel functional from 2013 (AK13) is a non-empirical semi-local exchange functional of generalized gradient approximation (GGA) form in Kohn-Sham (KS) density functional theory (DFT). Recent works have established that AK13 gives improved electronic-structure exchange features over other semi-local methods, with a qualitatively improved orbital description and band structure. For example, the Kohn-Sham band gap is greatly extended, as it is for exact exchange. This talk outlines recent efforts towards new exchange-correlation functionals based on, and extending, the AK13 design ideas. The aim is to improve the quantitative accuracy and the description of energetics, and to address other issues found with the original formulation. Swedish e-Science Research Centre (SeRC).

  18. Towards Multi-Method Research Approach in Empirical Software Engineering

    Science.gov (United States)

    Mandić, Vladimir; Markkula, Jouni; Oivo, Markku

    This paper presents the results of a literature analysis of Empirical Research Approaches in Software Engineering (SE). The analysis explores the reasons why traditional methods, such as statistical hypothesis testing and experiment replication, are weakly utilized in the field of SE. It appears that the basic assumptions and preconditions of the traditional methods contradict the actual situation in SE. Furthermore, we have identified the main issues that should be considered by the researcher when selecting a research approach. Given the reasons for the weak utilization of traditional methods, we propose stronger use of a Multi-Method approach with Pragmatism as the philosophical standpoint.

  19. Semi-automatic version of the potentiometric titration method for characterization of uranium compounds

    International Nuclear Information System (INIS)

    Cristiano, Bárbara F.G.; Delgado, José Ubiratan; Wanderley S da Silva, José; Barros, Pedro D. de; Araújo, Radier M.S. de; Dias, Fábio C.; Lopes, Ricardo T.

    2012-01-01

    The potentiometric titration method was used for the characterization of uranium compounds to be applied in intercomparison programs. The method is applied with traceability assured using a potassium dichromate primary standard. A semi-automatic version was developed to reduce the analysis time and the operator variation. The standard uncertainty in determining the total concentration of uranium was around 0.01%, which is suitable for uranium characterization and compatible with that obtained by manual techniques. - Highlights: ► A semi-automatic potentiometric titration method was developed for U characterization. ► K2Cr2O7 was the only certified reference material used. ► Values obtained for U3O8 samples were consistent with the certified ones. ► An uncertainty of 0.01% was useful for characterization and intercomparison programs.

  20. A study of the relationship between the semi-classical and the generator coordinate methods

    International Nuclear Information System (INIS)

    Passos, E.J.V. de; Souza Cruz, F.F. de.

    Using a very simple type of wave packet, obtained by letting unitary displacement operators, whose generators are the canonical operators Q and P in the many-body Hilbert space, act on a reference state, the relationship between the semi-classical and the generator coordinate methods is investigated. The semi-classical method is based on the time-dependent variational principle, whereas in the generator coordinate method the wave packets are taken as generator states. To establish the equivalence of the two methods, the concept of redundancy of the wave packet and the importance of zero-point energy effects are examined in detail, using tools developed in previous works. A numerical application to the case of the Goldhaber-Teller mode in 4He is made. (Author) [pt

  1. Climate Prediction for Brazil's Nordeste: Performance of Empirical and Numerical Modeling Methods.

    Science.gov (United States)

    Moura, Antonio Divino; Hastenrath, Stefan

    2004-07-01

    Comparisons of the performance of climate forecast methods require consistency in the predictand and a long common reference period. For Brazil's Nordeste, empirical methods developed at the University of Wisconsin use preseason (October-January) rainfall and January indices of the fields of the meridional wind component and sea surface temperature (SST) in the tropical Atlantic and the equatorial Pacific as input to stepwise multiple regression and neural networking. These are used to predict the March-June rainfall at a network of 27 stations. An experiment at the International Research Institute for Climate Prediction, Columbia University, with a numerical model (ECHAM4.5) used global SST information through February to predict the March-June rainfall at three grid points in the Nordeste. The predictands for the empirical and numerical model forecasts are correlated at +0.96, and the period common to the independent portion of the record of the empirical prediction and the numerical modeling is 1968-99. Over this period, predicted versus observed rainfall are evaluated in terms of correlation, root-mean-square error, absolute error, and bias. Performance is high for both approaches. Numerical modeling produces a correlation of +0.68, moderate errors, and a strong negative bias. For the empirical methods, errors and bias are small, and correlations of +0.73 and +0.82 are reached between predicted and observed rainfall.

  2. An empirical study of ensemble-based semi-supervised learning approaches for imbalanced splice site datasets.

    Science.gov (United States)

    Stanescu, Ana; Caragea, Doina

    2015-01-01

    Recent biochemical advances have led to inexpensive, time-efficient production of massive volumes of raw genomic data. Traditional machine learning approaches to genome annotation typically rely on large amounts of labeled data. The process of labeling data can be expensive, as it requires domain knowledge and expert involvement. Semi-supervised learning approaches that can make use of unlabeled data, in addition to small amounts of labeled data, can help reduce the costs associated with labeling. In this context, we focus on the problem of predicting splice sites in a genome using semi-supervised learning approaches. This is a challenging problem, due to the highly imbalanced distribution of the data, i.e., small number of splice sites as compared to the number of non-splice sites. To address this challenge, we propose to use ensembles of semi-supervised classifiers, specifically self-training and co-training classifiers. Our experiments on five highly imbalanced splice site datasets, with positive to negative ratios of 1-to-99, showed that the ensemble-based semi-supervised approaches represent a good choice, even when the amount of labeled data consists of less than 1% of all training data. In particular, we found that ensembles of co-training and self-training classifiers that dynamically balance the set of labeled instances during the semi-supervised iterations show improvements over the corresponding supervised ensemble baselines. In the presence of limited amounts of labeled data, ensemble-based semi-supervised approaches can successfully leverage the unlabeled data to enhance supervised ensembles learned from highly imbalanced data distributions. Given that such distributions are common for many biological sequence classification problems, our work can be seen as a stepping stone towards more sophisticated ensemble-based approaches to biological sequence annotation in a semi-supervised framework.
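The self-training idea above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: `NearestCentroid` is a hypothetical stand-in base learner, and the dynamic class rebalancing the authors describe is omitted.

```python
import numpy as np

class NearestCentroid:
    """Tiny stand-in base learner (the paper uses standard classifiers)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict_proba(self, X):
        # Confidence from inverse distance to each class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        inv = 1.0 / (d + 1e-9)
        return inv / inv.sum(axis=1, keepdims=True)
    def predict(self, X):
        return self.classes_[self.predict_proba(X).argmax(axis=1)]

def self_train(clf, X_lab, y_lab, X_unlab, n_iter=5, conf=0.8):
    """Each iteration pseudo-labels the unlabeled points the classifier is
    most confident about and moves them into the labeled pool."""
    for _ in range(n_iter):
        clf.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        sure = proba.max(axis=1) >= conf
        if not sure.any():
            break
        X_lab = np.vstack([X_lab, X_unlab[sure]])
        y_lab = np.concatenate([y_lab, clf.classes_[proba[sure].argmax(axis=1)]])
        X_unlab = X_unlab[~sure]
    return clf.fit(X_lab, y_lab)
```

Co-training follows the same loop with two classifiers trained on different feature views, each labeling data for the other.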

  3. Semi-empirical simulation of thermoluminescent response under different filter geometries; Simulacao semi-empirica da resposta termoluminescente sob diferentes geometrias de filtro

    Energy Technology Data Exchange (ETDEWEB)

    Shammas, Gabriel Issa Jabra

    2006-07-01

    Many thermoluminescent materials have been developed and used for personal photon dosimetry, but none has all the desired characteristics alone. These characteristics include robustness, high sensitivity, photon-energy independence, a large range of photon energy detection, good reproducibility, small fading and a simple glow curve with peaks above 150 deg C. The dysprosium-doped calcium sulfate (CaSO{sub 4}:Dy) phosphor thermoluminescent dosimeter (TLD) has been used by many laboratories, mainly in Brazil and India. Another interesting phosphor is calcium fluoride (CaF{sub 2}). The advantages of these phosphors are increasingly in demand, and their disadvantages have become more apparent, in an ever more competitive global market. These phosphors are used in environmental and area monitoring, since they are more sensitive than other phosphors, such as LiF:Mg. Their main disadvantage is a strongly energy-dependent response, which must be corrected for their application in the field, where the photon radiation is unknown a priori. An interesting way to make this correction for orthogonal incidence of the radiation on the phosphor is to interpose a plane perforated filter between the beam and the phosphor. In order to reduce the energy dependence at any incidence angle, also reducing the field dose measurement uncertainty, this work presents a simulation study of spherical filter geometries. Photon irradiations were simulated with gamma rays of {sup 60}Co and x-rays of 33, 48 and 118 keV, at many incidence angles from zero to ninety degrees. These semi-empirical computational simulations, using finite differences in three dimensions, were done in spherical coordinates. The results pointed out the best filter thicknesses and widths for optimizing the correction of the energy dependence. (author)

  4. A semi-automated method of monitoring dam passage of American Eels Anguilla rostrata

    Science.gov (United States)

    Welsh, Stuart A.; Aldinger, Joni L.

    2014-01-01

    Fish passage facilities at dams have become an important focus of fishery management in riverine systems. Given the personnel and travel costs associated with physical monitoring programs, automated or semi-automated systems are an attractive alternative for monitoring fish passage facilities. We designed and tested a semi-automated system for eel ladder monitoring at Millville Dam on the lower Shenandoah River, West Virginia. A motion-activated eel ladder camera (ELC) photographed each yellow-phase American Eel Anguilla rostrata that passed through the ladder. Digital images (with date and time stamps) of American Eels allowed for total daily counts and measurements of eel TL using photogrammetric methods with digital imaging software. We compared physical counts of American Eels with camera-based counts; TLs obtained with a measuring board were compared with TLs derived from photogrammetric methods. Data from the ELC were consistent with data obtained by physical methods, thus supporting the semi-automated camera system as a viable option for monitoring American Eel passage. Time stamps on digital images allowed for the documentation of eel passage time—data that were not obtainable from physical monitoring efforts. The ELC has application to eel ladder facilities but can also be used to monitor dam passage of other taxa, such as crayfishes, lampreys, and water snakes.

  5. Mathematical properties of a semi-classical signal analysis method: Noisy signal case

    KAUST Repository

    Liu, Dayan

    2012-08-01

    Recently, a new signal analysis method based on a semi-classical approach has been proposed [1]. The main idea in this method is to interpret a signal as a potential of a Schrodinger operator and then to use the discrete spectrum of this operator to analyze the signal. In this paper, we are interested in a mathematical analysis of this method in discrete case considering noisy signals. © 2012 IEEE.

  6. Mathematical properties of a semi-classical signal analysis method: Noisy signal case

    KAUST Repository

    Liu, Dayan; Laleg-Kirati, Taous-Meriem

    2012-01-01

    Recently, a new signal analysis method based on a semi-classical approach has been proposed [1]. The main idea in this method is to interpret a signal as a potential of a Schrodinger operator and then to use the discrete spectrum of this operator to analyze the signal. In this paper, we are interested in a mathematical analysis of this method in discrete case considering noisy signals. © 2012 IEEE.

  7. A semi-Markov model for the duration of stay in a non-homogenous ...

    African Journals Online (AJOL)

    The semi-Markov approach to a non-homogenous manpower system is considered. The mean duration of stay in a grade and the total duration of stay in the system are obtained. A renewal-type equation is developed and used in deriving the limiting distribution of the semi-Markov process. Empirical estimators of the ...

  8. X-ray structure, semi-empirical MO calculations and π-electron delocalization of 1-cyanoacetyl-5-trifluoromethyl-5-hydroxy-4,5-dihydro-1 H-pyrazoles

    Science.gov (United States)

    Martins, Marcos A. P.; Moreira, Dayse N.; Frizzo, Clarissa P.; Campos, Patrick T.; Longhi, Kelvis; Marzari, Mara R. B.; Zanatta, Nilo; Bonacorso, Helio G.

    2010-04-01

    The structure of three 1-cyanoacetyl-3-alkyl[aryl]-5-trifluoromethyl-5-hydroxy-4,5-dihydro-1H-pyrazoles (1-3) has been determined by X-ray diffractometry. The 4,5-dihydro-1H-pyrazole rings were obtained as almost planar structures, with RMS deviations in the range of 0.0196-0.0736 Å. The data demonstrate that the molecular packing depends on the substituent present in each molecule. In addition, a computational investigation using the semi-empirical AM1 and RM1 methods was performed in order to investigate the correlation between experimental and calculated geometrical parameters. The data obtained suggest that the structural data furnished by the AM1 method are in better agreement with those experimentally determined for the above compounds. An analysis of the π-electron delocalization by HOMA calculations indicates that there is a hyperconjugation effect of the imine group toward the phenyl group at the ring 3-position of compound 2, and that this resonance effect decreases in compounds 1 and 3. In addition, it was shown that the N(1)-C(6) bond does not have an amide character. Thus, the O(6)-C(6)-N(1)-N(2)-C(3) fragment is not completely delocalized, mainly due to the low π-electron delocalization in the N(1)-N(2) bond for all compounds.

  9. Semi empirical model for astrophysical nuclear fusion reactions of 1≤Z≤15

    International Nuclear Information System (INIS)

    Manjunatha, H.C.; Seenappa, L.; Sridhar, K.N.

    2017-01-01

    The fusion reaction is one of the most important reactions in stellar evolution. Due to the complicated reaction mechanism of fusion, there is great uncertainty in the reaction rate, which limits our understanding of various stellar objects. Low-Z elements are formed through many fusion reactions such as 4He+12C→16O, 12C+12C→20Ne+4He, 12C+12C→23Na, 12C+12C→23Mg, 16O+16O→28Si+4He, 12C+1H→13N and 13C+4He→16O. A detailed study of the Coulomb and nuclear interactions in the formation of low-Z elements in stars through fusion reactions is required. For astrophysics, the important energy range extends from 1 MeV to 3 MeV in the center-of-mass frame, which is only partially covered by experiments. In the present work, we have studied the basic fusion parameters such as the barrier heights (V_B), positions (R_B), curvatures of the inverted parabola (ħω_1) for the fusion barrier, the cross section and the compound nucleus formation probability (P_CN), and the fusion process in the formation of low-Z elements (1≤Z≤15). For each isotope, we have studied all possible projectile-target combinations. We have also studied the astrophysical S(E) factor for these reactions. Based on this study, we have formulated semi-empirical relations for the barrier heights (V_B), positions (R_B) and curvatures of the inverted parabola, and hence for the fusion cross section and the astrophysical S(E) factor. The values produced by the present model are compared with experiments and data available in the literature. (author)
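The astrophysical S(E) factor factors the steep Coulomb-penetration energy dependence out of the cross section: σ(E) = S(E) E⁻¹ exp(-2πη), with η the Sommerfeld parameter. A sketch using the common numerical form of 2πη (the specific S-factor value in the usage below is illustrative, not from the paper):

```python
import math

def gamow_2pi_eta(z1, z2, mu_amu, e_kev):
    """2*pi*eta in the common numerical form
    2*pi*eta = 31.29 * Z1 * Z2 * sqrt(mu / E),
    with the reduced mass mu in amu and the c.m. energy E in keV."""
    return 31.29 * z1 * z2 * math.sqrt(mu_amu / e_kev)

def cross_section_barn(s_kev_barn, z1, z2, mu_amu, e_kev):
    """sigma(E) = S(E)/E * exp(-2*pi*eta); S in keV*barn gives sigma in barn."""
    return (s_kev_barn / e_kev) * math.exp(-gamow_2pi_eta(z1, z2, mu_amu, e_kev))
```

For 12C+12C (Z1 = Z2 = 6, mu = 6 amu) the exponential drives the cross section down by many orders of magnitude between 2 MeV and 1 MeV, which is why the astrophysically important range is so hard to cover experimentally.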

  10. An empirical method for dynamic camouflage assessment

    Science.gov (United States)

    Blitch, John G.

    2011-06-01

    As camouflage systems become increasingly sophisticated in their potential to conceal military personnel and precious cargo, evaluation methods need to evolve as well. This paper presents an overview of one such attempt to explore alternative methods for empirical evaluation of dynamic camouflage systems which aspire to keep pace with a soldier's movement through rapidly changing environments that are typical of urban terrain. Motivating factors are covered first, followed by a description of the Blitz Camouflage Assessment (BCA) process and results from an initial proof of concept experiment conducted in November 2006. The conclusion drawn from these results, related literature and the author's personal experience suggest that operational evaluation of personal camouflage needs to be expanded beyond its foundation in signal detection theory and embrace the challenges posed by high levels of cognitive processing.

  11. Semi-Empiric Algorithm for Assessment of the Vehicle Mobility

    Directory of Open Access Journals (Sweden)

    Ticusor CIOBOTARU

    2009-12-01

    Full Text Available The mobility of military vehicles plays a key role in operations. The ability to reach the desired area in the theatre of war is the most important condition for the successful accomplishment of a military vehicle's mission. Off-road vehicles face a broad spectrum of terrains to cross; these terrains differ in geometry and soil characteristics. The NATO Reference Mobility Model (NRMM) software is based on empirical relationships between the terrain characteristics, running conditions and vehicle design. The paper presents the main results of a comparative mobility analysis for the M1 and HMMWV vehicles obtained using NRMM.

  12. Semi-quantitative methods yield greater inter- and intraobserver agreement than subjective methods for interpreting 99m technetium-hydroxymethylene-diphosphonate uptake in equine thoracic processi spinosi.

    Science.gov (United States)

    van Zadelhoff, Claudia; Ehrle, Anna; Merle, Roswitha; Jahn, Werner; Lischer, Christoph

    2018-05-09

    Scintigraphy is a standard diagnostic method for evaluating horses with back pain due to suspected thoracic processus spinosus pathology. Lesion detection is based on subjective or semi-quantitative assessments of increased uptake. This retrospective, analytical study aimed to compare semi-quantitative and subjective methods in the evaluation of scintigraphic images of the processi spinosi in the equine thoracic spine. Scintigraphic images of 20 Warmblood horses, presented for assessment of orthopedic conditions between 2014 and 2016, were included in the study. Randomized, blinded image evaluation was performed by 11 veterinarians using subjective and semi-quantitative methods. Subjective grading was performed for the analysis of red-green-blue and grayscale scintigraphic images, which were presented in full size or as masked images. For the semi-quantitative assessment, observers placed regions of interest over each processus spinosus. The uptake ratio of each processus spinosus in comparison to a reference region of interest was determined. Subsequently, a modified semi-quantitative calculation was developed whereby only the highest counts-per-pixel values for a specified number of pixels were processed. Inter- and intraobserver agreement was calculated using intraclass correlation coefficients. Inter- and intraobserver intraclass correlation coefficients were 41.65% and 71.39%, respectively, for the subjective image assessment. Additionally, a correlation between intraobserver agreement, experience, and grayscale images was identified. The inter- and intraobserver agreement was significantly increased when using semi-quantitative analysis (97.35% and 98.36%, respectively) or the modified semi-quantitative calculation (98.61% and 98.82%, respectively). The proposed modified semi-quantitative technique showed a higher inter- and intraobserver agreement when compared to other methods, which makes it a useful tool for the analysis of scintigraphic images. The

  13. Semi-implicit semi-Lagrangian modelling of the atmosphere: a Met Office perspective

    Directory of Open Access Journals (Sweden)

    Benacchio Tommaso

    2016-09-01

    Full Text Available The semi-Lagrangian numerical method, in conjunction with semi-implicit time integration, provides numerical weather prediction models with numerical stability for large time steps, accurate modes of interest, and good representation of hydrostatic and geostrophic balance. Drawing on the legacy of dynamical cores at the Met Office, the use of the semi-implicit semi-Lagrangian method in an operational numerical weather prediction context is surveyed, together with details of the solution approach and associated issues and challenges. The numerical properties and performance of the current operational version of the Met Office’s numerical model are then investigated in a simplified setting along with the impact of different modelling choices.
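The core of a semi-Lagrangian step — trace each grid point back along the flow and interpolate at the departure point — can be sketched for 1-D linear advection. This is a toy illustration of the idea, not the Met Office dynamical core:

```python
import numpy as np

def semi_lagrangian_step(q, u, dt, dx):
    """One semi-Lagrangian step for dq/dt + u*dq/dx = 0 on a periodic
    1-D grid: find the departure point of each grid point and linearly
    interpolate q there. Unlike explicit Eulerian upwinding, the step
    stays stable for Courant numbers well above one."""
    n = len(q)
    x = np.arange(n) * dx
    x_dep = (x - u * dt) % (n * dx)          # departure points (periodic)
    i0 = np.floor(x_dep / dx).astype(int) % n
    i1 = (i0 + 1) % n
    w = (x_dep / dx) - np.floor(x_dep / dx)  # linear interpolation weight
    return (1 - w) * q[i0] + w * q[i1]
```

With a uniform wind the scheme exactly conserves the total of q, since each interpolation's weights sum to one; the accuracy limitation noted in the abstract enters through the trajectory and interpolation errors, not stability.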

  14. Hydrodynamic Modeling for Autonomous Underwater Vehicles Using Computational and Semi-Empirical Methods

    OpenAIRE

    Geisbert, Jesse Stuart

    2007-01-01

    Buoyancy driven underwater gliders, which locomote by modulating their buoyancy and their attitude with moving mass actuators and inflatable bladders, are proving their worth as efficient long-distance, long-duration ocean sampling platforms. Gliders have the capability to travel thousands of kilometers without a need to stop or recharge. There is a need for the development of methods for hydrodynamic modeling. This thesis aims to determine the hydrodynamic parameters for the governing equat...

  15. Optimizing irrigation and nitrogen for wheat through empirical modeling under semi-arid environment.

    Science.gov (United States)

    Saeed, Umer; Wajid, Syed Aftab; Khaliq, Tasneem; Zahir, Zahir Ahmad

    2017-04-01

    reducing irrigation from I300 to I240 mm during 2012-2013 and 2013-2014 did not reduce crop yield significantly (P nitrogen application ranged from 31.2 to 55.4% at N180 and N240 kg ha-1 for different levels of irrigation. It is concluded from the study that the irrigation-nitrogen relationship can be used for efficient management of irrigation and nitrogen and to reduce nitrogen losses. The empirical equations developed in this study can help farmers in semi-arid environments to calculate the optimum levels of irrigation and nitrogen for maximum economic return from wheat.

  16. Empirical Evidence or Intuition? An Activity Involving the Scientific Method

    Science.gov (United States)

    Overway, Ken

    2007-01-01

    Students need to have a basic understanding of the scientific method in their introductory science classes, and for this purpose an activity was devised involving a game based on the famous Monty Hall problem. This activity allowed students to banish or confirm their intuition on the basis of empirical evidence.
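An activity of this kind is easy to back with simulation: a short sketch of the Monty Hall game shows the switch strategy winning about 2/3 of the time, against most people's intuition of 1/2.

```python
import random

def monty_hall(trials=100_000, switch=True, seed=1):
    """Empirically estimate the win rate of the stay vs. switch strategy."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # The host opens a goat door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials
```

Switching wins exactly when the initial pick misses the car, which happens with probability 2/3 — the simulation converges to that value.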

  17. Semi-empirical software for the aluminothermic and carbothermic reactions

    Directory of Open Access Journals (Sweden)

    Milorad Gavrilovski

    2014-09-01

    Full Text Available Understanding the reaction thermochemistry, as well as formatting the empirical data on element distribution in the gas-metal-slag phases, is essential for creating a good model of aluminothermic and carbothermic reactions. In this paper, the modeling of the material and energy balance of these reactions is described together with the algorithm. The software based on this model was originally developed for the production of high-purity ferroalloys through the aluminothermic process and then extended to some carbothermic processes. Model validation is demonstrated with the production of FeTi, FeW, FeB and FeMo in the aluminothermic process and the reduction of mill scale, pyrite cinders and magnetite fines in the carbothermic process.

  18. Obtaining bixin from semi-defatted annatto seeds by a mechanical method and solvent extraction: Process integration and economic evaluation.

    Science.gov (United States)

    Alcázar-Alay, Sylvia C; Osorio-Tobón, J Felipe; Forster-Carneiro, Tânia; Meireles, M Angela A

    2017-09-01

    This work involves the application of physical separation methods to concentrate the pigment of semi-defatted annatto seeds, a noble vegetal biomass rich in bixin pigments. Semi-defatted annatto seeds are the residue produced after the extraction of the lipid fraction from annatto seeds using supercritical fluid extraction (SFE). Semi-defatted annatto seeds are used in this work for three important reasons: i) previous lipid extraction is necessary to recover the tocotrienol-rich oil present in the annatto seeds; ii) initial removal of the oil via the SFE process favors bixin separation; and iii) the cost of the raw material is null. Physical methods including i) the mechanical fractionation method and ii) an integrated process of the mechanical fractionation method and low-pressure solvent extraction (LPSE) were studied. The integrated process was proposed for processing two different semi-defatted annatto materials, denoted Batches 1 and 2. The cost of manufacture (COM) was calculated for two different production scales (5 and 50 L), considering the integrated process vs. the mechanical fractionation method alone. The integrated process showed a significantly higher COM than the mechanical fractionation method. This work suggests that the mechanical fractionation method is an adequate and low-cost process for obtaining a pigment-rich product from semi-defatted annatto seeds. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Seakeeping with the semi-Lagrangian particle finite element method

    Science.gov (United States)

    Nadukandi, Prashanth; Servan-Camas, Borja; Becker, Pablo Agustín; Garcia-Espinosa, Julio

    2017-07-01

    The application of the semi-Lagrangian particle finite element method (SL-PFEM) for the seakeeping simulation of the wave adaptive modular vehicle under spray generating conditions is presented. The time integration of the Lagrangian advection is done using the explicit integration of the velocity and acceleration along the streamlines (X-IVAS). Despite the suitability of the SL-PFEM for the considered seakeeping application, small time steps were needed in the X-IVAS scheme to control the solution accuracy. A preliminary proposal to overcome this limitation of the X-IVAS scheme for seakeeping simulations is presented.

  20. Comparison of artificial intelligence methods and empirical equations to estimate daily solar radiation

    Science.gov (United States)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2016-08-01

    In the present research, three artificial intelligence methods, including Gene Expression Programming (GEP), Artificial Neural Networks (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), as well as 48 empirical equations (10, 12 and 26 of which were temperature-based, sunshine-based and meteorological-parameters-based, respectively) were used to estimate daily solar radiation in Kerman, Iran over the period 1992-2009. To develop the GEP, ANN and ANFIS models, depending on the empirical equations used, various combinations of minimum air temperature, maximum air temperature, mean air temperature, extraterrestrial radiation, actual sunshine duration, maximum possible sunshine duration, sunshine duration ratio, relative humidity and precipitation were considered as inputs in the mentioned intelligent methods. To compare the accuracy of the empirical equations and intelligent models, the root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE) and determination coefficient (R2) indices were used. The results showed that, in general, the sunshine-based and meteorological-parameters-based scenarios in the ANN and ANFIS models presented higher accuracy than the mentioned empirical equations. Moreover, the most accurate method in the studied region was the ANN11 scenario with five inputs. The values of the RMSE, MAE, MARE and R2 indices for the mentioned model were 1.850 MJ m-2 day-1, 1.184 MJ m-2 day-1, 9.58% and 0.935, respectively.
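The four comparison indices are straightforward to compute. A sketch (note an assumption: R2 is computed here as one minus the ratio of residual to total variance, which can differ from a squared-correlation definition some papers use):

```python
import numpy as np

def fit_metrics(obs, pred):
    """RMSE, MAE, MARE (%) and determination coefficient R^2 —
    the four indices used to rank the models."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mare = 100.0 * np.mean(np.abs(err) / np.abs(obs))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    return rmse, mae, mare, r2
```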

  1. Semi-implicit magnetohydrodynamic calculations

    International Nuclear Information System (INIS)

    Schnack, D.D.; Barnes, D.C.; Mikic, Z.; Harned, D.S.; Caramana, E.J.

    1987-01-01

    A semi-implicit algorithm for the solution of the nonlinear, three-dimensional, resistive MHD equations in cylindrical geometry is presented. The specific model assumes uniform density and pressure, although this is not a restriction of the method. The spatial approximation employs finite differences in the radial coordinate, and the pseudo-spectral algorithm in the periodic poloidal and axial coordinates. A leapfrog algorithm is used to advance wave-like terms; advective terms are treated with a simple predictor-corrector method. The semi-implicit term is introduced as a simple modification to the momentum equation. Dissipation is treated implicitly. The resulting algorithm is unconditionally stable with respect to normal modes. A general discussion of the semi-implicit method is given, and specific forms of the semi-implicit operator are compared in physically relevant test cases. Long-time simulations are presented. copyright 1987 Academic Press, Inc
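The stabilizing effect of treating fast wave-like terms implicitly can be illustrated on a toy linear oscillator (a sketch of the general idea, not the paper's MHD scheme): solving the small implicit update exactly keeps the step stable for omega*dt far beyond the explicit leapfrog limit of 2.

```python
def semi_implicit_oscillator(x0, v0, omega, dt, steps):
    """Implicit (backward-Euler-style) update for dx/dt = v,
    dv/dt = -omega^2 * x. Substituting v_new into x_new gives a closed
    form, so no iteration is needed; the scheme is unconditionally
    stable (and damping), whatever omega * dt is."""
    x, v = x0, v0
    for _ in range(steps):
        # Implicit system: x_new = x + dt*v_new, v_new = v - dt*omega^2*x_new
        denom = 1.0 + (omega * dt) ** 2
        x_new = (x + dt * v) / denom
        v_new = v - dt * omega ** 2 * x_new
        x, v = x_new, v_new
    return x, v
```

With omega*dt = 100, an explicit leapfrog step would blow up immediately; here the scaled energy x^2 + (v/omega)^2 can only shrink, mirroring how the semi-implicit operator tames the fast MHD waves while the slow dynamics of interest are advanced accurately.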

  2. Adjusted Empirical Likelihood Method in the Presence of Nuisance Parameters with Application to the Sharpe Ratio

    Directory of Open Access Journals (Sweden)

    Yuejiao Fu

    2018-04-01

    Full Text Available The Sharpe ratio is a widely used risk-adjusted performance measure in economics and finance. Most of the known statistical inferential methods devoted to the Sharpe ratio are based on the assumption that the data are normally distributed. In this article, without making any distributional assumption on the data, we develop the adjusted empirical likelihood method to obtain inference for a parameter of interest in the presence of nuisance parameters. We show that the log adjusted empirical likelihood ratio statistic is asymptotically distributed as the chi-square distribution. The proposed method is applied to obtain inference for the Sharpe ratio. Simulation results illustrate that the proposed method is comparable to Jobson and Korkie's method (1981) and outperforms the empirical likelihood method when the data are from a symmetric distribution. In addition, when the data are from a skewed distribution, the proposed method significantly outperforms all other existing methods. A real-data example is analyzed to exemplify the application of the proposed method.
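For reference, the point estimate around which such inference is built is just the sample Sharpe ratio; a minimal sketch (the paper's contribution is the adjusted empirical likelihood interval around it, not this estimate):

```python
import statistics

def sharpe_ratio(returns, risk_free=0.0):
    """Sample Sharpe ratio: mean excess return divided by the sample
    standard deviation of the excess returns."""
    excess = [r - risk_free for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)
```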

  3. A novel semi-quantitative method for measuring tissue bleeding.

    Science.gov (United States)

    Vukcevic, G; Volarevic, V; Raicevic, S; Tanaskovic, I; Milicic, B; Vulovic, T; Arsenijevic, S

    2014-03-01

    In this study, we describe a new semi-quantitative method for measuring the extent of bleeding in pathohistological tissue samples. To test our novel method, we recruited 120 female patients in their first trimester of pregnancy and divided them into three groups of 40. Group I was the control group, in which no dilation was applied. Group II was an experimental group, in which dilation was performed using classical mechanical dilators. Group III was also an experimental group, in which dilation was performed using a hydraulic dilator. Tissue samples were taken from the patients' cervical canals using a Novak's probe via energetic single-step curettage prior to any dilation in Group I and after dilation in Groups II and III. After the tissue samples were prepared, light microscopy was used to obtain microphotographs at 100× magnification. The surfaces affected by bleeding were measured in the microphotographs using the Autodesk AutoCAD 2009 program and its "polylines" function. The lines were used to mark the area around the entire sample (marked A) and to create "polyline" areas around each bleeding area on the sample (marked B). The percentage of the total area affected by bleeding was calculated using the formula N = Bt × 100 / At, where N is the percentage (%) of the tissue sample surface affected by bleeding, At (A total) is the sum of the surfaces of all of the tissue samples and Bt (B total) is the sum of all the surfaces affected by bleeding in all of the tissue samples. This novel semi-quantitative method utilizes the Autodesk AutoCAD 2009 program, which is simple to use and widely available, thereby offering a new, objective and precise approach to estimating the extent of bleeding in tissue samples.
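The paper's formula is trivially mechanized once the per-sample areas have been traced; a sketch with hypothetical area lists (the area values below are illustrative):

```python
def bleeding_percentage(bleeding_areas, total_areas):
    """N = Bt * 100 / At from the paper: Bt is the summed bleeding
    surface over all samples, At the summed total sample surface."""
    bt = sum(bleeding_areas)
    at = sum(total_areas)
    return bt * 100.0 / at
```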

  4. A CALCULATION OF SEMI-EMPIRICAL ONE-ELECTRON WAVE FUNCTIONS FOR MULTI-ELECTRON ATOMS USED FOR ELEMENTARY PROCESS SIMULATION IN NONLOCAL PLASMA

    Directory of Open Access Journals (Sweden)

    M. V. Tchernycheva

    2017-01-01

    Full Text Available Subject of Research. The paper presents the outcomes of developing a method for constructing one-electron wave functions of complex atoms that is relatively simple, symmetrical for all of the atom's electrons, and free from heavy computations. The accuracy and resource intensity of the approach are tailored to systematic calculations of cross sections and rate constants of elementary processes in inelastic collisions of atoms or molecules with electrons (ionization, excitation, excitation transfer, and others). Method. The method is based on a set of two iterative processes. At the first iteration step, the Schrödinger equation is solved numerically for the radial parts of the electron wave functions in the potential of the atomic core self-consistent field. At the second iteration step, a new approximation for the atomic core field is created using the solutions found for all one-electron wave functions. The solution of this multiparameter optimization problem is achieved by means of a genetic algorithm. The suitability of the developed method was verified by comparing the calculation results with numerous data on the energies of atoms in the ground and excited states. Main Results. We have created a run-time version of the program for creating sets of one-electron wave functions and calculating the cross sections and rate constants of collisional transitions in the first Born approximation. The a priori available information about the binding energies of the electrons of any many-particle system can be taken into account at any step of this procedure to create semi-empirically refined solutions for the one-electron wave functions. Practical Relevance. The proposed solution enables simple and rapid preparation of input data for the numerical simulation of nonlocal gas-discharge plasma. The approach is aimed at the calculation of discharges in complex gas mixtures requiring inclusion in the model of a large number of elementary collisional and radiation
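The first iteration step, numerically solving the radial Schrödinger equation in the current approximation to the core field, can be sketched with a finite-difference eigensolver. This sketch assumes a fixed hydrogen-like Coulomb potential in place of the self-consistent field (atomic units, l = 0; grid parameters are illustrative):

```python
import numpy as np

def radial_ground_state(Z=1.0, r_max=30.0, n=1000):
    """Finite-difference ground-state energy of the l = 0 radial
    Schroedinger equation in a fixed central potential (atomic units).
    The bare Coulomb potential -Z/r stands in for the self-consistent
    atomic-core field of the iterative scheme."""
    h = r_max / (n + 1)
    r = h * np.arange(1, n + 1)
    diag = 1.0 / h**2 - Z / r              # kinetic stencil + potential
    off = -0.5 / h**2 * np.ones(n - 1)     # -(1/2) u'' off-diagonal
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]
```

For Z = 1 the exact ground-state energy is -0.5 Hartree, which a grid this fine reproduces closely.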

  5. Study between the semi-classical and the generator-coordinate methods

    International Nuclear Information System (INIS)

    Souza Cruz, F.F. de.

    1979-01-01

    In this work a comparison is performed between two microscopic theories of collective motion: the semi-classical theory and the quantum theory derived from the generator-coordinate method. In both cases, wave packets |p,q> depending on two canonically conjugate parameters are used. These wave packets are constructed by the action of unitary displacement operators, generated by the canonical operators Q-circumflex and P-circumflex, on a reference state. (A.C.A.S.) [pt

  6. A New Empirical Model for Radar Scattering from Bare Soil Surfaces

    Directory of Open Access Journals (Sweden)

    Nicolas Baghdadi

    2016-11-01

    Full Text Available The objective of this paper is to propose a new semi-empirical radar backscattering model for bare soil surfaces based on the Dubois model. A wide dataset of backscattering coefficients extracted from synthetic aperture radar (SAR) images and in situ soil surface parameter measurements (moisture content and roughness) is used. The retrieval of soil parameters from SAR images remains challenging because the available backscattering models have limited performance. Existing models, whether physical, semi-empirical, or empirical, do not allow a reliable estimate of soil surface geophysical parameters for all surface conditions. The proposed model, developed in HH, HV, and VV polarizations, uses a formulation of the radar signal based on physical principles that have been validated in numerous studies. Never before has a backscattering model been built and validated on as large a dataset as the one used in this study. It contains a wide range of incidence angles (18°–57°) and radar wavelengths (L, C, X), is geographically well distributed over regions with different climatic conditions (humid, semi-arid, and arid sites), and involves many SAR sensors. The results show that the new model performs very well for the different radar wavelengths (L, C, X), incidence angles, and polarizations (RMSE of about 2 dB). The model is easy to invert and could provide a way to improve the retrieval of soil parameters.
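The general shape of such a semi-empirical formulation can be illustrated with a toy model: backscatter in dB expressed as a sum of moisture, roughness, and incidence-angle terms. The coefficients below are arbitrary placeholders, not the fitted values of the proposed model:

```python
import numpy as np

def sigma0_db(mv, ks, theta, coeff=(-30.0, 0.25, 6.0, 12.0)):
    """Illustrative semi-empirical backscatter form (dB): linear in soil
    moisture mv (vol. %), logarithmic in normalized roughness k*s, and
    decreasing with incidence angle theta (radians). The coefficients
    are hypothetical placeholders, not values fitted in the paper."""
    c0, c1, c2, c3 = coeff
    return c0 + c1 * mv + c2 * np.log10(ks) + c3 * np.log10(np.cos(theta))
```

A form like this is trivially inverted for mv once ks and theta are known, which is the practical appeal of log-linear semi-empirical models.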

  7. Comparing the Effectiveness of Blended, Semi-Flipped, and Flipped Formats in an Engineering Numerical Methods Course

    Science.gov (United States)

    Clark, Renee M.; Kaw, Autar; Besterfield-Sacre, Mary

    2016-01-01

    Blended, flipped, and semi-flipped instructional approaches were used in various sections of a numerical methods course for undergraduate mechanical engineers. During the spring of 2014, a blended approach was used; in the summer of 2014, a combination of blended and flipped instruction was used to deliver a semi-flipped course; and in the fall of…

  8. Method and apparatus for mounting or dismounting a semi-automatic twist-lock

    NARCIS (Netherlands)

    Klein Breteler, A.J.; Tekeli, G.

    2001-01-01

    The invention relates to a method for mounting or dismounting a semi-automatic twistlock at a corner of a deck container, wherein the twistlock is mounted or dismounted on a quayside where a ship may be docked for loading or unloading, in a loading or unloading terminal installed on the quayside,

  9. Application of semi parametric modelling to times series forecasting: case of the electricity consumption

    International Nuclear Information System (INIS)

    Lefieux, V.

    2007-10-01

    Reseau de Transport d'Electricite (RTE), in charge of operating the French electric transportation grid, needs an accurate forecast of the power consumption in order to operate it correctly. The forecasts used every day result from a model combining a nonlinear parametric regression and a SARIMA model. In order to obtain an adaptive forecasting model, nonparametric forecasting methods have already been tested without real success. In particular, it is known that a nonparametric predictor behaves badly with a great number of explanatory variables, which is commonly called the curse of dimensionality. Recently, semi-parametric methods which improve on the pure nonparametric approach have been proposed to estimate a regression function. Based on the concept of 'dimension reduction', one of those methods (called MAVE: Moving Average (conditional) Variance Estimate) can be applied to time series. We study empirically its effectiveness in predicting the future values of an autoregressive time series. We then adapt this method, from a practical point of view, to forecast power consumption. We propose a partially linear semi-parametric model, based on the MAVE method, which allows the autoregressive aspect of the problem and the exogenous variables to be taken into account simultaneously. The proposed estimation procedure is practically efficient. (author)
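The partially linear idea, a parametric term and a nonparametric term estimated together, can be sketched with simple backfitting. This is not the MAVE dimension-reduction estimator itself; a Nadaraya-Watson smoother stands in for the nonparametric step, and all parameter values are illustrative:

```python
import numpy as np

def partially_linear_fit(X, z, y, bandwidth=0.15, iters=10):
    """Backfitting sketch for y = X @ beta + g(z) + noise: alternate an
    OLS step for the linear part with a Nadaraya-Watson kernel smoother
    for the nonparametric part g."""
    def smooth(resid):
        w = np.exp(-0.5 * ((z[:, None] - z[None, :]) / bandwidth) ** 2)
        return (w @ resid) / w.sum(axis=1)
    g = np.zeros(len(y))
    for _ in range(iters):
        beta, *_ = np.linalg.lstsq(X, y - g, rcond=None)  # linear part
        g = smooth(y - X @ beta)                          # nonparametric part
    return beta, g
```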

  10. SSC-EKE: Semi-Supervised Classification with Extensive Knowledge Exploitation.

    Science.gov (United States)

    Qian, Pengjiang; Xi, Chen; Xu, Min; Jiang, Yizhang; Su, Kuan-Hao; Wang, Shitong; Muzic, Raymond F

    2018-01-01

    We introduce a new, semi-supervised classification method that extensively exploits knowledge. The method has three steps. First, the manifold regularization mechanism, adapted from the Laplacian support vector machine (LapSVM), is adopted to mine the manifold structure embedded in all training data, especially in numerous label-unknown data. Meanwhile, by converting the labels into pairwise constraints, the pairwise constraint regularization formula (PCRF) is designed to compensate for the few but valuable labelled data. Second, by further combining the PCRF with the manifold regularization, the precise manifold and pairwise constraint jointly regularized formula (MPCJRF) is achieved. Third, by incorporating the MPCJRF into the framework of the conventional SVM, our approach, referred to as semi-supervised classification with extensive knowledge exploitation (SSC-EKE), is developed. The significance of our research is fourfold: 1) The MPCJRF is an underlying adjustment, with respect to the pairwise constraints, to the graph Laplacian enlisted for approximating the potential data manifold. This type of adjustment plays the correction role, as an unbiased estimation of the data manifold is difficult to obtain, whereas the pairwise constraints, converted from the given labels, have an overall high confidence level. 2) By transforming the values of the two terms in the MPCJRF such that they have the same range, with a trade-off factor varying within the invariant interval [0, 1), the appropriate impact of the pairwise constraints to the graph Laplacian can be self-adaptively determined. 3) The implication regarding extensive knowledge exploitation is embodied in SSC-EKE. That is, the labelled examples are used not only to control the empirical risk but also to constitute the MPCJRF. Moreover, all data, both labelled and unlabelled, are recruited for the model smoothness and manifold regularization. 4) The complete framework of SSC-EKE organically incorporates multiple

  11. Volumetric analysis of pelvic hematomas after blunt trauma using semi-automated seeded region growing segmentation: a method validation study.

    Science.gov (United States)

    Dreizin, David; Bodanapally, Uttam K; Neerchal, Nagaraj; Tirada, Nikki; Patlas, Michael; Herskovits, Edward

    2016-11-01

    Manually segmented traumatic pelvic hematoma volumes are strongly predictive of active bleeding at conventional angiography, but the method is time intensive, limiting its clinical applicability. We compared volumetric analysis using semi-automated region growing segmentation to manual segmentation and diameter-based size estimates in patients with pelvic hematomas after blunt pelvic trauma. A 14-patient cohort was selected in an anonymous randomized fashion from a dataset of patients with pelvic binders at MDCT, collected retrospectively as part of a HIPAA-compliant IRB-approved study from January 2008 to December 2013. To evaluate intermethod differences, one reader (R1) performed three volume measurements using the manual technique and three volume measurements using the semi-automated technique. To evaluate interobserver differences for semi-automated segmentation, a second reader (R2) performed three semi-automated measurements. One-way analysis of variance was used to compare differences in mean volumes. Time effort was also compared. Correlation between the two methods as well as two shorthand appraisals (greatest diameter, and the ABC/2 method for estimating ellipsoid volumes) was assessed with Spearman's rho (r). Intraobserver variability was lower for semi-automated compared to manual segmentation, with standard deviations ranging between ±5-32 mL and ±17-84 mL, respectively (p = 0.0003). There was no significant difference in mean volumes between the two readers' semi-automated measurements (p = 0.83); however, means were lower for the semi-automated compared with the manual technique (manual: mean and SD 309.6 ± 139 mL; R1 semi-auto: 229.6 ± 88.2 mL, p = 0.004; R2 semi-auto: 243.79 ± 99.7 mL, p = 0.021). Despite differences in means, the correlation between the two methods was very strong and highly significant (r = 0.91). Semi-automated hematoma volumes correlate strongly with manually segmented volumes. Since semi-automated segmentation
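Seeded region growing of the kind used for the semi-automated measurements can be sketched as a breadth-first flood fill from a user-placed seed; this 2-D toy version (the fixed intensity tolerance and 4-connectivity are simplifications of a clinical 3-D tool) shows the principle:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol):
    """Seeded region growing sketch: breadth-first flood fill that
    absorbs 4-connected pixels whose intensity lies within `tol`
    of the seed intensity."""
    img = np.asarray(img, dtype=float)
    seed_val = img[seed]
    mask = np.zeros(img.shape, dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx]
                    and abs(img[ny, nx] - seed_val) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```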

  12. Calibrating a combined energy systems analysis and controller design method with empirical data

    International Nuclear Information System (INIS)

    Murphy, Gavin Bruce; Counsell, John; Allison, John; Brindley, Joseph

    2013-01-01

    The drive towards low carbon construction has seen buildings increasingly utilise many different energy systems simultaneously to control the human comfort of the indoor environment, such as ventilation with heat recovery, various heating solutions and applications of renewable energy. This paper describes a dynamic modelling and simulation method (IDEAS – Inverse Dynamics based Energy Assessment and Simulation) for analysing the energy utilisation of a building and its complex servicing systems. The IDEAS case study presented in this paper is based upon small perturbation theory and can be used for the analysis of the performance of complex energy systems and also for the design of smart control systems. This paper presents a process by which any dynamic model can be calibrated against a more empirically based data model, in this case the UK Government's SAP (Standard Assessment Procedure). The research targets of this work are building simulation experts analysing the energy use of a building, and also control engineers designing smart control systems for dwellings. The calibration process presented is transferable and gives simulation experts a means of calibrating any dynamic building simulation method against an empirically based method. - Highlights: • Presentation of an energy systems analysis method for assessing the energy utilisation of buildings and their complex servicing systems. • An inverse dynamics based controller design method is detailed. • Method for calibrating a dynamic model with an empirically based model

  13. An empirical study of cultural evolution: the development of European cooking from medieval to modern times

    Directory of Open Access Journals (Sweden)

    Lindenfors, Patrik

    2015-12-01

    Full Text Available We have carried out an empirical study of long-term change in European cookery to test if the development of this cultural phenomenon matches a general hypothesis about cultural evolution: that human cultural change is characterized by cumulativity. Data from seven cookery books, evenly spaced across time, the oldest one written in medieval times (~1200) and the most recent one dating from late modernity (1999), were compared. Ten recipes from each of the categories “poultry recipes”, “fish recipes” and “meat recipes” were arbitrarily selected from each cookery book by selecting the first ten recipes in each category, and the numbers (per recipe) of steps, separate partial processes, methods, ingredients, semi-manufactured ingredients, compound semi-manufactured ingredients (defined as semi-manufactured ingredients containing no less than two raw products), and self-made semi-manufactured ingredients were counted. Regression analyses were used to quantitatively compare the cookery from different ages. We found a significant increase in the numbers (per recipe) of steps, separate partial processes, methods, ingredients and semi-manufactured ingredients. These significant increases enabled us to identify the development of cookery as an example of the general trend of cumulativity in long-term cultural evolution. The number of self-made semi-manufactured ingredients per recipe, however, may have decreased somewhat over time, something which may reflect the cumulative characteristics of cultural evolution at the level of society, considering the accumulation of knowledge that is required to industrialize food production.

  14. Evaluation of registration methods on thoracic CT : the EMPIRE10 challenge

    NARCIS (Netherlands)

    Murphy, K.; Ginneken, van B.; Reinhardt, J.M.; Kabus, S.; Ding, K.; Deng, Xiang; Cao, K.; Du, K.; Christensen, G.E.; Garcia, V.; Vercauteren, T.; Ayache, N.; Commowick, O.; Malandain, G.; Glocker, B.; Paragios, N.; Navab, N.; Gorbunova, V.; Sporring, J.; Bruijne, de M.; Han, Xiao; Heinrich, M.P.; Schnabel, J.A.; Jenkinson, M.; Lorenz, C.; Modat, M.; McClelland, J.R.; Ourselin, S.; Muenzing, S.E.A.; Viergever, M.A.; Nigris, De D.; Collins, D.L.; Arbel, T.; Peroni, M.; Li, R.; Sharp, G.; Schmidt-Richberg, A.; Ehrhardt, J.; Werner, R.; Smeets, D.; Loeckx, D.; Song, G.; Tustison, N.; Avants, B.; Gee, J.C.; Staring, M.; Klein, S.; Stoel, B.C.; Urschler, M.; Werlberger, M.; Vandemeulebroucke, J.; Rit, S.; Sarrut, D.; Pluim, J.P.W.

    2011-01-01

    EMPIRE10 (Evaluation of Methods for Pulmonary Image REgistration 2010) is a public platform for fair and meaningful comparison of registration algorithms which are applied to a database of intrapatient thoracic CT image pairs. Evaluation of nonrigid registration techniques is a nontrivial task. This

  15. A semi-empirical model for the formation and depletion of the high burnup structure in UO{sub 2}

    Energy Technology Data Exchange (ETDEWEB)

    Pizzocri, D. [European Commission, Joint Research Centre, Directorate for Nuclear Safety and Security, PO Box 2340, 76125, Karlsruhe (Germany); Politecnico di Milano, Department of Energy, Nuclear Engineering Division, Via La Masa 34, 20156, Milan (Italy); Cappia, F. [European Commission, Joint Research Centre, Directorate for Nuclear Safety and Security, PO Box 2340, 76125, Karlsruhe (Germany); Technische Universität München, Boltzmannstraße 15, 85747, Garching bei München (Germany); Luzzi, L., E-mail: lelio.luzzi@polimi.it [Politecnico di Milano, Department of Energy, Nuclear Engineering Division, Via La Masa 34, 20156, Milan (Italy); Pastore, G. [Idaho National Laboratory, Fuel Modeling and Simulation Department, 2525 Fremont Avenue, 83415, Idaho Falls (United States); Rondinella, V.V.; Van Uffelen, P. [European Commission, Joint Research Centre, Directorate for Nuclear Safety and Security, PO Box 2340, 76125, Karlsruhe (Germany)

    2017-04-15

    In the rim zone of UO{sub 2} nuclear fuel pellets, the combination of high burnup and low temperature drives a microstructural change, leading to the formation of the high burnup structure (HBS). In this work, we propose a semi-empirical model to describe the formation of the HBS, which embraces the polygonisation/recrystallization process and the depletion of intra-granular fission gas, describing them as inherently related. For this purpose, we performed grain-size measurements on samples at radial positions in which the restructuring was incomplete. Based on these new experimental data, we infer an exponential reduction of the average grain size with local effective burnup, paired with a simultaneous depletion of intra-granular fission gas driven by diffusion. The comparison with currently used models indicates the applicability of the herein developed model within integral fuel performance codes. - Highlights: •Development of a new model for the formation and depletion of the high burnup structure. •New average grain-size measurements to support model development. •Formation threshold of the high burnup structure based on the concept of effective burnup. •Coupled description of grain recrystallization/polygonisation and depletion of intra-granular fission gas. •Model suitable for application in fuel performance codes.
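The two coupled trends of the model, exponential reduction of the average grain size with local effective burnup and simultaneous depletion of intra-granular fission gas, can be sketched as follows; the parameter values are hypothetical, not the ones inferred from the grain-size measurements:

```python
import math

def hbs_state(bu_eff, d0=10.0, alpha=0.05, beta=0.04):
    """Coupled sketch of the semi-empirical HBS model's two trends:
    exponential grain-size reduction with local effective burnup,
    paired with depletion of intra-granular fission gas. d0, alpha
    and beta are hypothetical parameters."""
    grain_size = d0 * math.exp(-alpha * bu_eff)   # average size, um
    gas_retained = math.exp(-beta * bu_eff)       # retained fraction
    return grain_size, gas_retained
```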

  16. Molecular models of zinc phthalocyanines: semi-empirical molecular orbital computations and physicochemical properties studied by molecular mechanics simulations

    International Nuclear Information System (INIS)

    Gantchev, Tsvetan G.; van Lier, Johan E.; Hunting, Darel J.

    2005-01-01

    To build 3D molecular models of zinc phthalocyanines (ZnPc) and to study their diverse chemical and photosensitization properties, we performed quantum mechanical molecular orbital (MO) semi-empirical (AM1) computations of the ground, excited singlet and triplet states as well as free radical (ionic) species. RHF and UHF (open shell) geometry optimizations led to near-perfect symmetrical ZnPc. Predicted ionization potentials (IP), electron affinities (EA) and lowest electronic transitions of ZnPc are in good agreement with the published experimental and theoretical data. The computation-derived D4h/D2h-symmetry 3D structures of the ground and excited states and free radicals of ZnPc, together with the frontier orbital energies and Mulliken electron population analysis, enabled us to build robust molecular models. These models were used to predict important chemical-reactivity quantities such as global electronegativity (χ), hardness (η) and local softness based on Fukui-function analysis. Examples of molecular mechanics (MM) applications of the 3D molecular models are presented as approaches to evaluate the solvation free energy (ΔG0)solv and to estimate ground- and excited-state oxidation/reduction potentials as well as intermolecular interactions and the stability of ground- and excited-state dimers (exciplexes) and radical ion pairs

  17. Application of semi parametric modelling to times series forecasting: case of the electricity consumption; Modeles semi-parametriques appliques a la prevision des series temporelles. Cas de la consommation d'electricite

    Energy Technology Data Exchange (ETDEWEB)

    Lefieux, V

    2007-10-15

    Reseau de Transport d'Electricite (RTE), in charge of operating the French electric transportation grid, needs an accurate forecast of the power consumption in order to operate it correctly. The forecasts used every day result from a model combining a nonlinear parametric regression and a SARIMA model. In order to obtain an adaptive forecasting model, nonparametric forecasting methods have already been tested without real success. In particular, it is known that a nonparametric predictor behaves badly with a great number of explanatory variables, which is commonly called the curse of dimensionality. Recently, semi-parametric methods which improve on the pure nonparametric approach have been proposed to estimate a regression function. Based on the concept of 'dimension reduction', one of those methods (called MAVE: Moving Average (conditional) Variance Estimate) can be applied to time series. We study empirically its effectiveness in predicting the future values of an autoregressive time series. We then adapt this method, from a practical point of view, to forecast power consumption. We propose a partially linear semi-parametric model, based on the MAVE method, which allows the autoregressive aspect of the problem and the exogenous variables to be taken into account simultaneously. The proposed estimation procedure is practically efficient. (author)

  18. Investigation of naproxen drug using mass spectrometry, thermal analyses and semi-empirical molecular orbital calculation

    Directory of Open Access Journals (Sweden)

    M.A. Zayed

    2017-03-01

    Full Text Available Naproxen (C14H14O3) is a non-steroidal anti-inflammatory drug (NSAID). It is important to investigate its structure to identify the active groups and weak bonds responsible for its medical activity. In the present study, naproxen was investigated by mass spectrometry (MS) and thermal analysis (TA) measurements (TG/DTG and DTA), confirmed by semi-empirical molecular orbital (MO) calculations using the PM3 procedure. These calculations included bond length, bond order, bond strain, partial charge distribution, ionization energy and heat of formation (ΔHf). The mass spectral and thermal analysis fragmentation pathways were proposed and compared to select the most suitable scheme representing the correct fragmentation pathway of the drug in both techniques. The PM3 procedure reveals that the primary cleavage site of the charged molecule is the rupture of the COOH group (lowest bond order and high strain), followed by loss of CH3 from the methoxy group. Thermal analysis of the neutral drug reveals a high response to temperature variation with a very fast rate. It decomposed in several sequential steps in the temperature range 80–400 °C. These mass losses appear as two endothermic and one exothermic peaks, which required energy values of 255.42, 10.67 and 371.49 J g−1, respectively. The initial thermal ruptures are similar to those obtained by mass spectral fragmentation (COOH rupture), followed by the loss of the methyl group and finally by ethylene loss. Therefore, comparison between MS and TA helps in the selection of the proper pathway representing the drug's fragmentation. This comparison is successfully confirmed by the MO calculations.

  19. Semi-parametrical NAA method for paper analysis

    International Nuclear Information System (INIS)

    Medeiros, Ilca M.M.A.; Zamboni, Cibele B.; Cruz, Manuel T.F. da; Morel, Jose C.O.; Park, Song W.

    2007-01-01

    The semi-parametric Neutron Activation Analysis technique, using Au as a flux monitor, was applied to determine element concentrations in commonly commercialized white paper, with the aim of checking the quality control of its production in an industrial process. (author)

  20. GMDH-Based Semi-Supervised Feature Selection for Electricity Load Classification Forecasting

    Directory of Open Access Journals (Sweden)

    Lintao Yang

    2018-01-01

    Full Text Available With the development of smart power grids, communication network technology and sensor technology, there has been an exponential growth in complex electricity load data. Irregular electricity load fluctuations caused by weather and holiday factors disrupt the daily operation of the power companies. To deal with these challenges, this paper investigates a day-ahead electricity peak load interval forecasting problem. It transforms the conventional continuous forecasting problem into a novel interval forecasting problem, and then further converts the interval forecasting problem into a classification forecasting problem. In addition, an indicator system influencing the electricity load is established from three dimensions, namely the load series, calendar data, and weather data. A semi-supervised feature selection algorithm is proposed to address the electricity load classification forecasting issue based on the group method of data handling (GMDH) technology. The proposed algorithm consists of three main stages: (1) training the basic classifier; (2) selectively marking the most suitable samples from the unclassified label data, and adding them to an initial training set; and (3) training the classification models on the final training set and classifying the test samples. An empirical analysis of electricity load datasets from four Chinese cities is conducted. Results show that the proposed model can address the electricity load classification forecasting problem more efficiently and effectively than the FW-Semi FS (forward semi-supervised feature selection) and GMDH-U (GMDH-based semi-supervised feature selection for customer classification) models.

  1. Semi-supervised and unsupervised extreme learning machines.

    Science.gov (United States)

    Huang, Gao; Song, Shiji; Gupta, Jatinder N D; Wu, Cheng

    2014-12-01

    Extreme learning machines (ELMs) have proven to be efficient and effective learning mechanisms for pattern classification and regression. However, ELMs are primarily applied to supervised learning problems. Only a few existing research papers have used ELMs to explore unlabeled data. In this paper, we extend ELMs for both semi-supervised and unsupervised tasks based on manifold regularization, thus greatly expanding the applicability of ELMs. The key advantages of the proposed algorithms are as follows: 1) both the semi-supervised ELM (SS-ELM) and the unsupervised ELM (US-ELM) exhibit the learning capability and computational efficiency of ELMs; 2) both algorithms naturally handle multiclass classification or multicluster clustering; and 3) both algorithms are inductive and can handle unseen data at test time directly. Moreover, it is shown in this paper that all the supervised, semi-supervised, and unsupervised ELMs can actually be put into a unified framework. This provides new perspectives for understanding the mechanism of random feature mapping, which is the key concept in ELM theory. Empirical study on a wide range of data sets demonstrates that the proposed algorithms are competitive with the state-of-the-art semi-supervised or unsupervised learning algorithms in terms of accuracy and efficiency.
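The supervised ELM that SS-ELM and US-ELM extend can be sketched compactly: a random, untrained hidden layer followed by a closed-form least-squares solve for the output weights (the manifold-regularization terms of the paper are omitted; layer sizes and seeds are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=60):
    """Basic supervised ELM: random hidden layer, closed-form output
    weights via the Moore-Penrose pseudo-inverse."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer outputs
    beta = np.linalg.pinv(H) @ y                  # least-squares solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The absence of iterative training of W and b is what gives ELMs the computational efficiency the abstract refers to.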

  2. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    Science.gov (United States)

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze its architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of the bi-dimensional empirical mode decomposition method, and the choice of appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as the multiquadric radial basis function and hierarchical b-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.
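The surface-interpolation step can be illustrated with a minimal multiquadric RBF interpolator, the first of the two methods compared; the shape parameter is illustrative and the BEMD sifting machinery itself is omitted:

```python
import numpy as np

def rbf_interpolate(points, values, query, eps=1.0):
    """Multiquadric radial basis function interpolation of scattered
    points, of the kind used to build envelope surfaces through the
    extrema in bi-dimensional EMD. `eps` is a shape parameter."""
    pts = np.asarray(points, dtype=float)
    vals = np.asarray(values, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    phi = np.sqrt(1.0 + (eps * d) ** 2)        # multiquadric kernel
    w = np.linalg.solve(phi, vals)             # interpolation weights
    q = np.asarray(query, dtype=float)
    dq = np.linalg.norm(q[:, None, :] - pts[None, :, :], axis=-1)
    return np.sqrt(1.0 + (eps * dq) ** 2) @ w
```

By construction the surface passes exactly through the data points, which is the defining property of RBF interpolation as opposed to smoothing.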

  3. Stress analysis of thermal sprayed coatings using a semi-destructive hole-drilling strain gauge method

    International Nuclear Information System (INIS)

    Dolhof, V.; Musil, J.; Cepera, M.; Zeman, J.

    1995-01-01

    Residual stress is an important parameter in coating technology since it often relates to the maximum coating thickness which can be deposited without spallation, and this applies to coatings produced by different thermal spray and thin film technologies. Indeed, the mechanisms by which residual stress is built up or locked into a coating depend markedly on the deposition process and the coating structure (growth structure, phase composition). Methods for determining residual stresses in materials include both destructive and non-destructive methods. This contribution describes a semi-destructive hole-drilling strain gauge method modified for the measurement of residual stresses in thermal sprayed coatings. This method of stress analysis was used for the determination of stress levels in thermal sprayed WC-17% Co coatings onto 13% Cr steel substrates. Results show that deposition conditions and final coating structure directly influence the residual stress level in the coatings. It is shown that semi-destructive hole-drilling measurement is an effective, reproducible method of coating stress analysis and a good solution for optimization of the deposition process

  4. A GPU-accelerated semi-implicit fractional-step method for numerical solutions of incompressible Navier-Stokes equations

    Science.gov (United States)

    Ha, Sanghyun; Park, Junshin; You, Donghyun

    2018-01-01

    Utility of the computational power of Graphics Processing Units (GPUs) is elaborated for solutions of incompressible Navier-Stokes equations which are integrated using a semi-implicit fractional-step method. The Alternating Direction Implicit (ADI) and the Fourier-transform-based direct solution methods used in the semi-implicit fractional-step method take advantage of multiple tridiagonal matrices whose inversion is known as the major bottleneck for acceleration on a typical multi-core machine. A novel implementation of the semi-implicit fractional-step method designed for GPU acceleration of the incompressible Navier-Stokes equations is presented. Aspects of the programming model of Compute Unified Device Architecture (CUDA), which are critical to the bandwidth-bound nature of the present method, are discussed in detail. A data layout for efficient use of CUDA libraries is proposed for acceleration of tridiagonal matrix inversion and fast Fourier transform. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while the Navier-Stokes equations are computed on a GPU. Performance of the present method using CUDA is assessed by comparing the speed of solving three tridiagonal matrices using ADI with the speed of solving one heptadiagonal matrix using a conjugate gradient method. An overall speedup of 20 times is achieved using a Tesla K40 GPU in comparison with a single-core Xeon E5-2660 v3 CPU in simulations of turbulent boundary-layer flow over a flat plate conducted on over 134 million grids. Enhanced performance of 48 times speedup is reached for the same problem using a Tesla P100 GPU.
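The tridiagonal solves that dominate the ADI step are classically done with the Thomas algorithm; a serial reference version (the GPU implementation batches many such solves through CUDA libraries) might look like:

```python
import numpy as np

def thomas(a, b, c, d):
    """Thomas algorithm: O(n) direct solve of a tridiagonal system with
    sub-diagonal a (a[0] unused), diagonal b, super-diagonal c (c[-1]
    unused) and right-hand side d."""
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                  # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Because each sweep touches every unknown exactly twice, the method is memory-bandwidth bound, which is why the data layout on the GPU matters so much in the paper.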

  5. Method of semi-automatic high precision potentiometric titration for characterization of uranium compounds

    International Nuclear Information System (INIS)

    Cristiano, Barbara Fernandes G.; Dias, Fabio C.; Barros, Pedro D. de; Araujo, Radier Mario S. de; Delgado, Jose Ubiratan; Silva, Jose Wanderley S. da; Lopes, Ricardo T.

    2011-01-01

    The method of high-precision potentiometric titration is widely used in the certification and characterization of uranium compounds. In order to reduce the analysis time and diminish the influence of the analyst, a semi-automatic version of the method was developed at the safeguards laboratory of CNEN-RJ, Brazil. The method was applied with traceability guaranteed by the use of a potassium dichromate primary standard. The combined standard uncertainty in the determination of the total uranium concentration was of the order of 0.01%, better than that of the methods traditionally used by nuclear installations, which is of the order of 0.1%.

  6. A Parameter Identification Method for Helicopter Noise Source Identification and Physics-Based Semi-Empirical Modeling

    Science.gov (United States)

    Greenwood, Eric, II; Schmitz, Fredric H.

    2010-01-01

    A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. This new method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made for operating conditions based on a small number of measurements taken at different operating conditions.

  7. A Simple Semi-Empirical Model for the Estimation of Photosynthetically Active Radiation from Satellite Data in the Tropics

    Directory of Open Access Journals (Sweden)

    S. Janjai

    2013-01-01

    This paper presents a simple semi-empirical model for estimating global photosynthetically active radiation (PAR) under all sky conditions. The model expresses PAR as a function of cloud index, aerosol optical depth, total ozone column, solar zenith angle, and air mass. The formulation of the model was based on a four-year period (2008–2011) of PAR data obtained from measurements at four solar monitoring stations in a tropical environment of Thailand: Chiang Mai (18.78° N, 98.98° E), Ubon Ratchathani (15.25° N, 104.87° E), Nakhon Pathom (13.82° N, 100.04° E), and Songkhla (7.20° N, 100.60° E). The cloud index was derived from the MTSAT-1R satellite, whereas the aerosol optical depth was obtained from the MODIS/Terra satellite. The total ozone column was retrieved from the OMI/Aura satellite. The model was validated against an independent data set from the four stations. It was found that hourly PAR estimated from the proposed model and that obtained from the measurements were in reasonable agreement, with a root mean square difference (RMSD) and mean bias difference (MBD) of 14.3% and −5.8%, respectively. In addition, for the case of monthly average hourly PAR, RMSD and MBD were reduced to 11.1% and −5.1%, respectively.
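
A model of this general shape can be sketched as follows. The functional form and every coefficient below (`par0`, `a`, `b`, `c`) are hypothetical placeholders chosen for illustration, not the fitted values or exact form of the published model:

```python
import numpy as np

def par_estimate(cloud_index, aod, ozone_cm, sza_deg,
                 par0=530.0, a=0.18, b=0.35, c=0.06):
    """Illustrative semi-empirical PAR estimate (W m^-2).

    cloud_index in [0, 1], aod = aerosol optical depth,
    ozone_cm = total ozone column (atm-cm), sza_deg = solar zenith angle.
    All coefficients are hypothetical, not the Janjai (2013) fit."""
    mu = np.cos(np.radians(sza_deg))
    air_mass = 1.0 / np.maximum(mu, 0.05)          # simple plane-parallel air mass
    clear = par0 * mu * np.exp(-(a * aod + c * ozone_cm) * air_mass)
    return clear * (1.0 - b * cloud_index)         # cloud attenuation term

print(round(par_estimate(0.2, 0.3, 0.3, 30.0), 1))
```

The structure mirrors the abstract: a clear-sky term attenuated by aerosol and ozone along the air-mass path, scaled down by the satellite-derived cloud index.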

  8. Evaluating Method Engineer Performance: an error classification and preliminary empirical study

    Directory of Open Access Journals (Sweden)

    Steven Kelly

    1998-11-01

    We describe an approach to empirically test the use of metaCASE environments to model methods. Both diagrams and matrices have been proposed as a means for presenting methods. These different paradigms may have their own effects on how easily and how well users can model methods. We extend Batra's classification of errors in data modelling to cover metamodelling, and use it to measure the performance of a group of metamodellers using either diagrams or matrices. The tentative results from this pilot study confirm the usefulness of the classification, and show some interesting differences between the paradigms.

  9. Empirical projection-based basis-component decomposition method

    Science.gov (United States)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

    Advances in the development of semiconductor-based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which, in addition to the conventional approach of Alvarez and Macovski, a third basis component is introduced, e.g., a gadolinium-based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood function of the measurements. This procedure is time-consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and find that only a moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.

  10. A semi-empirical model for mesospheric and stratospheric NOy produced by energetic particle precipitation

    Directory of Open Access Journals (Sweden)

    B. Funke

    2016-07-01

    The MIPAS Fourier transform spectrometer on board Envisat has measured global distributions of the six principal reactive nitrogen (NOy) compounds (HNO3, NO2, NO, N2O5, ClONO2, and HNO4) during 2002–2012. These observations were used previously to detect regular polar winter descent of reactive nitrogen produced by energetic particle precipitation (EPP) down to the lower stratosphere, often called the EPP indirect effect. It has further been shown that the observed fraction of NOy produced by EPP (EPP-NOy) has a nearly linear relationship with the geomagnetic Ap index when taking into account the time lag introduced by transport. Here we exploit these results in a semi-empirical model for computation of EPP-modulated NOy densities and wintertime downward fluxes through stratospheric and mesospheric pressure levels. Since the Ap dependence of EPP-NOy is distorted during episodes of strong descent in Arctic winters associated with elevated stratopause events, a specific parameterization has been developed for these episodes. This model accurately reproduces the observations from MIPAS and is also consistent with estimates from other satellite instruments. Since stratospheric EPP-NOy depositions lead to changes in stratospheric ozone with possible implications for climate, the model presented here can be utilized in climate simulations without the need to incorporate many thermospheric and upper mesospheric processes. By employing historical geomagnetic indices, the model also allows for reconstruction of the EPP indirect effect since 1850. We found secular variations of solar cycle-averaged stratospheric EPP-NOy depositions on the order of 10%. In particular, we model a reduction of the EPP-NOy deposition rate during the last 3 decades, related to the coincident decline of geomagnetic activity that corresponds to 1.8% of the NOy production rate by N2O oxidation. As the decline of the geomagnetic activity level is expected to continue in the

  11. A Semi-Analytical Method for the PDFs of A Ship Rolling in Random Oblique Waves

    Science.gov (United States)

    Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang

    2018-03-01

    The PDFs (probability density functions) and the probability of a ship rolling under random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by numerical simulation was approximately fitted by an analytical function. The irregular waves were decomposed into two Gaussian stationary random processes, and the CARMA (2, 1) model was used to fit the spectral density function of the parametric and forced excitations. The stochastic energy envelope averaging method was used to solve for the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, wave length, and heading angle. In order to provide proper advice for the ship's manoeuvring, the parametric excitations should be considered appropriately when the ship navigates in oblique seas.

  12. Semi-automatic version of the potentiometric titration method for characterization of uranium compounds.

    Science.gov (United States)

    Cristiano, Bárbara F G; Delgado, José Ubiratan; da Silva, José Wanderley S; de Barros, Pedro D; de Araújo, Radier M S; Dias, Fábio C; Lopes, Ricardo T

    2012-09-01

    The potentiometric titration method was used for characterization of uranium compounds to be applied in intercomparison programs. The method is applied with traceability assured using a potassium dichromate primary standard. A semi-automatic version was developed to reduce the analysis time and the operator variation. The standard uncertainty in determining the total concentration of uranium was around 0.01%, which is suitable for uranium characterization and compatible with those obtained by manual techniques. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. SU-E-J-252: Reproducibility of Radiogenomic Image Features: Comparison of Two Semi-Automated Segmentation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lee, M; Woo, B; Kim, J [Seoul National University, Seoul (Korea, Republic of); Jamshidi, N; Kuo, M [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with T1 post-gadolinium and FLAIR sequences of 10 GBM patients were downloaded from the Cancer Image Archive site. Two semi-automatic segmentation tools with different algorithms (deformable model and grow cut method) were used to segment contrast enhancement, necrosis and edema regions by two independent observers. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer variability and coefficient of variation (COV) were calculated to evaluate the reproducibility. Results: Inter-observer correlations and coefficient of variation of imaging features with the deformable model ranged from 0.953 to 0.999 and 2.1% to 9.2%, respectively, and the grow cut method ranged from 0.799 to 0.976 and 3.5% to 26.6%, respectively. Coefficient of variation for especially important features which were previously reported as predictive of patient survival were: 3.4% with deformable model and 7.4% with grow cut method for the proportion of contrast enhanced tumor region; 5.5% with deformable model and 25.7% with grow cut method for the proportion of necrosis; and 2.1% with deformable model and 4.4% with grow cut method for edge sharpness of tumor on CE-T1W1. Conclusion: Comparison of two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric Brain MRI.
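
The coefficient of variation reported above can be computed, for example, as the within-pair standard deviation over the overall mean. A sketch under that common test-retest definition (the feature values below are synthetic, not data from the study):

```python
import numpy as np

def reproducibility_cov(obs1, obs2):
    """Per-feature coefficient of variation (%) between two observers'
    measurements: RMS of the within-pair standard deviation divided by
    the overall mean (one common test-retest definition)."""
    obs1 = np.asarray(obs1, float); obs2 = np.asarray(obs2, float)
    pair_sd = np.abs(obs1 - obs2) / np.sqrt(2.0)   # SD of each observer pair
    cov = np.sqrt(np.mean(pair_sd**2)) / np.mean((obs1 + obs2) / 2.0)
    return 100.0 * cov

# Illustrative: one imaging feature measured on 10 patients by two observers
rng = np.random.default_rng(3)
truth = rng.uniform(40, 60, 10)
o1 = truth + rng.normal(0, 1.0, 10)
o2 = truth + rng.normal(0, 1.0, 10)
print(round(reproducibility_cov(o1, o2), 1))
```

A lower COV for the deformable model, as in the abstract, would indicate the tighter inter-observer agreement of that segmentation tool.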

  14. SU-E-J-252: Reproducibility of Radiogenomic Image Features: Comparison of Two Semi-Automated Segmentation Methods

    International Nuclear Information System (INIS)

    Lee, M; Woo, B; Kim, J; Jamshidi, N; Kuo, M

    2015-01-01

    Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with T1 post-gadolinium and FLAIR sequences of 10 GBM patients were downloaded from the Cancer Image Archive site. Two semi-automatic segmentation tools with different algorithms (deformable model and grow cut method) were used to segment contrast enhancement, necrosis and edema regions by two independent observers. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer variability and coefficient of variation (COV) were calculated to evaluate the reproducibility. Results: Inter-observer correlations and coefficient of variation of imaging features with the deformable model ranged from 0.953 to 0.999 and 2.1% to 9.2%, respectively, and the grow cut method ranged from 0.799 to 0.976 and 3.5% to 26.6%, respectively. Coefficient of variation for especially important features which were previously reported as predictive of patient survival were: 3.4% with deformable model and 7.4% with grow cut method for the proportion of contrast enhanced tumor region; 5.5% with deformable model and 25.7% with grow cut method for the proportion of necrosis; and 2.1% with deformable model and 4.4% with grow cut method for edge sharpness of tumor on CE-T1W1. Conclusion: Comparison of two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric Brain MRI.

  15. Efficiency indicators versus frontier methods: an empirical investigation of Italian public hospitals

    Directory of Open Access Journals (Sweden)

    Lorenzo Clementi

    2013-05-01

    Efficiency plays a key role in measuring the impact of National Health Service (NHS) reforms. We investigate the issue of inefficiency in the health sector and provide empirical evidence derived from Italian public hospitals. Despite the importance of efficiency measurement in health care services, only recently have advanced econometric methods been applied to hospital data. We provide a synoptic survey of a few empirical analyses of efficiency measurement in health care services. An estimate of the cost efficiency level in Italian public hospitals during 2001–2003 is obtained through a sample. We propose an efficiency indicator and provide cost frontiers for such hospitals, using stochastic frontier analysis (SFA) for longitudinal data.

  16. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

    Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...

  17. Soil surface moisture estimation over a semi-arid region using ENVISAT ASAR radar data for soil evaporation evaluation

    Directory of Open Access Journals (Sweden)

    M. Zribi

    2011-01-01

    The present paper proposes a method for the evaluation of soil evaporation, using soil moisture estimations based on radar satellite measurements. We first present an approach for the estimation and monitoring of soil moisture in a semi-arid region in North Africa, using ENVISAT ASAR images, over two types of vegetation covers. The first mapping process is dedicated solely to the monitoring of moisture variability related to rainfall events, over areas in the "non-irrigated olive tree" class of land use. The developed approach is based on a simple linear relationship between soil moisture and the backscattered radar signal normalised at a reference incidence angle. The second process is proposed over wheat fields, using an analysis of moisture variability due to both rainfall and irrigation. A semi-empirical model, based on the water-cloud model for vegetation correction, is used to retrieve soil moisture from the radar signal. Moisture mapping is carried out over wheat fields, showing high variability between irrigated and non-irrigated wheat covers. This analysis is based on a large database, including both ENVISAT ASAR and simultaneously acquired ground-truth measurements (moisture, vegetation, roughness) during the 2008–2009 vegetation cycle. Finally, a semi-empirical approach is proposed in order to relate surface moisture to the difference between soil evaporation and the climate demand, as defined by the potential evaporation. Mapping of the soil evaporation is proposed.
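
The water-cloud vegetation correction combined with a linear soil backscatter term can be sketched as below. All constants (`A`, `B`, `alpha`, `beta`) are hypothetical placeholders, not the paper's fitted values:

```python
import numpy as np

def water_cloud_backscatter(mv, V, theta_deg, A=0.12, B=0.09,
                            alpha=0.25, beta=-14.0):
    """Water-cloud model (Attema-Ulaby form): vegetation contribution plus
    attenuated soil contribution. mv: volumetric soil moisture (%),
    V: vegetation descriptor, theta_deg: incidence angle.
    A, B, alpha, beta are hypothetical constants."""
    theta = np.radians(theta_deg)
    tau2 = np.exp(-2.0 * B * V / np.cos(theta))       # two-way canopy attenuation
    sigma_veg = A * V * np.cos(theta) * (1.0 - tau2)  # vegetation term
    sigma_soil_db = alpha * mv + beta                 # linear soil term (dB)
    sigma_soil = 10.0 ** (sigma_soil_db / 10.0)
    return sigma_veg + tau2 * sigma_soil

def invert_moisture(sigma_total, V, theta_deg, A=0.12, B=0.09,
                    alpha=0.25, beta=-14.0):
    """Invert the model for mv by removing the vegetation contribution."""
    theta = np.radians(theta_deg)
    tau2 = np.exp(-2.0 * B * V / np.cos(theta))
    sigma_soil = (sigma_total - A * V * np.cos(theta) * (1.0 - tau2)) / tau2
    return (10.0 * np.log10(sigma_soil) - beta) / alpha

s = water_cloud_backscatter(mv=22.0, V=1.5, theta_deg=23.0)
print(round(invert_moisture(s, V=1.5, theta_deg=23.0), 3))  # 22.0
```

The round trip recovers the input moisture exactly because the same constants are used in both directions; in practice the constants are fitted against the ground-truth database described in the abstract.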

  18. A semi-spring and semi-edge combined contact model in CDEM and its application to analysis of Jiweishan landslide

    Directory of Open Access Journals (Sweden)

    Chun Feng

    2014-02-01

    Continuum-based discrete element method (CDEM) is an explicit numerical method used for simulation of progressive failure of geological bodies. To improve the efficiency of contact detection and simplify the calculation steps for contact forces, semi-springs and semi-edges are introduced into the calculation. A semi-spring is derived from a block vertex, and is formed by indenting the block vertex into each face (24 semi-springs for a hexahedral element). The formation process of a semi-edge is the same as that of a semi-spring (24 semi-edges for a hexahedral element). Based on the semi-springs and semi-edges, a new type of combined contact model is presented. According to this model, the six contact types can be reduced to two, i.e., semi-spring-to-target-face contact and semi-edge-to-target-edge contact. With the combined model, the contact force can be calculated directly (information about the contact type is not necessary), and the failure judgment can be executed in a straightforward way (each semi-spring and semi-edge has its own characteristic area). The algorithm has been implemented in a C++ program. Some simple numerical cases are presented to show the validity and accuracy of the model. Finally, the failure mode, sliding distance, and critical friction angle of the Jiweishan landslide are studied with the combined model.

  19. Tourism forecasting using modified empirical mode decomposition and group method of data handling

    Science.gov (United States)

    Yahya, N. A.; Samsudin, R.; Shabri, A.

    2017-09-01

    In this study, a hybrid model using modified Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed for tourism forecasting. This approach reconstructs the intrinsic mode functions (IMFs) produced by EMD using a trial and error method. The new component and the remaining IMFs are then predicted separately using the GMDH model. Finally, the forecasted results for each component are aggregated to construct an ensemble forecast. The data used in this experiment are monthly time series of tourist arrivals from China, Thailand and India to Malaysia from the year 2000 to 2016. The performance of the model is evaluated using the Root Mean Square Error (RMSE) and the Mean Absolute Percentage Error (MAPE), where the conventional GMDH model and the EMD-GMDH model are used as benchmark models. Empirical results show that the proposed model produces better forecasts than the benchmark models.

  20. Semi-analytic equations to the Cox-Thompson inverse scattering method at fixed energy for special cases

    International Nuclear Information System (INIS)

    Palmai, T.; Apagyi, B.; Horvath, M.

    2008-01-01

    Solution of the Cox-Thompson inverse scattering problem at fixed energy [1-3] is reformulated, resulting in semi-analytic equations. The new set of equations for the normalization constants and the nonphysical (shifted) angular momenta is free of matrix inversion operations. This simplification is a result of treating only the input phase shifts of partial waves of a given parity. Therefore, the proposed method can be applied to identical particle scattering of the bosonic type (or to certain cases of identical fermionic scattering). The new formulae are expected to be numerically more efficient than the previous ones. Based on the semi-analytic equations, an approximate method is proposed for the generic inverse scattering problem, when partial waves of arbitrary parity are considered. (author)

  1. Optimization analysis of the motor cooling method in semi-closed single screw refrigeration compressor

    Science.gov (United States)

    Wang, Z. L.; Shen, Y. F.; Wang, Z. B.; Wang, J.

    2017-08-01

    Semi-closed single screw refrigeration compressors (SSRCs) are widely used in refrigeration and air conditioning systems owing to advantages such as a simple structure, balanced forces on the rotor, and high volumetric efficiency. In semi-closed SSRCs, the motor is often cooled by suction gas or by injected refrigerant liquid. The motor cooling method changes the suction gas temperature, which, to a certain extent, is an important factor influencing the thermodynamic performance of the compressor. Thus the effects of the motor cooling method on the performance of the compressor must be studied. In this paper, mathematical models of the motor cooling process using these two methods were established. The influences of motor cooling parameters, such as the suction gas temperature and quantity and the temperature and quantity of the injected refrigerant liquid, on the thermodynamic performance of the compressor were analyzed. The performances of the compressor using the two motor cooling methods were compared. The motor cooling capacity of the injected refrigerant liquid is shown to be better than that of the suction gas. All the results obtained can be useful for the optimum design of the motor cooling process to improve the efficiency and energy performance of the compressor.

  2. Positivity for Convective Semi-discretizations

    KAUST Repository

    Fekete, Imre

    2017-04-19

    We propose a technique for investigating stability properties like positivity and forward invariance of an interval for method-of-lines discretizations, and apply the technique to study positivity preservation for a class of TVD semi-discretizations of 1D scalar hyperbolic conservation laws. This technique is a generalization of the approach suggested in Khalsaraei (J Comput Appl Math 235(1): 137–143, 2010). We give more relaxed conditions on the time-step for positivity preservation for slope-limited semi-discretizations integrated in time with explicit Runge–Kutta methods. We show that the step-size restrictions derived are sharp in a certain sense, and that many higher-order explicit Runge–Kutta methods, including the classical 4th-order method and all non-confluent methods with a negative Butcher coefficient, cannot generally maintain positivity for these semi-discretizations under any positive step size. We also apply the proposed technique to centered finite difference discretizations of scalar hyperbolic and parabolic problems.
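
The positivity issue is easy to reproduce in the simplest setting: first-order upwind advection advanced with forward Euler stays nonnegative only under a step-size restriction (CFL ≤ 1 here). A small demonstration, unrelated to the paper's specific TVD limiters:

```python
import numpy as np

def upwind_rhs(u, dx, a=1.0):
    """Semi-discretization of u_t + a u_x = 0 with first-order upwind
    (a > 0) and periodic boundary conditions."""
    return -a * (u - np.roll(u, 1)) / dx

def euler_step(u, dt, dx):
    # u_i^{n+1} = (1 - c) u_i + c u_{i-1},  c = dt/dx (a = 1)
    return u + dt * upwind_rhs(u, dx)

n = 100
dx = 1.0 / n
u0 = np.zeros(n); u0[10] = 1.0          # nonnegative initial data

u_ok  = euler_step(u0, 0.9 * dx, dx)    # CFL = 0.9: convex combination
u_bad = euler_step(u0, 1.5 * dx, dx)    # CFL = 1.5: coefficient turns negative
print(u_ok.min() >= 0, u_bad.min() < 0)  # True True
```

For CFL ≤ 1 the update is a convex combination of nonnegative values, so positivity is preserved; beyond that bound a negative coefficient appears, the situation the paper's step-size conditions generalize to Runge-Kutta methods and limited semi-discretizations.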

  3. Evaluation and parameterization of ATCOR3 topographic correction method for forest cover mapping in mountain areas

    Science.gov (United States)

    Balthazar, Vincent; Vanacker, Veerle; Lambin, Eric F.

    2012-08-01

    A topographic correction of optical remote sensing data is necessary to improve the quality of quantitative forest cover change analyses in mountainous terrain. The implementation of semi-empirical correction methods requires the calibration of model parameters that are empirically defined. This study develops a method to improve the performance of topographic corrections for forest cover change detection in mountainous terrain through an iterative tuning method of model parameters based on a systematic evaluation of the performance of the correction. The latter was based on: (i) the general matching of reflectances between sunlit and shaded slopes and (ii) the occurrence of abnormal reflectance values, qualified as statistical outliers, in very low illuminated areas. The method was tested on Landsat ETM+ data for rough (Ecuadorian Andes) and very rough mountainous terrain (Bhutan Himalayas). Compared to a reference level (no topographic correction), the ATCOR3 semi-empirical correction method resulted in a considerable reduction of dissimilarities between reflectance values of forested sites in different topographic orientations. Our results indicate that optimal parameter combinations depend on the site, sun elevation and azimuth, and spectral conditions. We demonstrate that the results of relatively simple topographic correction methods can be greatly improved through a feedback loop between parameter tuning and evaluation of the performance of the correction model.
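
The tune-then-evaluate feedback loop can be illustrated with the simpler semi-empirical C-correction (Teillet-style), whose empirical parameter plays a role analogous to the empirically defined parameters discussed above; the reflectance data and candidate grid below are synthetic:

```python
import numpy as np

def c_correction(rho, cos_i, cos_sz, c=0.3):
    """Semi-empirical C-correction: rho_corr = rho * (cos(sz) + c) / (cos(i) + c).
    cos_i: local illumination angle cosine, cos_sz: solar zenith cosine,
    c: empirical parameter to be tuned."""
    return rho * (cos_sz + c) / (cos_i + c)

def tune_c(rho, cos_i, cos_sz, candidates=np.linspace(0.05, 1.0, 20)):
    """Iterative tuning in the spirit of the paper: pick the c that minimizes
    the residual correlation between corrected reflectance and illumination."""
    best_c, best_score = None, np.inf
    for c in candidates:
        corr = abs(np.corrcoef(c_correction(rho, cos_i, cos_sz, c), cos_i)[0, 1])
        if corr < best_score:
            best_c, best_score = c, corr
    return best_c

rng = np.random.default_rng(7)
cos_sz = 0.8
cos_i = rng.uniform(0.1, 1.0, 500)                   # terrain illumination
true_rho = rng.uniform(0.15, 0.25, 500)              # intrinsic forest reflectance
rho_obs = true_rho * (cos_i + 0.3) / (cos_sz + 0.3)  # synthetic topographic effect
c_hat = tune_c(rho_obs, cos_i, cos_sz)
print(c_hat)
```

With the correct parameter, corrected reflectance becomes statistically independent of illumination, which is the "matching of sunlit and shaded slopes" criterion in discrete form.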

  4. Modeling ionospheric foF2 by using empirical orthogonal function analysis

    Directory of Open Access Journals (Sweden)

    E. A

    2011-08-01

    A similar-parameters interpolation method and an empirical orthogonal function (EOF) analysis are used to construct empirical models for the ionospheric foF2, using observational data from three ground-based ionosonde stations in Japan: Wakkanai (geographic 45.4° N, 141.7° E), Kokubunji (geographic 35.7° N, 140.1° E) and Yamagawa (geographic 31.2° N, 130.6° E), during the years 1971–1987. The impact of different drivers on ionospheric foF2 can be well indicated by choosing appropriate proxies. It is shown that missing data in the original foF2 can be optimally refilled using the similar-parameters method. The characteristics of the base functions and associated coefficients of the EOF model are analyzed. The diurnal variation of the base functions reflects the essential nature of ionospheric foF2, while the coefficients represent the long-term alteration tendency. The 1st-order EOF coefficient A1 reflects the feature of the components with solar cycle variation. A1 also contains an evident semi-annual variation component as well as a relatively weak annual fluctuation component, both of which are less pronounced than the solar cycle variation. The 2nd-order coefficient A2 contains mainly annual variation components. The 3rd-order coefficient A3 and the 4th-order coefficient A4 contain both annual and semi-annual variation components. The seasonal variation, solar rotation oscillation and small-scale irregularities are also included in the 4th-order coefficient A4. The amplitude range and developing tendency of all these coefficients depend on the level of solar and geomagnetic activity. The reliability and validity of the EOF model are verified by comparison with observational data and with the International Reference Ionosphere (IRI). The agreement between observations and the EOF model is quite good, indicating that the EOF model can reflect the major changes and the temporal distribution characteristics of the mid-latitude ionosphere of the
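
EOF analysis of this kind amounts to a singular value decomposition of the (gap-filled) data matrix into base functions and time coefficients. A small synthetic sketch; the diurnal/seasonal signal below is fabricated for illustration, not foF2 data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "foF2-like" matrix: hours x days, one diurnal base function
# modulated by a slowly varying seasonal coefficient, plus noise.
hours = np.arange(24)
days = np.arange(365)
base = np.sin(np.pi * hours / 24.0)                    # diurnal shape
coef = 1.0 + 0.3 * np.sin(2 * np.pi * days / 365.0)    # seasonal modulation
X = np.outer(base, coef) + 0.01 * rng.standard_normal((24, 365))

# EOF analysis = SVD of the anomaly matrix.
Xm = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xm, full_matrices=False)
eofs = U                     # base functions (diurnal patterns)
pcs = s[:, None] * Vt        # associated coefficients (A1, A2, ...)

var_explained = s**2 / np.sum(s**2)
print(round(var_explained[0], 3))   # leading EOF dominates
```

The leading base function captures the diurnal shape while its coefficient series carries the slow (here seasonal; in the paper also solar-cycle) modulation, mirroring the role of A1 in the abstract.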

  5. Recent Progress in Treating Protein-Ligand Interactions with Quantum-Mechanical Methods.

    Science.gov (United States)

    Yilmazer, Nusret Duygu; Korth, Martin

    2016-05-16

    We review the first successes and failures of a "new wave" of quantum chemistry-based approaches to the treatment of protein/ligand interactions. These approaches share the use of "enhanced", dispersion (D), and/or hydrogen-bond (H) corrected density functional theory (DFT) or semi-empirical quantum mechanical (SQM) methods, in combination with ensemble weighting techniques of some form to capture entropic effects. Benchmark and model system calculations in comparison to high-level theoretical as well as experimental references have shown that both DFT-D (dispersion-corrected density functional theory) and SQM-DH (dispersion and hydrogen bond-corrected semi-empirical quantum mechanical) perform much more accurately than older DFT and SQM approaches and also standard docking methods. In addition, DFT-D might soon become and SQM-DH already is fast enough to compute a large number of binding modes of comparably large protein/ligand complexes, thus allowing for a more accurate assessment of entropic effects.

  6. Recent Progress in Treating Protein–Ligand Interactions with Quantum-Mechanical Methods

    Directory of Open Access Journals (Sweden)

    Nusret Duygu Yilmazer

    2016-05-01

    We review the first successes and failures of a “new wave” of quantum chemistry-based approaches to the treatment of protein/ligand interactions. These approaches share the use of “enhanced”, dispersion (D) and/or hydrogen-bond (H) corrected density functional theory (DFT) or semi-empirical quantum mechanical (SQM) methods, in combination with ensemble weighting techniques of some form to capture entropic effects. Benchmark and model system calculations in comparison to high-level theoretical as well as experimental references have shown that both DFT-D (dispersion-corrected density functional theory) and SQM-DH (dispersion and hydrogen bond-corrected semi-empirical quantum mechanical) perform much more accurately than older DFT and SQM approaches and also standard docking methods. In addition, DFT-D might soon become and SQM-DH already is fast enough to compute a large number of binding modes of comparably large protein/ligand complexes, thus allowing for a more accurate assessment of entropic effects.

  7. A Robust Semi-Parametric Test for Detecting Trait-Dependent Diversification.

    Science.gov (United States)

    Rabosky, Daniel L; Huang, Huateng

    2016-03-01

    Rates of species diversification vary widely across the tree of life and there is considerable interest in identifying organismal traits that correlate with rates of speciation and extinction. However, it has been challenging to develop methodological frameworks for testing hypotheses about trait-dependent diversification that are robust to phylogenetic pseudoreplication and to directionally biased rates of character change. We describe a semi-parametric test for trait-dependent diversification that explicitly requires replicated associations between character states and diversification rates to detect effects. To use the method, diversification rates are reconstructed across a phylogenetic tree with no consideration of character states. A test statistic is then computed to measure the association between species-level traits and the corresponding diversification rate estimates at the tips of the tree. The empirical value of the test statistic is compared to a null distribution that is generated by structured permutations of evolutionary rates across the phylogeny. The test is applicable to binary discrete characters as well as continuous-valued traits and can accommodate extremely sparse sampling of character states at the tips of the tree. We apply the test to several empirical data sets and demonstrate that the method has acceptable Type I error rates. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
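
The core of such a test, an observed trait/rate statistic compared against a permutation null, can be sketched as below. Note that the published method uses structured permutations that respect phylogenetic dependence; this illustration uses simple permutations and synthetic data only:

```python
import numpy as np

def permutation_trait_rate_test(trait, rates, n_perm=2000, seed=0):
    """Permutation test for an association between a binary species trait
    and tip diversification rates. Simple (unstructured) permutations for
    illustration; the published test permutes rates in a structured way."""
    rng = np.random.default_rng(seed)
    obs = rates[trait == 1].mean() - rates[trait == 0].mean()
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(rates)
        null[i] = perm[trait == 1].mean() - perm[trait == 0].mean()
    # two-sided p-value with add-one correction
    p = (np.sum(np.abs(null) >= abs(obs)) + 1) / (n_perm + 1)
    return obs, p

# Synthetic tips: trait state 1 carries a higher diversification rate
rng = np.random.default_rng(1)
trait = rng.integers(0, 2, 200)
rates = 0.1 + 0.05 * trait + 0.02 * rng.standard_normal(200)
obs, p = permutation_trait_rate_test(trait, rates)
print(p < 0.05)  # True: the planted association is detected
```

Replacing `rng.permutation` with permutations constrained by the tree's dependence structure is what controls the phylogenetic pseudoreplication the abstract emphasizes.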

  8. Heat Conduction Analysis Using Semi Analytical Finite Element Method

    International Nuclear Information System (INIS)

    Wargadipura, A. H. S.

    1997-01-01

    Heat conduction problems are very often found in science and engineering fields. It is of crucial importance to obtain quantitative descriptions of this important physical phenomenon. This paper discusses the development and application of a numerical formulation and computation that can be used to analyze heat conduction problems. The mathematical equation which governs the physical behaviour of heat conduction is a second order partial differential equation. The numerical solution used in this paper is obtained with the finite element method combined with Fourier series, known as the semi-analytical finite element method. The procedure results in a set of simultaneous algebraic equations, which is solved using Gauss elimination. The computer implementation is carried out in FORTRAN. In the final part of the paper, a heat conduction problem in a rectangular plate domain with isothermal boundary conditions on its edges is solved to demonstrate the computer program developed, and a comparison with the analytical solution is discussed to assess the accuracy of the numerical results.
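    The kind of simultaneous algebraic system such a discretization produces, and its solution by Gauss elimination, can be sketched in a minimal one-dimensional analogue (an illustration only, not the paper's FORTRAN code): linear finite elements for steady conduction with fixed end temperatures give a tridiagonal system whose solution matches the linear analytical profile.

    ```python
    import numpy as np

    def gauss_solve(A, b):
        """Naive Gaussian elimination with back substitution."""
        A = A.astype(float).copy()
        b = b.astype(float).copy()
        n = len(b)
        for k in range(n - 1):
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    # 1D steady conduction, u(0)=0, u(1)=100, uniform conductivity:
    # linear FEM on n elements yields a tridiagonal stiffness system.
    n = 10                      # number of elements
    h = 1.0 / n
    K = (np.diag(2 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h
    f = np.zeros(n - 1)
    f[-1] += 100.0 / h          # Dirichlet BC folded into the load vector
    u = gauss_solve(K, f)
    # without internal heat sources the exact solution is linear, u(x) = 100x
    print(np.allclose(u, 100 * np.arange(1, n) / n))  # True
    ```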

  9. Semi-empirical correlation for binary interaction parameters of the Peng-Robinson equation of state with the van der Waals mixing rules for the prediction of high-pressure vapor-liquid equilibrium.

    Science.gov (United States)

    Fateen, Seif-Eddeen K; Khalil, Menna M; Elnabawy, Ahmed O

    2013-03-01

    Peng-Robinson equation of state is widely used with the classical van der Waals mixing rules to predict vapor-liquid equilibria for systems containing hydrocarbons and related compounds. This model requires good values of the binary interaction parameter kij. In this work, we developed a semi-empirical correlation for kij partly based on the Huron-Vidal mixing rules. We obtained values for the adjustable parameters of the developed formula for over 60 binary systems and over 10 categories of components. The predictions of the new equation system were slightly better than the constant-kij model in most cases, except for 10 systems whose predictions were considerably improved with the new correlation.
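    To show where kij enters, here is a minimal sketch of the Peng-Robinson pure-component parameters and the classical van der Waals one-fluid mixing rules. The critical constants are rounded literature values for methane and n-butane, and kij = 0.02 is just an illustrative small value, not the paper's correlation.

    ```python
    import numpy as np

    R = 8.314  # J/(mol K)

    def pr_pure_a_b(Tc, Pc, omega, T):
        """Pure-component Peng-Robinson a(T) and b."""
        kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
        alpha = (1 + kappa * (1 - np.sqrt(T / Tc)))**2
        a = 0.45724 * R**2 * Tc**2 / Pc * alpha
        b = 0.07780 * R * Tc / Pc
        return a, b

    def vdw_mixing(x, a, b, kij):
        """Classical van der Waals mixing rules:
        a_mix = sum_ij x_i x_j sqrt(a_i a_j) (1 - k_ij),  b_mix = sum_i x_i b_i."""
        x, a, b = np.asarray(x), np.asarray(a), np.asarray(b)
        amix = np.sum(np.outer(x, x) * np.sqrt(np.outer(a, a)) * (1 - kij))
        bmix = np.dot(x, b)
        return amix, bmix

    # illustrative binary: methane (1) + n-butane (2), rounded constants
    Tc = [190.6, 425.1]; Pc = [45.99e5, 37.96e5]; omega = [0.012, 0.200]
    T = 300.0
    a1, b1 = pr_pure_a_b(Tc[0], Pc[0], omega[0], T)
    a2, b2 = pr_pure_a_b(Tc[1], Pc[1], omega[1], T)
    kij = np.array([[0.0, 0.02], [0.02, 0.0]])
    amix, bmix = vdw_mixing([0.4, 0.6], [a1, a2], [b1, b2], kij)
    print(amix > 0 and bmix > 0)  # True
    ```

    A positive kij reduces the cross-attraction term sqrt(a_i a_j), which is why a well-chosen kij can substantially change the predicted phase envelope.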

  10. Semi-automated extraction of longitudinal subglacial bedforms from digital terrain models - Two new methods

    Science.gov (United States)

    Jorge, Marco G.; Brennand, Tracy A.

    2017-07-01

    Relict drumlin and mega-scale glacial lineation (positive relief, longitudinal subglacial bedforms - LSBs) morphometry has been used as a proxy for paleo ice-sheet dynamics. LSB morphometric inventories have relied on manual mapping, which is slow and subjective and thus potentially difficult to reproduce. Automated methods are faster and reproducible, but previous methods for LSB semi-automated mapping have not been highly successful. Here, two new object-based methods for the semi-automated extraction of LSBs (footprints) from digital terrain models are compared in a test area in the Puget Lowland, Washington, USA. As segmentation procedures to create LSB-candidate objects, the normalized closed contour method relies on the contouring of a normalized local relief model addressing LSBs on slopes, and the landform elements mask method relies on the classification of landform elements derived from the digital terrain model. For identifying which LSB-candidate objects correspond to LSBs, both methods use the same LSB operational definition: a ruleset encapsulating expert knowledge, published morphometric data, and the morphometric range of LSBs in the study area. The normalized closed contour method was separately applied to four different local relief models, two computed in moving windows and two hydrology-based. Overall, the normalized closed contour method outperformed the landform elements mask method; its application to a hydrological relief model derived from a multiple-direction flow-routing algorithm performed best. For an assessment of its transferability, the normalized closed contour method was evaluated on a second area, the Chautauqua drumlin field, Pennsylvania and New York, USA, where it performed better than in the Puget Lowland. A broad comparison to previous methods suggests that the normalized closed contour method may be the most capable method to date, but more development is required.

  11. Measurement of Am-242 fission yields at the Lohengrin spectrometer; improvement and Benchmarking of the semi-empirical code GEF

    International Nuclear Information System (INIS)

    Amouroux, Charlotte

    2014-01-01

    The study of fission yields has a major impact on the characterization and understanding of the fission process and is mandatory for reactor applications. While the yields are known for the major actinides (U-235, Pu-239) in thermal neutron-induced fission, only a few measurements have been performed on Am-242. Moreover, the two main data libraries do not agree with each other on the light peak. Am-241 and Am-242 are nuclei of interest for MOX-fuel reactors and for the reduction of nuclear waste radiotoxicity using transmutation reactions. Thus, a campaign of precise measurements of the fission mass yields from the reaction Am-241(2n,f) was performed at the Lohengrin mass spectrometer (ILL, France) for both the light and the heavy peak. Forty-one masses were measured. Moreover, the measurement of the isotopic fission yields on the heavy peak by gamma-ray spectrometry led to the extraction of 20 independent isotopic yields. Our measurement was also meant to determine whether there is a difference in fission yields between the Am-242 isomeric state and its ground state, as exists in fission cross sections. The experimental method used to answer this question is based on the measurement of a set of fission mass yields as a function of the ratio of the Am-242gs to Am-242m fission rate. Results show that the mass yields are independent of the fission rate ratio. A future experimental campaign is proposed to observe a possible influence on the isomeric yields. Theoretical models are nowadays unable to predict fission yields with enough accuracy, and therefore we have to rely on experimental data and phenomenological models. The accuracy of the semi-empirical GEF fission model's predictions makes it a useful tool for evaluation. This thesis also presents the physical content and part of the development of this model. Validation of the predicted kinetic energy distributions, isomeric yields and fission yields was performed. The extension of the GEF

  12. Hybrid RHF/MP2 geometry optimizations with the effective fragment molecular orbital method

    DEFF Research Database (Denmark)

    Christensen, A. S.; Svendsen, Casper Steinmann; Fedorov, D. G.

    2014-01-01

    while the rest of the system is treated at the RHF level. MP2 geometry optimization is found to lower the barrier by up to 3.5 kcal/mol compared to RHF optimizations and ONIOM energy refinement, and leads to a smoother convergence with respect to the basis set for the reaction profile. For double zeta...

  13. The Computation of Nash Equilibrium in Fashion Games via Semi-Tensor Product Method

    Institute of Scientific and Technical Information of China (English)

    GUO Peilian; WANG Yuzhen

    2016-01-01

    Using the semi-tensor product of matrices, this paper investigates the computation of pure-strategy Nash equilibrium (PNE) for fashion games, and presents several new results. First, a formal fashion game model on a social network is given. Second, the utility function of each player is converted into an algebraic form via the semi-tensor product of matrices, based on which the two-strategy fashion game is studied and two methods are obtained for verifying the existence of PNE. Third, the multi-strategy fashion game model is investigated and an algorithm is established to find all the PNEs for the general case. Finally, two kinds of optimization problems, namely the so-called social welfare and normalized satisfaction degree optimization problems, are investigated and two useful results are given. The study of several illustrative examples shows that the new results obtained in this paper are effective.
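    The semi-tensor product machinery itself is beyond a short snippet, but the object it computes, the set of pure-strategy Nash equilibria of a fashion game on a network, can be found by brute-force enumeration for small examples. In the standard fashion-game setup assumed here, a conformist wants to match each neighbor's action and a rebel wants to differ.

    ```python
    from itertools import product

    def utility(player, profile, adj, conformist):
        """Fashion-game payoff: a conformist gains 1 per neighbor with the
        same action, a rebel 1 per neighbor with a different action."""
        same = sum(profile[j] == profile[player] for j in adj[player])
        return same if conformist[player] else len(adj[player]) - same

    def pure_nash(adj, conformist, strategies=(0, 1)):
        """Brute-force enumeration of pure-strategy Nash equilibria:
        a profile is a PNE iff no player has a profitable unilateral deviation."""
        n = len(adj)
        pnes = []
        for profile in product(strategies, repeat=n):
            if all(
                utility(i, profile, adj, conformist) >=
                utility(i, profile[:i] + (s,) + profile[i + 1:], adj, conformist)
                for i in range(n) for s in strategies
            ):
                pnes.append(profile)
        return pnes

    # a 3-player line network 0 - 1 - 2, all players conformists
    adj = {0: [1], 1: [0, 2], 2: [1]}
    conformist = [True, True, True]
    print(pure_nash(adj, conformist))  # [(0, 0, 0), (1, 1, 1)]
    ```

    Enumeration is exponential in the number of players, which is exactly why algebraic approaches such as the semi-tensor product formulation are of interest for larger games.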

  14. An unconditionally stable fully conservative semi-Lagrangian method

    KAUST Repository

    Lentine, Michael; Gré tarsson, Jó n Tó mas; Fedkiw, Ronald

    2011-01-01

    of the conserved quantity that was not accounted for in the typical semi-Lagrangian advection. We show that this new scheme can be used to conserve both mass and momentum for incompressible flows. For incompressible flows, we further explore properly conserving

  15. The scale-dependent market trend: Empirical evidences using the lagged DFA method

    Science.gov (United States)

    Li, Daye; Kou, Zhun; Sun, Qiankun

    2015-09-01

    In this paper we conduct an empirical study and test the efficiency of 44 important market indexes at multiple scales. A modified method based on lagged detrended fluctuation analysis is utilized to maximize the information on long-term correlations from the non-zero lags and keep the margin of error small when measuring the local Hurst exponent. Our empirical result illustrates that a common pattern can be found in the majority of the measured market indexes, which tend to be persistent (with the local Hurst exponent > 0.5) at small time scales, whereas they display significant anti-persistent characteristics at large time scales. Moreover, not only the stock markets but also the foreign exchange markets share this pattern. Considering that the exchange markets are only weakly synchronized with the economic cycles, it can be concluded that the economic cycles can cause anti-persistence at the large time scales but that other factors are also at work. The empirical result supports the view that financial markets are multi-fractal, and it indicates that deviations from efficiency and the type of model suited to describing the trend of market prices depend on the forecasting horizon.
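    The core of any DFA-based Hurst estimate is: integrate the demeaned series, detrend it piecewise over windows of size s, and fit the log-log slope of the fluctuation function F(s). A minimal sketch of the plain (unlagged) DFA on synthetic white noise, not the paper's modified lagged variant, follows; for uncorrelated data the exponent should come out near 0.5.

    ```python
    import numpy as np

    def dfa_exponent(x, scales):
        """Plain detrended fluctuation analysis: returns the scaling
        exponent from a log-log fit of F(s) versus window size s."""
        y = np.cumsum(x - np.mean(x))          # integrated profile
        F = []
        for s in scales:
            n_seg = len(y) // s
            rms = []
            for k in range(n_seg):
                seg = y[k * s:(k + 1) * s]
                t = np.arange(s)
                coef = np.polyfit(t, seg, 1)   # linear detrending per window
                rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t))**2)))
            F.append(np.mean(rms))
        slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
        return slope

    rng = np.random.default_rng(1)
    wn = rng.normal(size=4096)                 # uncorrelated "returns"
    scales = [16, 32, 64, 128, 256]
    h = dfa_exponent(wn, scales)
    print(round(h, 2))
    ```

    Persistent series give exponents above 0.5 and anti-persistent series below 0.5, which is the distinction the abstract draws between small and large time scales.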

  16. Nonlinearly perturbed semi-Markov processes

    CERN Document Server

    Silvestrov, Dmitrii

    2017-01-01

    The book presents new methods of asymptotic analysis for nonlinearly perturbed semi-Markov processes with a finite phase space. These methods are based on special time-space screening procedures for sequential phase space reduction of semi-Markov processes combined with the systematical use of operational calculus for Laurent asymptotic expansions. Effective recurrent algorithms are composed for getting asymptotic expansions, without and with explicit upper bounds for remainders, for power moments of hitting times, stationary and conditional quasi-stationary distributions for nonlinearly perturbed semi-Markov processes. These results are illustrated by asymptotic expansions for birth-death-type semi-Markov processes, which play an important role in various applications. The book will be a useful contribution to the continuing intensive studies in the area. It is an essential reference for theoretical and applied researchers in the field of stochastic processes and their applications that will cont...

  17. Empirical methods for estimating future climatic conditions

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    Applying the empirical approach permits the derivation of estimates of the future climate that are nearly independent of conclusions based on theoretical (model) estimates. This creates an opportunity to compare these results with those derived from model simulations of forthcoming changes in climate, thus increasing confidence in areas of agreement and focusing research attention on areas of disagreement. The premise underlying this approach for predicting anthropogenic climate change is based on associating the conditions of the climatic optimums of the Holocene, Eemian, and Pliocene with corresponding stages of the projected increase of mean global surface air temperature. Provided that certain assumptions are fulfilled in matching the value of the increased mean temperature for a certain epoch with the model-projected change in global mean temperature in the future, the empirical approach suggests that relationships leading to the regional variations in air temperature and other meteorological elements could be deduced and interpreted based on empirical data describing climatic conditions for past warm epochs. Considerable care must be taken, of course, in making use of these spatial relationships, especially in accounting for possible large-scale differences that might, in some cases, result from different factors contributing to past climate changes than to future changes and, in other cases, from the possible influences of changes in orography and geography on regional climatic conditions over time.

  18. An Empirical Review of Research Methodologies and Methods in Creativity Studies (2003-2012)

    Science.gov (United States)

    Long, Haiying

    2014-01-01

    Based on the data collected from 5 prestigious creativity journals, research methodologies and methods of 612 empirical studies on creativity, published between 2003 and 2012, were reviewed and compared to those in gifted education. Major findings included: (a) Creativity research was predominantly quantitative and psychometrics and experiment…

  19. Semi-empirical procedures for correcting detector size effect on clinical MV x-ray beam profiles

    International Nuclear Information System (INIS)

    Sahoo, Narayan; Kazi, Abdul M.; Hoffman, Mark

    2008-01-01

    The measured radiation beam profiles need to be corrected for the detector size effect to derive the real profiles. This paper describes two new semi-empirical procedures to determine the real profiles of high-energy x-ray beams by removing the detector size effect from the measured profiles. Measured profiles are corrected by shifting the position of each measurement point by a specific amount determined from available theoretical and experimental knowledge in the literature. The authors developed two procedures to determine the amount of shift. In the first procedure, which employs the published analytical deconvolution procedure of other investigators, the shift is determined from the comparison of the analytical fit of the measured profile and the corresponding analytical real profile derived from the deconvolution of the fitted measured profile and the Gaussian detector response function. In the second procedure, the amount of shift at any measurement point is considered to be proportional to the value of an analytical function related to the second derivative of the real profile at that point. The constant of proportionality and a parameter in the function are obtained from the values of the shifts at the 90%, 80%, 20%, and 10% dose levels, which are experimentally known from the published results of other investigators to be approximately equal to half of the radius of the detector. These procedures were tested by correcting the profiles of 6 and 18 MV x-ray beams measured by three different ionization chambers and a stereotactic field diode detector with 2.75, 2, 1, and 0.3 mm radii of their respective active cylindrical volumes. The corrected profiles measured by different detectors are found to be in close agreement. The detector size corrected penumbra widths also agree with the expected values based on the results of an earlier investigation. Thus, the authors concluded that the proposed procedures are accurate and can be used to derive the real
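    Both procedures start from the fact that the measured profile is (approximately) the real profile convolved with the detector response. The sketch below, using a made-up error-function penumbra and a Gaussian detector response, illustrates the blurring that the shift-based corrections are designed to undo; it is an illustration of the problem, not the authors' correction procedure.

    ```python
    import numpy as np
    from math import erf

    # Model a beam-profile penumbra as an error function; the measured
    # profile is its convolution with a Gaussian detector response.
    x = np.linspace(-20, 20, 4001)                               # mm
    true = 0.5 * (1 + np.array([erf(xi / 2.0) for xi in x]))     # real profile

    sigma = 1.5                         # mm, assumed detector response width
    kern = np.exp(-0.5 * (x / sigma)**2)
    kern /= kern.sum()
    measured = np.convolve(true, kern, mode="same")

    def width_80_20(profile):
        """Penumbra width between the 80% and 20% dose points."""
        x80 = x[np.argmin(np.abs(profile - 0.8))]
        x20 = x[np.argmin(np.abs(profile - 0.2))]
        return abs(x80 - x20)

    # the finite detector size broadens the measured penumbra
    print(width_80_20(measured) > width_80_20(true))  # True
    ```

    The correction procedures in the paper shift each measured point back toward the field edge, by about half the detector radius near the 90%/80%/20%/10% dose levels, to recover the narrower real penumbra.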

  20. Bioactive conformational generation of small molecules: A comparative analysis between force-field and multiple empirical criteria based methods

    Directory of Open Access Journals (Sweden)

    Jiang Hualiang

    2010-11-01

    Full Text Available Abstract Background Conformational sampling for small molecules plays an essential role in the drug discovery research pipeline. Based on a multi-objective evolution algorithm (MOEA), we developed a conformational generation method called Cyndi in a previous study. In this work, in addition to the Tripos force field of the previous version, Cyndi was updated by incorporating the MMFF94 force field to assess the conformational energy more rationally. With two force fields and a larger dataset of 742 bioactive conformations of small ligands extracted from the PDB, a comparative analysis was performed between the pure force-field-based method (FFBM) and the multiple-empirical-criteria-based method (MECBM) hybridized with different force fields. Results Our analysis reveals that incorporating multiple empirical rules can significantly improve the accuracy of conformational generation. MECBM, which takes both empirical and force field criteria as the objective functions, can reproduce about 54% (within 1 Å RMSD) of the bioactive conformations in the 742-molecule test set, much higher than the pure force field method (FFBM, about 37%). On the other hand, MECBM achieved a more complete and efficient sampling of the conformational space because the average size of the unique conformation ensemble per molecule is about 6 times larger than that of FFBM, while the time scale for conformational generation is nearly the same as FFBM. Furthermore, as a complementary comparison between the methods with and without empirical biases, we also tested the performance of the three conformational generation methods in MacroModel in combination with different force fields. Compared with the methods in MacroModel, MECBM is more competitive in retrieving the bioactive conformations in terms of accuracy but has a much lower computational cost.
Conclusions By incorporating different energy terms with several empirical criteria, the MECBM method can produce more reasonable conformational

  1. Two-dimensional semi-analytic nodal method for multigroup pin power reconstruction

    International Nuclear Information System (INIS)

    Seung Gyou, Baek; Han Gyu, Joo; Un Chul, Lee

    2007-01-01

    A pin power reconstruction method applicable to multigroup problems involving square fuel assemblies is presented. The method is based on a two-dimensional semi-analytic nodal solution which consists of eight exponential terms and 13 polynomial terms. The 13 polynomial terms represent the particular solution obtained under the condition of a two-dimensional 13-term source expansion. In order to achieve a better approximation of the source distribution, the least-squares fitting method is employed. The 8 exponential terms represent a part of the analytically obtained homogeneous solution, and the 8 coefficients are determined by imposing constraints on the 4 surface average currents and 4 corner point fluxes. The surface average currents determined from a transverse-integrated nodal solution are used directly, whereas the corner point fluxes are determined during the course of the reconstruction by employing an iterative scheme that realizes the corner point balance condition. The outgoing-current-based corner point flux determination scheme is newly introduced. The accuracy of the proposed method is demonstrated with the L336C5 benchmark problem. (authors)

  2. Empirical Hamiltonians

    International Nuclear Information System (INIS)

    Peggs, S.; Talman, R.

    1987-01-01

    As proton accelerators get larger, and include more magnets, the conventional tracking programs which simulate them run slower. The purpose of this paper is to describe a method, still under development, in which element-by-element tracking around one turn is replaced by a single map, which can be processed far faster. It is assumed for this method that a conventional program exists which can perform faithful tracking in the lattice under study for some hundreds of turns, with all lattice parameters held constant. An empirical map is then generated by comparison with the tracking program. A procedure has been outlined for determining an empirical Hamiltonian, which can represent motion through many nonlinear kicks, by taking data from a conventional tracking program. Though derived by an approximate method, this Hamiltonian is analytic in form and can be subjected to further analysis of varying degrees of mathematical rigor. Even though the empirical procedure has only been described in one transverse dimension, there is good reason to hope that it can be extended to include two transverse dimensions, so that it can become a more practical tool in realistic cases.

  3. Semi-empirical correlation for binary interaction parameters of the Peng–Robinson equation of state with the van der Waals mixing rules for the prediction of high-pressure vapor–liquid equilibrium

    Directory of Open Access Journals (Sweden)

    Seif-Eddeen K. Fateen

    2013-03-01

    Full Text Available Peng–Robinson equation of state is widely used with the classical van der Waals mixing rules to predict vapor liquid equilibria for systems containing hydrocarbons and related compounds. This model requires good values of the binary interaction parameter kij. In this work, we developed a semi-empirical correlation for kij partly based on the Huron–Vidal mixing rules. We obtained values for the adjustable parameters of the developed formula for over 60 binary systems and over 10 categories of components. The predictions of the new equation system were slightly better than the constant-kij model in most cases, except for 10 systems whose predictions were considerably improved with the new correlation.

  4. Band structure calculation of GaSe-based nanostructures using empirical pseudopotential method

    International Nuclear Information System (INIS)

    Osadchy, A V; Obraztsova, E D; Volotovskiy, S G; Golovashkin, D L; Savin, V V

    2016-01-01

    In this paper we present the results of band structure computer simulation of GaSe-based nanostructures using the empirical pseudopotential method. Calculations were performed using specially developed software that supports cluster computing. Application of this method significantly reduces the demands on computing resources compared to traditional approaches based on ab initio techniques while yielding adequate, comparable results. The use of cluster computing makes it possible to obtain information for structures that require an explicit account of a significant number of atoms, such as quantum dots and quantum pillars. (paper)

  5. Empirical source strength correlations for rans-based acoustic analogy methods

    Science.gov (United States)

    Kube-McDowell, Matthew Tyndall

    JeNo is a jet noise prediction code based on an acoustic analogy method developed by Mani, Gliebe, Balsa, and Khavaran. Using the flow predictions from a standard Reynolds-averaged Navier-Stokes computational fluid dynamics solver, JeNo predicts the overall sound pressure level and angular spectra for high-speed hot jets over a range of observer angles, with a processing time suitable for rapid design purposes. JeNo models the noise from hot jets as a combination of two types of noise sources; quadrupole sources dependent on velocity fluctuations, which represent the major noise of turbulent mixing, and dipole sources dependent on enthalpy fluctuations, which represent the effects of thermal variation. These two sources are modeled by JeNo as propagating independently into the far-field, with no cross-correlation at the observer location. However, high-fidelity computational fluid dynamics solutions demonstrate that this assumption is false. In this thesis, the theory, assumptions, and limitations of the JeNo code are briefly discussed, and a modification to the acoustic analogy method is proposed in which the cross-correlation of the two primary noise sources is allowed to vary with the speed of the jet and the observer location. As a proof-of-concept implementation, an empirical correlation correction function is derived from comparisons between JeNo's noise predictions and a set of experimental measurements taken for the Air Force Aero-Propulsion Laboratory. The empirical correlation correction is then applied to JeNo's predictions of a separate data set of hot jets tested at NASA's Glenn Research Center. Metrics are derived to measure the qualitative and quantitative performance of JeNo's acoustic predictions, and the empirical correction is shown to provide a quantitative improvement in the noise prediction at low observer angles with no freestream flow, and a qualitative improvement in the presence of freestream flow. However, the results also demonstrate

  6. An Empirical Investigation of Strategic Planning in QS Practices

    OpenAIRE

    Murphy, Roisin

    2012-01-01

    The benefit of engaging in strategic planning has been well documented over several decades of strategic management research. Despite the significant body of existing knowledge in the field, there remains a limited collection of empirically tested research pertaining to strategic planning within professional service firms (PSFs) in construction, particularly from an Irish context. The research is an exploratory study involving in-depth, semi-structured interviews and a widespread survey of...

  7. Semi-automatic watershed medical image segmentation methods for customized cancer radiation treatment planning simulation

    International Nuclear Information System (INIS)

    Kum Oyeon; Kim Hye Kyung; Max, N.

    2007-01-01

    A cancer radiation treatment planning simulation requires image segmentation to define the gross tumor volume, clinical target volume, and planning target volume. Manual segmentation, which is usual in clinical settings, depends on the operator's experience and may, in addition, change for every trial by the same operator. To overcome this difficulty, we developed semi-automatic watershed medical image segmentation tools using both the top-down watershed algorithm in the Insight Segmentation and Registration Toolkit (ITK) and Vincent-Soille's bottom-up watershed algorithm with region merging. We applied our algorithms to segment two- and three-dimensional head phantom CT data and to find the pixel (or voxel) counts for each segmented area, which are needed for radiation treatment optimization. A semi-automatic method is useful to avoid errors incurred by both human and machine sources, and it provides clear and visible information for pedagogical purposes. (orig.)

  8. Geometry optimization method versus predictive ability in QSPR modeling for ionic liquids

    Science.gov (United States)

    Rybinska, Anna; Sosnowska, Anita; Barycki, Maciej; Puzyn, Tomasz

    2016-02-01

    Computational techniques, such as Quantitative Structure-Property Relationship (QSPR) modeling, are very useful in predicting physicochemical properties of various chemicals. Building QSPR models requires calculating molecular descriptors and a proper choice of the geometry optimization method, suited to the specific structure of the compounds under study. Herein, we examine the influence of the geometry optimization method for ionic liquids (ILs) on the predictive ability of QSPR models by comparing three models. The models were developed from the same experimental density data collected for 66 ionic liquids, but employed molecular descriptors calculated from molecular geometries optimized at three different levels of theory, namely: (1) semi-empirical (PM7), (2) ab initio (HF/6-311+G*) and (3) density functional theory (B3LYP/6-311+G*). The model in which the descriptors were calculated using the ab initio HF/6-311+G* method showed the best predictive ability (Q^2_EXT = 0.87). However, the PM7-based model has comparable quality parameters (Q^2_EXT = 0.84). The results indicate that semi-empirical methods (faster and less expensive in CPU time) can be successfully employed for geometry optimization in QSPR studies of ionic liquids.

  9. Detection of fungi by conventional methods and semi-nested PCR in patients with presumed fungal keratitis.

    Science.gov (United States)

    Haghani, I; Amirinia, F; Nowroozpoor-Dailami, K; Shokohi, T

    2015-06-01

    Fungal keratitis is a suppurative, ulcerative, and sight-threatening infection of the cornea that sometimes leads to blindness. The aims of this study were: improving the facilities for laboratory diagnosis, determining the causative microorganisms, and comparing conventional laboratory diagnostic tools with semi-nested PCR. Sampling was conducted in patients with suspected fungal keratitis. Two corneal scraping specimens were obtained, one for direct smear and culture and the other for semi-nested PCR. Of the 40 suspected cases of mycotic keratitis, calcofluor white staining was positive in 25%, culture in 17.5%, KOH in 10%, and semi-nested PCR in 27.5%. The sensitivities of semi-nested PCR, KOH, and CFW were 57.1%, 28.5%, and 42%, while the specificities were 78.7%, 94%, and 78.7%, respectively. The time taken for the PCR assay was 4 to 8 hours, whereas positive fungal cultures took at least 5 to 7 days. Given the increasing incidence of fungal infections in people with weakened immune systems, the uninformed use of topical corticosteroids, and the improper use of contact lenses, fast diagnosis and accurate treatment of keratomycosis are essential. Therefore, according to the current study, molecular methods can detect mycotic keratitis early and correctly, leading to appropriate treatment.
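    Sensitivity and specificity figures like those above come directly from confusion-matrix counts. The counts below are hypothetical, chosen only so that the arithmetic reproduces the semi-nested PCR percentages reported in the abstract; the study's actual 2x2 table is not given here.

    ```python
    def sens_spec(tp, fn, tn, fp):
        """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return sensitivity, specificity

    # hypothetical counts: 4 of 7 reference-positive cases detected,
    # 37 of 47 reference-negative cases correctly called negative
    se, sp = sens_spec(tp=4, fn=3, tn=37, fp=10)
    print(round(se * 100, 1), round(sp * 100, 1))  # 57.1 78.7
    ```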

  10. Empirical data and moral theory. A plea for integrated empirical ethics.

    Science.gov (United States)

    Molewijk, Bert; Stiggelbout, Anne M; Otten, Wilma; Dupuis, Heleen M; Kievit, Job

    2004-01-01

    Ethicists differ considerably in their reasons for using empirical data. This paper presents a brief overview of four traditional approaches to the use of empirical data: "the prescriptive applied ethicists," "the theorists," "the critical applied ethicists," and "the particularists." The main aim of this paper is to introduce a fifth approach of more recent date (i.e., "integrated empirical ethics") and to offer some methodological directives for research in integrated empirical ethics. All five approaches are presented in a table for heuristic purposes. The table consists of eight columns: "view on the distinction between descriptive and prescriptive sciences," "location of moral authority," "central goal(s)," "types of normativity," "use of empirical data," "method," "interaction of empirical data and moral theory," and "cooperation with descriptive sciences." Ethicists can use the table to identify their own approach. Reflection on these issues prior to starting research in empirical ethics should lead to harmonization of the different scientific disciplines and effective planning of the final research design. Integrated empirical ethics (IEE) refers to studies in which ethicists and descriptive scientists cooperate continuously and intensively. Both disciplines try to integrate moral theory and empirical data in order to reach a normative conclusion with respect to a specific social practice. IEE is not wholly prescriptive or wholly descriptive since IEE assumes an interdependence between facts and values and between the empirical and the normative. The paper ends with three suggestions for consideration on some of the future challenges of integrated empirical ethics.

  11. Feasibility of a semi-automated method for cardiac conduction velocity analysis of high-resolution activation maps

    NARCIS (Netherlands)

    Doshi, Ashish N.; Walton, Richard D.; Krul, Sébastien P.; de Groot, Joris R.; Bernus, Olivier; Efimov, Igor R.; Boukens, Bastiaan J.; Coronel, Ruben

    2015-01-01

    Myocardial conduction velocity is important for the genesis of arrhythmias. In the normal heart, conduction is primarily dependent on fiber direction (anisotropy) and may be discontinuous at sites with tissue heterogeneities (trabeculated or fibrotic tissue). We present a semi-automated method for

  12. A Semi-Discrete Landweber-Kaczmarz Method for Cone Beam Tomography and Laminography Exploiting Geometric Prior Information

    Science.gov (United States)

    Vogelgesang, Jonas; Schorr, Christian

    2016-12-01

    We present a semi-discrete Landweber-Kaczmarz method for solving linear ill-posed problems and its application to Cone Beam tomography and laminography. Using a basis function-type discretization in the image domain, we derive a semi-discrete model of the underlying scanning system. Based on this model, the proposed method provides an approximate solution of the reconstruction problem, i.e. reconstructing the density function of a given object from its projections, in suitable subspaces equipped with basis function-dependent weights. This approach intuitively allows the incorporation of additional information about the inspected object leading to a more accurate model of the X-rays through the object. Also, physical conditions of the scanning geometry, like flat detectors in computerized tomography as used in non-destructive testing applications as well as non-regular scanning curves e.g. appearing in computed laminography (CL) applications, are directly taken into account during the modeling process. Finally, numerical experiments of a typical CL application in three dimensions are provided to verify the proposed method. The introduction of geometric prior information leads to a significantly increased image quality and superior reconstructions compared to standard iterative methods.
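
The classical Landweber iteration underlying this family of methods is a gradient-type fixed-point scheme, x_{k+1} = x_k + ω Aᵀ(b − A x_k), convergent for 0 < ω < 2/‖A‖². A minimal NumPy sketch of plain Landweber (illustrative only — not the paper's semi-discrete Kaczmarz variant with basis-function-dependent weights):

```python
import numpy as np

def landweber(A, b, omega, iters):
    """Classical Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k).
    Converges for 0 < omega < 2 / ||A||^2 on consistent systems."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + omega * A.T @ (b - A @ x)
    return x

# Tiny consistent system: the iterates approach the true solution.
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([1.0, -0.5])
b = A @ x_true
omega = 1.0 / np.linalg.norm(A, 2) ** 2   # safely inside the stability range
x = landweber(A, b, omega, 2000)
```

The Kaczmarz flavor used in the paper cycles over rows or row blocks of A instead of applying the full operator at once, which is what allows geometric prior information to enter block by block.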

  13. Semi-implicit and fully implicit shock-capturing methods for hyperbolic conservation laws with stiff source terms

    International Nuclear Information System (INIS)

    Yee, H.C.; Shinn, J.L.

    1986-12-01

    Some numerical aspects of finite-difference algorithms for nonlinear multidimensional hyperbolic conservation laws with stiff nonhomogeneous (source) terms are discussed. If the stiffness is entirely dominated by the source term, a semi-implicit shock-capturing method is proposed provided that the Jacobian of the source terms possesses certain properties. The proposed semi-implicit method can be viewed as a variant of the Bussing and Murman point-implicit scheme with a more appropriate numerical dissipation for the computation of strong shock waves. However, if the stiffness is not solely dominated by the source terms, a fully implicit method would be a better choice. The situation is complicated by problems in more than one dimension, and the presence of stiff source terms further complicates the solution procedures for alternating direction implicit (ADI) methods. Several alternatives are discussed. The primary motivation for constructing these schemes was to address thermally and chemically nonequilibrium flows in the hypersonic regime. Due to the unique structure of the eigenvalues and eigenvectors for fluid flows of this type, the computation can be simplified, thus providing a more efficient solution procedure than one might have anticipated.
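
The point-implicit idea can be illustrated on a scalar model problem: the stiff source s(u) is linearized about the current state, so each update solves (1 − Δt ∂s/∂u) Δu = Δt s(u). The sketch below is a generic linearized backward-Euler step, not the Bussing–Murman scheme itself:

```python
def point_implicit_step(u, dt, s, ds_du):
    """One point-implicit (linearized backward Euler) step for du/dt = s(u):
    solve (1 - dt * s'(u)) * du = dt * s(u), then return u + du."""
    return u + dt * s(u) / (1.0 - dt * ds_du(u))

# Stiff linear decay s(u) = -k*u with dt*k = 100, far beyond the explicit
# Euler stability limit dt*k < 2; the point-implicit step stays bounded.
k = 1.0e4
s = lambda u: -k * u
ds = lambda u: -k
dt = 1.0e-2
u = 1.0
for _ in range(10):
    u = point_implicit_step(u, dt, s, ds)
```

For this linear source the step reduces exactly to backward Euler, u_{n+1} = u_n / (1 + dt·k), so the solution decays monotonically instead of blowing up.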

  14. Semi-implicit and fully implicit shock-capturing methods for hyperbolic conservation laws with stiff source terms

    International Nuclear Information System (INIS)

    Yee, H.C.; Shinn, J.L.

    1987-01-01

    Some numerical aspects of finite-difference algorithms for nonlinear multidimensional hyperbolic conservation laws with stiff nonhomogeneous (source) terms are discussed. If the stiffness is entirely dominated by the source term, a semi-implicit shock-capturing method is proposed provided that the Jacobian of the source terms possesses certain properties. The proposed semi-implicit method can be viewed as a variant of the Bussing and Murman point-implicit scheme with a more appropriate numerical dissipation for the computation of strong shock waves. However, if the stiffness is not solely dominated by the source terms, a fully implicit method would be a better choice. The situation is complicated by problems that are higher than one dimension, and the presence of stiff source terms further complicates the solution procedures for alternating direction implicit (ADI) methods. Several alternatives are discussed. The primary motivation for constructing these schemes was to address thermally and chemically nonequilibrium flows in the hypersonic regime. Due to the unique structure of the eigenvalues and eigenvectors for fluid flows of this type, the computation can be simplified, thus providing a more efficient solution procedure than one might have anticipated. 46 references

  15. Robust fluence map optimization via alternating direction method of multipliers with empirical parameter optimization

    International Nuclear Information System (INIS)

    Gao, Hao

    2016-01-01

    For treatment planning in intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT), beam fluence maps can first be optimized via fluence map optimization (FMO) under the given dose prescriptions and constraints to conformally deliver the radiation dose to the targets while sparing the organs-at-risk, and then segmented into deliverable MLC apertures via leaf or arc sequencing algorithms. This work develops an efficient algorithm for FMO based on the alternating direction method of multipliers (ADMM). Here we consider FMO with a least-square cost function and non-negative fluence constraints, and its solution algorithm is based on ADMM, which is efficient and simple to implement. In addition, an empirical method for optimizing the ADMM parameter is developed to improve the robustness of the ADMM algorithm. The ADMM-based FMO solver was benchmarked against the quadratic programming approach based on the interior-point (IP) method using the CORT dataset. The comparison results suggested that the ADMM solver achieved similar plan quality with a slightly smaller total objective function value than IP. A simple-to-implement ADMM-based FMO solver with empirical parameter optimization is proposed for IMRT or VMAT. (paper)
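
The ADMM splitting described — a least-squares cost with a non-negativity constraint — can be sketched on a toy problem. This is an illustrative implementation with a fixed penalty ρ, not the paper's solver or its empirical parameter optimization:

```python
import numpy as np

def admm_nnls(A, b, rho=1.0, iters=500):
    """ADMM for min ||A x - b||^2 subject to x >= 0, via the splitting x = z:
      x-step: solve (A^T A + rho I) x = A^T b + rho (z - u)
      z-step: project x + u onto the non-negative orthant
      u-step: scaled dual update u += x - z."""
    n = A.shape[1]
    L = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    x = z = u = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(L, Atb + rho * (z - u))
        z = np.maximum(0.0, x + u)
        u = u + x - z
    return z

# The unconstrained least-squares solution of this system has a negative
# second component; the non-negativity constraint clips it to zero.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, -1.0, 0.5])
x_nn = admm_nnls(A, b)
```

Note that the x-step matrix (AᵀA + ρI) can be factorized once and reused across iterations, which is part of what makes ADMM attractive for large fluence maps.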

  16. Variable discrete ordinates method for radiation transfer in plane-parallel semi-transparent media with variable refractive index

    Science.gov (United States)

    Sarvari, S. M. Hosseini

    2017-09-01

    The traditional form of the discrete ordinates method is applied to solve the radiative transfer equation in plane-parallel semi-transparent media with variable refractive index, using variable discrete ordinate directions and the concept of refracted radiative intensity. The refractive index is taken as constant in each control volume, such that the direction cosines of radiative rays remain unchanged within each control volume; the directions of the discrete ordinates are then changed locally as rays pass from one control volume to the next, according to Snell's law of refraction. The results are compared with previous studies in this field. Despite its simplicity, the variable discrete ordinate method shows good accuracy in solving the radiative transfer equation in semi-transparent media with an arbitrary distribution of refractive index.
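
The local direction update at a control-volume interface follows directly from Snell's law, n₁ sin θ₁ = n₂ sin θ₂. A minimal sketch for a direction cosine measured from the slab normal (illustrative only, not the paper's full variable-ordinates implementation):

```python
import numpy as np

def refract_direction(mu, n1, n2):
    """Update a discrete-ordinate direction cosine across an interface using
    Snell's law n1*sin(t1) = n2*sin(t2); mu = cos(theta) w.r.t. the normal.
    Returns None on total internal reflection."""
    sin1 = np.sqrt(1.0 - mu * mu)
    sin2 = n1 * sin1 / n2
    if sin2 > 1.0:
        return None                      # total internal reflection
    return np.sign(mu) * np.sqrt(1.0 - sin2 * sin2)

mu_in = np.cos(np.radians(30.0))         # ray at 30 deg to the slab normal
mu_out = refract_direction(mu_in, 1.0, 1.5)   # entering a denser layer
```

Entering the denser medium bends the ray toward the normal (|μ| increases); with equal indices the direction cosine is unchanged, which is the consistency check for the scheme.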

  17. Cancer survival analysis using semi-supervised learning method based on Cox and AFT models with L1/2 regularization.

    Science.gov (United States)

    Liang, Yong; Chai, Hua; Liu, Xiao-Ying; Xu, Zong-Ben; Zhang, Hai; Leung, Kwong-Sak

    2016-03-01

    One of the most important objectives of clinical cancer research is to diagnose cancer more accurately based on patients' gene expression profiles. Both the Cox proportional hazards model (Cox) and the accelerated failure time model (AFT) have been widely adopted for high-risk and low-risk classification or survival time prediction in patients' clinical treatment. Nevertheless, two main dilemmas limit the accuracy of these prediction methods. One is that the small sample size and censored data remain a bottleneck for training robust and accurate Cox classification models. In addition, tumours with similar phenotypes and prognoses are actually completely different diseases at the genotype and molecular level; thus, the utility of the AFT model for survival time prediction is limited when such biological differences of the diseases have not been previously identified. To overcome these two main dilemmas, we proposed a novel semi-supervised learning method based on the Cox and AFT models to accurately predict the treatment risk and the survival time of the patients. Moreover, we adopted the efficient L1/2 regularization approach in the semi-supervised learning method to select the relevant genes, which are significantly associated with the disease. The results of the simulation experiments show that the semi-supervised learning model can significantly improve the predictive performance of the Cox and AFT models in survival analysis. The proposed procedures have been successfully applied to four real microarray gene expression and artificial evaluation datasets. The advantages of our proposed semi-supervised learning method include: 1) significantly increasing the available training samples from censored data; 2) high capability for identifying the survival risk classes of patients in the Cox model; 3) high predictive accuracy for patients' survival time in the AFT model; 4) strong capability for relevant biomarker selection. Consequently, our proposed semi

  18. Modeling of the phase equilibria of polystyrene in methylcyclohexane with semi-empirical quantum mechanical methods I

    DEFF Research Database (Denmark)

    Wilczura-Wachnik, H.; Jonsdottir, Svava Osk

    2003-01-01

    for the repeating unit of the polymer, the intermolecular interaction energies between the solvent molecule and the polymer were simulated. The semiempirical quantum mechanical method AM1, and a method for sampling relevant internal orientations for a pair of molecules developed previously were used. Interaction...

  19. Method for semi-automated microscopy of filtration-enriched circulating tumor cells.

    Science.gov (United States)

    Pailler, Emma; Oulhen, Marianne; Billiot, Fanny; Galland, Alexandre; Auger, Nathalie; Faugeroux, Vincent; Laplace-Builhé, Corinne; Besse, Benjamin; Loriot, Yohann; Ngo-Camus, Maud; Hemanda, Merouan; Lindsay, Colin R; Soria, Jean-Charles; Vielh, Philippe; Farace, Françoise

    2016-07-14

    Circulating tumor cell (CTC)-filtration methods capture high numbers of CTCs in non-small-cell lung cancer (NSCLC) and metastatic prostate cancer (mPCa) patients, and hold promise as a non-invasive technique for treatment selection and disease monitoring. However, filters have drawbacks that make the automation of microscopy challenging. We report the semi-automated microscopy method we developed to analyze filtration-enriched CTCs from NSCLC and mPCa patients. Spiked cell lines in normal blood and CTCs were enriched by ISET (isolation by size of epithelial tumor cells). Fluorescent staining was carried out using epithelial (pan-cytokeratins, EpCAM), mesenchymal (vimentin, N-cadherin) and leukocyte (CD45) markers and DAPI. Cytomorphological staining was carried out with Mayer-Hemalun or Diff-Quik. ALK-, ROS1- and ERG-rearrangements were detected by filter-adapted FISH (FA-FISH). Microscopy was carried out using an Ariol scanner. Two combined assays were developed. The first assay sequentially combined four-color fluorescent staining, scanning, automated selection of CD45(-) cells, cytomorphological staining, then scanning and analysis of CD45(-) cell phenotypical and cytomorphological characteristics. CD45(-) cell selection was based on DAPI and CD45 intensity, and a nuclear area >55 μm². The second assay sequentially combined fluorescent staining, automated selection of CD45(-) cells, FISH scanning on CD45(-) cells, then analysis of CD45(-) cell FISH signals. Specific scanning parameters were developed to deal with the uneven surface of filters and CTC characteristics. Thirty z-stacks spaced 0.6 μm apart were defined as the optimal setting, scanning 82%, 91%, and 95% of CTCs in ALK-, ROS1-, and ERG-rearranged patients respectively. A multi-exposure protocol consisting of three separate exposure times for green and red fluorochromes was optimized to analyze the intensity, size and thickness of FISH signals. The semi-automated microscopy method reported here

  20. Empirical research in medical ethics: How conceptual accounts on normative-empirical collaboration may improve research practice

    Science.gov (United States)

    2012-01-01

    Background The methodology of medical ethics during the last few decades has shifted from a predominant use of normative-philosophical analyses to an increasing involvement of empirical methods. The articles which have been published in the course of this so-called 'empirical turn' can be divided into conceptual accounts of empirical-normative collaboration and studies which use socio-empirical methods to investigate ethically relevant issues in concrete social contexts. Discussion A considered reference to normative research questions can be expected from good quality empirical research in medical ethics. However, a significant proportion of empirical studies currently published in medical ethics lacks such linkage between the empirical research and the normative analysis. In the first part of this paper, we will outline two typical shortcomings of empirical studies in medical ethics with regard to a link between normative questions and empirical data: (1) The complete lack of normative analysis, and (2) cryptonormativity and a missing account with regard to the relationship between 'is' and 'ought' statements. Subsequently, two selected concepts of empirical-normative collaboration will be presented and how these concepts may contribute to improve the linkage between normative and empirical aspects of empirical research in medical ethics will be demonstrated. Based on our analysis, as well as our own practical experience with empirical research in medical ethics, we conclude with a sketch of concrete suggestions for the conduct of empirical research in medical ethics. Summary High quality empirical research in medical ethics is in need of a considered reference to normative analysis. In this paper, we demonstrate how conceptual approaches of empirical-normative collaboration can enhance empirical research in medical ethics with regard to the link between empirical research and normative analysis. PMID:22500496

  1. Evaluation of binding energies by using quantum mechanical methods

    International Nuclear Information System (INIS)

    Postolache, Cristian; Matei, Lidia; Postolache, Carmen

    2002-01-01

    Evaluation of binding energies (BE) in molecular structures is needed for modelling chemical and radiochemical processes by quantum-chemical methods. An important field of application is the evaluation of the radiolysis and autoradiolysis stability of organic and inorganic compounds as well as macromolecular structures. The current methods of calculation do not allow direct determination of BEs but only of total binding energies (TBE) and enthalpies. BEs were therefore evaluated indirectly by determining the homolytic dissociation energies. The molecular structures were built and geometrically optimized by the molecular mechanics methods MM+ and AMBER. The energy minimizations were refined by semi-empirical methods; depending on the chosen molecular structure, the CNDO, INDO, PM3 and AM1 methods were used. To reach a high confidence level, the minimizations were done for gradients lower than 10^-3 RMS. The energy values obtained as the differences between the TBEs of the fragments, the transition states and the initial molecular structures were associated with the homolytic fragmentation energies and BEs, respectively. In order to evaluate the method's accuracy and to establish the application fields of the evaluation methods, the obtained BE values were compared with experimental data taken from the literature. To this end, BEs were evaluated for 74 organic and inorganic compounds (alkanes, alkenes, alkynes, halogenated derivatives, alcohols, aldehydes, ketones, carboxylic acids, nitrogen and sulfur compounds, water, hydrogen peroxide, ammonia, hydrazine, etc.) built and geometrically optimized by semi-empirical methods. (authors)

  2. Application of Dynamic Analysis in Semi-Analytical Finite Element Method.

    Science.gov (United States)

    Liu, Pengfei; Xing, Qinyan; Wang, Dawei; Oeser, Markus

    2017-08-30

    Analyses of dynamic responses are significantly important for the design, maintenance and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional and only requires a two-dimensional FE discretization by incorporating Fourier series in the third dimension. In this paper, the algorithm to apply the dynamic analysis to SAFEM was introduced in detail. Asphalt pavement models under moving loads were built in SAFEM and in the commercial finite element software ABAQUS to verify the accuracy and efficiency of SAFEM. The verification shows that the computational accuracy of SAFEM is sufficiently high and its computational time is much shorter than that of ABAQUS. Moreover, experimental verification was carried out, and the prediction derived from SAFEM is consistent with the measurement. Therefore, SAFEM can reliably predict the dynamic response of asphalt pavement under moving loads, thus proving beneficial to road administrations in assessing the pavement's state.

  3. Validation of five minimally obstructive methods to estimate physical activity energy expenditure in young adults in semi-standardized settings

    DEFF Research Database (Denmark)

    Schneller, Mikkel Bo; Pedersen, Mogens Theisen; Gupta, Nidhi

    2015-01-01

    We compared the accuracy of five objective methods, including two newly developed methods combining accelerometry and activity type recognition (Acti4), against indirect calorimetry, to estimate total energy expenditure (EE) of different activities in semi-standardized settings. Fourteen particip...

  4. Semi-classical signal analysis

    KAUST Repository

    Laleg-Kirati, Taous-Meriem; Crépeau, Emmanuelle; Sorine, Michel

    2012-01-01

    This study introduces a new signal analysis method, based on a semi-classical approach. The main idea in this method is to interpret a pulse-shaped signal as a potential of a Schrödinger operator and then to use the discrete spectrum

  5. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    Science.gov (United States)

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

    The main purpose of microarray studies is the screening of differentially expressed genes as candidates for further investigation. Because of limited resources at this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates the differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric prior distribution, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression. Copyright © 2012 John Wiley & Sons, Ltd.

  6. Semi-Automatic Rating Method for Neutrophil Alkaline Phosphatase Activity.

    Science.gov (United States)

    Sugano, Kanae; Hashi, Kotomi; Goto, Misaki; Nishi, Kiyotaka; Maeda, Rie; Kono, Keigo; Yamamoto, Mai; Okada, Kazunori; Kaga, Sanae; Miwa, Keiko; Mikami, Taisei; Masauzi, Nobuo

    2017-01-01

    The neutrophil alkaline phosphatase (NAP) score is a valuable test for the diagnosis of myeloproliferative neoplasms, but it is still rated manually. We therefore developed a semi-automatic rating method using Photoshop® and Image-J, called NAP-PS-IJ. Neutrophil alkaline phosphatase staining was conducted with Tomonaga's method on peripheral blood films taken from three healthy volunteers. At least 30 neutrophils with NAP scores from 0 to 5+ were observed and imaged, and the area outside each neutrophil was removed with Image-J. The images were binarized with two different procedures (P1 and P2) using Photoshop®. The NAP-positive area (NAP-PA) and NAP-positive granule count (NAP-PGC) were measured with Image-J. The NAP-PA in images binarized with P1 significantly (P < 0.05) differed between images with NAP scores from 0 to 3+ (group 1) and those from 4+ to 5+ (group 2). The original images in group 1 were binarized with P2; their NAP-PGC significantly (P < 0.05) differed among all four NAP score groups. The mean NAP-PGC with NAP-PS-IJ correlated well (r = 0.92, P < 0.001) with the results of human examiners. The sensitivity and specificity of NAP-PS-IJ were 60% and 92%, so NAP-PS-IJ might be considered a prototype for a fully automatic NAP score rating method. © 2016 Wiley Periodicals, Inc.
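
The core binarize-and-measure step can be sketched with NumPy. The threshold value and the dark-granule convention here are illustrative assumptions, not the published P1/P2 settings:

```python
import numpy as np

def nap_positive_area(gray, threshold=128):
    """Binarize a grayscale neutrophil image (darker pixels = stained granules,
    an assumed convention) and return the NAP-positive pixel count and its
    fraction of the image area. The threshold 128 is illustrative only."""
    positive = gray < threshold
    return int(positive.sum()), float(positive.mean())

# Synthetic 8-bit image: a 10x10 dark "granule" patch on a bright background.
img = np.full((100, 100), 200, dtype=np.uint8)
img[40:50, 40:50] = 50
count, fraction = nap_positive_area(img)
```

In the actual workflow this measurement would run on the Photoshop®-binarized, Image-J-masked cell images rather than a raw frame.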

  7. Prediction of shear wave velocity using empirical correlations and artificial intelligence methods

    Science.gov (United States)

    Maleki, Shahoo; Moradzadeh, Ali; Riabi, Reza Ghavami; Gholami, Raoof; Sadeghzadeh, Farhad

    2014-06-01

    Good understanding of the mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from petrophysical logs, with compression and shear sonic data being the main inputs to the correlations. However, in many cases the shear sonic data are not acquired during well logging, often for cost-saving reasons. In such cases, shear wave velocity is estimated using available empirical correlations or artificial intelligence methods proposed during the last few decades. In this paper, petrophysical logs corresponding to a well drilled in the southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem to be reliable, the estimated values are not very precise; considering the importance of shear sonic data as an input to different models, this study suggests acquiring shear sonic data during well logging. It is important to note that the benefits of having reliable shear sonic data for estimating rock formation mechanical properties will compensate for the possible additional costs of acquiring a shear log.
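
As an example of the empirical-correlation route, one widely used relation is Castagna's "mudrock line" (Castagna et al., 1985), Vp = 1.16·Vs + 1.36 with velocities in km/s, which can be rearranged to estimate Vs from a compressional log. This is an illustrative stand-in; the paper fits its own correlations and SVR/BPNN models to field data:

```python
def vs_castagna(vp_km_s):
    """Estimate shear velocity from compressional velocity via Castagna's
    mudrock line: Vp = 1.16*Vs + 1.36 (km/s), rearranged for Vs."""
    return (vp_km_s - 1.36) / 1.16

vs = vs_castagna(3.0)   # a typical shale-like Vp of 3.0 km/s
```

Such a single global line is exactly what learned models like SVR try to improve on, since the Vp–Vs relationship varies with lithology and fluid content.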

  8. Prediction of shear wave velocity using empirical correlations and artificial intelligence methods

    Directory of Open Access Journals (Sweden)

    Shahoo Maleki

    2014-06-01

    Good understanding of the mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from petrophysical logs, with compression and shear sonic data being the main inputs to the correlations. However, in many cases the shear sonic data are not acquired during well logging, often for cost-saving reasons. In such cases, shear wave velocity is estimated using available empirical correlations or artificial intelligence methods proposed during the last few decades. In this paper, petrophysical logs corresponding to a well drilled in the southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem to be reliable, the estimated values are not very precise; considering the importance of shear sonic data as an input to different models, this study suggests acquiring shear sonic data during well logging. It is important to note that the benefits of having reliable shear sonic data for estimating rock formation mechanical properties will compensate for the possible additional costs of acquiring a shear log.

  9. Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method

    Science.gov (United States)

    Asavaskulkiet, Krissada

    2018-04-01

    In this paper, we propose a new face hallucination technique: face image reconstruction in HSV color space with a semi-orthogonal multilinear principal component analysis (SO-MPCA) method. This novel hallucination technique can operate directly on tensors via tensor-to-vector projection by imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test our hallucination approach, which is demonstrated by extensive experiments with high-quality hallucinated color faces. The experimental results clearly demonstrate that we can generate photorealistic color face images by using the SO-MPCA subspace with a linear regression model.

  10. A semi-automatic method for positioning a femoral bone reconstruction for strict view generation.

    Science.gov (United States)

    Milano, Federico; Ritacco, Lucas; Gomez, Adrian; Gonzalez Bernaldo de Quiros, Fernan; Risk, Marcelo

    2010-01-01

    In this paper we present a semi-automatic method for femoral bone positioning after 3D image reconstruction from Computed Tomography images. This serves as grounding for the definition of strict axial, longitudinal and anterior-posterior views, overcoming the problem of patient positioning biases in 2D femoral bone measuring methods. After the bone reconstruction is aligned to a standard reference frame, new tomographic slices can be generated, on which unbiased measures may be taken. This could allow not only accurate inter-patient comparisons but also intra-patient comparisons, i.e., comparisons of images of the same patient taken at different times. This method could enable medical doctors to diagnose and follow up several bone deformities more easily.
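
Aligning a reconstruction to a standard reference frame is commonly done by principal-axis analysis: translate the centroid to the origin and rotate the cloud so its principal directions coincide with the coordinate axes. A generic sketch of this kind of standardization (not the paper's exact semi-automatic procedure):

```python
import numpy as np

def align_to_principal_axes(points):
    """Center a 3D point cloud and rotate it so its principal axes (from an
    SVD of the centered points) align with x, y, z, longest axis first."""
    centered = points - points.mean(axis=0)
    # Rows of vt are the principal directions, ordered by decreasing variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T

# An elongated synthetic "bone", rotated arbitrarily: after alignment its
# long axis lies along x, with variances in descending order.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3)) * np.array([10.0, 2.0, 1.0])
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
aligned = align_to_principal_axes(cloud @ R.T)
```

Once the cloud is in this canonical frame, new tomographic slices can be resampled along the standardized axial, longitudinal and anterior-posterior directions, which is the point of the positioning step.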

  11. Linear Discriminant Analysis for the in Silico Discovery of Mechanism-Based Reversible Covalent Inhibitors of a Serine Protease: Application of Hydration Thermodynamics Analysis and Semi-empirical Molecular Orbital Calculation.

    Science.gov (United States)

    Masuda, Yosuke; Yoshida, Tomoki; Yamaotsu, Noriyuki; Hirono, Shuichi

    2018-01-01

    We recently reported that the Gibbs free energy of hydrolytic water molecules (ΔG_wat) in acyl-trypsin intermediates calculated by hydration thermodynamics analysis could be a useful metric for estimating the catalytic rate constants (k_cat) of mechanism-based reversible covalent inhibitors. For thorough evaluation, the proposed method was tested with an increased number of covalent ligands that have no corresponding crystal structures. After modeling acyl-trypsin intermediate structures using flexible molecular superposition, ΔG_wat values were calculated according to the proposed method. The orbital energies of the antibonding π* molecular orbitals (MOs) of the carbonyl C=O in the covalently modified catalytic serine (E_orb) were also calculated by semi-empirical MO calculations. Then, linear discriminant analysis (LDA) was performed to build a model that can discriminate covalent inhibitor candidates from substrate-like ligands using ΔG_wat and E_orb. The model was built using a training set (10 compounds) and then validated by a test set (4 compounds). As a result, the training set and test set ligands were perfectly discriminated by the model. Hydrolysis was slower when (1) the hydrolytic water molecule has lower ΔG_wat; (2) the covalent ligand presents higher E_orb (higher reaction barrier). Results also showed that the entropic term of the hydrolytic water molecule (-TΔS_wat) could be used for estimating k_cat and for covalent inhibitor optimization; when the rotational freedom of the hydrolytic water molecule is limited, the chance for favorable interaction with the electrophilic acyl group would also be limited. The method proposed in this study would be useful for screening and optimizing mechanism-based reversible covalent inhibitors.
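
A two-class, two-feature linear discriminant of the kind used here can be sketched with Fisher's classic construction, w = Sw⁻¹(m₁ − m₀) with the threshold at the projected midpoint of the class means. The data below are synthetic stand-ins for (ΔG_wat, E_orb); the paper's model is fit to its own trypsin ligand set:

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher linear discriminant: direction w = Sw^{-1} (m1 - m0),
    decision threshold c at the projected midpoint of the class means."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, m1 - m0)
    c = 0.5 * (m0 + m1) @ w
    return w, c

# Well-separated synthetic classes: "slow-hydrolysis" (inhibitor-like) vs.
# "fast-hydrolysis" (substrate-like) ligands in two illustrative features.
rng = np.random.default_rng(1)
slow = rng.normal([-2.0, 2.0], 0.3, size=(20, 2))
fast = rng.normal([2.0, -2.0], 0.3, size=(20, 2))
w, c = fisher_lda(slow, fast)
pred_fast = fast @ w > c
```

With only two features and well-separated classes, perfect separation of a small training and test set (as reported in the abstract) is exactly what LDA delivers.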

  12. Heat transfer study on convective–radiative semi-spherical fins with temperature-dependent properties and heat generation using efficient computational methods

    International Nuclear Information System (INIS)

    Atouei, S.A.; Hosseinzadeh, Kh.; Hatami, M.; Ghasemi, Seiyed E.; Sahebi, S.A.R.; Ganji, D.D.

    2015-01-01

    In this study, heat transfer and temperature distribution equations for semi-spherical convective–radiative porous fins are presented. Temperature-dependent heat generation, convection and radiation effects are considered, and after deriving the governing equation, the Least Square Method (LSM), the Collocation Method (CM) and the fourth-order Runge-Kutta method (NUM) are applied for predicting the temperature distribution in the described fins. Results reveal that LSM has excellent agreement with the numerical method, so it can be a suitable analytical method for solving the problem. Also, the effect of some physical parameters which appear in the mathematical formulation on the fin surface temperature is investigated to show the effect of radiation and heat generation on solid fin temperature. - Highlights: • Thermal analysis of a semi-spherical fin is investigated. • Collocation and Least Square Methods are applied to the problem. • Convection, radiation and heat generation are considered. • Physical results are compared to numerical outcomes.
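
Weighted-residual methods like CM and LSM assume a trial function satisfying the boundary conditions and drive its residual to zero (pointwise for collocation, in the least-squares sense for LSM). A minimal collocation analogue on a toy ODE, y' + y = 0 with y(0) = 1, rather than the paper's fin equation:

```python
import numpy as np

def collocation_decay(xc):
    """Collocation solution of y' + y = 0, y(0) = 1 on [0, 1] with the trial
    function y = 1 + a1*x + a2*x^2 (which satisfies y(0) = 1 exactly).
    Setting the residual R(x) = y' + y = a1*(1 + x) + a2*(2x + x^2) + 1
    to zero at the collocation points xc gives a linear system for (a1, a2)."""
    xc = np.asarray(xc, dtype=float)
    M = np.column_stack([1.0 + xc, 2.0 * xc + xc**2])
    a1, a2 = np.linalg.solve(M, -np.ones_like(xc))
    return lambda x: 1.0 + a1 * x + a2 * x * x

y = collocation_decay([1.0 / 3.0, 2.0 / 3.0])
err = abs(y(1.0) - np.exp(-1.0))   # compare with the exact solution exp(-x)
```

LSM follows the same template but minimizes the integral of R² over the domain instead of zeroing R at chosen points, which is why the two methods give close but not identical coefficients.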

  13. A GPU-accelerated semi-implicit fractional step method for numerical solutions of incompressible Navier-Stokes equations

    Science.gov (United States)

    Ha, Sanghyun; Park, Junshin; You, Donghyun

    2017-11-01

    Utility of the computational power of modern Graphics Processing Units (GPUs) is elaborated for solutions of incompressible Navier-Stokes equations which are integrated using a semi-implicit fractional-step method. Due to its serial and bandwidth-bound nature, the present choice of numerical methods is considered to be a good candidate for evaluating the potential of GPUs for solving Navier-Stokes equations using non-explicit time integration. An efficient algorithm is presented for GPU acceleration of the Alternating Direction Implicit (ADI) and the Fourier-transform-based direct solution methods used in the semi-implicit fractional-step method. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while the Navier-Stokes equations are computed on a GPU. Extension to multiple NVIDIA GPUs is implemented using NVLink supported by the Pascal architecture. Performance of the present method is evaluated on multiple Tesla P100 GPUs and compared with a single-core Xeon E5-2650 v4 CPU in simulations of boundary-layer flow over a flat plate. Supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (Ministry of Science, ICT and Future Planning NRF-2016R1E1A2A01939553, NRF-2014R1A2A1A11049599, and Ministry of Trade, Industry and Energy 201611101000230).
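
At the core of each ADI sweep is a tridiagonal solve per grid line, classically done with the Thomas algorithm; its sequential forward/backward recurrences are what make ADI serial and bandwidth-bound, and hence an interesting target for GPU batching. A reference NumPy implementation (illustrative, not the paper's GPU kernel):

```python
import numpy as np

def thomas(a, b, c, d):
    """Thomas algorithm: O(n) solve of a tridiagonal system with sub-diagonal
    a, diagonal b, super-diagonal c (a[0] is ignored; c[-1] has no effect)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                 # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D diffusion-like system (-1, 2, -1) with a random right-hand side.
n = 50
a = np.full(n, -1.0); b = np.full(n, 2.0); c = np.full(n, -1.0)
d = np.random.default_rng(2).normal(size=n)
x = thomas(a, b, c, d)
```

On a GPU, many independent lines are typically solved in parallel (one system per thread or via cyclic reduction) rather than parallelizing the recurrence within a single line.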

  14. Modeling of the phase equilibria of polystyrene in methylcyclohexane with semi-empirical quantum mechanical methods I.

    Science.gov (United States)

    Wilczura-Wachnik, Hanna; Jónsdóttir, Svava Osk

    2003-04-01

    A method for calculating interaction parameters traditionally used in phase-equilibrium computations for low-molecular systems has been extended to the prediction of solvent activities of aromatic polymer solutions (polystyrene+methylcyclohexane). Using ethylbenzene as a model compound for the repeating unit of the polymer, the intermolecular interaction energies between the solvent molecule and the polymer were simulated. The semiempirical quantum chemical method AM1, and a previously developed method for sampling relevant internal orientations of a pair of molecules, were used. Interaction energies were determined for three molecular pairs (the solvent and the model molecule, two solvent molecules, and two model molecules) and used to calculate UNIQUAC interaction parameters, a(ij) and a(ji). Using these parameters, the solvent activities of the polystyrene 90,000 amu+methylcyclohexane system, and the total vapor pressures of the methylcyclohexane+ethylbenzene system, were calculated. The latter system was compared to experimental data, giving qualitative agreement. Figure caption: Solvent activities for the methylcyclohexane(1)+polystyrene(2) system at 316 K. Parameters a(ij) (blue line) obtained with the AM1 method; parameters a(ij) (pink line) from VLE data for the ethylbenzene+methylcyclohexane system. The abscissa is the polymer weight fraction, defined as y2(x1) = (1 − x1)M2/[x1M1 + (1 − x1)M2], where x1 is the solvent mole fraction and Mi are the molecular weights of the components.
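    The UNIQUAC activity computation into which such a(ij), a(ji) parameters enter can be sketched as follows; the structural parameters r, q and the interaction values in the example call are illustrative placeholders, not the AM1-derived values from the paper.

```python
import numpy as np

def uniquac_binary(x1, T, r, q, a12, a21, z=10.0):
    """Activities (a1, a2) of a binary mixture from the UNIQUAC model.

    a12, a21 are interaction parameters (K); tau_ij = exp(-a_ij / T).
    """
    x = np.array([x1, 1.0 - x1])
    r = np.asarray(r, dtype=float)
    q = np.asarray(q, dtype=float)
    phi = r * x / np.dot(r, x)          # segment fractions
    theta = q * x / np.dot(q, x)        # area fractions
    l = z / 2.0 * (r - q) - (r - 1.0)
    tau = np.array([[1.0, np.exp(-a12 / T)],
                    [np.exp(-a21 / T), 1.0]])
    # combinatorial (size/shape) contribution
    ln_gc = (np.log(phi / x) + z / 2.0 * q * np.log(theta / phi)
             + l - phi / x * np.dot(x, l))
    # residual (energetic) contribution; s[j] = sum_k theta_k * tau_kj
    s = theta @ tau
    ln_gr = q * (1.0 - np.log(s) - tau @ (theta / s))
    return np.exp(ln_gc + ln_gr) * x    # activities a_i = gamma_i * x_i

# Hypothetical parameter values, for illustration only
act = uniquac_binary(0.5, 316.0, r=[4.7, 3.5], q=[3.8, 2.7],
                     a12=50.0, a21=-30.0)
```

    With r = q = 1 and zero interaction parameters the model collapses to an ideal mixture (activities equal to mole fractions), which is a convenient sanity check.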

  15. Gray-Matter Volume Estimate Score: A Novel Semi-Automatic Method Measuring Early Ischemic Change on CT

    OpenAIRE

    Song, Dongbeom; Lee, Kijeong; Kim, Eun Hye; Kim, Young Dae; Lee, Hye Sun; Kim, Jinkwon; Song, Tae-Jin; Ahn, Sung Soo; Nam, Hyo Suk; Heo, Ji Hoe

    2015-01-01

    Background and Purpose We developed a novel method named Gray-matter Volume Estimate Score (GRAVES), measuring early ischemic changes on Computed Tomography (CT) semi-automatically by computer software. This study aimed to compare GRAVES and Alberta Stroke Program Early CT Score (ASPECTS) with regards to outcome prediction and inter-rater agreement. Methods This was a retrospective cohort study. Among consecutive patients with ischemic stroke in the anterior circulation who received intra-art...

  16. Empirical philosophy of science

    DEFF Research Database (Denmark)

    Wagenknecht, Susann; Nersessian, Nancy J.; Andersen, Hanne

    2015-01-01

    A growing number of philosophers of science make use of qualitative empirical data, a development that may reconfigure the relations between philosophy and sociology of science and that is reminiscent of efforts to integrate history and philosophy of science. Therefore, the first part...... of this introduction to the volume Empirical Philosophy of Science outlines the history of relations between philosophy and sociology of science on the one hand, and philosophy and history of science on the other. The second part of this introduction offers an overview of the papers in the volume, each of which...... is giving its own answer to questions such as: Why does the use of qualitative empirical methods benefit philosophical accounts of science? And how should these methods be used by the philosopher?...

  17. Measurement of polarization curve and development of a unique semi-empirical model for description of PEMFC and DMFC performances

    Directory of Open Access Journals (Sweden)

    M. SHAKERI

    2011-06-01

    In this study, a single polymer electrolyte membrane fuel cell (PEMFC) in H2/O2 form with an effective dimension of 5 cm × 5 cm, as well as a single direct methanol fuel cell (DMFC) with a dimension of 10 cm × 10 cm, were fabricated. In an existing test station, the voltage-current density performances of the fabricated PEMFC and DMFC were examined under various operating conditions. As expected, the DMFC showed a lower electrical performance, which can be attributed to the slower methanol oxidation rate in comparison to hydrogen oxidation. The results obtained from cell operation indicated that temperature has a great effect on cell performance. At 60 °C, the best power output was obtained for the PEMFC. There was a drop in cell voltage beyond 60 °C, which can be attributed to the reduction of water content inside the membrane. For the DMFC, the maximum power output resulted at 64 °C. Increasing oxygen stoichiometry and total cell pressure had a marginal effect on cell performance. The results also revealed that cell performance improved with increasing pressure difference between the anode and cathode. A unified semi-empirical, thermodynamically based model was developed to describe the cell voltage as a function of current density for both kinds of fuel cells. The model equation parameters were obtained through a nonlinear fit to the experimental data. There was good agreement between the experimental data and the model-predicted cell performance for both types of fuel cells.
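    A common semi-empirical polarization form of this kind, V = E0 − b·log10(i) − R·i (activation plus ohmic losses, after Srinivasan-type models), happens to be linear in its parameters and can be fitted directly to voltage–current data. The data below are synthetic, not the measured PEMFC/DMFC curves, and the paper's unified model may contain additional terms (e.g. a mass-transport exponential).

```python
import numpy as np

# Synthetic polarization curve: V = E0 - b*log10(i) - R*i + noise
rng = np.random.default_rng(0)
i = np.linspace(0.05, 1.0, 40)             # current density, A/cm^2
E0_true, b_true, R_true = 0.95, 0.06, 0.25
V = E0_true - b_true * np.log10(i) - R_true * i + rng.normal(0, 1e-3, i.size)

# The model is linear in (E0, b, R): V = [1, -log10(i), -i] @ [E0, b, R]
X = np.column_stack([np.ones_like(i), -np.log10(i), -i])
E0, b, R = np.linalg.lstsq(X, V, rcond=None)[0]
```

    The recovered open-circuit term, Tafel slope and ohmic resistance match the values used to generate the data, which is the same consistency check one applies to a fit against real polarization measurements.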

  18. Empirical method for simulation of water tables by digital computers

    International Nuclear Information System (INIS)

    Carnahan, C.L.; Fenske, P.R.

    1975-09-01

    An empirical method is described for computing a matrix of water-table elevations from a matrix of topographic elevations and a set of observed water-elevation control points which may be distributed randomly over the area of interest. The method is applicable to regions, such as the Great Basin, where the water table can be assumed to conform to a subdued image of overlying topography. A first approximation to the water table is computed by smoothing a matrix of topographic elevations and adjusting each node of the smoothed matrix according to a linear regression between observed water elevations and smoothed topographic elevations. Each observed control point is assumed to exert a radially decreasing influence on the first approximation surface. The first approximation is then adjusted further to conform to observed water-table elevations near control points. Outside the domain of control, the first approximation is assumed to represent the most probable configuration of the water table. The method has been applied to the Nevada Test Site and the Hot Creek Valley areas in Nevada.
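    The procedure (smooth the topography, regress observed water levels on the smoothed surface, then pull the first approximation toward control points with a radially decaying weight) can be sketched numerically on a made-up grid; the smoothing window, decay radius and control points below are invented, not Nevada Test Site values.

```python
import numpy as np

def moving_average(z, k=5):
    """Crude 2-D box smoothing with edge padding."""
    p = k // 2
    zp = np.pad(z, p, mode='edge')
    out = np.zeros_like(z, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += zp[di:di + z.shape[0], dj:dj + z.shape[1]]
    return out / (k * k)

ny, nx = 40, 40
yy, xx = np.mgrid[0:ny, 0:nx]
topo = 100 + 10 * np.sin(xx / 6.0) + 5 * np.cos(yy / 9.0)   # synthetic DEM
smooth = moving_average(topo)

# Control points: (row, col, observed water elevation)
controls = [(5, 5, 95.0), (30, 25, 102.0), (12, 33, 98.0)]
s_obs = np.array([smooth[r, c] for r, c, _ in controls])
w_obs = np.array([w for _, _, w in controls])

# Linear regression of observed water levels on smoothed topography
slope, intercept = np.polyfit(s_obs, w_obs, 1)
wt = intercept + slope * smooth          # first approximation

# Radially decaying adjustment toward each control point
radius = 10.0
for r, c, w in controls:
    d = np.hypot(yy - r, xx - c)
    wt += (w - wt[r, c]) * np.clip(1.0 - d / radius, 0.0, 1.0)
```

    After the adjustment pass the surface honors each control point exactly while reverting to the regression-based first approximation outside the influence radius.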

  19. Empirical Philosophy of Science

    DEFF Research Database (Denmark)

    Mansnerus, Erika; Wagenknecht, Susann

    2015-01-01

    knowledge takes place through the integration of the empirical or historical research into the philosophical studies, as Chang, Nersessian, Thagard and Schickore argue in their work. Building upon their contributions we will develop a blueprint for an Empirical Philosophy of Science that draws upon...... qualitative methods from the social sciences in order to advance our philosophical understanding of science in practice. We will regard the relationship between philosophical conceptualization and empirical data as an iterative dialogue between theory and data, which is guided by a particular ‘feeling with......Empirical insights are proven fruitful for the advancement of Philosophy of Science, but the integration of philosophical concepts and empirical data poses considerable methodological challenges. Debates in Integrated History and Philosophy of Science suggest that the advancement of philosophical...

  20. Estimation methods with ordered exposure subject to measurement error and missingness in semi-ecological design

    Directory of Open Access Journals (Sweden)

    Kim Hyang-Mi

    2012-09-01

    Background: In epidemiological studies, it is often not possible to accurately measure the exposures of participants, even if their response variable can be measured without error. When there are several groups of subjects, occupational epidemiologists employ a group-based strategy (GBS) for exposure assessment to reduce bias due to measurement errors: individuals of a group/job within the study sample are all assigned the sample mean of exposure measurements from their group when evaluating the effect of exposure on the response. Exposure is therefore estimated on an ecological level while health outcomes are ascertained for each subject. Such a study design leads to negligible bias in risk estimates when group means are estimated from ‘large’ samples. However, in many cases only a small number of observations are available to estimate the group means, and this causes bias in the observed exposure-disease association. Also, the analysis in a semi-ecological design may involve exposure data with the majority missing and the rest observed with measurement errors, and complete response data collected with ascertainment. Methods: In workplaces, groups/jobs are naturally ordered, and this can be incorporated in the estimation procedure by constrained estimation methods together with expectation-maximization (EM) algorithms for regression models having measurement error and missing values. Four methods were compared by a simulation study: naive complete-case analysis, GBS, the constrained GBS (CGBS), and the constrained expectation-maximization (CEM). We illustrated the methods in the analysis of decline in lung function due to exposures to carbon black. Results: Naive and GBS approaches were shown to be inadequate when the number of exposure measurements is too small to accurately estimate group means. The CEM method appears to be the best among them when, within each exposure group, at least a ‘moderate’ number of individuals have their
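    The bias behaviour described above can be illustrated with a tiny simulation: regressing an outcome on error-prone individual exposures attenuates the slope toward zero, while the group-based strategy (assigning each subject the group mean of the measurements) largely removes the attenuation. All numbers are synthetic, and the ordering constraints and EM machinery of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_per = 8, 50
true_group_exposure = np.linspace(1.0, 8.0, n_groups)

group = np.repeat(np.arange(n_groups), n_per)
x_true = true_group_exposure[group]
x_meas = x_true + rng.normal(0, 2.0, x_true.size)      # heavy measurement error
y = 0.5 * x_true + rng.normal(0, 0.5, x_true.size)     # outcome, true slope 0.5

# Naive: regress y on the error-prone individual measurements (attenuated)
slope_naive = np.polyfit(x_meas, y, 1)[0]

# GBS: assign everyone in a group the group mean of the measurements
group_means = np.array([x_meas[group == g].mean() for g in range(n_groups)])
slope_gbs = np.polyfit(group_means[group], y, 1)[0]
```

    With 50 measurements per group the group means are nearly error-free, so the GBS slope sits close to the true value of 0.5; shrinking `n_per` reproduces the small-sample bias that motivates the constrained estimators.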

  1. Critical factors in the empirical performance of temporal difference and evolutionary methods for reinforcement learning

    NARCIS (Netherlands)

    Whiteson, S.; Taylor, M.E.; Stone, P.

    2010-01-01

    Temporal difference and evolutionary methods are two of the most common approaches to solving reinforcement learning problems. However, there is little consensus on their relative merits and there have been few empirical studies that directly compare their performance. This article aims to address

  2. Bacterial and Fungal Counts of Dried and Semi-Dried Foods Collected from Dhaka, Bangladesh, and Their Reduction Methods.

    Science.gov (United States)

    Feroz, Farahnaaz; Shimizu, Hiromi; Nishioka, Terumi; Mori, Miho; Sakagami, Yoshikazu

    2016-01-01

    Food is a basic necessity for human survival, yet it remains a vehicle for the transmission of foodborne disease. Various studies have examined the roles of spices, herbs, nuts, and semi-dried fruits, making the need for safe and convenient methods of decontamination a necessity. The current study determined the bacterial and fungal loads of 26 spices and herbs, 5 nuts, 10 semi-dried fruits and 5 other foods. Spices, herbs and semi-dried foods demonstrated the highest bacterial and fungal loads, with the majority showing over 10^4 CFU/mL. Nuts and other foods showed growths ranging from 10^2 to 10^6 CFU/mL. The current study also attempted to determine the effects of heat and plasma treatment. The log reduction of bacterial growth after heat treatment (maximum: 120 min at 60 °C) was between 0.08 and 4.47, and the log reduction after plasma treatment (maximum: 40 min) ranged from 2.37 to 5.75. Spices showed the lowest rates of reduction, whereas the semi-dried and other foods showed moderate to high levels of decrease after heat treatment. The log reduction of fungal growth after heat treatment ranged from 0.27 to 4.40, and the log reduction after plasma treatment ranged from 2.15 to 5.91. Furthermore, we validated the sterilization effect of plasma treatment against Bacillus spp. and Staphylococcus spp. by using scanning electron microscopy. Both treatment methods could prove advantageous in agriculture-related fields, enhancing the quality of foods.
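    The log reductions quoted above are simply log10 ratios of viable counts before and after treatment; a trivial helper, with made-up counts, makes the convention explicit.

```python
import math

def log_reduction(n_before, n_after):
    """Log10 reduction between viable counts (e.g. CFU/mL)."""
    return math.log10(n_before / n_after)

# e.g. a load of 10^6 CFU/mL reduced to 10^2 CFU/mL is a 4-log reduction
r = log_reduction(1e6, 1e2)
```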

  3. A Framework for Analysing Driver Interactions with Semi-Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    Siraj Shaikh

    2012-12-01

    Semi-autonomous vehicles increasingly serve critical functions in various settings, from mining to logistics to defence. A key characteristic of such systems is the presence of a human (the driver) in the control loop. To ensure safety, the driver needs to be aware both of the autonomous aspects of the vehicle and of the automated features built into it to enable safer control. In this paper we propose a framework to combine empirical models describing human behaviour with the environment and system models. We then analyse, via model checking, the interaction between the models for desired safety properties. The aim is to analyse the design for safe vehicle-driver interaction. We demonstrate the applicability of our approach using a case study involving semi-autonomous vehicles, where driver fatigue is a factor critical to a safe journey.
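    Composing a behavioural driver model with a vehicle model and searching exhaustively for violations of a safety property can be sketched as a toy explicit-state reachability check; the states, transitions and safety property below are invented for illustration, whereas the paper relies on a proper model checker.

```python
from itertools import product
from collections import deque

driver = {                 # driver attentiveness model (hypothetical)
    'alert':  ['alert', 'drowsy'],
    'drowsy': ['alert', 'asleep'],
    'asleep': ['asleep'],
}
vehicle = {                # vehicle automation model (hypothetical)
    'manual':   ['manual', 'assisted'],
    'assisted': ['manual', 'assisted'],
}

def unsafe(d, v):
    # Safety property: the driver must never be asleep in manual mode
    return d == 'asleep' and v == 'manual'

def reachable_unsafe(d0, v0):
    """Breadth-first search over the product state space."""
    seen, frontier = set(), deque([(d0, v0)])
    while frontier:
        d, v = frontier.popleft()
        if (d, v) in seen:
            continue
        seen.add((d, v))
        if unsafe(d, v):
            return True
        for dn, vn in product(driver[d], vehicle[v]):
            frontier.append((dn, vn))
    return False

violation = reachable_unsafe('alert', 'manual')
```

    Here the check finds a counterexample path (the driver can drift to asleep while the vehicle remains in manual mode), which is exactly the kind of design flaw the framework is meant to surface before deployment.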

  4. Method for semi-automated microscopy of filtration-enriched circulating tumor cells

    International Nuclear Information System (INIS)

    Pailler, Emma; Oulhen, Marianne; Billiot, Fanny; Galland, Alexandre; Auger, Nathalie; Faugeroux, Vincent; Laplace-Builhé, Corinne; Besse, Benjamin; Loriot, Yohann; Ngo-Camus, Maud; Hemanda, Merouan; Lindsay, Colin R.; Soria, Jean-Charles; Vielh, Philippe; Farace, Françoise

    2016-01-01

    Circulating tumor cell (CTC)-filtration methods capture high numbers of CTCs in non-small-cell lung cancer (NSCLC) and metastatic prostate cancer (mPCa) patients, and hold promise as a non-invasive technique for treatment selection and disease monitoring. However, filters have drawbacks that make the automation of microscopy challenging. We report the semi-automated microscopy method we developed to analyze filtration-enriched CTCs from NSCLC and mPCa patients. Spiked cell lines in normal blood and CTCs were enriched by ISET (isolation by size of epithelial tumor cells). Fluorescent staining was carried out using epithelial (pan-cytokeratins, EpCAM), mesenchymal (vimentin, N-cadherin) and leukocyte (CD45) markers and DAPI. Cytomorphological staining was carried out with Mayer-Hemalun or Diff-Quik. ALK-, ROS1- and ERG-rearrangements were detected by filter-adapted FISH (FA-FISH). Microscopy was carried out using an Ariol scanner. Two combined assays were developed. The first assay sequentially combined four-color fluorescent staining, scanning, automated selection of CD45− cells, cytomorphological staining, then scanning and analysis of CD45− cell phenotypical and cytomorphological characteristics. CD45− cell selection was based on DAPI and CD45 intensity, and a nuclear area >55 μm². The second assay sequentially combined fluorescent staining, automated selection of CD45− cells, FISH scanning on CD45− cells, then analysis of CD45− cell FISH signals. Specific scanning parameters were developed to deal with the uneven surface of filters and CTC characteristics. Thirty z-stacks spaced 0.6 μm apart were defined as the optimal setting, scanning 82%, 91%, and 95% of CTCs in ALK-, ROS1-, and ERG-rearranged patients respectively. A multi-exposure protocol consisting of three separate exposure times for green and red fluorochromes was optimized to analyze the intensity, size and thickness of FISH signals. The semi-automated microscopy method reported here
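    The automated CD45− selection step (DAPI-positive, CD45-low, nuclear area above 55 μm²) reduces to a simple filter over per-cell measurements. The 55 μm² area cutoff comes from the abstract, while the intensity thresholds and the example events below are invented.

```python
# Each event: (dapi_intensity, cd45_intensity, nuclear_area_um2)
cells = [
    (180.0,  12.0, 72.0),   # candidate CTC
    (175.0, 140.0, 60.0),   # leukocyte (CD45 high)
    (160.0,  10.0, 40.0),   # nucleus too small
    (  5.0,   8.0, 80.0),   # no nuclear (DAPI) signal
]

# Hypothetical intensity thresholds; only the area cutoff is from the paper
DAPI_MIN, CD45_MAX, AREA_MIN = 50.0, 30.0, 55.0

selected = [c for c in cells
            if c[0] >= DAPI_MIN and c[1] <= CD45_MAX and c[2] > AREA_MIN]
```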

  5. Projected estimators for robust semi-supervised classification

    NARCIS (Netherlands)

    Krijthe, J.H.; Loog, M.

    2017-01-01

    For semi-supervised techniques to be applied safely in practice we at least want methods to outperform their supervised counterparts. We study this question for classification using the well-known quadratic surrogate loss function. Unlike other approaches to semi-supervised learning, the

  6. Semi-Parametric Maximum Likelihood Method for Interaction in Case-Mother Control-Mother Designs: Package SPmlficmcm

    Directory of Open Access Journals (Sweden)

    Moliere Nguile-Makao

    2015-12-01

    The analysis of interaction effects involving genetic variants and environmental exposures on the risk of adverse obstetric and early-life outcomes is generally performed using standard logistic regression in the case-mother and control-mother design. However, such an analysis is inefficient because it does not take into account the natural family-based constraints present in the parent-child relationship. Recently, a new approach based on semi-parametric maximum likelihood estimation was proposed. The advantage of this approach is that it takes the parental relationship between the mother and her child into account in estimation. However, a package implementing this method has not been widely available. In this paper, we present SPmlficmcm, an R package implementing this new method, and we propose an extension of the method to handle missing offspring genotype data by maximum likelihood estimation. Our choice to treat missing offspring genotype data was motivated by the fact that in genetic association studies where the genetic data of mother and child are available, there are usually more missing data on the genotype of the offspring than on that of the mother. The package builds a non-linear system from the data, then solves it and computes the estimates from the gradient and the Hessian matrix of the log profile semi-parametric likelihood function. Finally, we analyze a simulated dataset to show the usefulness of the package.

  7. A semi-automatic method for developing an anthropomorphic numerical model of dielectric anatomy by MRI

    International Nuclear Information System (INIS)

    Mazzurana, M; Sandrini, L; Vaccari, A; Malacarne, C; Cristoforetti, L; Pontalti, R

    2003-01-01

    Complex permittivity values have a dominant role in the overall consideration of interaction between radiofrequency electromagnetic fields and living matter, and in related applications such as electromagnetic dosimetry. There are still some concerns about the accuracy of published data and about their variability due to the heterogeneous nature of biological tissues. The aim of this study is to provide an alternative semi-automatic method by which numerical dielectric human models for dosimetric studies can be obtained. Magnetic resonance imaging (MRI) tomography was used to acquire images. A new technique was employed to correct nonuniformities in the images, and frequency-dependent transfer functions correlating image intensity with complex permittivity were used. The proposed method provides frequency-dependent models in which permittivity and conductivity vary with continuity, even within the same tissue, reflecting the intrinsic realistic spatial dispersion of such parameters. The human model is tested with an FDTD (finite difference time domain) algorithm at different frequencies; the results of layer-averaged and whole-body-averaged SAR (specific absorption rate) are compared with published work, and reasonable agreement has been found. Due to the short time needed to obtain a whole-body model, this semi-automatic method may be suitable for efficient study of various conditions that can determine large differences in the SAR distribution, such as body shape, posture, fat-to-muscle ratio, height and weight.

  8. Comparison of machine learning and semi-quantification algorithms for [123I]FP-CIT classification: the beginning of the end for semi-quantification?

    Science.gov (United States)

    Taylor, Jonathan Christopher; Fenner, John Wesley

    2017-11-29

    Semi-quantification methods are well established in the clinic for assisted reporting of [123I]ioflupane images. Arguably, these are limited diagnostic tools. Recent research has demonstrated the potential for improved classification performance offered by machine learning algorithms. A direct comparison between methods is required to establish whether a move towards widespread clinical adoption of machine learning algorithms is justified. This study compared three machine learning algorithms with a range of semi-quantification methods, using the Parkinson's Progression Markers Initiative (PPMI) research database and a locally derived clinical database for validation. Machine learning algorithms were based on support vector machine classifiers with three different sets of features: (1) voxel intensities; (2) principal components of image voxel intensities; (3) striatal binding ratios from the putamen and caudate. Semi-quantification methods were based on striatal binding ratios (SBRs) from both putamina, with and without consideration of the caudates. Normal limits for the SBRs were defined through four different methods: (a) minimum of age-matched controls; (b) mean minus 1/1.5/2 standard deviations from age-matched controls; (c) linear regression of normal patient data against age (minus 1/1.5/2 standard errors); (d) selection of the optimum operating point on the receiver operator characteristic curve from normal and abnormal training data. Each machine learning and semi-quantification technique was evaluated with stratified, nested 10-fold cross-validation, repeated 10 times. The mean accuracy of the semi-quantitative methods for classification of local data into Parkinsonian and non-Parkinsonian groups varied from 0.78 to 0.87, contrasting with 0.89 to 0.95 for classifying PPMI data into healthy controls and Parkinson's disease groups. The machine learning algorithms gave mean accuracies between 0.88 and 0.92, and 0.95 and 0.97, for local and PPMI data respectively. Classification
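    One of the semi-quantification variants described, a normal limit set at the control-group mean minus 2 standard deviations with any SBR below the limit called abnormal, is easy to state in code; the SBR values below are synthetic, not PPMI data.

```python
import numpy as np

# Synthetic healthy-control putamen SBR distribution
rng = np.random.default_rng(2)
normal_sbr = rng.normal(2.5, 0.3, 100)

# Normal limit: mean - 2*SD of the control group
lower_limit = normal_sbr.mean() - 2.0 * normal_sbr.std()

def classify(sbr):
    """Semi-quantification call: abnormal if SBR falls below the limit."""
    return 'abnormal' if sbr < lower_limit else 'normal'

calls = [classify(s) for s in (2.6, 1.2)]
```

    The machine learning alternatives in the study replace this single-threshold rule with trained decision boundaries over richer feature sets, which is where the reported accuracy gains come from.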

  9. Methods for semi-automated indexing for high precision information retrieval

    Science.gov (United States)

    Berrios, Daniel C.; Cucina, Russell J.; Fagan, Lawrence M.

    2002-01-01

    OBJECTIVE: To evaluate a new system, ISAID (Internet-based Semi-automated Indexing of Documents), and to generate textbook indexes that are more detailed and more useful to readers. DESIGN: Pilot evaluation: simple, nonrandomized trial comparing ISAID with manual indexing methods. Methods evaluation: randomized, cross-over trial comparing three versions of ISAID and usability survey. PARTICIPANTS: Pilot evaluation: two physicians. Methods evaluation: twelve physicians, each of whom used three different versions of the system for a total of 36 indexing sessions. MEASUREMENTS: Total index term tuples generated per document per minute (TPM), with and without adjustment for concordance with other subjects; inter-indexer consistency; ratings of the usability of the ISAID indexing system. RESULTS: Compared with manual methods, ISAID greatly decreased indexing times. Using three versions of ISAID, inter-indexer consistency ranged from 15% to 65% with a mean of 41%, 31%, and 40% for each of three documents. Subjects using the full version of ISAID were faster (average TPM: 5.6) and had higher rates of concordant index generation. There were substantial learning effects, despite our use of a training/run-in phase. Subjects using the full version of ISAID were much faster by the third indexing session (average TPM: 9.1). There was a statistically significant increase in three-subject concordant indexing rate using the full version of ISAID during the second indexing session (p < 0.05). SUMMARY: Users of the ISAID indexing system create complex, precise, and accurate indexing for full-text documents much faster than users of manual methods. Furthermore, the natural language processing methods that ISAID uses to suggest indexes contribute substantially to increased indexing speed and accuracy.

  10. A two-dimensional, semi-analytic expansion method for nodal calculations

    International Nuclear Information System (INIS)

    Palmtag, S.P.

    1995-08-01

    Most modern nodal methods used today are based upon the transverse integration procedure in which the multi-dimensional flux shape is integrated over the transverse directions in order to produce a set of coupled one-dimensional flux shapes. The one-dimensional flux shapes are then solved either analytically or by representing the flux shape by a finite polynomial expansion. While these methods have been verified for most light-water reactor applications, they have been found to have difficulty predicting the large thermal flux gradients near the interfaces of highly-enriched MOX fuel assemblies. A new method is presented here in which the neutron flux is represented by a non-separable, two-dimensional, semi-analytic flux expansion. The main features of this method are (1) the leakage terms from the node are modeled explicitly and therefore, the transverse integration procedure is not used, (2) the corner point flux values for each node are directly edited from the solution method, and a corner-point interpolation is not needed in the flux reconstruction, (3) the thermal flux expansion contains hyperbolic terms representing analytic solutions to the thermal flux diffusion equation, and (4) the thermal flux expansion contains a thermal to fast flux ratio term which reduces the number of polynomial expansion functions needed to represent the thermal flux. This new nodal method has been incorporated into the computer code COLOR2G and has been used to solve a two-dimensional, two-group colorset problem containing uranium and highly-enriched MOX fuel assemblies. The results from this calculation are compared to the results found using a code based on the traditional transverse integration procedure.

  11. Coherent state methods for semi-classical heavy-ion physics

    International Nuclear Information System (INIS)

    Remaud, B.; Sebille, F.; Raffray, Y.

    1985-01-01

    A semi-classical model of many-fermion systems is developed with a view to solving the Vlasov equation; it provides a unified description of both static and dynamic properties of the system. The phase space distribution functions are written as convolution products of generalized coherent state distributions with semi-probabilistic weight functions. The generalized coherent states are defined from the local constants of motion of the dynamical system; they reduce to the usual ones (eigenstates of the annihilation operator) only at the harmonic limit. Solving the Vlasov equation consists of two steps: (i) searching for weight functions which properly describe the initial density distributions; (ii) calculating the evolution of the coherent state set, which acts as a moving basis for the solutions of the Vlasov equation. Sample applications to statics are analyzed: fermions in a harmonic field, and self-consistent nuclear slabs. Prospects for dynamical applications are discussed, with special attention to fast nucleon emission in heavy-ion reactions.

  12. Semi-continuous detection of mercury in gases

    Science.gov (United States)

    Granite, Evan J [Wexford, PA; Pennline, Henry W [Bethel Park, PA

    2011-12-06

    A new method for the semi-continuous detection of heavy metals and metalloids including mercury in gaseous streams. The method entails mass measurement of heavy metal oxides and metalloid oxides with a surface acoustic wave (SAW) sensor having an uncoated substrate. An array of surface acoustic wave (SAW) sensors can be used where each sensor is for the semi-continuous emission monitoring of a particular heavy metal or metalloid.
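    The mass measurement amounts to reading areal mass off the sensor's resonant-frequency shift, in the spirit of the Sauerbrey relation Δf = −C_f·f0²·Δm used for acoustic mass sensors; the sensitivity constant and operating frequency below are placeholder values, not a calibration of the patented SAW sensor.

```python
# Hypothetical sensitivity factor and unperturbed SAW frequency
C_F = 2.26e-6   # sensitivity constant (cm^2 / (g * Hz)) -- placeholder
f0 = 200e6      # unperturbed SAW resonant frequency, Hz

def mass_loading(delta_f):
    """Areal mass (g/cm^2) inferred from a frequency shift (Hz).

    Deposited heavy-metal or metalloid oxide mass lowers the resonant
    frequency, so a negative delta_f maps to a positive mass loading.
    """
    return -delta_f / (C_F * f0**2)

m = mass_loading(-100.0)   # a 100 Hz downward shift
```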

  13. Analysis of secretome of breast cancer cell line with an optimized semi-shotgun method

    International Nuclear Information System (INIS)

    Tang Xiaorong; Yao Ling; Chen Keying; Hu Xiaofang; Xu Lisa; Fan Chunhai

    2009-01-01

    Secretome, the totality of secreted proteins, is viewed as a promising pool of candidate cancer biomarkers. Simple and reliable methods for identifying secreted proteins are highly desired. We used an optimized semi-shotgun liquid chromatography followed by tandem mass spectrometry (LC-MS/MS) method to analyze the secretome of breast cancer cell line MDA-MB-231. A total of 464 proteins were identified. About 63% of the proteins were classified as secreted proteins, including many promising breast cancer biomarkers, which were thought to be correlated with tumorigenesis, tumor development and metastasis. These results suggest that the optimized method may be a powerful strategy for cell line secretome profiling, and can be used to find potential cancer biomarkers with great clinical significance. (authors)

  14. Electroless plating of PVC plastic through new surface modification method applying a semi-IPN hydrogel film

    International Nuclear Information System (INIS)

    Wang, Ming-Qiu; Yan, Jun; Du, Shi-Guo; Li, Hong-Guang

    2013-01-01

    A novel palladium-free surface activation process for electroless nickel plating was developed. This method applied a semi-Interpenetrating Polymer Network (semi-IPN) hydrogel film to modify the poly(vinyl chloride) (PVC) surface by chemical bonds. The activation process involved the formation of semi-IPN hydrogel film on the PVC surface and the immobilization of catalyst for electroless plating linking to the pretreated substrate via N-Ni chemical bond. The hydrogel layer was used as the chemisorption sites for nickel ions, and the catalyst could initiate the subsequent electroless nickel plating onto the PVC surface. Finally, a Ni–P layer was deposited on the nickel-activated PVC substrate by electroless plating technique. The composition and morphology of nickel-plated PVC foils were characterized by scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), energy dispersive X-ray spectroscopy (EDS) and X-ray diffraction (XRD). The results of SEM and XRD show that a compact and continuous Ni–P layer with amorphous nickel phase is formed on the PVC surface. EDS shows that the content of the nickel and the phosphorus in the deposits is 89.4 wt.% and 10.6 wt.%, respectively.

  17. Canonical polyadic decomposition of third-order semi-nonnegative semi-symmetric tensors using LU and QR matrix factorizations

    Science.gov (United States)

    Wang, Lu; Albera, Laurent; Kachenoura, Amar; Shu, Huazhong; Senhadji, Lotfi

    2014-12-01

    Semi-symmetric three-way arrays are essential tools in blind source separation (BSS), particularly in independent component analysis (ICA). These arrays can be built by resorting to higher-order statistics of the data. The canonical polyadic (CP) decomposition of such semi-symmetric three-way arrays allows us to identify the so-called mixing matrix, which contains the information about the intensities of some latent source signals present in the observation channels. In addition, in many applications, such as magnetic resonance spectroscopy (MRS), the columns of the mixing matrix are viewed as relative concentrations of the spectra of the chemical components. Therefore, the two loading matrices of the three-way array, which are equal to the mixing matrix, are nonnegative. Most existing CP algorithms handle the symmetry and the nonnegativity separately. Up to now, very few of them consider both the semi-nonnegativity and the semi-symmetry structure of the three-way array. Nevertheless, like all the methods based on line search, trust region strategies, and alternating optimization, they appear to be dependent on initialization, requiring in practice a multi-initialization procedure. In order to overcome this drawback, we propose two new methods, called [InlineEquation not available: see fulltext.] and [InlineEquation not available: see fulltext.], to solve the problem of CP decomposition of semi-nonnegative semi-symmetric three-way arrays. Firstly, we rewrite the constrained optimization problem as an unconstrained one: the nonnegativity constraint of the two symmetric modes is ensured by means of a square change of variable. Secondly, a Jacobi-like optimization procedure is adopted because of its good convergence property. More precisely, the two new methods use LU and QR matrix factorizations, respectively, which reformulate the high-dimensional optimization problem as several sequential polynomial and rational subproblems.
By using both LU

  18. Semi-microdetermination of nitrogen in actinide compounds by Dumas method

    International Nuclear Information System (INIS)

    Nagar, M.S.; Ruikar, P.B.; Subramanian, M.S.

    1986-01-01

    This report describes the application of the Dumas method for the semi-micro determination of nitrogen in actinide compounds and actinide complexes with organic ligands. The usual set up has been modified to make it adaptable for glove box operations. The carbon dioxide generator and nitrometer assemblies were located outside the glove box while the reaction tube and combustion furnaces were housed inside. The nitrogen gas collected in the nitrometer was read with the help of a travelling microscope with a vernier attachment fitted in front of the nitrometer burette. The set up was standardised using acetanilide and employed for the determination of nitrogen in various substances such as uranium nitride, and a variety of substituted quinoline and pyrazolone derivatives of actinides as well as some ternary uranium-PMBR-sulphoxide complexes. Full details of the technique and the analytical data obtained are contained in this report. (author)

  19. Application of empirical mode decomposition method for characterization of random vibration signals

    Directory of Open Access Journals (Sweden)

    Setyamartana Parman

    2016-07-01

    Full Text Available Characterization of finite measured signals is of great importance in dynamical modeling and system identification. This paper addresses an approach for characterization of measured random vibration signals, where the approach rests on a method called empirical mode decomposition (EMD). The applicability of the proposed approach is tested on numerical and experimental data from a structural system, namely a spar platform. The results are three main signal components: noise embedded in the measured signal as the first component, the first intrinsic mode function (IMF), called the wave frequency response (WFR), as the second component, and the second IMF, called the low frequency response (LFR), as the third component, while the residue is the trend. The band-pass filter (BPF) method is taken as a benchmark for the results obtained from the EMD method.
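The sifting loop at the heart of EMD is compact enough to sketch. The fixed iteration count, the naive extrema test, and the two-tone test signal below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift(signal, t, n_iter=10):
    """Extract the first intrinsic mode function (IMF) by repeatedly
    subtracting the mean of the upper/lower extrema envelopes."""
    h = signal.copy()
    for _ in range(n_iter):
        d = np.diff(h)
        # Interior local maxima and minima of the current candidate IMF
        maxima = np.flatnonzero((d[:-1] > 0) & (d[1:] <= 0)) + 1
        minima = np.flatnonzero((d[:-1] < 0) & (d[1:] >= 0)) + 1
        if len(maxima) < 4 or len(minima) < 4:
            break
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h = h - (upper + lower) / 2.0   # remove the local mean envelope
    return h

# Two-tone test: a fast oscillation riding on a slow one.
t = np.linspace(0.0, 1.0, 2000)
fast = np.sin(2 * np.pi * 50 * t)
slow = 0.8 * np.sin(2 * np.pi * 5 * t)
imf1 = sift(fast + slow, t)
# Away from the ends, the first IMF should track the fast component.
mid = slice(300, 1700)
err = np.max(np.abs(imf1[mid] - fast[mid]))
```

In a full EMD the residue `signal - imf1` would be sifted again to obtain the next IMF (the low-frequency response in the paper's terminology) until only a monotonic trend remains.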

  20. Semi-Analytical method for the pricing of barrier options in case of time-dependent parameters (with Matlab® codes

    Directory of Open Access Journals (Sweden)

    Guardasoni C.

    2018-03-01

    Full Text Available A Semi-Analytical method for pricing of Barrier Options (SABO) is presented. The method is based on the foundations of Boundary Integral Methods, which are recast here for the application to barrier option pricing in the Black-Scholes model with time-dependent interest rate, volatility and dividend yield. The validity of the numerical method is illustrated by several numerical examples and comparisons.
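For orientation, the constant-coefficient special case that SABO generalizes has a closed form via the reflection principle, and is the kind of benchmark a semi-analytical solver is validated against. The sketch below assumes constant rate and volatility, no dividends, and a barrier below the strike; it is not the boundary-integral method itself:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes European call with constant parameters."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def down_and_out_call(S, K, H, r, sigma, T):
    """Closed-form down-and-out call for a constant barrier H <= K via the
    reflection principle (constant-coefficient case only; the time-dependent
    case treated in the paper needs a boundary-integral approach)."""
    if S <= H:
        return 0.0   # already knocked out
    k = 2.0 * r / sigma**2 - 1.0
    return bs_call(S, K, r, sigma, T) - (H / S) ** k * bs_call(H * H / S, K, r, sigma, T)

price = down_and_out_call(S=100, K=100, H=80, r=0.03, sigma=0.25, T=1.0)
vanilla = bs_call(100, 100, 0.03, 0.25, 1.0)
```

Sanity checks: the knocked-out price is positive yet below the vanilla call, and it converges to the vanilla price as the barrier is lowered far out of the money.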

  1. Semi-analytical MBS Pricing

    DEFF Research Database (Denmark)

    Rom-Poulsen, Niels

    2007-01-01

    This paper presents a multi-factor valuation model for fixed-rate callable mortgage backed securities (MBS). The model yields semi-analytic solutions for the value of MBS in the sense that the MBS value is found by solving a system of ordinary differential equations. Instead of modelling the cond[…] interest rate model. However, if the pool size is specified in a way that makes the expectations solvable using transform methods, semi-analytic pricing formulas are achieved. The affine and quadratic pricing frameworks are combined to get flexible and sophisticated prepayment functions. We show...

  2. Semi-empirical relationship between the hardness, grain size and mean free path of WC-Co

    CSIR Research Space (South Africa)

    Makhele-Lekala, L

    2001-01-01

    Full Text Available A semi-empirical relationship between the hardness, grain size of WC and mean free path in Co was obtained. It was found that the empirical formula fitted our measured hardness well. However, when used against results of other researchers, it did not reproduce them satisfactorily at values higher than...

  3. A Four-Stage Fifth-Order Trigonometrically Fitted Semi-Implicit Hybrid Method for Solving Second-Order Delay Differential Equations

    Directory of Open Access Journals (Sweden)

    Sufia Zulfa Ahmad

    2016-01-01

    Full Text Available We derived a two-step, four-stage, and fifth-order semi-implicit hybrid method which can be used for solving special second-order ordinary differential equations. The method is then trigonometrically fitted so that it is suitable for solving problems which are oscillatory in nature. The methods are then used for solving oscillatory delay differential equations. Numerical results clearly show the efficiency of the new method when compared to the existing explicit and implicit methods in the scientific literature.

  4. An empirical Bayes method for updating inferences in analysis of quantitative trait loci using information from related genome scans.

    Science.gov (United States)

    Zhang, Kui; Wiener, Howard; Beasley, Mark; George, Varghese; Amos, Christopher I; Allison, David B

    2006-08-01

    Individual genome scans for quantitative trait loci (QTL) mapping often suffer from low statistical power and imprecise estimates of QTL location and effect. This lack of precision yields large confidence intervals for QTL location, which are problematic for subsequent fine mapping and positional cloning. In prioritizing areas for follow-up after an initial genome scan and in evaluating the credibility of apparent linkage signals, investigators typically examine the results of other genome scans of the same phenotype and informally update their beliefs about which linkage signals in their scan most merit confidence and follow-up via a subjective-intuitive integration approach. A method that acknowledges the wisdom of this general paradigm but formally borrows information from other scans to increase confidence in objectivity would be a benefit. We developed an empirical Bayes analytic method to integrate information from multiple genome scans. The linkage statistic obtained from a single genome scan study is updated by incorporating statistics from other genome scans as prior information. This technique does not require that all studies have an identical marker map or a common estimated QTL effect. The updated linkage statistic can then be used for the estimation of QTL location and effect. We evaluate the performance of our method by using extensive simulations based on actual marker spacing and allele frequencies from available data. Results indicate that the empirical Bayes method can account for between-study heterogeneity, estimate the QTL location and effect more precisely, and provide narrower confidence intervals than results from any single individual study. We also compared the empirical Bayes method with a method originally developed for meta-analysis (a closely related but distinct purpose). In the face of marked heterogeneity among studies, the empirical Bayes method outperforms the comparator.
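A toy normal-normal version conveys the flavor of such an update: a prior is estimated from the other scans and the local linkage statistic is shrunk toward it. The precision weighting and the method-of-moments prior variance below are generic textbook choices, not the authors' exact procedure:

```python
import numpy as np

def eb_update(z_local, se_local, z_others, se_others):
    """Empirical-Bayes update of a local statistic using other studies as a
    prior (normal-normal illustration). The posterior mean shrinks the local
    estimate toward the precision-weighted mean of the other scans."""
    z_others = np.asarray(z_others, float)
    se_others = np.asarray(se_others, float)
    # Precision-weighted prior mean from the other genome scans
    w = 1.0 / se_others**2
    prior_mean = np.sum(w * z_others) / np.sum(w)
    # Method-of-moments between-study variance, truncated at zero
    prior_var = max(np.var(z_others, ddof=1) - np.mean(se_others**2), 1e-12)
    # Posterior: shrinkage toward the prior mean, tighter standard error
    shrink = prior_var / (prior_var + se_local**2)
    post_mean = shrink * z_local + (1.0 - shrink) * prior_mean
    post_se = np.sqrt(1.0 / (1.0 / prior_var + 1.0 / se_local**2))
    return post_mean, post_se

post, se = eb_update(z_local=3.5, se_local=1.0,
                     z_others=[1.0, 4.0, 2.0, 3.5], se_others=[0.5, 0.5, 0.5, 0.5])
```

Here the local signal (3.5) is pulled toward the consensus of the other scans (about 2.6), and the posterior standard error is smaller than the local one alone, mirroring the narrower confidence intervals reported in the abstract.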

  5. Developing a Clustering-Based Empirical Bayes Analysis Method for Hotspot Identification

    Directory of Open Access Journals (Sweden)

    Yajie Zou

    2017-01-01

    Full Text Available Hotspot identification (HSID) is a critical part of network-wide safety evaluations. Typical methods for ranking sites are often rooted in using the Empirical Bayes (EB) method to estimate safety from both observed crash records and predicted crash frequency based on similar sites. The performance of the EB method is highly related to the selection of a reference group of sites (i.e., roadway segments or intersections) similar to the target site, from which the safety performance functions (SPF) used to predict crash frequency will be developed. As crash data often contain underlying heterogeneity that, in essence, can make them appear to be generated from distinct subpopulations, methods are needed to select similar sites in a principled manner. To overcome this possible heterogeneity problem, EB-based HSID methods that use common clustering methodologies (e.g., mixture models, K-means, and hierarchical clustering) to select “similar” sites for building SPFs are developed. Performance of the clustering-based EB methods is then compared using real crash data. Here, HSID results, when computed on Texas undivided rural highway crash data, suggest that all three clustering-based EB analysis methods are preferred over the conventional statistical methods. Thus, properly classifying the road segments for heterogeneous crash data can further improve HSID accuracy.
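The pipeline can be caricatured in a few lines: cluster the sites, treat a per-cluster mean as a stand-in for the SPF, and apply the standard EB weighting within each cluster. The 1-D k-means on traffic volume, the per-cluster mean in place of a fitted SPF, and the overdispersion value are illustrative assumptions:

```python
import numpy as np

def eb_safety_estimate(observed, predicted, overdispersion=0.2):
    """Standard negative-binomial EB correction used in highway safety:
    a weighted average of the SPF prediction and the observed count."""
    w = 1.0 / (1.0 + overdispersion * predicted)
    return w * predicted + (1.0 - w) * observed

def cluster_then_eb(aadt, crashes, k=2, iters=20, seed=0):
    """Group sites by traffic volume with plain 1-D k-means, then apply
    the EB correction within each cluster (sketch of the idea only)."""
    aadt = np.asarray(aadt, float)
    crashes = np.asarray(crashes, float)
    rng = np.random.default_rng(seed)
    centers = rng.choice(aadt, size=k, replace=False)
    for _ in range(iters):  # plain Lloyd iterations
        labels = np.argmin(np.abs(aadt[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = aadt[labels == j].mean()
    eb = np.empty_like(crashes)
    for j in range(k):
        mask = labels == j
        spf_pred = crashes[mask].mean()  # per-cluster stand-in for the SPF
        eb[mask] = eb_safety_estimate(crashes[mask], spf_pred)
    return labels, eb

aadt = [500, 600, 550, 9000, 9500, 8800]
crashes = [1, 2, 1, 10, 30, 12]
labels, eb = cluster_then_eb(aadt, crashes)
```

The high-crash site (30 observed crashes) is shrunk toward its own cluster's mean rather than the global mean, which is exactly the reference-group effect the abstract describes.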

  6. Semi-analytic solution to planar Helmholtz equation

    Directory of Open Access Journals (Sweden)

    Tukač M.

    2013-06-01

    Full Text Available The acoustic solution of interior domains is of great interest: solving acoustic pressure fields faster and with lower computational requirements is demanded. A novel solution technique based on the analytic solution to the Helmholtz equation in a rectangular domain is presented. This semi-analytic solution is compared with the finite element method, which is taken as the reference. Results show that the presented method is as precise as the finite element method. As the semi-analytic method doesn’t require spatial discretization, it can be used for small and very large acoustic problems with the same computational costs.
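A minimal instance of such a semi-analytic solution is the modal (eigenfunction) expansion for a rigid-walled rectangle driven by a point source. The mode count, wavenumber, and Neumann boundary condition below are illustrative choices, not those of the paper:

```python
import numpy as np

def helmholtz_rect(x, y, x0, y0, Lx, Ly, k, n_modes=30):
    """Pressure at (x, y) for a unit point source at (x0, y0) in a rigid-walled
    Lx-by-Ly rectangle: truncated cosine-mode expansion of the Helmholtz
    equation. No spatial mesh is needed, only the modal sums."""
    p = 0.0
    for m in range(n_modes):
        for n in range(n_modes):
            # Rigid-wall (Neumann) cosine modes and their normalization
            norm = (2.0 if m else 1.0) * (2.0 if n else 1.0) / (Lx * Ly)
            phi = np.cos(m * np.pi * x / Lx) * np.cos(n * np.pi * y / Ly)
            phi0 = np.cos(m * np.pi * x0 / Lx) * np.cos(n * np.pi * y0 / Ly)
            kmn2 = (m * np.pi / Lx) ** 2 + (n * np.pi / Ly) ** 2
            p += norm * phi * phi0 / (kmn2 - k ** 2)
    return p

# Acoustic reciprocity: swapping source and receiver leaves the field unchanged.
p = helmholtz_rect(0.31, 0.42, 0.73, 0.21, 1.0, 1.0, k=5.0)
q = helmholtz_rect(0.73, 0.21, 0.31, 0.42, 1.0, 1.0, k=5.0)
```

The truncated sum converges slowly near the source point, and blows up if k² approaches a modal eigenvalue k_mn² (a physical resonance); away from both, it is well behaved.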

  7. 'Semi-realistic' F-term inflation model building in supergravity

    International Nuclear Information System (INIS)

    Kain, Ben

    2008-01-01

    We describe methods for building 'semi-realistic' models of F-term inflation. By semi-realistic we mean that they are built in, and obey the requirements of, 'semi-realistic' particle physics models. The particle physics models are taken to be effective supergravity theories derived from orbifold compactifications of string theory, and their requirements are taken to be modular invariance, absence of mass terms and stabilization of moduli. We review the particle physics models, their requirements and tools and methods for building inflation models

  8. Label Information Guided Graph Construction for Semi-Supervised Learning.

    Science.gov (United States)

    Zhuang, Liansheng; Zhou, Zihan; Gao, Shenghua; Yin, Jingwen; Lin, Zhouchen; Ma, Yi

    2017-09-01

    In the literature, most existing graph-based semi-supervised learning methods only use the label information of observed samples in the label propagation stage, while ignoring such valuable information when learning the graph. In this paper, we argue that it is beneficial to consider the label information in the graph learning stage. Specifically, by enforcing the weight of edges between labeled samples of different classes to be zero, we explicitly incorporate the label information into the state-of-the-art graph learning methods, such as the low-rank representation (LRR), and propose a novel semi-supervised graph learning method called semi-supervised low-rank representation. This results in a convex optimization problem with linear constraints, which can be solved by the linearized alternating direction method. Though we take LRR as an example, our proposed method is in fact very general and can be applied to any self-representation graph learning methods. Experiment results on both synthetic and real data sets demonstrate that the proposed graph learning method can better capture the global geometric structure of the data, and therefore is more effective for semi-supervised learning tasks.
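The paper's key constraint — forcing edges between labeled samples of different classes to zero — can be demonstrated with classic label propagation instead of LRR (an intentional simplification; the RBF similarity graph below is a hypothetical toy example):

```python
import numpy as np

def label_propagation(W, labels, n_iter=200, alpha=0.9):
    """Graph label propagation F <- alpha*S*F + (1-alpha)*Y, after zeroing
    edges that join labeled samples of different classes (labels < 0 mean
    unlabeled)."""
    W = W.astype(float).copy()
    n = len(labels)
    classes = sorted({c for c in labels if c >= 0})
    for i in range(n):
        for j in range(n):
            if labels[i] >= 0 and labels[j] >= 0 and labels[i] != labels[j]:
                W[i, j] = 0.0  # cross-class labeled edge removed
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    S = W / np.sqrt(d)[:, None] / np.sqrt(d)[None, :]  # symmetric normalization
    Y = np.zeros((n, len(classes)))
    for i, c in enumerate(labels):
        if c >= 0:
            Y[i, classes.index(c)] = 1.0
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    return np.argmax(F, axis=1)

# Two 1-D clusters with one labeled point in each; the rest are unlabeled.
pts = np.array([0.0, 0.1, 0.2, 1.0, 1.1, 1.2])
W = np.exp(-(pts[:, None] - pts[None, :]) ** 2 / 0.1)
np.fill_diagonal(W, 0.0)
pred = label_propagation(W, [0, -1, -1, 1, -1, -1])
```

The labels diffuse along within-cluster edges, so the unlabeled points inherit the class of the labeled point in their own cluster.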

  9. Cross-Domain Semi-Supervised Learning Using Feature Formulation.

    Science.gov (United States)

    Xingquan Zhu

    2011-12-01

    Semi-Supervised Learning (SSL) traditionally makes use of unlabeled samples by including them into the training set through an automated labeling process. Such a primitive Semi-Supervised Learning (pSSL) approach suffers from a number of disadvantages, including false labeling and the inability to utilize out-of-domain samples. In this paper, we propose a formative Semi-Supervised Learning (fSSL) framework which explores hidden features between labeled and unlabeled samples to achieve semi-supervised learning. fSSL regards that both labeled and unlabeled samples are generated from some hidden concepts with labeling information partially observable for some samples. The key of fSSL is to recover the hidden concepts, and take them as new features to link labeled and unlabeled samples for semi-supervised learning. Because unlabeled samples are only used to generate new features, but not explicitly included in the training set as in pSSL, fSSL overcomes the inherent disadvantages of traditional pSSL methods, especially for samples not within the same domain as the labeled instances. Experimental results and comparisons demonstrate that fSSL significantly outperforms pSSL-based methods for both within-domain and cross-domain semi-supervised learning.

  10. SSEL-ADE: A semi-supervised ensemble learning framework for extracting adverse drug events from social media.

    Science.gov (United States)

    Liu, Jing; Zhao, Songzheng; Wang, Gang

    2018-01-01

    With the development of Web 2.0 technology, social media websites have become lucrative but under-explored data sources for extracting adverse drug events (ADEs), which is a serious health problem. Besides ADE, other semantic relation types (e.g., drug indication and beneficial effect) could hold between the drug and adverse event mentions, making ADE relation extraction - distinguishing ADE relationship from other relation types - necessary. However, conducting ADE relation extraction in social media environment is not a trivial task because of the expertise-dependent, time-consuming and costly annotation process, and the feature space's high-dimensionality attributed to intrinsic characteristics of social media data. This study aims to develop a framework for ADE relation extraction using patient-generated content in social media with better performance than that delivered by previous efforts. To achieve the objective, a general semi-supervised ensemble learning framework, SSEL-ADE, was developed. The framework exploited various lexical, semantic, and syntactic features, and integrated ensemble learning and semi-supervised learning. A series of experiments were conducted to verify the effectiveness of the proposed framework. Empirical results demonstrate the effectiveness of each component of SSEL-ADE and reveal that our proposed framework outperforms most existing ADE relation extraction methods. The SSEL-ADE can facilitate enhanced ADE relation extraction performance, thereby providing more reliable support for pharmacovigilance. Moreover, the proposed semi-supervised ensemble methods have the potential of being applied to effectively deal with other social media-based problems. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Stability and mobility of self-interstitials and small interstitial clusters in α-iron: ab initio and empirical potential calculations

    International Nuclear Information System (INIS)

    Willaime, F.; Fu, C.C.; Marinica, M.C.; Dalla Torre, J.

    2005-01-01

    The stability and mobility of self-interstitials and small interstitial clusters, I_n, in α-Fe is investigated by means of calculations performed in the framework of the density functional theory using the SIESTA code. The mono-, di- and tri-interstitials are shown to be made of (parallel) dumbbells and to migrate by nearest-neighbor translation-rotation jumps, according to Johnson's mechanism. The orientation of the dumbbells becomes energetically more favourable for I_5 and larger clusters. The performance of a semi-empirical potential recently developed for Fe, including ab initio self-interstitial data in the fitted properties, is evaluated over the present results. The superiority over previous semi-empirical potentials is confirmed. Finally, the impact of the present results on the formation mechanism of loops, observed experimentally in α-Fe, is discussed

  12. SEMI-EMPIRICAL MODELING OF THE PHOTOSPHERE, CHROMOPSHERE, TRANSITION REGION, AND CORONA OF THE M-DWARF HOST STAR GJ 832

    Energy Technology Data Exchange (ETDEWEB)

    Fontenla, J. M. [NorthWest Research Associates, Boulder, CO 80301 (United States); Linsky, Jeffrey L. [JILA, University of Colorado and NIST, Boulder, CO 80309-0440 (United States); Witbrod, Jesse [University of Colorado Boulder, CO 80309 (United States); France, Kevin [LASP, University of Colorado Boulder, CO 80309-0600 (United States); Buccino, A.; Mauas, Pablo; Vieytes, Mariela [Instituto de Astronomía y Física del Espacio (CONICET-UBA), C.C. 67, Sucursal 28, C1428EHA, Buenos Aires (Argentina); Walkowicz, Lucianne M., E-mail: johnf@digidyna.com, E-mail: jlinsky@jila.colorado.edu, E-mail: jesse.witbrod@colorado.edu, E-mail: kevin.france@lasp.colorado.edu, E-mail: abuccino@iafe.uba.ar, E-mail: pablo@iafe.uba.ar, E-mail: mariela@iafe.uba.ar, E-mail: LWalkowicz@adlerplanetarium.org [The Adler Planetarium, Chicago, IL 60605 (United States)

    2016-10-20

    Stellar radiation from X-rays to the visible provides the energy that controls the photochemistry and mass loss from exoplanet atmospheres. The important extreme ultraviolet (EUV) region (10–91.2 nm) is inaccessible and should be computed from a reliable stellar model. It is essential to understand the formation regions and physical processes responsible for the various stellar emission features to predict how the spectral energy distribution varies with age and activity levels. We compute a state-of-the-art semi-empirical atmospheric model and the emergent high-resolution synthetic spectrum of the moderately active M2 V star GJ 832 as the first of a series of models for stars with different activity levels. We construct a one-dimensional simple model for the physical structure of the star’s chromosphere, chromosphere-corona transition region, and corona using non-LTE radiative transfer techniques and many molecular lines. The synthesized spectrum for this model fits the continuum and lines across the UV-to-optical spectrum. Particular emphasis is given to the emission lines at wavelengths that are shorter than 300 nm observed with the Hubble Space Telescope, which have important effects on the photochemistry of the exoplanet atmospheres. The FUV line ratios indicate that the transition region of GJ 832 is more biased to hotter material than that of the quiet Sun. The excellent agreement of our computed EUV luminosity with that obtained by two other techniques indicates that our model predicts reliable EUV emission from GJ 832. We find that the unobserved EUV flux of GJ 832, which heats the outer atmospheres of exoplanets and drives their mass loss, is comparable to the active Sun.

  13. NetFCM: A Semi-Automated Web-Based Method for Flow Cytometry Data Analysis

    DEFF Research Database (Denmark)

    Frederiksen, Juliet Wairimu; Buggert, Marcus; Karlsson, Annika C.

    2014-01-01

    data analysis has become more complex and labor-intensive than previously. We have therefore developed a semi-automatic gating strategy (NetFCM) that uses clustering and principal component analysis (PCA) together with other statistical methods to mimic manual gating approaches. NetFCM is an online tool both for subset identification as well as for quantification of differences between samples. Additionally, NetFCM can classify and cluster samples based on multidimensional data. We tested the method using a data set of peripheral blood mononuclear cells collected from 23 HIV-infected individuals […] corresponding to those obtained by manual gating strategies. These data demonstrate that NetFCM has the potential to identify relevant T cell populations by mimicking classical FCM data analysis and reduce the subjectivity and amount of time associated with such analysis. (c) 2014 International Society...

  14. A Computational Realization of a Semi-Lagrangian Method for Solving the Advection Equation

    Directory of Open Access Journals (Sweden)

    Alexander Efremov

    2014-01-01

    Full Text Available A parallel implementation of a method of the semi-Lagrangian type for the advection equation on a hybrid-architecture computation system is discussed. The difference scheme with variable stencil is constructed on the basis of an integral equality between neighboring time levels. The proposed approach allows one to avoid the Courant-Friedrichs-Lewy restriction on the relation between time step and mesh size. The theoretical results are confirmed by numerical experiments. Performance of a sequential algorithm and of several parallel implementations with the OpenMP and CUDA technologies in the C language has been studied.
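The semi-Lagrangian idea (trace each grid point back along its characteristic, then interpolate at the departure point) can be sketched for constant-velocity 1-D advection. The periodic grid and linear interpolation below are simplifying assumptions; the paper's scheme uses a variable stencil and runs in parallel:

```python
import numpy as np

def semi_lagrangian_step(u, velocity, dt, dx):
    """One semi-Lagrangian step for u_t + a*u_x = 0 on a periodic grid:
    trace each grid point back along its characteristic and interpolate
    linearly. Stable even when a*dt/dx exceeds 1 (no CFL restriction)."""
    n = len(u)
    x = np.arange(n) * dx
    # Departure points of the characteristics (wrapped periodically)
    x_dep = (x - velocity * dt) % (n * dx)
    idx = np.floor(x_dep / dx).astype(int)
    frac = x_dep / dx - idx
    return (1.0 - frac) * u[idx % n] + frac * u[(idx + 1) % n]

n, a = 200, 1.0
dx = 1.0 / n
x = np.arange(n) * dx
u = np.exp(-200.0 * (x - 0.5) ** 2)   # Gaussian pulse centered at 0.5
dt = 5 * dx / a                        # Courant number 5
for _ in range(8):                     # advect by 8 * 5 = 40 cells = 0.2 units
    u = semi_lagrangian_step(u, a, dt, dx)
```

At a Courant number of 5 an explicit Eulerian scheme would be unstable; the semi-Lagrangian step simply traces back across several cells, so the pulse arrives intact at x = 0.7.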

  15. Predicting membrane flux decline from complex mixtures using flow-field flow fractionation measurements and semi-empirical theory.

    Science.gov (United States)

    Pellegrino, J; Wright, S; Ranvill, J; Amy, G

    2005-01-01

    Flow-Field Flow Fractionation (FI-FFF) is an idealization of the cross flow membrane filtration process in that (1) the filtration flux and crossflow velocity are constant from beginning to end of the device, (2) the process is a relatively well-defined laminar-flow hydrodynamic condition, and (3) the solutes are introduced as a pulse-input that spreads due to interactions with each other and the membrane in the dilute-solution limit. We have investigated the potential for relating FI-FFF measurements to membrane fouling. An advection-dispersion transport model was used to provide 'ideal' (defined as spherical, non-interacting solutes) solute residence time distributions (RTDs) for comparison with 'real' RTDs obtained experimentally at different cross-field velocities and solution ionic strengths. An RTD moment analysis based on a particle diameter probability density function was used to extract "effective" characteristic properties, rather than uniquely defined characteristics, of the standard solute mixture. A semi-empirical unsteady-state flux decline model was developed that uses solute property parameters. Three modes of flux decline are included: (1) concentration polarization, (2) cake buildup, and (3) adsorption on/in pores. We have used this model to test the hypothesis that an analysis of a residence time distribution using FI-FFF can describe 'effective' solute properties or indices that can be related to membrane flux decline in crossflow membrane filtration. Constant flux filtration studies included changes of transport hydrodynamics (solvent flux to solute back diffusion (J/k) ratios), solution ionic strength, and feed water composition for filtration using a regenerated cellulose ultrafiltration membrane. Tests of the modeling hypothesis were compared with experimental results from the filtration measurements using several correction parameters based on the mean and variance of the solute RTDs. 
The corrections used to modify the boundary layer

  16. Semi-solid electrodes having high rate capability

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Yet-Ming; Duduta, Mihai; Holman, Richard; Limthongkul, Pimpa; Tan, Taison

    2017-11-28

    Embodiments described herein relate generally to electrochemical cells having high rate capability, and more particularly to devices, systems and methods of producing high capacity and high rate capability batteries having relatively thick semi-solid electrodes. In some embodiments, an electrochemical cell includes an anode and a semi-solid cathode. The semi-solid cathode includes a suspension of about 35% to about 75% by volume of an active material and about 0.5% to about 8% by volume of a conductive material in a non-aqueous liquid electrolyte. An ion-permeable membrane is disposed between the anode and the semi-solid cathode. The semi-solid cathode has a thickness of about 250 µm to about 2,000 µm, and the electrochemical cell has an area specific capacity of at least about 7 mAh/cm² at a C-rate of C/4. In some embodiments, the semi-solid cathode slurry has a mixing index of at least about 0.9.

  17. A semi-automatic method for peak and valley detection in free-breathing respiratory waveforms

    International Nuclear Information System (INIS)

    Lu Wei; Nystrom, Michelle M.; Parikh, Parag J.; Fooshee, David R.; Hubenschmidt, James P.; Bradley, Jeffrey D.; Low, Daniel A.

    2006-01-01

    The existing commercial software often inadequately determines respiratory peaks for patients in respiration correlated computed tomography. A semi-automatic method was developed for peak and valley detection in free-breathing respiratory waveforms. First the waveform is separated into breath cycles by identifying intercepts of a moving average curve with the inspiration and expiration branches of the waveform. Peaks and valleys were then defined, respectively, as the maximum and minimum between pairs of alternating inspiration and expiration intercepts. Finally, automatic corrections and manual user interventions were employed. On average for each of the 20 patients, 99% of 307 peaks and valleys were automatically detected in 2.8 s. This method was robust for bellows waveforms with large variations.
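The moving-average-intercept idea can be sketched as follows. The window length, the synthetic sine waveform, and the per-segment extremum rule are illustrative choices, not the authors' tuned parameters:

```python
import numpy as np

def find_peaks_valleys(signal, window=20):
    """Detect peaks and valleys in a quasi-periodic waveform: locate the
    crossings of the signal with its own moving average, then take the
    extremum of each segment between consecutive crossings."""
    kernel = np.ones(window) / window
    avg = np.convolve(signal, kernel, mode="same")   # moving average curve
    above = signal > avg
    # Indices where the signal crosses its moving average
    crossings = np.flatnonzero(np.diff(above.astype(int)) != 0) + 1
    peaks, valleys = [], []
    for lo, hi in zip(crossings[:-1], crossings[1:]):
        segment = signal[lo:hi]
        if above[lo]:                                # inspiration branch -> peak
            peaks.append(lo + int(np.argmax(segment)))
        else:                                        # expiration branch -> valley
            valleys.append(lo + int(np.argmin(segment)))
    return np.array(peaks), np.array(valleys)

# Synthetic breathing-like waveform: five cycles of a sine wave.
t = np.linspace(0.0, 5 * 2 * np.pi, 1000)
sig = np.sin(t)
peaks, valleys = find_peaks_valleys(sig, window=100)
```

The automatic corrections and manual user interventions described in the abstract would then operate on these candidate peaks and valleys.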

  18. A simple semi-empirical approach to model thickness of ash-deposits for different eruption scenarios

    Directory of Open Access Journals (Sweden)

    A. O. González-Mellado

    2010-11-01

    Full Text Available The impact of ash-fall on people, buildings, crops, water resources, and infrastructure depends on several factors such as the thickness of the deposits, grain size distribution and others. Preparedness against tephra falls over large regions around an active volcano requires an understanding of all processes controlling those factors, and a working model capable of predicting at least some of them. However, the complexity of tephra dispersion and sedimentation makes the search of an integral solution an almost unapproachable problem in the absence of highly efficient computing facilities due to the large number of equations and unknown parameters that control the process. An alternative attempt is made here to address the problem of modeling the thickness of ash deposits as a primary impact factor that can be easily communicated to the public and decision-makers. We develop a semi-empirical inversion model to estimate the thickness of non-compacted deposits produced by an explosive eruption around a volcano in the distance range 4–150 km from the eruptive source.

    The model was elaborated from the analysis of the geometric distribution of deposit thickness of 14 world-wide well-documented eruptions. The model was initially developed to depict deposits of potential eruptions of Popocatépetl and Colima volcanoes in México, but it can be applied to any volcano. It has been designed to provide planners and Civil Protection authorities with an accurate perception of the ash-fall deposit thickness that may be expected for different eruption scenarios. The model needs to be fed with a few easy-to-obtain parameters, namely, height of the eruptive column, duration of the explosive phase, and wind speed and direction, and its simplicity allows it to run on any platform, including a personal computer and even a notebook. The results may be represented as tables, two-dimensional thickness-distance plots, or isopach maps using any available

  19. A wavelet-MRA-based adaptive semi-Lagrangian method for the relativistic Vlasov-Maxwell system

    International Nuclear Information System (INIS)

    Besse, Nicolas; Latu, Guillaume; Ghizzo, Alain; Sonnendruecker, Eric; Bertrand, Pierre

    2008-01-01

    In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and regularity of the distribution function. Therefore, the multiscale expansion of the distribution function allows one to get a sparse representation of the data and thus save memory space and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. Interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wave breaking. However, the wavelet-based adaptive method that we developed here does not yield significant improvements compared to Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless, it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase space where the development of thin filaments, strongly amplified by relativistic effects, requires an important increase of the total number of points of the phase-space grid as they get finer as time goes on. The adaptive method could be more useful in cases where these thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments. Moreover, the main way to improve the efficiency of the adaptive method is to

  20. Empirical pseudo-potential studies on electronic structure

    Indian Academy of Sciences (India)

    Theoretical investigations of electronic structure of quantum dots is of current interest in nanophase materials. Empirical theories such as effective mass approximation, tight binding methods and empirical pseudo-potential method are capable of explaining the experimentally observed optical properties. We employ the ...

  1. Method of semi-automatic high precision potentiometric titration for characterization of uranium compounds; Metodo de titulacao potenciometrica de alta precisao semi-automatizado para a caracterizacao de compostos de uranio

    Energy Technology Data Exchange (ETDEWEB)

    Cristiano, Barbara Fernandes G.; Dias, Fabio C.; Barros, Pedro D. de; Araujo, Radier Mario S. de; Delgado, Jose Ubiratan; Silva, Jose Wanderley S. da, E-mail: barbara@ird.gov.b, E-mail: fabio@ird.gov.b, E-mail: pedrodio@ird.gov.b, E-mail: radier@ird.gov.b, E-mail: delgado@ird.gov.b, E-mail: wanderley@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Lopes, Ricardo T., E-mail: ricardo@lin.ufrj.b [Universidade Federal do Rio de Janeiro (LIN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Lab. de Instrumentacao Nuclear

    2011-10-26

    The method of high precision potentiometric titration is widely used in the certification and characterization of uranium compounds. In order to reduce the analysis time and diminish the influence of the analyst, a semi-automatic version of the method was developed at the safeguards laboratory of the CNEN-RJ, Brazil. The method was applied with traceability guaranteed by the use of a primary standard of potassium dichromate. The combined standard uncertainty in the determination of the concentration of total uranium was of the order of 0.01%, better than that of the methods traditionally used by nuclear installations, which is of the order of 0.1%

  2. Sensitivity of ab Initio vs Empirical Methods in Computing Structural Effects on NMR Chemical Shifts for the Example of Peptides.

    Science.gov (United States)

    Sumowski, Chris Vanessa; Hanni, Matti; Schweizer, Sabine; Ochsenfeld, Christian

    2014-01-14

    The structural sensitivity of NMR chemical shifts as computed by quantum chemical methods is compared to a variety of empirical approaches for the example of a prototypical peptide, the 38-residue kaliotoxin KTX comprising 573 atoms. Despite the simplicity of empirical chemical shift prediction programs, the agreement with experimental results is rather good, underlining their usefulness. However, we show in our present work that they are highly insensitive to structural changes, which renders their use for validating predicted structures questionable. In contrast, quantum chemical methods show the expected high sensitivity to structural and electronic changes. This appears to be independent of the quantum chemical approach or the inclusion of solvent effects. For the latter, explicit solvent simulations with increasing number of snapshots were performed for two conformers of an eight amino acid sequence. In conclusion, the empirical approaches neither provide the expected magnitude nor the patterns of NMR chemical shifts determined by the clearly more costly ab initio methods upon structural changes. This restricts the use of empirical prediction programs in studies where peptide and protein structures are utilized for the NMR chemical shift evaluation such as in NMR refinement processes, structural model verifications, or calculations of NMR nuclear spin relaxation rates.

  3. Safe semi-supervised learning based on weighted likelihood.

    Science.gov (United States)

    Kawakita, Masanori; Takeuchi, Jun'ichi

    2014-05-01

    We are interested in developing a safe semi-supervised learning that works in any situation. Semi-supervised learning postulates that n′ unlabeled data are available in addition to n labeled data. However, almost all of the previous semi-supervised methods require additional assumptions (not only unlabeled data) to make improvements on supervised learning. If such assumptions are not met, then the methods possibly perform worse than supervised learning. Sokolovska, Cappé, and Yvon (2008) proposed a semi-supervised method based on a weighted likelihood approach. They proved that this method asymptotically never performs worse than supervised learning (i.e., it is safe) without any assumption. Their method is attractive because it is easy to implement and is potentially general. Moreover, it is deeply related to a certain statistical paradox. However, the method of Sokolovska et al. (2008) assumes a very limited situation, i.e., classification, discrete covariates, n′ → ∞, and a maximum likelihood estimator. In this paper, we extend their method by modifying the weight. We prove that our proposal is safe in a significantly wide range of situations as long as n ≤ n′. Further, we give a geometrical interpretation of the proof of safety through the relationship with the above-mentioned statistical paradox. Finally, we show that the above proposal is asymptotically safe even when n′ […] methods. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Application Of A New Semi-Empirical Model For Forming Limit Prediction Of Sheet Material Including Superposed Loads Of Bending And Shearing

    Science.gov (United States)

    Held, Christian; Liewald, Mathias; Schleich, Ralf; Sindel, Manfred

    2010-06-01

    The use of lightweight materials offers substantial strength and weight advantages in car body design. Unfortunately, such sheet materials are more susceptible to wrinkling, spring-back and fracture during press shop operations. For characterizing the capability of sheet material dedicated to deep drawing processes in the automotive industry, mainly Forming Limit Diagrams (FLD) are used. However, new investigations at the Institute for Metal Forming Technology have shown that high strength steel sheet material and aluminum alloys show increased formability when bending loads are superposed on stretching loads. Likewise, superposing shearing on in-plane uniaxial or biaxial tension changes formability because of the material's crystallographic texture. Such mixed stress and strain conditions including bending and shearing effects can occur in deep-drawing processes of complex car body parts as well as in subsequent forming operations like flanging. But changes in formability cannot be described using the conventional FLC. Hence, failure criteria for these strain conditions, needed to improve failure prediction in numerical simulation codes, are missing. To define a suitable failure criterion that is easy to implement into FEA, a new semi-empirical model has been developed that considers the effect of bending and shearing on sheet metal formability. This failure criterion combines the so-called cFLC (combined Forming Limit Curve), which accounts for superposed bending load conditions, and the SFLC (Shear Forming Limit Curve), which includes the effect of shearing on sheet metal formability.

  5. Application Of A New Semi-Empirical Model For Forming Limit Prediction Of Sheet Material Including Superposed Loads Of Bending And Shearing

    International Nuclear Information System (INIS)

    Held, Christian; Liewald, Mathias; Schleich, Ralf; Sindel, Manfred

    2010-01-01

    The use of lightweight materials offers substantial strength and weight advantages in car body design. Unfortunately, such sheet materials are more susceptible to wrinkling, spring-back and fracture during press shop operations. For characterizing the capability of sheet material dedicated to deep drawing processes in the automotive industry, mainly Forming Limit Diagrams (FLD) are used. However, new investigations at the Institute for Metal Forming Technology have shown that high strength steel sheet material and aluminum alloys show increased formability when bending loads are superposed on stretching loads. Likewise, superposing shearing on in-plane uniaxial or biaxial tension changes formability because of the material's crystallographic texture. Such mixed stress and strain conditions including bending and shearing effects can occur in deep-drawing processes of complex car body parts as well as in subsequent forming operations like flanging. But changes in formability cannot be described using the conventional FLC. Hence, failure criteria for these strain conditions, needed to improve failure prediction in numerical simulation codes, are missing. To define a suitable failure criterion that is easy to implement into FEA, a new semi-empirical model has been developed that considers the effect of bending and shearing on sheet metal formability. This failure criterion combines the so-called cFLC (combined Forming Limit Curve), which accounts for superposed bending load conditions, and the SFLC (Shear Forming Limit Curve), which includes the effect of shearing on sheet metal formability.

  6. A high-order method for the integration of the Galerkin semi-discretized nuclear reactor kinetics equations

    International Nuclear Information System (INIS)

    Vargas, L.

    1988-01-01

    The numerical approximate solution of the space-time nuclear reactor kinetics equation is investigated using a finite-element discretization of the space variable and a high-order integration scheme for the resulting semi-discretized parabolic equation. The Galerkin method with spatial piecewise polynomial Lagrange basis functions is used to obtain a continuous-time semi-discretized form of the space-time reactor kinetics equation. A temporal discretization is then carried out with a numerical scheme based on the Iterated Defect Correction (IDC) method using piecewise quadratic polynomials or exponential functions. The kinetics equations are thus solved within a general finite element framework with respect to space as well as time variables, in which the order of convergence of the spatial and temporal discretizations is consistently high. A computer code, GALFEM/IDC, is developed to implement the numerical schemes described above. It is used to solve a one-space-dimensional benchmark problem. The results of the numerical experiments confirm the theoretical arguments and show that the convergence is very fast and the overall procedure is quite efficient. This is due to the good asymptotic properties of the numerical scheme, which is of third order in the time interval

  7. Measurement of two-particle semi-inclusive rapidity distributions at the CERN ISR

    CERN Document Server

    Amendolia, S R; Bosisio, L; Braccini, Pier Luigi; Bradaschia, C; Castaldi, R; Cavasinni, V; Cerri, C; Del Prete, T; Finocchiaro, G; Foà, L; Giromini, P; Grannis, P; Green, D; Jöstlein, H; Kephart, R; Laurelli, P; Menzione, A; Ristori, L; Sanguinetti, G; Thun, R; Valdata, M

    1976-01-01

    Data are presented on the semi-inclusive distributions of rapidities of secondary particles produced in pp collisions at very high energies. The experiment was performed at the CERN Intersecting Storage Rings (ISR). The data given, at centre-of-mass energies of square root s=23 and 62 GeV, include the single-particle distributions and two-particle correlations. The semi-inclusive correlations show pronounced short-range correlation effects which have a width considerably narrower than in the case of inclusive correlations. It is shown that these short-range effects can be understood empirically in terms of three parameters whose energy and multiplicity dependence are studied. The data support the picture of multiparticle production in which clusters of small multiplicity and small dispersion are emitted with subsequent decay into hadrons. (32 refs).

  8. Value and depreciation of mineral resources over the very long run: An empirical contrast of different methods

    OpenAIRE

    Rubio Varas, M. del Mar

    2005-01-01

    The paper contrasts empirically the results of alternative methods for estimating the value and the depreciation of mineral resources. The historical data of Mexico and Venezuela, covering the period 1920s-1980s, is used to contrast the results of several methods. These are the present value, the net price method, the user cost method and the imputed income method. The paper establishes that the net price and the user cost are not competing methods as such, but alternative adjustments to diff...

  9. Semi-Smooth Newton Method for Solving 2D Contact Problems with Tresca and Coulomb Friction

    Directory of Open Access Journals (Sweden)

    Kristina Motyckova

    2013-01-01

    Full Text Available The contribution deals with contact problems for two elastic bodies with friction. After the description of the problem we present its discretization based on linear or bilinear finite elements. The semi-smooth Newton method is used to find the solution, from which we derive active set algorithms. Finally, we arrive at the globally convergent dual implementation of the algorithms in terms of the Lagrange multipliers for the Tresca problem. Numerical experiments conclude the paper.
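As a minimal illustration of the semi-smooth Newton idea in this record, the sketch below solves a small linear complementarity problem by Newton iterations on the non-smooth function phi(x) = min(x, F(x)), selecting rows of a generalized Jacobian from the active set (a toy example, not the authors' contact-problem implementation):

```python
import numpy as np

def semismooth_newton_lcp(A, b, x0, tol=1e-10, max_iter=50):
    """Solve the LCP  x >= 0,  F(x) = A x - b >= 0,  x . F(x) = 0
    by semi-smooth Newton iterations on phi(x) = min(x, F(x))."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        F = A @ x - b
        phi = np.minimum(x, F)
        if np.linalg.norm(phi) < tol:
            break
        # Generalized Jacobian: identity row where x_i is the active branch
        # of the min, the corresponding row of A where F_i is active.
        J = np.where((x <= F)[:, None], np.eye(len(x)), A)
        x = x - np.linalg.solve(J, phi)
    return x

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
b = np.array([1.0, -1.0])
x = semismooth_newton_lcp(A, b, x0=np.ones(2))
# Converges in a couple of iterations to x = [0.5, 0], where
# F(x) = [0, 0.5] satisfies the complementarity conditions.
```

In friction problems the same mechanism appears with F coming from the finite-element stiffness operator and the min taken against the contact and friction bounds.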

  10. Sequence selectivity of azinomycin B in DNA alkylation and cross-linking: a QM/MM study.

    Science.gov (United States)

    Senthilnathan, Dhurairajan; Kalaiselvan, Anbarasan; Venuvanalingam, Ponnambalam

    2013-01-01

    Azinomycin B--a well-known antitumor drug--forms cross-links with DNA through alkylation of purine bases and blocks tumor cell growth. This reaction has been modeled using the ONIOM (B3LYP/6-31+g(d):UFF) method to understand the mechanism and sequence selectivity. ONIOM results have been checked for reliability by comparing them with full quantum mechanics calculations for selected paths. Calculations reveal that, among the purine bases, guanine is more reactive and is alkylated by the aziridine ring through the C10 position, followed by alkylation by the epoxide ring through the C21 position of azinomycin B. While the monoalkylation is controlled kinetically, the bis-alkylation is controlled thermodynamically. Solvent effects were included using polarized-continuum-model calculations, and no significant change from gas phase results was observed.

  11. Performance of an integrated approach for prediction of bond dissociation enthalpies of phenols extracted from ginger and tea

    Science.gov (United States)

    Nam, Pham Cam; Chandra, Asit K.; Nguyen, Minh Tho

    2013-01-01

    Integration of the (RO)B3LYP/6-311++G(2df,2p) with the PM6 method into a two-layer ONIOM is found to produce reasonably accurate BDE(O-H)s of phenolic compounds. The chosen ONIOM model contains only the two atoms of the breaking bond as the core zone and is able to provide a reliable evaluation of BDE(O-H) for phenols and tocopherol. Deviation of calculated values from experiment is ±(1-2) kcal/mol. BDE(O-H) values of several curcuminoids and flavonoids extracted from ginger and tea are computed using the proposed model. The BDE(O-H) values of enol curcumin and epigallocatechin gallate are predicted to be 83.3 ± 2.0 and 76.0 ± 2.0 kcal/mol, respectively.
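The two-layer ONIOM scheme underlying this record combines a high-level calculation on a small model zone with a low-level calculation on the full system. A minimal sketch of the energy combination follows; the numerical energies are purely hypothetical placeholders, not actual (RO)B3LYP or PM6 outputs.

```python
def oniom2(e_high_model, e_low_real, e_low_model):
    """Two-layer ONIOM extrapolation:
    E(ONIOM) = E_high(model) + E_low(real) - E_low(model)."""
    return e_high_model + e_low_real - e_low_model

# Hypothetical energies (hartree), placeholders for illustration only.
e_parent = oniom2(e_high_model=-75.642, e_low_real=-0.412, e_low_model=-0.397)
# A bond dissociation enthalpy would then combine such ONIOM energies:
# BDE = E(radical) + E(H atom) - E(parent), each term evaluated as above.
```

The subtraction of E_low(model) removes the double-counted low-level description of the core zone, which is why only the two atoms of the breaking bond need the expensive treatment.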

  12. Mydriatics release from solid and semi-solid ophthalmic formulations using different in vitro methods.

    Science.gov (United States)

    Pescina, Silvia; Macaluso, Claudio; Gioia, Gloria Antonia; Padula, Cristina; Santi, Patrizia; Nicoli, Sara

    2017-09-01

    The aim of the present paper was the development of semi-solid (hydrogels) and solid (film) ophthalmic formulations for the controlled release of two mydriatics: phenylephrine and tropicamide. The formulations - based on polyvinyl alcohol and hyaluronic acid - were characterized, and release studies were performed with three different in vitro set-ups, i.e. Franz-type diffusion cell, vial method and inclined plane; for comparison, a solution and a commercial insert, both clinically used to induce mydriasis, were evaluated. Both gels and films allowed for controlled release of the drugs, appearing to be a useful alternative for mydriatic administration. However, the release kinetics were significantly influenced by the method used, highlighting the need for optimization and standardization of in vitro models for the evaluation of drug release from ophthalmic dosage forms.

  13. Semi-empirical model for optimising future heavy-ion luminosity of the LHC

    CERN Document Server

    Schaumann, M

    2014-01-01

    The wide spectrum of intensities and emittances imprinted on the LHC Pb bunches during the accumulation of bunch trains in the injector chain result in a significant spread in the single bunch luminosities and lifetimes in collision. Based on the data collected in the 2011 Pb-Pb run, an empirical model is derived to predict the single-bunch peak luminosity depending on the bunch’s position within the beam. In combination with this model, simulations of representative bunches are used to estimate the luminosity evolution for the complete ensemble of bunches. Several options are being considered to improve the injector performance and to increase the number of bunches in the LHC, leading to several potential injection scenarios, resulting in different peak and integrated luminosities. The most important options for after the long shutdown (LS) 1 and 2 are evaluated and compared.

  14. Semi-supervised prediction of gene regulatory networks using machine learning algorithms.

    Science.gov (United States)

    Patel, Nihir; Wang, Jason T L

    2015-10-01

    Use of computational methods to predict gene regulatory networks (GRNs) from gene expression data is a challenging task. Many studies have been conducted using unsupervised methods to fulfill the task; however, such methods usually yield low prediction accuracies due to the lack of training data. In this article, we propose semi-supervised methods for GRN prediction by utilizing two machine learning algorithms, namely, support vector machines (SVM) and random forests (RF). The semi-supervised methods make use of unlabelled data for training. We investigated inductive and transductive learning approaches, both of which adopt an iterative procedure to obtain reliable negative training data from the unlabelled data. We then applied our semi-supervised methods to gene expression data of Escherichia coli and Saccharomyces cerevisiae, and evaluated the performance of our methods using the expression data. Our analysis indicated that the transductive learning approach outperformed the inductive learning approach for both organisms. However, there was no conclusive difference identified in the performance of SVM and RF. Experimental results also showed that the proposed semi-supervised methods performed better than existing supervised methods for both organisms.
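The iterative pseudo-labelling idea behind such semi-supervised methods can be sketched with a toy self-training loop (a nearest-centroid classifier stands in for the SVM/RF learners of the record, and the data are synthetic two-cluster points, not gene expression profiles):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian classes; only one point per class is labelled.
n = 200
X = np.vstack([rng.normal(-2.0, 0.5, (n, 2)), rng.normal(2.0, 0.5, (n, 2))])
y_true = np.repeat([0, 1], n)
labelled = np.zeros(2 * n, dtype=bool)
labelled[[0, n]] = True

def centroids(Xl, yl):
    # Class centroids from the currently labelled/pseudo-labelled points.
    return np.array([Xl[yl == c].mean(axis=0) for c in (0, 1)])

# Self-training: repeatedly pseudo-label the most confident unlabelled
# points and fold them into the training set.
y = np.where(labelled, y_true, -1)
for _ in range(25):
    unl = np.where(y < 0)[0]
    if unl.size == 0:
        break
    c = centroids(X[y >= 0], y[y >= 0])
    d = np.linalg.norm(X[:, None, :] - c[None, :, :], axis=2)
    pred = d.argmin(axis=1)
    conf = np.abs(d[:, 0] - d[:, 1])         # margin between the classes
    take = unl[np.argsort(-conf[unl])[:20]]  # 20 most confident per round
    y[take] = pred[take]

accuracy = (y == y_true).mean()
```

The record's transductive approach follows the same pattern: the unlabelled pool itself is iteratively mined for reliable (here, high-margin) training examples.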

  15. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.

    2014-12-01

    In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully-resolved fine scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide computing the nonlinear function into coarse regions; (2) evaluate contributions of nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's method and fully-implicit time marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.
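The greedy interpolation-point selection at the heart of empirical interpolation can be sketched as follows (a minimal DEIM-style example on a parametric Gaussian family; illustrative only, not the GMsFEM implementation of the record):

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM selection of interpolation points for a basis U
    whose columns span the nonlinear-function snapshots."""
    m = U.shape[1]
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for k in range(1, m):
        # Interpolate the next basis vector at the chosen points and
        # place the new point where the residual is largest.
        c = np.linalg.solve(U[np.ix_(p, range(k))], U[p, k])
        r = U[:, k] - U[:, :k] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

# Snapshots of a parametric nonlinearity f(x; mu) = exp(-mu * x**2).
x = np.linspace(-1.0, 1.0, 200)
snaps = np.column_stack([np.exp(-mu * x**2)
                         for mu in np.linspace(0.5, 5.0, 30)])

# POD basis (truncated SVD of the snapshot matrix), computed "offline".
U = np.linalg.svd(snaps, full_matrices=False)[0][:, :8]
p = deim_indices(U)

def deim_approx(f_at_p):
    # Approximate f everywhere from its values at the DEIM points only.
    return U @ np.linalg.solve(U[p, :], f_at_p)

f_new = np.exp(-2.3 * x**2)     # parameter value not among the snapshots
err = np.max(np.abs(f_new - deim_approx(f_new[p])))
```

The online cost is governed by the handful of interpolation points rather than the full grid, which is exactly the saving the record exploits for residuals and Jacobians.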

  16. Empirical Music Aesthetics

    DEFF Research Database (Denmark)

    Grund, Cynthia M.

    The toolbox for empirically exploring the ways that artistic endeavors convey and activate meaning on the part of performers and audiences continues to expand. Current work employing methods at the intersection of performance studies, philosophy, motion capture and neuroscience to better understand musical performance and reception is inspired by traditional approaches within aesthetics, but it also challenges some of the presuppositions inherent in them. As an example of such work I present a research project in empirical music aesthetics begun last year and of which I am a team member.

  17. A comparison of some methods to estimate the fatigue life of plain dents

    Energy Technology Data Exchange (ETDEWEB)

    Martins, Ricardo R.; Noronha Junior, Dauro B. [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)

    2009-12-19

    This paper describes a method under development at PETROBRAS R and D Center (CENPES) to estimate the fatigue life of plain dents. This method uses the API Publication 1156 as a base to estimate the fatigue life of dome-shaped plain dents and the Pipeline Defect Assessment Manual (PDAM) approach to take into account the uncertainty inherent in the fatigue phenomenon. The CENPES method, together with an empirical and a semi-empirical method available in the literature, was employed to estimate the fatigue lives of 10 plain dent specimens from Year 1 of an ongoing test program carried out by BMT Fleet Technology Limited with the support of the Pipeline Research Council International (PRCI). The results obtained with the different methods are presented and compared. Furthermore, some details are given on the numerical methodology proposed by PETROBRAS that has been used to describe the behavior of plain dents. (author)

  18. An evaluation of semi-automated methods for collecting ecosystem-level data in temperate marine systems.

    Science.gov (United States)

    Griffin, Kingsley J; Hedge, Luke H; González-Rivero, Manuel; Hoegh-Guldberg, Ove I; Johnston, Emma L

    2017-07-01

    Historically, marine ecologists have lacked efficient tools that are capable of capturing detailed species distribution data over large areas. Emerging technologies such as high-resolution imaging and associated machine-learning image-scoring software are providing new tools to map species over large areas in the ocean. Here, we combine a novel diver propulsion vehicle (DPV) imaging system with free-to-use machine-learning software to semi-automatically generate dense and widespread abundance records of a habitat-forming algae over ~5,000 m² of temperate reef. We employ replicable spatial techniques to test the effectiveness of traditional diver-based sampling, and better understand the distribution and spatial arrangement of one key algal species. We found that the effectiveness of a traditional survey depended on the level of spatial structuring, and generally 10-20 transects (50 × 1 m) were required to obtain reliable results. This represents 2-20 times greater replication than have been collected in previous studies. Furthermore, we demonstrate the usefulness of fine-resolution distribution modeling for understanding patterns in canopy algae cover at multiple spatial scales, and discuss applications to other marine habitats. Our analyses demonstrate that semi-automated methods of data gathering and processing provide more accurate results than traditional methods for describing habitat structure at seascape scales, and therefore represent vastly improved techniques for understanding and managing marine seascapes.
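The record's finding that 10-20 transects are needed on spatially structured reefs can be illustrated with a toy subsampling simulation (a synthetic cover field with hypothetical numbers, not the authors' data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 100 x 100 m reef: algal cover with smooth spatial patchiness
# (hypothetical values, purely for illustration).
gx, gy = np.meshgrid(np.linspace(0, 4 * np.pi, 100),
                     np.linspace(0, 4 * np.pi, 100))
cover = 0.4 + 0.2 * np.sin(gx) + 0.05 * rng.normal(size=(100, 100))
true_mean = cover.mean()

def survey(n_transects):
    # Each transect samples one full 1 m wide column of the grid.
    cols = rng.choice(100, size=n_transects, replace=False)
    return cover[:, cols].mean()

# Repeat each survey design many times to see the spread of its estimates.
est_5 = np.array([survey(5) for _ in range(500)])
est_20 = np.array([survey(20) for _ in range(500)])
spread_5, spread_20 = est_5.std(), est_20.std()
# More transects -> tighter estimates of mean cover on a patchy seascape.
```

When the cover is patchy at scales comparable to transect spacing, a handful of transects gives noisy estimates, while denser replication (or exhaustive DPV imagery) pins the mean down.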

  19. Semi-solid electrode cell having a porous current collector and methods of manufacture

    Science.gov (United States)

    Chiang, Yet-Ming; Carter, William Craig; Cross, III, James C.; Bazzarella, Ricardo; Ota, Naoki

    2017-11-21

    An electrochemical cell includes an anode, a semi-solid cathode, and a separator disposed therebetween. The semi-solid cathode includes a porous current collector and a suspension of an active material and a conductive material disposed in a non-aqueous liquid electrolyte. The porous current collector is at least partially disposed within the suspension such that the suspension substantially encapsulates the porous current collector.

  20. Semi-classical signal analysis

    KAUST Repository

    Laleg-Kirati, Taous-Meriem

    2012-09-30

    This study introduces a new signal analysis method, based on a semi-classical approach. The main idea in this method is to interpret a pulse-shaped signal as a potential of a Schrödinger operator and then to use the discrete spectrum of this operator for the analysis of the signal. We present some numerical examples and the first results obtained with this method on the analysis of arterial blood pressure waveforms. © 2012 Springer-Verlag London Limited.
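The semi-classical analysis described above can be sketched directly: discretize the Schrödinger operator with the signal as potential, then rebuild the signal from the squared eigenfunctions of the negative eigenvalues. A minimal NumPy illustration follows; the value of h is chosen (an assumption of this sketch) so that the sech² test pulse is a reflectionless potential, making the reconstruction exact up to discretization error.

```python
import numpy as np

# Semi-classical signal analysis (SCSA) sketch: interpret a pulse-shaped
# signal y(x) >= 0 as the potential of the Schrodinger operator
# H = -h^2 d^2/dx^2 - y(x) and rebuild it from the discrete spectrum.
N, L = 400, 20.0
h = 1.0 / np.sqrt(6.0)             # makes sech^2 reflectionless (2 bound states)
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
y = 1.0 / np.cosh(x) ** 2          # sech^2 test pulse

# Dense finite-difference Hamiltonian with the signal as potential well.
D2 = (np.diag(np.full(N - 1, 1.0), -1) - 2.0 * np.eye(N)
      + np.diag(np.full(N - 1, 1.0), 1)) / dx**2
H = -h**2 * D2 - np.diag(y)

lam, V = np.linalg.eigh(H)
neg = lam < 0
psi = V[:, neg] / np.sqrt(dx)      # L2-normalised bound-state eigenfunctions
kappa = np.sqrt(-lam[neg])

# SCSA reconstruction: y_h(x) = 4 h * sum_n kappa_n * psi_n(x)^2.
y_rec = 4.0 * h * (psi**2 * kappa).sum(axis=1)
err = np.max(np.abs(y - y_rec))    # small: discretization error only
```

Decreasing h increases the number of negative eigenvalues, so the pair (kappa_n, psi_n) plays the role of an adaptive, pulse-shaped "basis" for the signal.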

  1. Semi-analytical Model for Estimating Absorption Coefficients of Optically Active Constituents in Coastal Waters

    Science.gov (United States)

    Wang, D.; Cui, Y.

    2015-12-01

    The objectives of this paper are to validate the applicability of a multi-band quasi-analytical algorithm (QAA) in retrieving the absorption coefficients of optically active constituents in turbid coastal waters, and to further improve the retrieval using a proposed semi-analytical model (SAA). In the SAA model, ap(531) and ag(531) are derived semi-analytically, unlike in the QAA model, where they are derived from the empirical retrievals of a(531) and a(551). The two models are calibrated and evaluated against datasets taken from 19 independent cruises in the West Florida Shelf in 1999-2003, provided by SeaBASS. The results indicate that the SAA model performs better than the QAA model in absorption retrieval. Using the SAA model to retrieve the absorption coefficients of optically active constituents from the West Florida Shelf decreases the random uncertainty of estimation by >23.05% relative to the QAA model. This study demonstrates the potential of the SAA model for estimating the absorption coefficients of optically active constituents even in turbid coastal waters. Keywords: Remote sensing; Coastal Water; Absorption Coefficient; Semi-analytical Model

  2. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    Science.gov (United States)

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.

  3. Empirical Phenomenology: A Qualitative Research Approach (The ...

    African Journals Online (AJOL)

    Empirical Phenomenology: A Qualitative Research Approach (The Cologne Seminars) ... and practical application of empirical phenomenology in social research. ... and considers its implications for qualitative methods such as interviewing ...

  4. A highly accurate spectral method for the Navier–Stokes equations in a semi-infinite domain with flexible boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Matsushima, Toshiki; Ishioka, Keiichi, E-mail: matsushima@kugi.kyoto-u.ac.jp, E-mail: ishioka@gfd-dennou.org [Graduate School of Science, Kyoto University, Kitashirakawa Oiwake-cho, Sakyo-ku, Kyoto 606-8502 (Japan)

    2017-04-15

    This paper presents a spectral method for numerically solving the Navier–Stokes equations in a semi-infinite domain bounded by a flat plane: the aim is to obtain high accuracy with flexible boundary conditions. The proposed use is for numerical simulations of small-scale atmospheric phenomena near the ground. We introduce basis functions that fit the semi-infinite domain, and an integral condition for vorticity is used to reduce the computational cost when solving the partial differential equations that appear when the viscosity term is treated implicitly. Furthermore, in order to ensure high accuracy, two iteration techniques are applied when solving the system of linear equations and in determining boundary values. This significantly reduces numerical errors, and the proposed method enables high-resolution numerical experiments. This is demonstrated by numerical experiments showing the collision of a vortex ring into a wall; these were performed using numerical models based on the proposed method. It is shown that the time evolution of the flow field is successfully obtained not only near the boundary, but also in a region far from the boundary. The applicability of the proposed method and the integral condition is discussed. (paper)

  5. An alternative empirical likelihood method in missing response problems and causal inference.

    Science.gov (United States)

    Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao

    2016-11-30

    Missing responses are common problems in medical, social, and economic studies. When responses are missing at random, a complete case data analysis may result in biases. A popular debias method is inverse probability weighting proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied in the estimation of average treatment effect in observational causal inferences. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial. Copyright © 2016 John Wiley & Sons, Ltd.
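The augmented inverse probability weighting construction this record builds on can be sketched on simulated missing-at-random data (the propensity is taken as known for simplicity; this illustrates the AIPW baseline described in the abstract, not the authors' empirical-likelihood estimator):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000

# Simulated missing-at-random responses: y is observed with a probability
# pi(x) that depends on the covariate, so the complete-case mean is biased.
x = rng.normal(size=n)
y = 2.0 + x + rng.normal(size=n)        # true mean of y is 2
pi = 1.0 / (1.0 + np.exp(-x))           # propensity, known here for the sketch
r = rng.random(n) < pi                  # observation indicator

# Outcome regression m(x), fitted on the observed cases only.
Xd = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(Xd[r], y[r], rcond=None)
m_hat = Xd @ beta

# Augmented inverse probability weighting (doubly robust) estimator:
# consistent if either the outcome model or the propensity is correct.
mu_aipw = np.mean(m_hat + r * (y - m_hat) / pi)

# Naive complete-case mean, biased upwards here because high-x cases
# are observed more often.
mu_cc = y[r].mean()
```

The augmentation term r·(y − m)/π corrects the outcome-model prediction on the observed cases, which is the double-robustness mechanism the record's estimator shares.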

  6. What 'empirical turn in bioethics'?

    Science.gov (United States)

    Hurst, Samia

    2010-10-01

    Uncertainty as to how we should articulate empirical data and normative reasoning seems to underlie most difficulties regarding the 'empirical turn' in bioethics. This article examines three different ways in which we could understand 'empirical turn'. Using real facts in normative reasoning is trivial and would not represent a 'turn'. Becoming an empirical discipline through a shift to the social and neurosciences would be a turn away from normative thinking, which we should not take. Conducting empirical research to inform normative reasoning is the usual meaning given to the term 'empirical turn'. In this sense, however, the turn is incomplete. Bioethics has imported methodological tools from empirical disciplines, but too often it has not imported the standards to which researchers in these disciplines are held. Integrating empirical and normative approaches also represents true added difficulties. Addressing these issues from the standpoint of debates on the fact-value distinction can cloud very real methodological concerns by displacing the debate to a level of abstraction where they need not be apparent. Ideally, empirical research in bioethics should meet standards for empirical and normative validity similar to those used in the source disciplines for these methods, and articulate these aspects clearly and appropriately. More modestly, criteria to ensure that none of these standards are completely left aside would improve the quality of empirical bioethics research and partly clear the air of critiques addressing its theoretical justification, when its rigour in the particularly difficult context of interdisciplinarity is what should be at stake.

  7. Semi-analytical model for hollow-core anti-resonant fibers

    Directory of Open Access Journals (Sweden)

    Wei eDing

    2015-03-01

    We describe in detail a recently developed semi-analytical method for quantitatively calculating the light transmission properties of hollow-core anti-resonant fibers (HC-ARFs). The method rests on two elements: the formation of an equiphase interface at the fiber's outermost boundary, and outward light emission governed by the Helmholtz equation in the fiber's transverse plane. Our semi-analytical results agree well with precise simulations and clarify how light leakage depends on azimuthal angle, geometrical shape, and polarization. Using this method, we investigate HC-ARFs with various core shapes (e.g., polygonal, hypocycloid) and with single- and multi-layered core surrounds. The polarization properties of ARFs are also studied. Our semi-analytical method provides clear physical insight into light guidance in ARFs and can serve as a fast and useful design aid for better ARFs.

  8. Essays on empirical likelihood in economics

    NARCIS (Netherlands)

    Gao, Z.

    2012-01-01

    This thesis intends to exploit the roots of empirical likelihood and its related methods in mathematical programming and computation. The roots will be connected and the connections will induce new solutions for the problems of estimation, computation, and generalization of empirical likelihood.

  9. Semi-Automated Discovery of Application Session Structure

    Energy Technology Data Exchange (ETDEWEB)

    Kannan, J.; Jung, J.; Paxson, V.; Koksal, C.

    2006-09-07

    While the problem of analyzing network traffic at the granularity of individual connections has seen considerable previous work and tool development, understanding traffic at a higher level---the structure of user-initiated sessions comprised of groups of related connections---remains much less explored. Some types of session structure, such as the coupling between an FTP control connection and the data connections it spawns, have prespecified forms, though the specifications do not guarantee how the forms appear in practice. Other types of sessions, such as a user reading email with a browser, only manifest empirically. Still other sessions might exist without us even knowing of their presence, such as a botnet zombie receiving instructions from its master and proceeding in turn to carry them out. We present algorithms rooted in the statistics of Poisson processes that can mine a large corpus of network connection logs to extract the apparent structure of application sessions embedded in the connections. Our methods are semi-automated in that we aim to present an analyst with high-quality information (expressed as regular expressions) reflecting different possible abstractions of an application's session structure. We develop and test our methods using traces from a large Internet site, finding diversity in the number of applications that manifest, their different session structures, and the presence of abnormal behavior. Our work has applications to traffic characterization and monitoring, source models for synthesizing network traffic, and anomaly detection.
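    The core idea of flagging session boundaries under a Poisson connection-arrival model can be conveyed in a few lines. This is an illustrative simplification, not the paper's algorithm: under a Poisson model with rate λ, inter-arrival gaps are Exponential(λ), so a gap whose tail probability falls below a chosen α is deemed too long to belong to the same session. The rate, α, and timestamps below are assumptions.

```python
import math

def split_sessions(timestamps, rate, alpha=0.01):
    """Split sorted connection timestamps into sessions.

    Under a Poisson arrival model with the given rate (connections/sec),
    inter-arrival gaps G are Exponential(rate).  A gap g with
    P(G > g) = exp(-rate * g) < alpha, i.e. g > -ln(alpha)/rate,
    is taken as a session boundary.
    """
    threshold = -math.log(alpha) / rate
    sessions, current = [], [timestamps[0]]
    for prev, t in zip(timestamps, timestamps[1:]):
        if t - prev > threshold:
            sessions.append(current)
            current = []
        current.append(t)
    sessions.append(current)
    return sessions

# Two bursts of connections separated by a long idle gap
ts = [0.0, 0.4, 1.1, 2.0, 60.0, 60.3, 61.0]
print(split_sessions(ts, rate=1.0))   # the 58 s gap splits the trace in two
```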

  10. Sensitivity analysis of semi-intensive method of swine production:a ...

    African Journals Online (AJOL)

    Data were collected by means of structured questionnaire administered on twenty-one farms practicing semi-intensive technique of swine production with the aid of cluster sampling technique. Data collected was subjected to various measures of return on investment viz: Gross Margin, Benefit-Cost Ratio, Net Present Value, ...

  11. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Sparse coding approximates a data sample as a sparse linear combination of basic codewords and uses the sparse codes as new representations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. Using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels can be predicted from the sparse codes directly using a linear classifier. By solving for the codebook, sparse codes, class labels, and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed method over supervised sparse coding methods on partially labeled data sets.

  12. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    Sparse coding approximates a data sample as a sparse linear combination of basic codewords and uses the sparse codes as new representations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. Using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels can be predicted from the sparse codes directly using a linear classifier. By solving for the codebook, sparse codes, class labels, and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed method over supervised sparse coding methods on partially labeled data sets.

  13. Semi-Dirac points in phononic crystals

    KAUST Repository

    Zhang, Xiujuan

    2014-01-01

    A semi-Dirac cone refers to a peculiar type of dispersion relation that is linear along the symmetry line but quadratic in the perpendicular direction. It was originally discovered in electron systems, in which the associated quasi-particles are massless along one direction, like those in graphene, but effective-mass-like along the other. It was reported that a semi-Dirac point is associated with the topological phase transition between a semi-metallic phase and a band insulator. Very recently, the classical analogy of a semi-Dirac cone has been reported in an electromagnetic system. Here, we demonstrate that, by accidental degeneracy, two-dimensional phononic crystals consisting of square arrays of elliptical cylinders embedded in water are also able to produce the particular dispersion relation of a semi-Dirac cone in the center of the Brillouin zone. A perturbation method is used to evaluate the linear slope and to affirm that the dispersion relation is a semi-Dirac type. If the scatterers are made of rubber, in which the acoustic wave velocity is lower than that in water, the semi-Dirac dispersion can be characterized by an effective medium theory. The effective medium parameters link the semi-Dirac point to a topological transition in the iso-frequency surface of the phononic crystal, in which an open hyperbola is changed into a closed ellipse. This topological transition results in drastic change in wave manipulation. On the other hand, the theory also reveals that the phononic crystal is a double-zero-index material along the x-direction and photonic-band-edge material along the perpendicular direction (y-direction). If the scatterers are made of steel, in which the acoustic wave velocity is higher than that in water, the effective medium description fails, even though the semi-Dirac dispersion relation looks similar to that in the previous case. Therefore different wave transport behavior is expected. 
The semi-Dirac points in phononic crystals described in

  14. Numerical simulation on void bubble dynamics using moving particle semi-implicit method

    International Nuclear Information System (INIS)

    Tian Wenxi; Ishiwatari, Yuki; Ikejiri, Satoshi; Yamakawa, Masanori; Oka, Yoshiaki

    2009-01-01

    In the present study, the collapse of a void bubble in liquid has been simulated using a moving particle semi-implicit (MPS) code. The liquid is described using moving particles, and the bubble-liquid interface is set as a vacuum pressure boundary without interfacial heat and mass transfer. The topological shape of the bubble can be traced from the motion and location of the interfacial particles. The time-dependent bubble diameter, interfacial velocity, and bubble collapse time were obtained over a wide parametric range. Comparison with the predictions of Rayleigh and Zababakhin showed good agreement, which validates the applicability and accuracy of the MPS method for the present momentum problems. The potential void-induced water-hammer pressure pulse was also evaluated, which is instructive for further material erosion studies. Bubble collapse with non-condensable gas was further simulated, and the rebound phenomenon was successfully captured, similar to vapor-filled cavitation. The present study exhibits some fundamental characteristics of void bubble hydrodynamics and is expected to inform further applications of the MPS method to complicated bubble dynamics problems.
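    For reference, the Rayleigh prediction against which such simulations are compared has a closed form: an empty spherical cavity of initial radius R0 in a liquid of density ρ collapses in t_c ≈ 0.915 R0 √(ρ/Δp). A quick check with illustrative values (not taken from the paper):

```python
import math

# Rayleigh collapse time of an empty (vacuum) spherical cavity:
#   t_c ≈ 0.915 * R0 * sqrt(rho / dp)
# Illustrative case: a 1 mm void in water at atmospheric pressure.
R0 = 1.0e-3          # initial bubble radius, m
rho = 1.0e3          # liquid density, kg/m^3
dp = 1.013e5         # driving pressure difference p_inf - p_bubble, Pa

t_c = 0.915 * R0 * math.sqrt(rho / dp)
print(f"collapse time = {t_c * 1e6:.1f} microseconds")   # about 91 us
```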

  15. INTEGRATION OF SATELLITE RAINFALL DATA AND CURVE NUMBER METHOD FOR RUNOFF ESTIMATION UNDER SEMI-ARID WADI SYSTEM

    Directory of Open Access Journals (Sweden)

    E. O. Adam

    2017-11-01

    Arid and semi-arid catchments in drylands generally require especially effective management, as scarcity of the resources and information needed to support studies and investigations is their common characteristic. Hydrology is one of the most important elements in resource management, and a deep understanding of hydrological responses is the key to better planning and land management. Quantifying surface runoff from such ungauged semi-arid catchments is among the important challenges. The 7586 km2 catchment under investigation is located in a semi-arid region of central Sudan, where mean annual rainfall is around 250 mm and represents the ultimate source of water supply. The objective is to parameterize the hydrological characteristics of the catchment and to estimate surface runoff using methods and hydrological models suited to ungauged catchments with scarce geospatial information. Satellite rainfall data were used to produce spatial runoff estimates, and remote sensing and GIS were incorporated in the investigation and in generating land cover and soil information. A five-day rainfall event (50.2 mm) was used with the SCS curve number (CN) model, which is considered suitable for this catchment since the CN method is widely used for estimating infiltration characteristics from land cover and soil properties. Runoff depths of 3.6, 15.7, and 29.7 mm were estimated for the three Antecedent Moisture Conditions (AMC-I, AMC-II, and AMC-III). The estimated runoff depths for AMC-II and AMC-III indicate the possibility of small artificial surface reservoirs that could provide water for domestic use and small household agriculture.
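    The SCS-CN computation behind such estimates is compact enough to sketch: S = 25400/CN − 254 (mm), Ia = 0.2S, and Q = (P − Ia)²/(P + 0.8S) for P > Ia. The composite curve number below is an assumption (the abstract does not state it), converted to AMC-I and AMC-III with the standard formulas; with CN(II) ≈ 82 the computed depths land close to the reported 3.6, 15.7, and 29.7 mm.

```python
def scs_runoff(P, CN):
    """SCS Curve Number direct runoff depth (mm) for rainfall P (mm)."""
    S = 25400.0 / CN - 254.0      # potential maximum retention, mm
    Ia = 0.2 * S                  # initial abstraction
    return (P - Ia) ** 2 / (P + 0.8 * S) if P > Ia else 0.0

def cn_amc1(cn2):                 # standard AMC-II -> AMC-I conversion
    return 4.2 * cn2 / (10.0 - 0.058 * cn2)

def cn_amc3(cn2):                 # standard AMC-II -> AMC-III conversion
    return 23.0 * cn2 / (10.0 + 0.13 * cn2)

P = 50.2      # the five-day rainfall event from the abstract, mm
cn2 = 82.0    # hypothetical composite CN for AMC-II (not given in the abstract)

for label, cn in [("AMC-I", cn_amc1(cn2)), ("AMC-II", cn2), ("AMC-III", cn_amc3(cn2))]:
    print(f"{label}: Q = {scs_runoff(P, cn):.1f} mm")
```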

  16. Positivity for Convective Semi-discretizations

    KAUST Repository

    Fekete, Imre; Ketcheson, David I.; Loczi, Lajos

    2017-01-01

    We propose a technique for investigating stability properties, such as positivity and forward invariance of an interval, for method-of-lines discretizations, and apply the technique to study positivity preservation for a class of TVD semi-discretizations.

  17. A Time-dependent Heliospheric Model Driven by Empirical Boundary Conditions

    Science.gov (United States)

    Kim, T. K.; Arge, C. N.; Pogorelov, N. V.

    2017-12-01

    Consisting of charged particles originating from the Sun, the solar wind carries the Sun's energy and magnetic field outward through interplanetary space. The solar wind is the predominant source of space weather events, and modeling the solar wind propagation to Earth is a critical component of space weather research. Solar wind models are typically separated into coronal and heliospheric parts to account for the different physical processes and scales characterizing each region. Coronal models are often coupled with heliospheric models to propagate the solar wind out to Earth's orbit and beyond. The Wang-Sheeley-Arge (WSA) model is a semi-empirical coronal model consisting of a potential field source surface model and a current sheet model that takes synoptic magnetograms as input to estimate the magnetic field and solar wind speed at any distance above the coronal region. The current version of the WSA model takes the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model as input to provide improved time-varying solutions for the ambient solar wind structure. When heliospheric MHD models are coupled with the WSA model, density and temperature at the inner boundary are treated as free parameters that are tuned to optimal values. For example, the WSA-ENLIL model prescribes density and temperature assuming momentum flux and thermal pressure balance across the inner boundary of the ENLIL heliospheric MHD model. We consider an alternative approach of prescribing density and temperature using empirical correlations derived from Ulysses and OMNI data. We use our own modeling software (Multi-scale Fluid-kinetic Simulation Suite) to drive a heliospheric MHD model with ADAPT-WSA input. The modeling results using the two different approaches of density and temperature prescription suggest that the use of empirical correlations may be a more straightforward, consistent method.

  18. An environment-dependent semi-empirical tight binding model suitable for electron transport in bulk metals, metal alloys, metallic interfaces, and metallic nanostructures. I. Model and validation

    Energy Technology Data Exchange (ETDEWEB)

    Hegde, Ganesh, E-mail: ghegde@purdue.edu; Povolotskyi, Michael; Kubis, Tillmann; Klimeck, Gerhard, E-mail: gekco@purdue.edu [Network for Computational Nanotechnology (NCN), Department of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana 47907 (United States); Boykin, Timothy [Department of Electrical and Computer Engineering, University of Alabama, Huntsville, Alabama (United States)

    2014-03-28

    Semi-empirical Tight Binding (TB) is known to be a scalable and accurate atomistic representation for electron transport in realistically extended nano-scaled semiconductor devices that might contain millions of atoms. In this paper, an environment-aware and transferable TB model suitable for electronic structure and transport simulations in technologically relevant metals, metallic alloys, metal nanostructures, and metallic interface systems is described. Part I of this paper describes the development and validation of the new TB model. The new model incorporates intra-atomic diagonal and off-diagonal elements for implicit self-consistency and greater transferability across bonding environments. The dependence of the on-site energies on strain has been obtained by appealing to the Moments Theorem that links closed electron paths in the system to energy moments of angular momentum resolved local density of states obtained ab initio. The model matches self-consistent density functional theory electronic structure results for bulk face centered cubic metals with and without strain, metallic alloys, metallic interfaces, and metallic nanostructures with high accuracy and can be used in predictive electronic structure and transport problems in metallic systems at realistically extended length scales.

  19. An environment-dependent semi-empirical tight binding model suitable for electron transport in bulk metals, metal alloys, metallic interfaces, and metallic nanostructures. I. Model and validation

    International Nuclear Information System (INIS)

    Hegde, Ganesh; Povolotskyi, Michael; Kubis, Tillmann; Klimeck, Gerhard; Boykin, Timothy

    2014-01-01

    Semi-empirical Tight Binding (TB) is known to be a scalable and accurate atomistic representation for electron transport in realistically extended nano-scaled semiconductor devices that might contain millions of atoms. In this paper, an environment-aware and transferable TB model suitable for electronic structure and transport simulations in technologically relevant metals, metallic alloys, metal nanostructures, and metallic interface systems is described. Part I of this paper describes the development and validation of the new TB model. The new model incorporates intra-atomic diagonal and off-diagonal elements for implicit self-consistency and greater transferability across bonding environments. The dependence of the on-site energies on strain has been obtained by appealing to the Moments Theorem that links closed electron paths in the system to energy moments of angular momentum resolved local density of states obtained ab initio. The model matches self-consistent density functional theory electronic structure results for bulk face centered cubic metals with and without strain, metallic alloys, metallic interfaces, and metallic nanostructures with high accuracy and can be used in predictive electronic structure and transport problems in metallic systems at realistically extended length scales.

  20. An environment-dependent semi-empirical tight binding model suitable for electron transport in bulk metals, metal alloys, metallic interfaces, and metallic nanostructures. I. Model and validation

    Science.gov (United States)

    Hegde, Ganesh; Povolotskyi, Michael; Kubis, Tillmann; Boykin, Timothy; Klimeck, Gerhard

    2014-03-01

    Semi-empirical Tight Binding (TB) is known to be a scalable and accurate atomistic representation for electron transport in realistically extended nano-scaled semiconductor devices that might contain millions of atoms. In this paper, an environment-aware and transferable TB model suitable for electronic structure and transport simulations in technologically relevant metals, metallic alloys, metal nanostructures, and metallic interface systems is described. Part I of this paper describes the development and validation of the new TB model. The new model incorporates intra-atomic diagonal and off-diagonal elements for implicit self-consistency and greater transferability across bonding environments. The dependence of the on-site energies on strain has been obtained by appealing to the Moments Theorem that links closed electron paths in the system to energy moments of angular momentum resolved local density of states obtained ab initio. The model matches self-consistent density functional theory electronic structure results for bulk face centered cubic metals with and without strain, metallic alloys, metallic interfaces, and metallic nanostructures with high accuracy and can be used in predictive electronic structure and transport problems in metallic systems at realistically extended length scales.

  1. Measurement and correlation study of silymarin solubility in supercritical carbon dioxide with and without a cosolvent using semi-empirical models and back-propagation artificial neural networks

    Directory of Open Access Journals (Sweden)

    Gang Yang

    2017-09-01

    The solubility of compounds in supercritical fluids, and the correlation between experimental and predicted solubility data, are crucial to the development of supercritical technologies. In the present work, the solubility of silymarin (SM) in pure supercritical carbon dioxide (SCCO2) and in SCCO2 with an added cosolvent was measured at temperatures from 308 to 338 K and pressures from 8 to 22 MPa. The experimental data were fitted with three semi-empirical density-based models (the Chrastil, Bartle, and Mendez-Santiago and Teja models) and a back-propagation artificial neural network (BPANN) model. Interaction parameters for the models were obtained, and the average absolute relative deviation (AARD%) of each calculation was determined. The correlation results were in good agreement with the experimental data. A comparison among the four models revealed that the experimental solubility data were best fitted by the BPANN model, with AARDs ranging from 1.14% to 2.15% for silymarin in pure SCCO2 and with added cosolvent. The results provide fundamental data for designing the extraction of SM or the preparation of SM particles using SCCO2 techniques.
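    The Chrastil model mentioned above is linear in transformed variables, ln S = k ln ρ + a/T + b, so its parameters can be fitted by ordinary least squares. A minimal sketch on synthetic data; the parameter values and noise level are illustrative, not the silymarin results.

```python
import numpy as np

# Chrastil density-based model:  ln S = k*ln(rho) + a/T + b
k_true, a_true, b_true = 4.5, -3000.0, -10.0   # invented "true" parameters

rng = np.random.default_rng(1)
T = rng.uniform(308.0, 338.0, 40)      # K, the temperature range in the abstract
rho = rng.uniform(300.0, 900.0, 40)    # CO2 density, kg/m^3 (illustrative range)
lnS = k_true * np.log(rho) + a_true / T + b_true + rng.normal(0, 0.02, 40)

# Design matrix [ln(rho), 1/T, 1] and least-squares fit
A = np.column_stack([np.log(rho), 1.0 / T, np.ones_like(T)])
k, a, b = np.linalg.lstsq(A, lnS, rcond=None)[0]

# Average absolute relative deviation of the correlation, in percent
S_data = np.exp(lnS)
S_pred = np.exp(A @ np.array([k, a, b]))
aard = 100.0 * np.mean(np.abs(S_pred - S_data) / S_data)
print(k, a, b, f"AARD = {aard:.2f}%")
```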

  2. X-ray spectrum analysis of multi-component samples by a method of fundamental parameters using empirical ratios

    International Nuclear Information System (INIS)

    Karmanov, V.I.

    1986-01-01

    A variant of the fundamental parameter method is suggested, based on empirical relations between the corrections for absorption and additional excitation and the absorbing characteristics of the samples. The method is used for X-ray fluorescence analysis of multi-component samples of charges for welding electrodes. It is shown that application of the method is justified only for determining the titanium, calcium, and silicon content of the charges, taking into account only the corrections for absorption. Iron and manganese content can be calculated by the simple external standard method.

  3. SENSITIVITY ANALYSIS IN FLEXIBLE PAVEMENT PERFORMANCE USING MECHANISTIC EMPIRICAL METHOD (CASE STUDY: CIREBON–LOSARI ROAD SEGMENT, WEST JAVA

    Directory of Open Access Journals (Sweden)

    E. Samad

    2012-02-01

    The Cirebon–Losari flexible pavement, located on the north coast of Java, Indonesia, is severely damaged by overloaded vehicles passing over the road, so improved pavement design and analysis methods are much needed. The effects of increased loads and of material property quality can be evaluated through the Mechanistic-Empirical (M-E) method, and M-E software such as KENLAYER has been developed to facilitate the transition from empirical to mechanistic design. From the KENLAYER analysis, it can be concluded that the effect of overloading on pavement performance is difficult to mitigate even though the first two layers have relatively high moduli of elasticity: overloading of 150%, 200%, and 250% reduces the pavement design life by 84%, 95%, and 98%, respectively. To increase the pavement service life, it is more effective to manage the allowable load.
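    As a rough sanity check (not the KENLAYER analysis itself), the reported reductions are close to what the classical fourth-power load-equivalency rule of thumb gives: relative damage per pass scales as (load/standard)⁴, so remaining life scales as (load/standard)⁻⁴.

```python
# Back-of-envelope estimate using the fourth-power load-equivalency rule:
# damage per pass ~ (load/standard)^4, so design life ~ (load/standard)^-4.
for overload in (1.5, 2.0, 2.5):          # 150%, 200%, 250% of the standard load
    life_fraction = overload ** -4
    reduction = 100.0 * (1.0 - life_fraction)
    print(f"{overload:.1f}x load -> life reduced by about {reduction:.0f}%")
```

    This yields reductions of about 80%, 94%, and 97%, in the same range as the mechanistically computed 84%, 95%, and 98%.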

  4. How rational should bioethics be? The value of empirical approaches.

    Science.gov (United States)

    Alvarez, A A

    2001-10-01

    Rational justification of claims with empirical content calls for empirical and not only normative philosophical investigation. Empirical approaches to bioethics are epistemically valuable, i.e., such methods may be necessary for providing and verifying basic knowledge about cultural values and norms. Our assumptions in moral reasoning can be verified or corrected using these methods, and moral arguments can be initiated or adjudicated by data drawn from empirical investigation. One may argue, for example, that individualistic informed consent is not compatible with the Asian communitarian orientation. But this normative claim rests on an empirical assumption that may be contrary to the fact that some Asians do value and argue for informed consent. Is it necessary and factual to neatly characterize some cultures as individualistic and some as communitarian? Empirical investigation can provide a reasonable way to inform such generalizations. In a multi-cultural context, such as in the Philippines, the nature of the local ethos needs to be investigated before making any appeal to authenticity; otherwise we may succumb to the same ethical imperialism we are trying hard to resist. Normative claims that involve empirical premises cannot be reasonably verified or evaluated without utilizing empirical methods along with philosophical reflection. The integration of empirical methods into the standard normative approach to moral reasoning should be guided by the epistemic demands of claims arising from cross-cultural discourse in bioethics.

  5. Empirical method to calculate Clinch River Breeder Reactor (CRBR) inlet plenum transient temperatures

    International Nuclear Information System (INIS)

    Howarth, W.L.

    1976-01-01

    Sodium flow enters the CRBR inlet plenum via three loops, or inlets. An empirical equation was developed to calculate transient temperatures in the CRBR inlet plenum from known loop flows and temperatures. The constants in the empirical equation were derived from tests of a 1/4-scale inlet plenum model using water as the test fluid, with the sodium temperature distribution simulated by an electrolyte. Step electrolyte transients at 100 percent model flow were used to calculate the equation constants, and step electrolyte runs at 50 percent and 10 percent flow confirmed that the constants were independent of flow. A transient that varied flow rate and electrolyte simultaneously was also tested. The test results agreed well with the empirical equation, which verifies it.
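    The abstract does not give the equation's form, so the following is only a hypothetical illustration of the kind of model such step tests calibrate: treat the plenum as a single perfectly mixed volume fed by three loops, so M dT/dt = Σᵢ ṁᵢ (Tᵢ − T). All numerical values below are invented.

```python
import numpy as np

# Hypothetical perfectly-mixed-plenum model (illustrative, not the paper's equation):
#   M * dT/dt = sum_i mdot_i * (T_i - T)
M = 500.0                                   # plenum fluid mass, kg (invented)
mdot = np.array([50.0, 50.0, 50.0])         # loop mass flows, kg/s (invented)
T_loops = np.array([400.0, 400.0, 430.0])   # step transient: one loop jumps to 430

T = 400.0          # initial plenum temperature
dt = 0.01          # explicit Euler time step, s (well below M/sum(mdot) = 3.3 s)
history = []
for _ in range(int(60 / dt)):               # one minute of transient
    T += dt * (mdot * (T_loops - T)).sum() / M
    history.append(T)

# The mixed-mean temperature approaches the flow-weighted loop temperature
T_final = (mdot * T_loops).sum() / mdot.sum()
print(history[-1], T_final)
```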

  6. A depth semi-averaged model for coastal dynamics

    Science.gov (United States)

    Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.

    2017-05-01

    The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.

  7. A semi-classical treatment of dissipative processes based on Feynman's influence functional method

    International Nuclear Information System (INIS)

    Moehring, K.; Smilansky, U.

    1980-01-01

    We develop a semi-classical treatment of dissipative processes based on Feynman's influence functional method. Applying it to deep inelastic collisions of heavy ions, we study inclusive transition probabilities corresponding to a situation in which only a set of collective variables is specified in the initial and final states. We show that the inclusive probabilities, as well as the final energy distributions, can be expressed in terms of properly defined classical paths and their corresponding stability fields. We present a uniform approximation for the study of quantal interference and focussing phenomena and discuss the conditions under which they are to be expected. For the dissipation mechanism we study three approximations: the harmonic model for the internal system, weak (diabatic) coupling, and adiabatic coupling. We show that these three limits can be treated in the same manner. Finally, we compare the present formalism with other methods introduced for the description of dissipation in deep inelastic collisions. (orig.)

  8. A Note on the Semi-Inverse Method and a Variational Principle for the Generalized KdV-mKdV Equation

    Directory of Open Access Journals (Sweden)

    Li Yao

    2013-01-01

    Ji-Huan He systematically studied the inverse problem of the calculus of variations. This note reveals that the semi-inverse method also works for a generalized KdV-mKdV equation with nonlinear terms of any order.

  9. Enhanced manifold regularization for semi-supervised classification.

    Science.gov (United States)

    Gan, Haitao; Luo, Zhizeng; Fan, Yingle; Sang, Nong

    2016-06-01

    Manifold regularization (MR) has become one of the most widely used approaches in the semi-supervised learning field. It has shown superiority by exploiting the local manifold structure of both labeled and unlabeled data. The manifold structure is modeled by constructing a Laplacian graph and then incorporated in learning through a smoothness regularization term. Hence the labels of labeled and unlabeled data vary smoothly along the geodesics on the manifold. However, MR has ignored the discriminative ability of the labeled and unlabeled data. To address the problem, we propose an enhanced MR framework for semi-supervised classification in which the local discriminative information of the labeled and unlabeled data is explicitly exploited. To make full use of labeled data, we firstly employ a semi-supervised clustering method to discover the underlying data space structure of the whole dataset. Then we construct a local discrimination graph to model the discriminative information of labeled and unlabeled data according to the discovered intrinsic structure. Therefore, the data points that may be from different clusters, though similar on the manifold, are enforced far away from each other. Finally, the discrimination graph is incorporated into the MR framework. In particular, we utilize semi-supervised fuzzy c-means and Laplacian regularized Kernel minimum squared error for semi-supervised clustering and classification, respectively. Experimental results on several benchmark datasets and face recognition demonstrate the effectiveness of our proposed method.
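    The manifold-smoothness term at the heart of MR can be illustrated in a few lines: build a Gaussian-weighted affinity graph over labeled and unlabeled points, form the unnormalized Laplacian L = D − W, and note that fᵀLf = ½ Σᵢⱼ Wᵢⱼ(fᵢ − fⱼ)² is small exactly when the label assignment f varies smoothly over the graph. A toy sketch; the data, kernel width, and cluster layout are assumptions.

```python
import numpy as np

# Two well-separated 2-D clusters of labeled + unlabeled points
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),    # cluster A
               rng.normal(5.0, 0.3, (20, 2))])   # cluster B

# Gaussian-weighted affinity graph and unnormalized graph Laplacian L = D - W
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
W = np.exp(-d2 / 0.5)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W

# f^T L f penalizes label functions that vary within tightly connected regions
f_smooth = np.r_[np.zeros(20), np.ones(20)]   # constant within each cluster
f_rough = rng.random(40)                      # varies arbitrarily across the graph

print(f_smooth @ L @ f_smooth, f_rough @ L @ f_rough)   # smooth << rough
```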

  10. Rheological behavior of semi-solid 7075 aluminum alloy at steady state

    Directory of Open Access Journals (Sweden)

    Li Yageng

    2014-03-01

    Full Text Available The further application of semi-solid processing lies in in-depth fundamental study of rheological behavior. In this research, the apparent viscosity of the semi-solid slurry of 7075 alloy was measured using a Couette-type viscometer. The effects of solid fraction and shearing rate on the apparent viscosity of this alloy were investigated under different processing conditions. The apparent viscosity increases as the solid fraction rises from 10% to 50% (temperature 620 °C to 630 °C) at steady state. When the solid fraction is fixed, the apparent viscosity can be decreased by raising the shearing rate from 61.235 s⁻¹ to 489.88 s⁻¹ at steady state. An empirical equation describing the effects of solid fraction and shearing rate on the apparent viscosity is fitted. The microstructure of quenched samples was examined to understand the alloy's rheological behavior.
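The fitted empirical equation itself is not reproduced in the abstract; a commonly assumed form for such data is a power law in solid fraction and shear rate, which can be fitted by linear least squares on logarithms. The exponents and synthetic measurements below are hypothetical, chosen only to make the fitting procedure concrete.

```python
import numpy as np

# Hypothetical data in the spirit of the study: apparent viscosity (Pa·s)
# rising with solid fraction and falling with shear rate. The power-law
# form eta = K * fs^a * gamma^b is an assumed model, not the paper's fit.
fs = np.array([0.1, 0.1, 0.3, 0.3, 0.5, 0.5])                       # solid fraction
gamma = np.array([61.235, 489.88, 61.235, 489.88, 61.235, 489.88])  # shear rate, 1/s
eta = 2.0 * fs ** 1.5 * gamma ** -0.8                               # synthetic measurements

# Linear least squares on: log eta = log K + a*log fs + b*log gamma
A = np.column_stack([np.ones_like(fs), np.log(fs), np.log(gamma)])
coef, *_ = np.linalg.lstsq(A, np.log(eta), rcond=None)
K, a, b = np.exp(coef[0]), coef[1], coef[2]
```

Because the synthetic data are noiseless, the fit recovers the generating coefficients exactly; with real viscometer data the same regression yields the empirical equation's parameters with residual error.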

  11. Empirical training for conditional random fields

    NARCIS (Netherlands)

    Zhu, Zhemin; Hiemstra, Djoerd; Apers, Peter M.G.; Wombacher, Andreas

    2013-01-01

    In this paper (Zhu et al., 2013), we present a practically scalable training method for CRFs called Empirical Training (EP). We show that the standard training with unregularized log likelihood can have many maximum likelihood estimations (MLEs). Empirical training has a unique closed form MLE

  12. The development of laboratory and semi-field methods to test the effects of pesticides on predatory beetles

    International Nuclear Information System (INIS)

    Chiverton, P.; Wallin, H.

    1997-01-01

    Following the sequential testing procedure adopted by the IOBC/WPRS Working Group Pesticides and Beneficial Organisms, two simple, robust methods are presented which were designed for testing the effects of pesticides on predatory beetles. In an initial laboratory toxicity test, both DDT and lindane were found harmful to the carabid Pterostichus cupreus, whereas α-endosulfan was 'harmless'. DDT was found harmless to P. melanarius. Sub-lethal doses of both DDT and lindane incorporated in prey caused P. cupreus females to produce smaller eggs. In a semi-field test it was demonstrated that lindane reduced the beneficial capacity of P. cupreus. Climatic conditions at the time of the test, however, were such that the majority of test animals in control treatments escaped. Caution was therefore advised in the choice of test animal and test design for the semi-field test. (author). 7 refs, 2 figs, 3 tabs

  13. Magneto-optical properties of semi-parabolic plus semi-inverse squared quantum wells

    Science.gov (United States)

    Tung, Luong V.; Vinh, Pham T.; Phuc, Huynh V.

    2018-06-01

    We theoretically study the optical absorption in a quantum well with a semi-parabolic potential plus a semi-inverse squared potential (SPSIS) in the presence of a static magnetic field, taking both one- and two-photon absorption processes into account. The magneto-optical absorption coefficient (MOAC) is expressed via the second-order golden rule approximation, including the electron-LO phonon interaction. We also use the profile method to obtain the full width at half maximum (FWHM) of the absorption peaks. Our numerical results show that both MOAC and FWHM depend strongly on the confinement frequency, temperature, and magnetic field, but their dependence on the parameter β is very weak. The temperature dependence of FWHM is consistent with previous theoretical and experimental works.

  14. A New Statistical Method to Determine the Degree of Validity of Health Economic Model Outcomes against Empirical Data.

    Science.gov (United States)

    Corro Ramos, Isaac; van Voorn, George A K; Vemer, Pepijn; Feenstra, Talitha L; Al, Maiwenn J

    2017-09-01

    The validation of health economic (HE) model outcomes against empirical data is of key importance. Although statistical testing seems applicable, guidelines for the validation of HE models lack guidance on statistical validation, and actual validation efforts often present subjective judgment of graphs and point estimates. To discuss the applicability of existing validation techniques and to present a new method for quantifying the degrees of validity statistically, which is useful for decision makers. A new Bayesian method is proposed to determine how well HE model outcomes compare with empirical data. Validity is based on a pre-established accuracy interval in which the model outcomes should fall. The method uses the outcomes of a probabilistic sensitivity analysis and results in a posterior distribution around the probability that HE model outcomes can be regarded as valid. We use a published diabetes model (Modelling Integrated Care for Diabetes based on Observational data) to validate the outcome "number of patients who are on dialysis or with end-stage renal disease." Results indicate that a high probability of a valid outcome is associated with relatively wide accuracy intervals. In particular, 25% deviation from the observed outcome implied approximately 60% expected validity. Current practice in HE model validation can be improved by using an alternative method based on assessing whether the model outcomes fit to empirical data at a predefined level of accuracy. This method has the advantage of assessing both model bias and parameter uncertainty and resulting in a quantitative measure of the degree of validity that penalizes models predicting the mean of an outcome correctly but with overly wide credible intervals. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
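The proposed method can be caricatured as follows: count how many probabilistic sensitivity analysis (PSA) draws fall inside a pre-established accuracy interval around the empirical outcome, then form a posterior for the probability that the model outcome is valid. The data, the ±25% interval width, and the uniform Beta prior below are assumptions for illustration, not the paper's actual model or numbers.

```python
import numpy as np

rng = np.random.default_rng(42)
observed = 100.0                        # empirical outcome (hypothetical units)
psa = rng.normal(105.0, 20.0, 1000)     # probabilistic sensitivity analysis draws

# Accuracy interval: a model outcome counts as valid within ±25% of the observation.
lo, hi = 0.75 * observed, 1.25 * observed
k = int(((psa >= lo) & (psa <= hi)).sum())
n = psa.size

# Beta(1+k, 1+n-k) posterior for the probability of a valid outcome
# (uniform prior); its mean is the expected degree of validity.
expected_validity = (1 + k) / (2 + n)
```

This construction penalizes a model that predicts the mean correctly but with overly wide credible intervals, since diffuse PSA draws fall outside the accuracy interval and lower the posterior validity.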

  15. The implementation of a simplified spherical harmonics semi-analytic nodal method in PANTHER

    International Nuclear Information System (INIS)

    Hall, S.K.; Eaton, M.D.; Knight, M.P.

    2013-01-01

    Highlights: ► An SP N nodal method is proposed. ► Consistent CMFD derived and tested. ► Mark vacuum boundary conditions applied. ► Benchmarked against other diffusion and transport codes. - Abstract: In this paper an SP N nodal method is proposed which can utilise existing multi-group neutron diffusion solvers to obtain the solution. The semi-analytic nodal method is used in conjunction with a coarse mesh finite difference (CMFD) scheme to solve the resulting set of equations. This is compared against various nuclear benchmarks to show that the method is capable of computing an accurate solution for practical cases. A few different CMFD formulations are implemented and their performance compared. It is found that the effective diffusion coefficient (EDC) can provide additional stability and requires fewer power iterations on a coarse mesh. A re-arrangement of the EDC is proposed that allows the iteration matrix to be computed at the beginning of a calculation. Successive nodal updates only modify the source term, unlike existing CMFD methods which update the iteration matrix. A set of Mark vacuum boundary conditions is also derived which can be applied to the SP N nodal method, extending its validity. This is possible due to a similarity transformation of the angular coupling matrix, which is used when applying the nodal method. It is found that the Marshak vacuum condition can also be derived, but this would require significant modification of existing neutron diffusion codes to implement

  16. The Combine Use of Semi-destructive and Non-destructive Methods for Tiled Floor Diagnostics

    Science.gov (United States)

    Štainbruch, Jakub; Bayer, Karol; Jiroušek, Tomáš; Červinka, Josef

    2017-04-01

    The combination of semi-destructive and non-destructive methods was used to assess the condition of a tiled floor in the historical monument Minaret, situated in the park complex of the Chateau Lednice (South Moravia Region, Czech Republic), before its renovation. Another set of measurements is going to be performed after the conservation works are finished. (The comparison of the results collected during pre- and post-remediation measurements will be presented during the General Assembly meeting in Vienna.) The diagnostic complex of methods consisted of photogrammetry, resistivity drilling and georadar. The survey aimed to delineate the extent of air gaps beneath the tiles and to evaluate the efficiency of filling the gaps by means of injection, consolidation and gluing of the individual layers. The state chateau Lednice forms part of the Lednice-Valtice precinct, a UNESCO landmark, and belongs among the greatest historic monuments in Southern Moravia. In the chateau park there is a romantic observation tower in the shape of a minaret, built according to the plans of Josef Hardtmuth between 1798 and 1804. The Minaret has been extensively renovated for many decades, including the restoration of mosaic floors of Venetian terrazzo. During the static works on the Minaret building between 1999 and 2000, the mosaic floors in the rooms on the second floor were transferred and put back onto concrete slabs. Specifically, the floor was cut up into tiles, and these were glued to square slabs which were then attached to the base plate. The transfer was not successful and the floor restoration was finalized between 2016 and 2017. The damage consisted of the original floor separating from the concrete slab, which created gaps. Furthermore, the layers of the floor were not compact. It was necessary to fill the gaps and to consolidate and glue the layers. The existence of air gaps between individual layers of the tiles and their degradation was detected using two different diagnostic methods: semi

  17. An environment-dependent semi-empirical tight binding model suitable for electron transport in bulk metals, metal alloys, metallic interfaces, and metallic nanostructures. II. Application—Effect of quantum confinement and homogeneous strain on Cu conductance

    Science.gov (United States)

    Hegde, Ganesh; Povolotskyi, Michael; Kubis, Tillmann; Charles, James; Klimeck, Gerhard

    2014-03-01

    The Semi-Empirical tight binding model developed in Part I Hegde et al. [J. Appl. Phys. 115, 123703 (2014)] is applied to metal transport problems of current relevance in Part II. A systematic study of the effect of quantum confinement, transport orientation, and homogeneous strain on electronic transport properties of Cu is carried out. It is found that quantum confinement from bulk to nanowire boundary conditions leads to significant anisotropy in conductance of Cu along different transport orientations. Compressive homogeneous strain is found to reduce resistivity by increasing the density of conducting modes in Cu. The [110] transport orientation in Cu nanowires is found to be the most favorable for mitigating conductivity degradation since it shows least reduction in conductance with confinement and responds most favorably to compressive strain.

  18. An environment-dependent semi-empirical tight binding model suitable for electron transport in bulk metals, metal alloys, metallic interfaces, and metallic nanostructures. II. Application—Effect of quantum confinement and homogeneous strain on Cu conductance

    International Nuclear Information System (INIS)

    Hegde, Ganesh; Povolotskyi, Michael; Kubis, Tillmann; Charles, James; Klimeck, Gerhard

    2014-01-01

    The Semi-Empirical tight binding model developed in Part I Hegde et al. [J. Appl. Phys. 115, 123703 (2014)] is applied to metal transport problems of current relevance in Part II. A systematic study of the effect of quantum confinement, transport orientation, and homogeneous strain on electronic transport properties of Cu is carried out. It is found that quantum confinement from bulk to nanowire boundary conditions leads to significant anisotropy in conductance of Cu along different transport orientations. Compressive homogeneous strain is found to reduce resistivity by increasing the density of conducting modes in Cu. The [110] transport orientation in Cu nanowires is found to be the most favorable for mitigating conductivity degradation since it shows least reduction in conductance with confinement and responds most favorably to compressive strain

  19. An environment-dependent semi-empirical tight binding model suitable for electron transport in bulk metals, metal alloys, metallic interfaces, and metallic nanostructures. II. Application—Effect of quantum confinement and homogeneous strain on Cu conductance

    Energy Technology Data Exchange (ETDEWEB)

    Hegde, Ganesh, E-mail: ghegde@purdue.edu; Povolotskyi, Michael; Kubis, Tillmann; Charles, James; Klimeck, Gerhard, E-mail: gekco@purdue.edu [Network for Computational Nanotechnology (NCN), Department of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana 47907 (United States)

    2014-03-28

    The Semi-Empirical tight binding model developed in Part I Hegde et al. [J. Appl. Phys. 115, 123703 (2014)] is applied to metal transport problems of current relevance in Part II. A systematic study of the effect of quantum confinement, transport orientation, and homogeneous strain on electronic transport properties of Cu is carried out. It is found that quantum confinement from bulk to nanowire boundary conditions leads to significant anisotropy in conductance of Cu along different transport orientations. Compressive homogeneous strain is found to reduce resistivity by increasing the density of conducting modes in Cu. The [110] transport orientation in Cu nanowires is found to be the most favorable for mitigating conductivity degradation since it shows least reduction in conductance with confinement and responds most favorably to compressive strain.

  20. Semi-automatic logarithmic converter of logs

    International Nuclear Information System (INIS)

    Gol'dman, Z.A.; Bondar's, V.V.

    1974-01-01

    Semi-automatic logarithmic converter of logging charts. An original semi-automatic converter was developed for converting BK resistance logging charts and the time interval, ΔT, of acoustic logs from a linear to a logarithmic scale with a specific ratio, for subsequent combination with neutron-gamma logging charts in the operative interpretation of logging materials by a normalization method. The converter can be used to increase productivity relative to manual, pointwise processing. The equipment operates reliably and is simple to use. (author)

  1. Semi-Supervised Transductive Hot Spot Predictor Working on Multiple Assumptions

    KAUST Repository

    Wang, Jim Jing-Yan; Almasri, Islam; Shi, Yuexiang; Gao, Xin

    2014-01-01

    None of the transductive semi-supervised algorithms takes all three semi-supervised assumptions, i.e., smoothness, cluster and manifold assumptions, together into account during learning. In this paper, we propose a novel semi-supervised method for hot spot residue

  2. Sediment yield estimation in mountain catchments of the Camastra reservoir, southern Italy: a comparison among different empirical methods

    Science.gov (United States)

    Lazzari, Maurizio; Danese, Maria; Gioia, Dario; Piccarreta, Marco

    2013-04-01

    Sedimentary budget estimation is an important topic for both the scientific community and society, because it is crucial to understanding both the dynamics of orogenic belts and many practical problems, such as soil conservation and sediment accumulation in reservoirs. Estimations of sediment yield or denudation rates in southern-central Italy are generally obtained by simple empirical relationships based on statistical regression between geomorphic parameters of the drainage network and the measured suspended sediment yield at the outlet of several drainage basins, or through the use of models based on sediment delivery ratios or on soil loss equations. In this work, we perform a study of catchment dynamics and an estimation of sediment yield for several mountain catchments of the central-western sector of the Basilicata region, southern Italy. Sediment yield estimation has been obtained through both an indirect estimation of suspended sediment yield based on the Tu index (mean annual suspended sediment yield, Ciccacci et al., 1980) and the application of the RUSLE (Renard et al., 1997) and USPED (Mitasova et al., 1996) empirical methods. The preliminary results indicate a clear difference between the RUSLE and USPED methods and the estimation based on the Tu index; a critical analysis of the results has been carried out, considering also the present-day spatial distribution of erosion, transport and depositional processes in relation to the maps obtained from the application of these different empirical methods. The studied catchments drain an artificial reservoir (the Camastra dam), where a detailed evaluation of the amount of historical sediment storage has been collected. Sediment yield estimates obtained by means of the empirical methods have been compared and checked against historical data of sediment accumulation measured in the artificial reservoir of the Camastra dam. The validation of such estimations of sediment yield at the scale of large catchments
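The RUSLE method referenced above is a product of five empirical factors, A = R·K·LS·C·P. A minimal sketch, with invented factor values for a single catchment cell:

```python
def rusle_soil_loss(R, K, LS, C, P):
    """RUSLE average annual soil loss A = R * K * LS * C * P.

    R  - rainfall-runoff erosivity factor
    K  - soil erodibility factor
    LS - slope length and steepness factor
    C  - cover-management factor
    P  - support practice factor
    """
    return R * K * LS * C * P

# Illustrative (made-up) factor values, not taken from the study:
A = rusle_soil_loss(R=1200.0, K=0.03, LS=4.5, C=0.2, P=1.0)
```

In a GIS application such as the one described, each factor is a raster layer and the product is evaluated cell by cell; USPED additionally takes the divergence of the sediment flow to separate erosion from deposition.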

  3. EMPIRICAL RESEARCH ON THE CHARACTERISTICS OF CLUSTERS IN ROMANIA AND THE IMPACT ON THE ENTREPRENEURIAL ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Tudor NISTORESCU

    2017-05-01

    Full Text Available The present research focuses on an empirical study of the features of clusters in Romania and their impact on the business environment. We tested five research hypotheses, all of which were validated. The methodological framework relied on questionnaires, semi-structured interviews with cluster representatives, studies reported in the specialized literature, studies conducted in other projects, and examples of best practice from countries with advanced economies. The findings that emerged from processing the data collected from respondents may be useful to those who manage clusters and to decision-making authorities at the regional and even national level.

  4. Robust Semi-Supervised Manifold Learning Algorithm for Classification

    Directory of Open Access Journals (Sweden)

    Mingxia Chen

    2018-01-01

    Full Text Available In recent years, manifold learning methods have been widely used in data classification to tackle the curse of dimensionality, since they can discover the potential intrinsic low-dimensional structures of high-dimensional data. Given partially labeled data, semi-supervised manifold learning algorithms have been proposed to predict the labels of the unlabeled points, taking label information into account. However, these semi-supervised manifold learning algorithms are not robust against noisy points, especially when the labeled data contain noise. In this paper, we propose a framework for robust semi-supervised manifold learning (RSSML) to address this problem. The noisy levels of the labeled points are first predicted, and then a regularization term is constructed to reduce the impact of labeled points containing noise. A new robust semi-supervised optimization model is proposed by adding the regularization term to the traditional semi-supervised optimization model. Numerical experiments are given to show the improvement and efficiency of RSSML on noisy data sets.

  5. Optical properties of 1T and 2H phases of TaS2 and TaSe2

    Indian Academy of Sciences (India)

    10] have observed a phase transition to the H phase induced by a STM tip in ... Several semi-empirical band structure and ligand field models have been ... and Yoffe [12] have applied the semi-empirical tight binding (TB) method to calculate.

  6. Scaffolded Semi-Flipped General Chemistry Designed to Support Rural Students' Learning

    Science.gov (United States)

    Lenczewski, Mary S.

    2016-01-01

    Students who lack academic maturity can sometimes feel overwhelmed in a fully flipped classroom. Here an alternative, the Semi-Flipped method, is discussed. Rural students, who face unique challenges in transitioning from high school learning to college-level learning, can particularly profit from the use of the Semi-Flipped method in the General…

  7. Semi-Markov processes

    CERN Document Server

    Grabski

    2014-01-01

    Semi-Markov Processes: Applications in System Reliability and Maintenance is a modern view of discrete state space and continuous time semi-Markov processes and their applications in reliability and maintenance. The book explains how to construct semi-Markov models and discusses the different reliability parameters and characteristics that can be obtained from those models. The book is a useful resource for mathematicians, engineering practitioners, and PhD and MSc students who want to understand the basic concepts and results of semi-Markov process theory. Clearly defines the properties and
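As a minimal illustration of the kind of model the book treats, the sketch below simulates a two-state (up/down) semi-Markov reliability process, where the time-to-failure distribution is non-exponential (Weibull), the defining feature that distinguishes semi-Markov from Markov models. All distributions and parameters are invented for illustration.

```python
import random

def simulate_availability(t_end=10000.0, seed=1):
    """Two-state semi-Markov reliability sketch: 'up' holding times are
    Weibull-distributed, 'down' (repair) times uniform. Returns the
    long-run fraction of time spent up, i.e. the availability."""
    rng = random.Random(seed)
    t, up_time = 0.0, 0.0
    while t < t_end:
        up = rng.weibullvariate(100.0, 1.5)   # time to failure (scale, shape)
        down = rng.uniform(1.0, 5.0)          # repair time
        up_time += min(up, t_end - t)         # clip the last up period at t_end
        t += up + down
    return up_time / t_end

a = simulate_availability()
```

The simulated availability approaches the renewal-theory value E[up] / (E[up] + E[down]), roughly 0.97 with these parameters, as the horizon grows.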

  8. Moment constrained semi-supervised LDA

    DEFF Research Database (Denmark)

    Loog, Marco

    2012-01-01

    This BNAIC compressed contribution provides a summary of the work originally presented at the First IAPR Workshop on Partially Supervised Learning and published in [5]. It outlines the idea behind supervised and semi-supervised learning and highlights the major shortcoming of many current methods...

  9. Evaluation of directional normalization methods for Landsat TM/ETM+ over primary Amazonian lowland forests

    Science.gov (United States)

    Van doninck, Jasper; Tuomisto, Hanna

    2017-06-01

    Biodiversity mapping in extensive tropical forest areas poses a major challenge for the interpretation of Landsat images, because floristically clearly distinct forest types may show little difference in reflectance. In such cases, the effects of the bidirectional reflectance distribution function (BRDF) can be sufficiently strong to cause erroneous image interpretation and classification. Since the opening of the Landsat archive in 2008, several BRDF normalization methods for Landsat have been developed. The simplest of these consists of an empirical view angle normalization, whereas more complex approaches apply the semi-empirical Ross-Li BRDF model and the MODIS MCD43-series of products to normalize directional Landsat reflectance to standard view and solar angles. Here we quantify the effect of surface anisotropy on Landsat TM/ETM+ images over old-growth Amazonian forests, and evaluate five angular normalization approaches. Even for the narrow swath of the Landsat sensors, we observed directional effects in all spectral bands. Those normalization methods that are based on removing the surface reflectance gradient as observed in each image were adequate to normalize TM/ETM+ imagery to nadir viewing, but were less suitable for multitemporal analysis when the solar vector varied strongly among images. Approaches based on the MODIS BRDF model parameters successfully reduced directional effects in the visible bands, but removed only half of the systematic errors in the infrared bands. The best results were obtained when the semi-empirical BRDF model was calibrated using pairs of Landsat observations. This method produces a single set of BRDF parameters, which can then be used to operationally normalize Landsat TM/ETM+ imagery over Amazonian forests to nadir viewing and a standard solar configuration.
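The simplest of the evaluated approaches, per-image empirical view-angle normalization, removes the cross-track reflectance gradient observed within each scene. The sketch below assumes a linear gradient model and synthetic reflectances; it is an illustration of the idea, not the paper's implementation.

```python
import numpy as np

def normalize_to_nadir(reflectance, view_angle):
    """Empirical per-image view-angle normalization: fit a linear
    reflectance trend against view angle and subtract it, referencing
    all pixels to nadir (view_angle = 0)."""
    slope, intercept = np.polyfit(view_angle, reflectance, 1)
    return reflectance - slope * view_angle

# Synthetic cross-track gradient: true reflectance 0.30 plus a linear
# directional effect of 0.001 per degree of view angle.
angles = np.linspace(-7.5, 7.5, 101)   # degrees, spanning Landsat's narrow swath
refl = 0.30 + 0.001 * angles
corrected = normalize_to_nadir(refl, angles)
```

As the abstract notes, this per-image scheme normalizes viewing geometry only; it cannot account for varying solar geometry across a multitemporal image stack, which is where the Ross-Li model-based approaches come in.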

  10. Neuroanatomical heterogeneity of schizophrenia revealed by semi-supervised machine learning methods.

    Science.gov (United States)

    Honnorat, Nicolas; Dong, Aoyan; Meisenzahl-Lechner, Eva; Koutsouleris, Nikolaos; Davatzikos, Christos

    2017-12-20

    Schizophrenia is associated with heterogeneous clinical symptoms and neuroanatomical alterations. In this work, we aim to disentangle the patterns of neuroanatomical alterations underlying a heterogeneous population of patients using a semi-supervised clustering method. We apply this strategy to a cohort of patients with schizophrenia of varying disease duration, and we describe the neuroanatomical, demographic and clinical characteristics of the subtypes discovered. We analyze the neuroanatomical heterogeneity of 157 patients diagnosed with schizophrenia, relative to a control population of 169 subjects, using a machine learning method called CHIMERA. CHIMERA clusters the differences between patients and a demographically-matched population of healthy subjects, rather than clustering patients themselves, thereby specifically assessing disease-related neuroanatomical alterations. Voxel-Based Morphometry was conducted to visualize the neuroanatomical patterns associated with each group. The clinical presentation and the demographics of the groups were then investigated. Three subgroups were identified. The first two differed substantially, in that one involved predominantly temporal-thalamic-peri-Sylvian regions, whereas the other involved predominantly frontal regions and the thalamus. Both subtypes included primarily male patients. The third pattern was a mix of these two and presented milder neuroanatomic alterations and comprised a comparable number of men and women. VBM and statistical analyses suggest that these groups could correspond to different neuroanatomical dimensions of schizophrenia. Our analysis suggests that schizophrenia presents distinct neuroanatomical variants. This variability points to the need for a dimensional neuroanatomical approach using data-driven, mathematically principled multivariate pattern analysis methods, and should be taken into account in clinical studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Sustainable sanitary landfills for neglected small cities in developing countries: The semi-mechanized trench method from Villanueva, Honduras

    Energy Technology Data Exchange (ETDEWEB)

    Oakley, Stewart M., E-mail: soakley@csuchico.edu [Department of Civil Engineering, Chico State University, California State University, Chico, CA 95929 (United States); Jimenez, Ramon, E-mail: rjimenez1958@yahoo.com [Public Works, Municipality of Villanueva, Cortes (Honduras)

    2012-12-15

    Highlights: ► Open dumping is the most common form of waste disposal in neglected small cities. ► Semi-mechanized landfills can be a sustainable option for small cities. ► We present the theory of design and operation of semi-mechanized landfills. ► Villanueva, Honduras has operated its semi-mechanized landfill for 15 years. ► The cost of operation is US$4.60/ton with a land requirement of 0.2 m²/person-year. - Abstract: Open dumping is the most common practice for the disposal of urban solid wastes in the least developed regions of Africa, Asia and Latin America. Sanitary landfill design and operation has traditionally focused on large cities, but cities with fewer than 50,000 in population can comprise from 6% to 45% of a given country's total population. These thousands of small cities cannot afford to operate a sanitary landfill in the way it is proposed for large cities, where heavy equipment is used to spread and compact the waste in daily cells, and then to excavate, transport and apply daily cover, and leachate is managed with collection and treatment systems. This paper presents an alternative approach for small cities, known as the semi-mechanized trench method, which was developed in Villanueva, Honduras. In the semi-mechanized trench method a hydraulic excavator is used for 1-3 days to dig a trench that will last at least a month before it is filled with waste. Trucks can easily unload their wastes into the trench, and the wastes compact naturally due to semi-aerobic biodegradation, after which the trenches are refilled and covered. The exposed surface area is minimal since only the top surface of the wastes is exposed, the remainder being covered by the sides and bottom of the trench. The surplus material from trench excavation can be valorized for use as engineering fill onsite or off. The landfill in

  12. Sustainable sanitary landfills for neglected small cities in developing countries: The semi-mechanized trench method from Villanueva, Honduras

    International Nuclear Information System (INIS)

    Oakley, Stewart M.; Jimenez, Ramón

    2012-01-01

    Highlights: ► Open dumping is the most common form of waste disposal in neglected small cities. ► Semi-mechanized landfills can be a sustainable option for small cities. ► We present the theory of design and operation of semi-mechanized landfills. ► Villanueva, Honduras has operated its semi-mechanized landfill for 15 years. ► The cost of operation is US$4.60/ton with a land requirement of 0.2 m²/person-year. - Abstract: Open dumping is the most common practice for the disposal of urban solid wastes in the least developed regions of Africa, Asia and Latin America. Sanitary landfill design and operation has traditionally focused on large cities, but cities with fewer than 50,000 in population can comprise from 6% to 45% of a given country’s total population. These thousands of small cities cannot afford to operate a sanitary landfill in the way it is proposed for large cities, where heavy equipment is used to spread and compact the waste in daily cells, and then to excavate, transport and apply daily cover, and leachate is managed with collection and treatment systems. This paper presents an alternative approach for small cities, known as the semi-mechanized trench method, which was developed in Villanueva, Honduras. In the semi-mechanized trench method a hydraulic excavator is used for 1–3 days to dig a trench that will last at least a month before it is filled with waste. Trucks can easily unload their wastes into the trench, and the wastes compact naturally due to semi-aerobic biodegradation, after which the trenches are refilled and covered. The exposed surface area is minimal since only the top surface of the wastes is exposed, the remainder being covered by the sides and bottom of the trench. The surplus material from trench excavation can be valorized for use as engineering fill onsite or off. The landfill in Villanueva has operated for 15 years, using a total land area of approximately 11 ha for a population that grew from 23,000 to 48

  13. A comparison of entropy balance and probability weighting methods to generalize observational cohorts to a population: a simulation and empirical example.

    Science.gov (United States)

    Harvey, Raymond A; Hayden, Jennifer D; Kamble, Pravin S; Bouchard, Jonathan R; Huang, Joanna C

    2017-04-01

    We compared methods to control bias and confounding in observational studies including inverse probability weighting (IPW) and stabilized IPW (sIPW). These methods often require iteration and post-calibration to achieve covariate balance. In comparison, entropy balance (EB) optimizes covariate balance a priori by calibrating weights using the target's moments as constraints. We measured covariate balance empirically and by simulation by using absolute standardized mean difference (ASMD), absolute bias (AB), and root mean square error (RMSE), investigating two scenarios: the size of the observed (exposed) cohort exceeds the target (unexposed) cohort and vice versa. The empirical application weighted a commercial health plan cohort to a nationally representative National Health and Nutrition Examination Survey target on the same covariates and compared average total health care cost estimates across methods. Entropy balance alone achieved balance (ASMD ≤ 0.10) on all covariates in simulation and empirically. In simulation scenario I, EB achieved the lowest AB and RMSE (13.64, 31.19) compared with IPW (263.05, 263.99) and sIPW (319.91, 320.71). In scenario II, EB outperformed IPW and sIPW with smaller AB and RMSE. In scenarios I and II, EB achieved the lowest mean estimate difference from the simulated population outcome ($490.05, $487.62) compared with IPW and sIPW, respectively. Empirically, only EB differed from the unweighted mean cost, indicating that IPW and sIPW weighting were ineffective. Entropy balance demonstrated the bias-variance tradeoff, achieving higher estimate accuracy yet lower estimate precision compared with IPW methods. EB weighting required no post-processing and effectively mitigated observed bias and confounding. Copyright © 2016 John Wiley & Sons, Ltd.
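Entropy balancing's a priori calibration can be sketched via its dual form: weights proportional to exp(λ·x), with λ chosen so the weighted covariate means hit the target moments exactly. The plain gradient descent and synthetic data below are illustrative simplifications (implementations typically use a Newton-type solver), not the study's setup.

```python
import numpy as np

def entropy_balance(X, target_means, iters=500, lr=0.5):
    """Entropy balancing (dual form): find weights w_i ∝ exp(lambda·x_i)
    whose weighted covariate means equal target_means."""
    Z = X - target_means          # center covariates on the target moments
    lam = np.zeros(X.shape[1])
    for _ in range(iters):
        w = np.exp(Z @ lam)
        w /= w.sum()
        grad = w @ Z              # weighted mean of centered covariates
        lam -= lr * grad          # descend the convex dual objective
    w = np.exp(Z @ lam)
    return w / w.sum()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))   # synthetic cohort covariates
target = np.array([0.3, -0.2])            # target population moments
w = entropy_balance(X, target)
```

Because the moment constraints are satisfied at the optimum, no iterative post-calibration is needed, which is the practical advantage over IPW that the study reports.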

  14. Economic Growth and Transboundary Pollution in Europe. An Empirical Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ansuategi, A. [Ekonomi Analisiaren Oinarriak I Saila, Ekonomi Zientzien Fakultatea, Lehendakari Agirre Etorbidea, 83, 48015 Bilbao (Spain)

    2003-10-01

    The existing empirical evidence suggests that environmental Kuznets curves only exist for pollutants with semi-local and medium term impacts. Ansuategi and Perrings (2000) have considered the behavioral basis for the correlation observed between different spatial incidence of environmental degradation and the relation between economic growth and environmental quality. They show that self-interested planners following a Nash-type strategy tend to address environmental effects sequentially: addressing those with the most immediate costs first, and those whose costs are displaced in space later. This paper tests such behavioral basis in the context of sulphur dioxide emissions in Europe.

  15. Economic Growth and Transboundary Pollution in Europe. An Empirical Analysis

    International Nuclear Information System (INIS)

    Ansuategi, A.

    2003-01-01

    The existing empirical evidence suggests that environmental Kuznets curves only exist for pollutants with semi-local and medium term impacts. Ansuategi and Perrings (2000) have considered the behavioral basis for the correlation observed between different spatial incidence of environmental degradation and the relation between economic growth and environmental quality. They show that self-interested planners following a Nash-type strategy tend to address environmental effects sequentially: addressing those with the most immediate costs first, and those whose costs are displaced in space later. This paper tests such behavioral basis in the context of sulphur dioxide emissions in Europe.

  16. A sensitivity analysis of centrifugal compressors' empirical models

    International Nuclear Information System (INIS)

    Yoon, Sung Ho; Baek, Je Hyun

    2001-01-01

    The mean-line method using empirical models is the most practical method of predicting off-design performance. To gain insight into the empirical models, the influence of empirical models on the performance prediction results is investigated. We found that, in the two-zone model, the secondary flow mass fraction has a considerable effect at high mass flow-rates on the performance prediction curves. In the TEIS model, the first element changes the slope of the performance curves as well as the stable operating range. The second element makes the performance curves move up and down as it increases or decreases. It is also discovered that the slip factor affects pressure ratio, but it has little effect on efficiency. Finally, this study reveals that the skin friction coefficient has a significant effect on both the pressure ratio curve and the efficiency curve. These results show the limitations of the present empirical models, and more reasonable empirical models are needed.

  17. Pressure Autoregulation Measurement Techniques in Adult Traumatic Brain Injury, Part I: A Scoping Review of Intermittent/Semi-Intermittent Methods.

    Science.gov (United States)

    Zeiler, Frederick A; Donnelly, Joseph; Calviello, Leanne; Menon, David K; Smielewski, Peter; Czosnyka, Marek

    2017-12-01

    The purpose of this study was to perform a systematic, scoping review of commonly described intermittent/semi-intermittent autoregulation measurement techniques in adult traumatic brain injury (TBI). Nine separate systematic reviews were conducted, one for each intermittent technique: computed tomographic perfusion (CTP)/Xenon-CT (Xe-CT), positron emission tomography (PET), magnetic resonance imaging (MRI), arteriovenous difference in oxygen (AVDO2) technique, thigh cuff deflation technique (TCDT), transient hyperemic response test (THRT), orthostatic hypotension test (OHT), mean flow index (Mx), and transfer function autoregulation index (TF-ARI). MEDLINE®, BIOSIS, EMBASE, Global Health, Scopus, Cochrane Library (inception to December 2016), and reference lists of relevant articles were searched. A two-tier filtering of references was conducted. The total numbers of articles utilizing each of the nine searched techniques for intermittent/semi-intermittent autoregulation techniques in adult TBI were: CTP/Xe-CT (10), PET (6), MRI (0), AVDO2 (10), ARI-based TCDT (9), THRT (6), OHT (3), Mx (17), and TF-ARI (6). The premise behind all of the intermittent techniques is manipulation of systemic blood pressure/blood volume via either chemical (such as vasopressors) or mechanical (such as thigh cuffs or carotid compression) means. Exceptionally, Mx and TF-ARI are based on spontaneous fluctuations of cerebral perfusion pressure (CPP) or mean arterial pressure (MAP). The method for assessing the cerebral circulation during these manipulations varies, with both imaging-based techniques and TCD utilized. Despite the limited literature for intermittent/semi-intermittent techniques in adult TBI (minus Mx), it is important to acknowledge the availability of such tests. They have provided fundamental insight into human autoregulatory capacity, leading to the development of continuous and more commonly applied techniques in the intensive care unit (ICU). Numerous methods of

  18. Some features and applications of an empirical approach to the treatment of measurement data (DoD-method)

    International Nuclear Information System (INIS)

    Beyrich, W.; Golly, W.; Spannagel, G.

    1981-01-01

    An empirical method of data evaluation is described which allows the derivation of meaningful estimates of the variances of data groups even if they comprise extreme values (outliers). It can be applied to problems usually treated by variance analysis and seems suitable for investigating and describing the state of the art of the various analytical methods applied in international safeguards. Some examples are given to illustrate this procedure; they are based on data of the SALE program.

  19. Using PWE/FE method to calculate the band structures of the semi-infinite beam-like PCs: Periodic in z-direction and finite in x–y plane

    Energy Technology Data Exchange (ETDEWEB)

    Qian, Denghui, E-mail: qdhsd318@163.com; Shi, Zhiyu, E-mail: zyshi@nuaa.edu.cn

    2017-05-03

    This paper couples the plane wave expansion (PWE) and finite element (FE) methods to calculate the band structures of the semi-infinite beam-like phononic crystals (PCs) with infinite periodicity in the z-direction and finiteness in the x–y plane. Explicit matrix formulations are developed for the calculation of band structures. In order to illustrate the applicability and accuracy of the proposed coupled plane wave expansion and finite element (PWE/FE) method for beam-like PCs, several examples are displayed. First, the PWE/FE method is applied to calculate the band structures of the Pb/rubber beam-like PCs with circular and rectangular cross sections, respectively. Then, it is used to calculate the band structures of steel/epoxy and steel/aluminum beam-like PCs with the same geometric parameters. Finally, the band structure of the three-component beam-like PC is also calculated by the proposed method. Moreover, all the results calculated by the PWE/FE method are compared with those calculated by the FE method, and the corresponding results are in good agreement. - Highlights: • The concept of the semi-infinite beam-like phononic crystals (PCs) is proposed. • The PWE/FE method is proposed and formulated to calculate the band structures of the semi-infinite beam-like PCs. • The strong applicability and high accuracy of PWE/FE method are verified.

  20. Projected estimators for robust semi-supervised classification

    DEFF Research Database (Denmark)

    Krijthe, Jesse H.; Loog, Marco

    2017-01-01

    For semi-supervised techniques to be applied safely in practice we at least want methods to outperform their supervised counterparts. We study this question for classification using the well-known quadratic surrogate loss function. Unlike other approaches to semi-supervised learning, the procedure...... specifically, we prove that, measured on the labeled and unlabeled training data, this semi-supervised procedure never gives a lower quadratic loss than the supervised alternative. To our knowledge this is the first approach that offers such strong, albeit conservative, guarantees for improvement over...... the supervised solution. The characteristics of our approach are explicated using benchmark datasets to further understand the similarities and differences between the quadratic loss criterion used in the theoretical results and the classification accuracy typically considered in practice....

  1. Semi-solid electrodes having high rate capability

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Yet-Ming; Duduta, Mihai; Holman, Richard; Limthongkul, Pimpa; Tan, Taison

    2017-11-28

    Embodiments described herein relate generally to electrochemical cells having high rate capability, and more particularly to devices, systems and methods of producing high capacity and high rate capability batteries having relatively thick semi-solid electrodes. In some embodiments, an electrochemical cell includes an anode, a semi-solid cathode that includes a suspension of an active material and a conductive material in a liquid electrolyte, and an ion permeable membrane disposed between the anode and the cathode. The semi-solid cathode has a thickness in the range of about 250 µm to 2,500 µm, and the electrochemical cell has an area specific capacity of at least 5 mAh/cm² at a C-rate of C/2.

  2. PWR surveillance based on correspondence between empirical models and physical models

    International Nuclear Information System (INIS)

    Zwingelstein, G.; Upadhyaya, B.R.; Kerlin, T.W.

    1976-01-01

    An on-line surveillance method based on the correspondence between empirical models and physical models is proposed for pressurized water reactors. Two types of empirical models are considered, as well as the mathematical models defining the correspondence between the physical and empirical parameters. The efficiency of this method is illustrated for the surveillance of the Doppler coefficient for Oconee I (an 886 MWe PWR).

  3. Semi-Autonomous Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — Vision: The Semi-Autonomous Systems Lab focuses on developing a comprehensive framework for semi-autonomous coordination of networked robotic systems. Semi-autonomous...

  4. Systematic methodological review: developing a framework for a qualitative semi-structured interview guide.

    Science.gov (United States)

    Kallio, Hanna; Pietilä, Anna-Maija; Johnson, Martin; Kangasniemi, Mari

    2016-12-01

    To produce a framework for the development of a qualitative semi-structured interview guide. Rigorous data collection procedures fundamentally influence the results of studies. The semi-structured interview is a common data collection method, but methodological research on the development of a semi-structured interview guide is sparse. Systematic methodological review. We searched PubMed, CINAHL, Scopus and Web of Science for methodological papers on semi-structured interview guides from October 2004-September 2014. Having examined 2,703 titles and abstracts and 21 full texts, we finally selected 10 papers. We analysed the data using the qualitative content analysis method. Our analysis resulted in new synthesized knowledge on the development of a semi-structured interview guide, including five phases: (1) identifying the prerequisites for using semi-structured interviews; (2) retrieving and using previous knowledge; (3) formulating the preliminary semi-structured interview guide; (4) pilot testing the guide; and (5) presenting the complete semi-structured interview guide. Rigorous development of a qualitative semi-structured interview guide contributes to the objectivity and trustworthiness of studies and makes the results more plausible. Researchers should consider using this five-step process to develop a semi-structured interview guide and justify the decisions made during it. © 2016 John Wiley & Sons Ltd.

  5. A semi-Lagrangian transport method for kinetic problems with application to dense-to-dilute polydisperse reacting spray flows

    Energy Technology Data Exchange (ETDEWEB)

    Doisneau, François, E-mail: fdoisne@sandia.gov; Arienti, Marco, E-mail: marient@sandia.gov; Oefelein, Joseph C., E-mail: oefelei@sandia.gov

    2017-01-15

    For sprays, as described by a kinetic disperse phase model strongly coupled to the Navier–Stokes equations, the resolution strategy is constrained by accuracy objectives, robustness needs, and the computing architecture. In order to leverage the good properties of the Eulerian formalism, we introduce a deterministic particle-based numerical method to solve transport in physical space, which is simple to adapt to the many types of closures and moment systems. The method is inspired by the semi-Lagrangian schemes, developed for Gas Dynamics. We show how semi-Lagrangian formulations are relevant for a disperse phase far from equilibrium and where the particle–particle coupling barely influences the transport; i.e., when particle pressure is negligible. The particle behavior is indeed close to free streaming. The new method uses the assumption of parcel transport and avoids to compute fluxes and their limiters, which makes it robust. It is a deterministic resolution method so that it does not require efforts on statistical convergence, noise control, or post-processing. All couplings are done among data under the form of Eulerian fields, which allows one to use efficient algorithms and to anticipate the computational load. This makes the method both accurate and efficient in the context of parallel computing. After a complete verification of the new transport method on various academic test cases, we demonstrate the overall strategy's ability to solve a strongly-coupled liquid jet with fine spatial resolution and we apply it to the case of high-fidelity Large Eddy Simulation of a dense spray flow. A fuel spray is simulated after atomization at Diesel engine combustion chamber conditions. The large, parallel, strongly coupled computation proves the efficiency of the method for dense, polydisperse, reacting spray flows.
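
    The parcel-transport idea the authors describe (particle behavior close to free streaming, no fluxes or limiters, couplings done on Eulerian fields) can be caricatured in one dimension. Everything below is an illustrative stand-in, not the paper's scheme:

```python
# Toy 1-D illustration of deterministic parcel transport followed by deposition
# onto an Eulerian grid. Free-streaming advection needs no flux computation or
# limiter, which is the robustness property the abstract highlights.
# Grid spacing, positions, and velocities are invented for illustration.
def transport_parcels(positions, velocities, dt):
    # free-streaming step: each parcel moves with its own velocity
    return [x + v * dt for x, v in zip(positions, velocities)]

def deposit_on_grid(positions, x_min, dx, n_cells):
    # bin parcels back onto an Eulerian field (parcel count per cell)
    field = [0] * n_cells
    for x in positions:
        i = int((x - x_min) / dx)
        if 0 <= i < n_cells:
            field[i] += 1
    return field

pos = transport_parcels([0.15, 0.25, 0.35], [1.0, 1.0, 2.0], dt=0.1)
field = deposit_on_grid(pos, x_min=0.0, dx=0.1, n_cells=10)
```

    Because all coupling data live on the Eulerian grid, the computational load per step is predictable, which is what makes this style of method attractive for parallel computing.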

  6. Cost-effective computational method for radiation heat transfer in semi-crystalline polymers

    Science.gov (United States)

    Boztepe, Sinan; Gilblas, Rémi; de Almeida, Olivier; Le Maoult, Yannick; Schmidt, Fabrice

    2018-05-01

    This paper introduces a cost-effective numerical model for infrared (IR) heating of semi-crystalline polymers. For the numerical and experimental studies presented here, semi-crystalline polyethylene (PE) was used. The optical properties of PE were experimentally analyzed under varying temperature and the obtained results were used as input in the numerical studies. The model was built on an optically homogeneous medium assumption, while the strong variation in the thermo-optical properties of semi-crystalline PE under heating was taken into account. Thus, the change in the amount of radiative energy absorbed by the PE medium, induced by its temperature-dependent thermo-optical properties, was introduced in the model. The computational study was carried out as an iterative closed-loop computation, in which the absorbed radiation was computed using an in-house developed radiation heat transfer algorithm, RAYHEAT, and the computed results were transferred into the commercial software COMSOL Multiphysics to solve the transient heat transfer problem and predict the temperature field. The predicted temperature field was used to iterate the thermo-optical properties of PE that vary under heating. In order to analyze the accuracy of the numerical model, experimental analyses were carried out by performing IR-thermographic measurements during the heating of the PE plate. The applicability of the model in terms of computational cost, number of numerical inputs and accuracy was highlighted.
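
    The iterative closed-loop coupling described above can be illustrated with a zero-dimensional stand-in: absorbed radiative power depends on a temperature-dependent absorptivity, so the thermal solve is repeated until the temperature stops changing. The property law and all numbers below are invented for illustration and are unrelated to RAYHEAT or COMSOL:

```python
# Fixed-point iteration between a radiation step (absorptivity depends on T)
# and a thermal step (lumped energy balance T = T_amb + alpha(T)*I/h_loss).
# This mimics, in 0-D, the closed loop the abstract describes: radiation solve
# -> temperature update -> property update -> repeat until convergence.
def equilibrium_temperature(T_amb, irradiance, h_loss, alpha, tol=1e-10, max_iter=1000):
    T = T_amb
    for _ in range(max_iter):
        T_new = T_amb + alpha(T) * irradiance / h_loss  # lumped steady balance
        if abs(T_new - T) < tol:
            return T_new
        T = T_new
    return T

# hypothetical temperature-dependent absorptivity, capped at 0.9
alpha = lambda T: min(0.9, 0.4 + 0.001 * (T - 300.0))
T_eq = equilibrium_temperature(T_amb=300.0, irradiance=1000.0, h_loss=20.0, alpha=alpha)
```

    In the paper's full model the "temperature" is a 3-D field and each loop iteration involves a ray-tracing radiation solve, but the convergence logic is the same.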

  7. Major Mergers in CANDELS up to z=3: Calibrating the Close-Pair Method Using Semi-Analytic Models and Baryonic Mass Ratio Estimates

    Science.gov (United States)

    Mantha, Kameswara; McIntosh, Daniel H.; Conselice, Christopher; Cook, Joshua S.; Croton, Darren J.; Dekel, Avishai; Ferguson, Henry C.; Hathi, Nimish; Kodra, Dritan; Koo, David C.; Lotz, Jennifer M.; Newman, Jeffrey A.; Popping, Gergo; Rafelski, Marc; Rodriguez-Gomez, Vicente; Simmons, Brooke D.; Somerville, Rachel; Straughn, Amber N.; Snyder, Gregory; Wuyts, Stijn; Yu, Lu; Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey (CANDELS) Team

    2018-01-01

    Cosmological simulations predict that the rate of merging between similar-mass massive galaxies should increase towards early cosmic-time. We study the incidence of major (stellar mass ratio SMR 10.3 galaxies spanning 01.5 in strong disagreement with theoretical merger rate predictions. On the other hand, if we compare to a simulation-tuned, evolving timescale prescription from Snyder et al., 2017, we find that the merger rate evolution agrees with theory out to z=3. These results highlight the need for robust calibrations on the complex and presumably redshift-dependent pair-to-merger-rate conversion factors to improve constraints of the empirical merger history. To address this, we use a unique compilation of mock datasets produced by three independent state-of-the-art Semi-Analytic Models (SAMs). We present preliminary calibrations of the close-pair observability timescale and outlier fraction as a function of redshift, stellar-mass, mass-ratio, and local over-density. Furthermore, to verify the hypothesis by previous empirical studies that SMR-selection of major pairs may be biased, we present a new analysis of the baryonic (gas+stars) mass ratios of a subset of close pairs in our sample. For the first time, our preliminary analysis highlights that a noticeable fraction of SMR-selected minor pairs (SMR>4) have major baryonic-mass ratios (BMR<4), which indicate that merger rates based on SMR selection may be under-estimated.

  8. Maillard reaction products in bread: A novel semi-quantitative method for evaluating melanoidins in bread.

    Science.gov (United States)

    Helou, Cynthia; Jacolot, Philippe; Niquet-Léridon, Céline; Gadonna-Widehem, Pascale; Tessier, Frédéric J

    2016-01-01

    The aim of this study was to test the methods currently in use and to develop a new protocol for the evaluation of melanoidins in bread. Markers of the early and advanced stages of the Maillard reaction were also followed in the crumb and the crust of bread throughout baking, and in a crust model system. The crumb of the bread contained N(ε)-fructoselysine and N(ε)-carboxymethyllysine but at levels 7 and 5 times lower than the crust, respectively. 5-Hydroxymethylfurfural was detected only in the crust and its model system. The available methods for the semi-quantification of melanoidins were found to be unsuitable for their analysis in bread. Our new method based on size exclusion chromatography and fluorescence measures soluble fluorescent melanoidins in bread. These melanoidin macromolecules (1.7-5.6 kDa) were detected intact in both crust and model system. They appear to contribute to the dietary fibre in bread. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Convergence and stability of the exponential Euler method for semi-linear stochastic delay differential equations.

    Science.gov (United States)

    Zhang, Ling

    2017-01-01

    The main purpose of this paper is to investigate the strong convergence and exponential stability in mean square of the exponential Euler method to semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation solution converges to the analytic solution with the strong order 1/2 to SLSDDEs. On the one hand, the classical stability theorem to SLSDDEs is given by the Lyapunov functions. However, in this paper we study the exponential stability in mean square of the exact solution to SLSDDEs by using the definition of logarithmic norm. On the other hand, the implicit Euler scheme to SLSDDEs is known to be exponentially stable in mean square for any step size. However, in this article we propose an explicit method to show that the exponential Euler method to SLSDDEs is proved to share the same stability for any step size by the property of logarithmic norm.

  10. Convergence and stability of the exponential Euler method for semi-linear stochastic delay differential equations

    Directory of Open Access Journals (Sweden)

    Ling Zhang

    2017-10-01

    The main purpose of this paper is to investigate the strong convergence and exponential stability in mean square of the exponential Euler method to semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation solution converges to the analytic solution with the strong order 1/2 to SLSDDEs. On the one hand, the classical stability theorem to SLSDDEs is given by the Lyapunov functions. However, in this paper we study the exponential stability in mean square of the exact solution to SLSDDEs by using the definition of logarithmic norm. On the other hand, the implicit Euler scheme to SLSDDEs is known to be exponentially stable in mean square for any step size. However, in this article we propose an explicit method to show that the exponential Euler method to SLSDDEs is proved to share the same stability for any step size by the property of logarithmic norm.
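
    As a concrete illustration, one common form of the exponential Euler scheme for a scalar semi-linear SDDE dX = [a*X(t) + f(X(t - tau))] dt + g(X(t - tau)) dW(t) propagates the linear part exactly through exp(a*h) while freezing the delayed drift and diffusion over the step. The specific variant and all names below are assumptions for illustration, not taken from the paper:

```python
# Sketch of an exponential Euler scheme for a scalar semi-linear SDDE with
# constant history phi on [-tau, 0]. The linear term a*X is integrated
# exactly; the delayed terms f and g use the state one delay interval back.
# Assumes tau is an integer multiple of the step size h = T/n_steps.
import math, random

def exponential_euler_sdde(a, f, g, phi, tau, T, n_steps, seed=0):
    rng = random.Random(seed)
    h = T / n_steps
    m = round(tau / h)                     # delay measured in steps
    x = [phi] * (m + 1) + [0.0] * n_steps  # x[m] is X(0); x[:m] is the history
    e = math.exp(a * h)
    for n in range(n_steps):
        xd = x[n]                          # delayed state X(t_n - tau)
        dW = rng.gauss(0.0, math.sqrt(h))  # Brownian increment over the step
        x[m + n + 1] = e * (x[m + n] + h * f(xd) + g(xd) * dW)
    return x[m:]                           # trajectory X(0), X(h), ..., X(T)

# Sanity check: with f = g = 0 the scheme reproduces X(t) = X(0)*exp(a*t).
path = exponential_euler_sdde(a=-1.0, f=lambda y: 0.0, g=lambda y: 0.0,
                              phi=1.0, tau=0.1, T=1.0, n_steps=100)
```

    The explicitness of the update, combined with the exact treatment of the stiff linear part, is what allows the stability-for-any-step-size result the abstract describes.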

  11. Semi-quantitative evaluation of gallium-67 scintigraphy in lupus nephritis

    International Nuclear Information System (INIS)

    Lin Wanyu; Hsieh Jihfang; Tsai Shihchuan; Lan Joungliang; Cheng Kaiyuan; Wang Shyhjen

    2000-01-01

    Within nuclear medicine there is a trend towards quantitative analysis. Gallium renal scan has been reported to be useful in monitoring the disease activity of lupus nephritis. However, only visual interpretation using a four-grade scale has been performed in previous studies, and this method is not sensitive enough for follow-up. In this study, we developed a semi-quantitative method for gallium renal scintigraphy to find a potential parameter for the evaluation of lupus nephritis. Forty-eight patients with lupus nephritis underwent renal biopsy to determine World Health Organization classification, activity index (AI) and chronicity index (CI). A delayed 48-h gallium scan was also performed and interpreted by visual and semi-quantitative methods. For semi-quantitative analysis of the gallium uptake in both kidneys, regions of interest (ROIs) were drawn over both kidneys, the right forearm and the adjacent spine. The uptake ratios between these ROIs were calculated and expressed as the "kidney/spine ratio (K/S ratio)" or the "kidney/arm ratio (K/A ratio)". Spearman's rank correlation test and Mann-Whitney U test were used for statistical analysis. Our data showed a good correlation between the semi-quantitative gallium scan and the results of visual interpretation. K/S ratios showed a better correlation with AI than did K/A ratios. Furthermore, the left K/S ratio displayed a better correlation with AI than did the right K/S ratio. In contrast, CI did not correlate well with the results of semi-quantitative gallium scan. In conclusion, semi-quantitative gallium renal scan is easy to perform and shows a good correlation with the results of visual interpretation and renal biopsy. The left K/S ratio from semi-quantitative renal gallium scintigraphy displays the best correlation with AI and is a useful parameter in evaluating the disease activity in lupus nephritis. (orig.)
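
    The semi-quantitative ratios are straightforward to compute once the ROIs are drawn; a minimal sketch with hypothetical count values:

```python
# Minimal sketch of the semi-quantitative ratios described above: mean counts
# inside each region of interest (ROI) are divided to give the kidney/spine
# (K/S) and kidney/arm (K/A) uptake ratios. All ROI values are hypothetical.
def uptake_ratios(kidney_counts, spine_counts, arm_counts):
    mean = lambda roi: sum(roi) / len(roi)
    k = mean(kidney_counts)
    return {"K/S": k / mean(spine_counts), "K/A": k / mean(arm_counts)}

# e.g. left-kidney ROI vs adjacent-spine and right-forearm ROIs
# from a delayed 48-h gallium scan (illustrative pixel counts)
ratios = uptake_ratios([120, 130, 125], [50, 50, 50], [25, 25, 25])
```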

  12. Molecular insight on the non-covalent interactions between carbapenems and uc(l,d)-transpeptidase 2 from Mycobacterium tuberculosis: ONIOM study

    Science.gov (United States)

    Ntombela, Thandokuhle; Fakhar, Zeynab; Ibeji, Collins U.; Govender, Thavendran; Maguire, Glenn E. M.; Lamichhane, Gyanu; Kruger, Hendrik G.; Honarparvar, Bahareh

    2018-05-01

    Tuberculosis remains a dreadful disease that has claimed many human lives worldwide and elimination of the causative agent Mycobacterium tuberculosis also remains elusive. Multidrug-resistant TB is rapidly increasing worldwide; therefore, there is an urgent need for improving the current antibiotics and novel drug targets to successfully curb the TB burden. uc(l,d)-Transpeptidase 2 is an essential protein in Mtb that is responsible for virulence and growth during the chronic stage of the disease. Both uc(d,d)- and uc(l,d)-transpeptidases are inhibited concurrently to eradicate the bacterium. It was recently discovered that classic penicillins only inhibit uc(d,d)-transpeptidases, while uc(l,d)-transpeptidases are blocked by carbapenems. This has contributed to drug resistance and persistence of tuberculosis. Herein, a hybrid two-layered ONIOM (B3LYP/6-31+G(d):AMBER) model was used to extensively investigate the binding interactions of LdtMt2 complexed with four carbapenems (biapenem, imipenem, meropenem, and tebipenem) to ascertain molecular insight into the drug-enzyme complexation event. In the studied complexes, the carbapenems together with catalytic triad active site residues of LdtMt2 (His187, Ser188 and Cys205) were treated with QM [B3LYP/6-31+G(d)], while the remaining part of the complexes was treated at the MM level (AMBER force field). The resulting Gibbs free energy (ΔG), enthalpy (ΔH) and entropy (ΔS) for all complexes showed that the carbapenems exhibit reasonable binding interactions towards LdtMt2. Increasing the number of amino acid residues that form hydrogen bond interactions in the QM layer showed a significant impact on the binding interaction energy differences and the stabilities of the carbapenems inside the active pocket of LdtMt2. The theoretical binding free energies obtained in this study reflect the same trend as the experimental observations. The electrostatic, hydrogen bonding and Van der Waals interactions between the carbapenems and Ldt

  13. A Cluster-then-label Semi-supervised Learning Approach for Pathology Image Classification.

    Science.gov (United States)

    Peikari, Mohammad; Salama, Sherine; Nofech-Mozes, Sharon; Martel, Anne L

    2018-05-08

    Completely labeled pathology datasets are often challenging and time-consuming to obtain. Semi-supervised learning (SSL) methods are able to learn from fewer labeled data points with the help of a large number of unlabeled data points. In this paper, we investigated the possibility of using clustering analysis to identify the underlying structure of the data space for SSL. A cluster-then-label method was proposed to identify high-density regions in the data space which were then used to help a supervised SVM in finding the decision boundary. We have compared our method with other supervised and semi-supervised state-of-the-art techniques using two different classification tasks applied to breast pathology datasets. We found that compared with other state-of-the-art supervised and semi-supervised methods, our SSL method is able to improve classification performance when a limited number of labeled data instances are made available. We also showed that it is important to examine the underlying distribution of the data space before applying SSL techniques to ensure semi-supervised learning assumptions are not violated by the data.
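
    A toy version of the cluster-then-label idea (cluster all points, transfer the majority label of each cluster's labeled members to its unlabeled members) can be written in a few lines. This 1-D, k=2 sketch stands in for the paper's method, which clusters in feature space and trains an SVM on the pseudo-labeled result:

```python
# Illustrative cluster-then-label sketch: cluster labeled + unlabeled points
# together, give each cluster the majority label of its labeled members, and
# return pseudo-labels for the unlabeled points. 1-D data and a fixed k=2
# two-means clustering keep the example short; names are hypothetical.
def two_means_1d(points, iters=20):
    c0, c1 = min(points), max(points)  # deterministic initialisation
    for _ in range(iters):
        a = [p for p in points if abs(p - c0) <= abs(p - c1)]
        b = [p for p in points if abs(p - c0) > abs(p - c1)]
        c0, c1 = sum(a) / len(a), sum(b) / len(b)
    return c0, c1

def cluster_then_label(labeled, unlabeled):
    # labeled: list of (x, y) pairs; unlabeled: list of x values
    pts = [x for x, _ in labeled] + unlabeled
    c0, c1 = two_means_1d(pts)
    cluster_of = lambda x: 0 if abs(x - c0) <= abs(x - c1) else 1
    # majority vote per cluster over the labeled members
    votes = {0: [], 1: []}
    for x, y in labeled:
        votes[cluster_of(x)].append(y)
    maj = {c: max(set(v), key=v.count) for c, v in votes.items() if v}
    return {x: maj[cluster_of(x)] for x in unlabeled}

pseudo = cluster_then_label([(0.1, "neg"), (5.2, "pos")],
                            [0.3, 0.2, 5.0, 4.8])
```

    With only one labeled point per class, the high-density cluster structure supplies the labels for the rest, which is exactly the regime (few labels, many unlabeled points) where the authors report SSL gains.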

  14. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Directory of Open Access Journals (Sweden)

    John (Jack) P. Riegel III

    2016-04-01

    Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a

  15. The Functional Resonance Analysis Method for a systemic risk based environmental auditing in a sinter plant: A semi-quantitative approach

    International Nuclear Information System (INIS)

    Patriarca, Riccardo; Di Gravio, Giulio; Costantino, Francesco; Tronci, Massimo

    2017-01-01

    Environmental auditing is a main issue for any production plant and assessing environmental performance is crucial to identify risk factors. The complexity of current plants arises from interactions among technological, human and organizational system components, which are often transient and not easily detectable. The auditing thus requires a systemic perspective, rather than focusing on individual behaviors, as emerged in recent research in the safety domain for socio-technical systems. We explore the significance of modeling the interactions of system components in everyday work, by the application of a recent systemic method, i.e. the Functional Resonance Analysis Method (FRAM), in order to define dynamically the system structure. We present also an innovative evolution of traditional FRAM following a semi-quantitative approach based on Monte Carlo simulation. This paper represents the first contribution related to the application of FRAM in the environmental context, moreover considering a consistent evolution based on Monte Carlo simulation. The case study of an environmental risk auditing in a sinter plant validates the research, showing the benefits in terms of identifying potential critical activities, related mitigating actions and comprehensive environmental monitoring indicators. - Highlights: • We discuss the relevance of a systemic risk based environmental audit. • We present FRAM to represent functional interactions of the system. • We develop a semi-quantitative FRAM framework to assess environmental risks. • We apply the semi-quantitative FRAM framework to build a model for a sinter plant.

  16. The Functional Resonance Analysis Method for a systemic risk based environmental auditing in a sinter plant: A semi-quantitative approach

    Energy Technology Data Exchange (ETDEWEB)

    Patriarca, Riccardo, E-mail: riccardo.patriarca@uniroma1.it; Di Gravio, Giulio; Costantino, Francesco; Tronci, Massimo

    2017-03-15

Environmental auditing is a main issue for any production plant, and assessing environmental performance is crucial to identify risk factors. The complexity of current plants arises from interactions among technological, human and organizational system components, which are often transient and not easily detectable. Auditing thus requires a systemic perspective, rather than a focus on individual behaviors, as has emerged in recent safety research on socio-technical systems. We explore the significance of modeling the interactions of system components in everyday work by applying a recent systemic method, the Functional Resonance Analysis Method (FRAM), to define the system structure dynamically. We also present an innovative evolution of traditional FRAM following a semi-quantitative approach based on Monte Carlo simulation. This paper represents the first application of FRAM in the environmental context, moreover one considering a consistent evolution based on Monte Carlo simulation. The case study of an environmental risk audit in a sinter plant validates the research, showing the benefits in terms of identifying potentially critical activities, related mitigating actions and comprehensive environmental monitoring indicators. - Highlights: • We discuss the relevance of a systemic risk based environmental audit. • We present FRAM to represent functional interactions of the system. • We develop a semi-quantitative FRAM framework to assess environmental risks. • We apply the semi-quantitative FRAM framework to build a model for a sinter plant.
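As a loose illustration of the semi-quantitative idea described above — scoring the output variability of each FRAM function on a discrete scale and propagating it through couplings by Monte Carlo sampling — the following sketch uses invented functions, scores and couplings; none of the numbers or names come from the paper:

```python
import random

# Hypothetical upstream FRAM functions with output-variability score
# distributions: 1 = on time/precise, 2 = somewhat variable, 3 = too
# late/imprecise.  Probabilities are illustrative only.
random.seed(6)

def sample_score(p):                      # p = probabilities of scores 1, 2, 3
    return random.choices([1, 2, 3], weights=p)[0]

upstream = {"sinter feed control": [0.7, 0.2, 0.1],
            "filter maintenance":  [0.5, 0.3, 0.2]}

trials, critical = 10000, 0
for _ in range(trials):
    s = sum(sample_score(p) for p in upstream.values())
    # Downstream "emission monitoring" variability amplifies when the
    # upstream outputs vary (s - 2 is the excess upstream variability).
    out = sample_score([0.8, 0.15, 0.05]) + (s - 2)
    if out >= 4:                          # arbitrary acceptability threshold
        critical += 1
print(f"estimated risk of critical monitoring output: {critical / trials:.1%}")
```

The point of the simulation is the same as in the paper's framework: rare but critical combinations of ordinary, everyday variability become countable events rather than unexamined assumptions.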

  17. Semi-Dried Fruits and Vegetables

    Directory of Open Access Journals (Sweden)

    Gamze Uysal Seçkin

    2015-12-01

Full Text Available Drying has been used to preserve fruits and vegetables since ancient times, with sun drying the most widely used method. In general, dried fruits are consumer-ready products, whereas dried vegetables are usually rehydrated by boiling, heating or baking before consumption. In recent years, attempts have been made to obtain new products of high eating quality without losing the characteristics of the raw material. With advances in food technology, products retaining their original characteristics have been obtained by combining protective hurdles (reduced aw, lowered pH, mild heating, preservatives, etc.) at low intensities as an alternative to traditional food preservation processes. 'Semi-dried' or 'intermediate-moisture' products, whose taste and texture differ little from those of the fresh product, have gained importance in recent years in terms of consumer preference. Fruits and vegetables with a water activity between 0.50 and 0.95 and a moisture content between 26% and 60% are called 'intermediate-moisture fruits or vegetables'. Two manufacturing processes are used to obtain semi-dried or intermediate-moisture products. In the first, fully dried fruits and vegetables are rehydrated with water to the desired moisture content. In the second, drying is simply stopped once the product's moisture content has been reduced to the desired level. Semi-dried products are preferred by consumers because their softer texture, closer to that of fresh produce, gives better eating quality.

  18. Semi-analytic techniques for calculating bubble wall profiles

    International Nuclear Information System (INIS)

    Akula, Sujeet; Balazs, Csaba; White, Graham A.

    2016-01-01

    We present semi-analytic techniques for finding bubble wall profiles during first order phase transitions with multiple scalar fields. Our method involves reducing the problem to an equation with a single field, finding an approximate analytic solution and perturbing around it. The perturbations can be written in a semi-analytic form. We assert that our technique lacks convergence problems and demonstrate the speed of convergence on an example potential. (orig.)

  19. Numerical analysis of droplet impingement using the moving particle semi-implicit method

    International Nuclear Information System (INIS)

    Xiong, Jinbiao; Koshizuka, Seiichi; Sakai, Mikio

    2010-01-01

    Droplet impingement onto a rigid wall is simulated in two and three dimensions using the moving particle semi-implicit method. In two-dimensional calculations, the convergence is achieved and the propagation of a shockwave in a droplet is captured. The average pressure on the contact area decreases gradually after the maximum value. The numerically obtained maximum average impact pressure agrees with the Heymann correlation. A large shear stress appears at the contact edge due to jetting. A parametric study shows that the droplet diameter has only a minor effect on the pressure load due to droplet impingement. When the impingement takes place from an impact angle of π/4 rad, the pressure load and shear stress show a dependence only on the normal velocity to the wall. A comparison between the three-dimensional and two-dimensional results shows that consideration of the three-dimensional effect can decrease the average impact pressure by about 12%. (author)

  20. Development of a semi-automated method for subspecialty case distribution and prediction of intraoperative consultations in surgical pathology

    Directory of Open Access Journals (Sweden)

    Raul S Gonzalez

    2015-01-01

Full Text Available Background: In many surgical pathology laboratories, operating room schedules are prospectively reviewed to determine specimen distribution to different subspecialty services and to predict the number and nature of potential intraoperative consultations for which prior medical records and slides require review. At our institution, such schedules were manually converted into easily interpretable, surgical pathology-friendly reports to facilitate these activities. This conversion, however, was time-consuming and arguably a non-value-added activity. Objective: Our goal was to develop a semi-automated method of generating these reports that improved their readability while taking less time to perform than the manual method. Materials and Methods: A dynamic Microsoft Excel workbook was developed to automatically convert published operating room schedules into different tabular formats. Based on the surgical procedure descriptions in the schedule, a list of linked keywords and phrases was utilized to sort cases by subspecialty and to predict potential intraoperative consultations. After two trial-and-optimization cycles, the method was incorporated into standard practice. Results: The workbook distributed cases to appropriate subspecialties and accurately predicted intraoperative requests. Users indicated that they spent 1-2 fewer hours per day on this activity than before, and team members preferred the formatting of the newer reports. Comparison of the manual and semi-automatic predictions showed that the mean daily difference in predicted versus actual intraoperative consultations underwent no statistically significant changes before and after implementation for most subspecialties. Conclusions: A well-designed, lean, and simple information technology solution to determine subspecialty case distribution and prediction of intraoperative consultations in surgical pathology is approximately as accurate as the gold standard manual method and requires less
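The keyword-matching step described above can be sketched in a few lines. The subspecialty names, keywords and frozen-section hints below are invented placeholders, not the institution's actual linked list:

```python
# Hypothetical linked keyword list mapping operating-room procedure
# descriptions to subspecialty services, plus hints that a case is
# likely to generate an intraoperative (frozen-section) consultation.
KEYWORDS = {
    "GI": ["colectomy", "whipple", "esophagectomy"],
    "GYN": ["hysterectomy", "oophorectomy"],
    "Thoracic": ["lobectomy", "mediastinoscopy"],
}
FROZEN_HINTS = ["frozen", "margins", "sentinel node"]

def triage(description):
    """Return (subspecialty, likely intraoperative consultation?)."""
    d = description.lower()
    service = next((s for s, kws in KEYWORDS.items()
                    if any(k in d for k in kws)), "General")
    frozen = any(h in d for h in FROZEN_HINTS)
    return service, frozen

print(triage("Robotic hysterectomy with sentinel node frozen section"))
# -> ('GYN', True)
```

In a spreadsheet implementation the same logic reduces to lookup formulas over the keyword table, which is what makes the approach "lean": the knowledge lives in an editable list rather than in code.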

  1. Merging expert and empirical data for rare event frequency estimation: Pool homogenisation for empirical Bayes models

    International Nuclear Information System (INIS)

    Quigley, John; Hardman, Gavin; Bedford, Tim; Walls, Lesley

    2011-01-01

Empirical Bayes provides one approach to estimating the frequency of rare events as a weighted average of the frequencies of an event and a pool of events. The pool will draw upon, for example, events with similar precursors. The higher the degree of homogeneity of the pool, the more accurate the Empirical Bayes estimator will be. We propose and evaluate a new method using homogenisation factors under the assumption that events are generated from a Homogeneous Poisson Process. The homogenisation factors are scaling constants, which can be elicited through structured expert judgement and used to align the frequencies of different events, hence homogenising the pool. The estimation error relative to the homogeneity of the pool is examined theoretically, indicating that reduced error is associated with greater pool homogeneity. The effects of misspecified expert assessments of the homogenisation factors are examined theoretically and through simulation experiments. Our results show that the proposed Empirical Bayes method using homogenisation factors is robust under different degrees of misspecification.
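A minimal numerical sketch of the idea — pool counts scaled by elicited homogenisation factors, then a credibility-weighted average of the event's own rate and the pooled rate. All numbers are hypothetical, and the simple exposure-based weight is one common heuristic, not necessarily the paper's estimator:

```python
import numpy as np

# Hypothetical pool: event counts k over observation times t (years),
# plus expert homogenisation factors s that scale each event's frequency
# onto the target event's scale (s = 1 means "same frequency as target").
k = np.array([2, 0, 1, 3])          # observed counts per event
t = np.array([10.0, 12.0, 8.0, 15.0])  # exposure times
s = np.array([1.0, 0.5, 2.0, 1.0])  # elicited homogenisation factors

# After scaling, every event is assumed to share the target event's rate,
# so the pooled MLE under a homogeneous Poisson process uses the scaled
# exposures s * t.
lam_pool = k.sum() / (s * t).sum()

# Empirical Bayes estimate for the target event (index 0): a
# credibility-weighted average of its own MLE and the pooled rate.
i = 0
lam_i = k[i] / t[i]
w = t[i] / (t[i] + (s * t).mean())  # simple exposure-based credibility weight
lam_eb = w * lam_i + (1 - w) * lam_pool
print(f"pooled rate {lam_pool:.3f}, own MLE {lam_i:.3f}, EB estimate {lam_eb:.3f}")
```

The estimate always lies between the event's own MLE and the pooled rate; misspecified factors s shift the pooled rate, which is exactly the sensitivity the paper examines.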

  2. Semi-empirical equivalent field method for dose determination in midline block fields for cobalt - 60 beam

    International Nuclear Information System (INIS)

    Tagoe, S.N.A.; Nani, E.K.; Yarney, J.; Edusa, C.; Quayson-Sackey, K.; Nyamadi, K.M.; Sasu, E.

    2012-01-01

For teletherapy treatment time calculations, a midline block field is resolved into two fields; neglecting scattering from the other field, the effective equivalent square field size of the midline block is assumed to be that of the resultant field. This approach underestimates the dose and may jeopardize the recommended uncertainty of ± 5% for patient radiation dose delivery. By comparison, the deviations between effective equivalent square field sizes obtained by calculation and by experiment were within 13.2% for the cobalt-60 beams of a GWGP80 cobalt-60 teletherapy unit. A modified method incorporating the scatter contributions was therefore adopted to estimate the effective equivalent square field size for midline block fields. The measured outputs of radiation beams with the block were compared with outputs of square fields without the blocks (only the block tray) at depths of 5 and 10 cm for the teletherapy machine employing the isocentric technique, and the accuracy was within ± 3% for the cobalt-60 beams. (au)

  3. Semi-supervised spectral algorithms for community detection in complex networks based on equivalence of clustering methods

    Science.gov (United States)

    Ma, Xiaoke; Wang, Bingbo; Yu, Liang

    2018-01-01

Community detection is fundamental for revealing the structure-functionality relationship in complex networks, and it involves two issues: a quantitative function for community quality, and algorithms to discover communities. Despite significant research on each issue, few attempts have been made to establish the connection between the two. To attack this problem, a generalized quantification function is proposed for community in weighted networks, which provides a framework that unifies several well-known measures. Then, we prove that the trace optimization of the proposed measure is equivalent to the objective functions of algorithms such as nonnegative matrix factorization, kernel K-means and spectral clustering. This serves as the theoretical foundation for designing algorithms for community detection. On the second issue, a semi-supervised spectral clustering algorithm is developed by exploiting the equivalence relation, combining nonnegative matrix factorization and spectral clustering. Unlike traditional semi-supervised algorithms, the partial supervision is integrated directly into the objective of the spectral algorithm. Finally, through extensive experiments on both artificial and real-world networks, we demonstrate that the proposed method improves the accuracy of traditional spectral algorithms in community detection.
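The unsupervised machinery underlying the approach can be shown on a toy weighted network; the paper's semi-supervised variant adds constraints on top of this. The graph below is invented for illustration:

```python
import numpy as np

# Toy weighted network: two dense triangles (nodes 0-2 and 3-5) joined
# by one weak edge -- a minimal sketch of spectral community detection.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
A[2, 3] = A[3, 2] = 0.1            # weak inter-community link

d = A.sum(axis=1)
Dm12 = np.diag(1.0 / np.sqrt(d))
L = np.eye(6) - Dm12 @ A @ Dm12    # symmetric normalized Laplacian

# The eigenvector of the second-smallest eigenvalue (Fiedler vector)
# splits the nodes into two communities by sign.
vals, vecs = np.linalg.eigh(L)
labels = (vecs[:, 1] > 0).astype(int)
print(labels)
```

The trace-optimization view in the abstract is what links this eigenvector computation to NMF and kernel K-means: all three optimize the same kind of quadratic objective under different constraints.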

  4. A hybrid filtering method based on a novel empirical mode decomposition for friction signals

    International Nuclear Information System (INIS)

    Li, Chengwei; Zhan, Liwei

    2015-01-01

During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). The relevant mode selection is based on the similarity between the first NIMF and the rest of the NIMFs. With this filtering method, EMD and its improved versions are used to filter the simulated and friction signals. The friction signal between an airplane tire and the runway is recorded during a simulated airplane touchdown and features spikes of various amplitudes plus noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods. (paper)

  5. Suitability of semi-automated tumor response assessment of liver metastases using a dedicated software package

    International Nuclear Information System (INIS)

    Kalkmann, Janine; Ladd, S.C.; Greiff, A. de; Forsting, M.; Stattaus, J.

    2010-01-01

    Purpose: to evaluate the suitability of semi-automated compared to manual tumor response assessment (TRA) of liver metastases. Materials and methods: in total, 32 patients with colorectal cancer and liver metastases were followed by an average of 2.8 contrast-enhanced CT scans. Two observers (O1, O2) measured the longest diameter (LD) of 269 liver metastases manually and semi-automatically using software installed as thin-client on a PACS workstation (LMS-Liver, MEDIAN Technologies). LD and TRA (''progressive'', ''stable'', ''partial remission'') were performed according to RECIST (Response Evaluation Criteria in Solid Tumors) and analyzed for between-method, interobserver and intraobserver variability. The time needed for evaluation was compared for both methods. Results: all measurements correlated excellently (r ≥ 0.96). Intraobserver (semi-automated), interobserver (manual) and between-method differences (by O1) in LD of 1.4 ± 2.6 mm, 1.9 ± 1.9 mm and 2.1 ± 2.0 mm, respectively, were not significant. Interobserver (semi-automated) and between-method (by O2) differences in LD of 3.0 ± 3.0 mm and 2.6 ± 2.0 mm, respectively, reflected a significant variability (p < 0.01). The interobserver agreement in manual and semi-automated TRA was 91.4%. The intraobserver agreement in semi-automated TRA was 84.5%. Between both methods a TRA agreement of 86.2% was obtained. Semi-automated evaluation (2.7 min) took slightly more time than manual evaluation (2.3 min). Conclusion: semi-automated and manual evaluation of liver metastases yield comparable results in response assessments and require comparable effort. (orig.)
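The RECIST categorization used in assessments like the one above reduces to a small decision rule on the longest-diameter (LD) sums. This simplified sketch applies the standard target-lesion thresholds (≥30% decrease for partial remission, ≥20% increase for progression) and deliberately ignores the absolute-increase and new-lesion rules; the values in mm are illustrative:

```python
# Simplified RECIST-style response assessment from LD sums at baseline
# vs. follow-up (target-lesion thresholds only).
def response(baseline_sum_mm, followup_sum_mm):
    change = (followup_sum_mm - baseline_sum_mm) / baseline_sum_mm
    if change <= -0.30:
        return "partial remission"
    if change >= 0.20:
        return "progressive"
    return "stable"

print(response(100, 65))   # 35% decrease -> partial remission
print(response(100, 125))  # 25% increase -> progressive
print(response(100, 95))   # 5% decrease  -> stable
```

Because the categories depend on ratios of measured diameters, the millimetre-level inter-observer and between-method differences reported in the abstract translate directly into the 85-91% category agreement observed.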

  6. Empirical Research In Engineering Design

    DEFF Research Database (Denmark)

    Ahmed, Saeema

    2007-01-01

    Increasingly engineering design research involves the use of empirical studies that are conducted within an industrial environment [Ahmed, 2001; Court 1995; Hales 1987]. Research into the use of information by designers or understanding how engineers build up experience are examples of research...... of research issues. This paper describes case studies of empirical research carried out within industry in engineering design focusing upon information, knowledge and experience in engineering design. The paper describes the research methods employed, their suitability for the particular research aims...

  7. Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions.

    Science.gov (United States)

    Chen, Ke; Wang, Shihai

    2011-01-01

Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., the smoothness, cluster, and manifold assumptions, into account together during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and the regularization penalty on unlabeled data based on the three fundamental semi-supervised assumptions. Minimizing our proposed cost functional with a greedy yet stagewise functional optimization procedure thus leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorable results on benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to previous work.

  8. Potential application of a semi-quantitative method for mercury determination in soils, sediments and gold mining residues

    International Nuclear Information System (INIS)

    Yallouz, A.V.; Cesar, R.G.; Egler, S.G.

    2008-01-01

An alternative, low-cost method for analyzing mercury in soil, sediment and gold mining residues was developed, optimized and applied to 30 real samples. It is semi-quantitative, performed using an acid-extraction pretreatment step, followed by mercury reduction and collection on a detecting paper containing cuprous iodide. A complex with a characteristic color is formed, whose intensity is proportional to the mercury concentration in the original sample. The results are reported as a concentration range, and the minimum detectable concentration is 100 ng/g. Method quality assurance was performed by comparing results obtained using the alternative method and the Cold Vapor Atomic Absorption Spectrometry (CVAAS) technique. The average results from duplicate analysis by CVAAS were 100% coincident with the alternative method results. The method is applicable for screening tests and can be used in regions where a preliminary diagnosis is necessary, in environmental surveillance programs, or by scientists interested in investigating mercury geochemistry. - Semi-quantitative low-cost method for mercury determination in soil, sediments and mining residues

  9. Empirical modeling of drying kinetics and microwave assisted extraction of bioactive compounds from Adathoda vasica

    Directory of Open Access Journals (Sweden)

    Prithvi Simha

    2016-03-01

Full Text Available To highlight the shortcomings in conventional methods of extraction, this study investigates the efficacy of Microwave Assisted Extraction (MAE) for bioactive compound recovery from the pharmaceutically significant medicinal plants Adathoda vasica and Cymbopogon citratus. Initially, the microwave (MW) drying behavior of the plant leaves was investigated at different sample loadings, MW power and drying time. Kinetics was analyzed through empirical modeling of drying data against 10 conventional thin-layer drying equations, which were further improvised through the incorporation of Arrhenius-, exponential- and linear-type expressions. 81 semi-empirical Midilli equations were derived and subjected to non-linear regression to arrive at the characteristic drying equations. Bioactive compound recovery from the leaves was examined under various parameters through a comparative approach that studied MAE against Soxhlet extraction. MAE of A. vasica gave similar yields despite a drastic reduction in extraction time (210 s as against the average time of 10 h in the Soxhlet apparatus). Extract yield for MAE of C. citratus was higher than for the conventional process, with optimal parameters determined to be 20 g sample load, 1:20 sample/solvent ratio, extraction time of 150 s and 300 W output power. Scanning Electron Microscopy and Fourier Transform Infrared Spectroscopy were performed to depict changes in internal leaf morphology.
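The non-linear regression step can be sketched with the semi-empirical Midilli equation, MR = a·exp(-k·tⁿ) + b·t. The moisture-ratio data below are synthetic stand-ins (not the paper's measurements), and `scipy.optimize.curve_fit` stands in for whatever fitting tool the authors used:

```python
import numpy as np
from scipy.optimize import curve_fit

# Midilli thin-layer drying model: moisture ratio as a function of time.
def midilli(t, a, k, n, b):
    return a * np.exp(-k * t**n) + b * t

t = np.linspace(0.2, 10.0, 30)          # drying time (arbitrary units)
rng = np.random.default_rng(0)
# Synthetic "measured" moisture ratios with small noise.
mr = midilli(t, 1.0, 0.35, 1.2, -0.005) + rng.normal(0, 0.005, t.size)

popt, _ = curve_fit(midilli, t, mr, p0=(1.0, 0.1, 1.0, 0.0), maxfev=10000)
print("fitted (a, k, n, b):", np.round(popt, 3))
```

Repeating such a fit for each of the 81 derived equation variants and ranking them by goodness of fit (e.g. R², RMSE) is what yields the characteristic drying equation.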

  10. Transfer of radionuclides by terrestrial food products from semi-natural ecosystems to humans

    International Nuclear Information System (INIS)

    Howard, B.J.

    1996-01-01

    The potential radiological significance of radionuclide transfer to humans via foodstuffs derived from semi-natural ecosystems has become apparent since the Chernobyl accident. Foodchain models developed before this time usually did not take such transfers into account. The processes leading to contamination of food in these environments are complex and current understanding of the transfer mechanisms is incomplete. For these reasons the approach adopted in Chapter 3 is to represent, by means of aggregated parameters, the empirical relationships between ground deposits and concentration in the food product. 107 refs, 2 figs, 9 tabs

  11. Empirical method to measure stochasticity and multifractality in nonlinear time series

    Science.gov (United States)

    Lin, Chih-Hao; Chang, Chia-Seng; Li, Sai-Ping

    2013-12-01

    An empirical algorithm is used here to study the stochastic and multifractal nature of nonlinear time series. A parameter can be defined to quantitatively measure the deviation of the time series from a Wiener process so that the stochasticity of different time series can be compared. The local volatility of the time series under study can be constructed using this algorithm, and the multifractal structure of the time series can be analyzed by using this local volatility. As an example, we employ this method to analyze financial time series from different stock markets. The result shows that while developed markets evolve very much like an Ito process, the emergent markets are far from efficient. Differences about the multifractal structures and leverage effects between developed and emergent markets are discussed. The algorithm used here can be applied in a similar fashion to study time series of other complex systems.
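One simple way to quantify deviation from a Wiener process, in the spirit of the abstract, is through the scaling of increment variances, Var[x(t+τ) − x(t)] ∝ τ^(2H): a Wiener process has H = 1/2, so |H − 1/2| serves as a deviation parameter. This is an illustrative proxy, not necessarily the paper's exact algorithm:

```python
import numpy as np

def scaling_exponent(x, taus):
    """Estimate H from the log-log slope of increment variances."""
    v = [np.var(x[tau:] - x[:-tau]) for tau in taus]
    slope, _ = np.polyfit(np.log(taus), np.log(v), 1)
    return slope / 2.0

rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=100_000))   # discrete Wiener-like path
taus = np.array([1, 2, 4, 8, 16, 32])
H = scaling_exponent(walk, taus)
print(f"estimated H = {H:.3f}  (Wiener process: 0.5)")
```

Applied to returns from different markets, a systematic departure of H from 1/2 (or a spectrum of local exponents, as in multifractal analysis) distinguishes the "developed markets look like an Ito process" regime from less efficient ones.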

  12. Analysis of the thoracic aorta using a semi-automated post processing tool

    International Nuclear Information System (INIS)

    Entezari, Pegah; Kino, Aya; Honarmand, Amir R.; Galizia, Mauricio S.; Yang, Yan; Collins, Jeremy; Yaghmai, Vahid; Carr, James C.

    2013-01-01

Objective: To evaluate a semi-automated method for Thoracic Aortic Aneurysm (TAA) measurement using ECG-gated Dual Source CT Angiogram (DSCTA). Methods: This retrospective HIPAA-compliant study was approved by our IRB. Transaxial maximum outer-wall-to-outer-wall diameters were studied in fifty patients at seven anatomic locations of the thoracic aorta: annulus, sinus, sinotubular junction (STJ), mid ascending aorta (MAA) at the level of the right pulmonary artery, proximal aortic arch (PROX) immediately proximal to the innominate artery, distal aortic arch (DIST) immediately distal to the left subclavian artery, and descending aorta (DESC) at the level of the diaphragm. Measurements were performed using a manual method and semi-automated software. All readers repeated their measurements. Inter-method, intra-observer and inter-observer agreements were evaluated according to the intraclass correlation coefficient (ICC) and Bland-Altman plots. The number of cases requiring manual contouring or center-line adjustment for the semi-automated method, and the post-processing time for each method, were recorded. Results: The mean difference between the semi-automated and manual methods was less than 1.3 mm at all seven points. Strong inter-method, inter-observer and intra-observer agreement was recorded at all levels (ICC ≥ 0.9). The highest rate of manual adjustment of center line and contour was at the level of the annulus. The average time for manual post-processing of the aorta was 19 ± 0.3 min, while it took 8.26 ± 2.1 min to perform the measurements with the semi-automated tool (Vitrea version 6.0.0.1 software). The center line was edited manually at all levels, with most corrections at the level of the annulus (60%), while the contour was adjusted at all levels with the highest and lowest numbers of corrections at the levels of the annulus and DESC (75% and 0.07% of cases), respectively. Conclusion: Compared to the commonly used manual method, semi-automated measurement of vessel dimensions is

  13. A review of neutron scattering correction for the calibration of neutron survey meters using the shadow cone method

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang In; Kim, Bong Hwan; Kim, Jang Lyul; Lee, Jung Il [Health Physics Team, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-12-15

The calibration methods of neutron-measuring devices such as the neutron survey meter have advantages and disadvantages. To compare the calibration factors obtained by the shadow cone method and semi-empirical method, 10 neutron survey meters of five different types were used in this study. This experiment was performed at the Korea Atomic Energy Research Institute (KAERI; Daejeon, South Korea), and the calibration neutron fields were constructed using a californium-252 (252Cf) neutron source, which was positioned in the center of the neutron irradiation room. The neutron spectra of the calibration neutron fields were measured by a europium-activated lithium iodide scintillator in combination with KAERI's Bonner sphere system. When the shadow cone method was used, 10 single moderator-based survey meters exhibited a smaller calibration factor by as much as 3.1 - 9.3% than that of the semi-empirical method. This finding indicates that neutron survey meters underestimated the scattered neutrons and attenuated neutrons (i.e., the total scatter corrections). This underestimation of the calibration factor was attributed to the fact that single moderator-based survey meters have an under-ambient dose equivalent response in the thermal or thermal-dominant neutron field. As a result, when the shadow cone method is used for a single moderator-based survey meter, an additional correction and the International Organization for Standardization standard 8529-2 for room-scattered neutrons should be considered.

  14. A review of neutron scattering correction for the calibration of neutron survey meters using the shadow cone method

    International Nuclear Information System (INIS)

    Kim, Sang In; Kim, Bong Hwan; Kim, Jang Lyul; Lee, Jung Il

    2015-01-01

The calibration methods of neutron-measuring devices such as the neutron survey meter have advantages and disadvantages. To compare the calibration factors obtained by the shadow cone method and semi-empirical method, 10 neutron survey meters of five different types were used in this study. This experiment was performed at the Korea Atomic Energy Research Institute (KAERI; Daejeon, South Korea), and the calibration neutron fields were constructed using a californium-252 (252Cf) neutron source, which was positioned in the center of the neutron irradiation room. The neutron spectra of the calibration neutron fields were measured by a europium-activated lithium iodide scintillator in combination with KAERI's Bonner sphere system. When the shadow cone method was used, 10 single moderator-based survey meters exhibited a smaller calibration factor by as much as 3.1 - 9.3% than that of the semi-empirical method. This finding indicates that neutron survey meters underestimated the scattered neutrons and attenuated neutrons (i.e., the total scatter corrections). This underestimation of the calibration factor was attributed to the fact that single moderator-based survey meters have an under-ambient dose equivalent response in the thermal or thermal-dominant neutron field. As a result, when the shadow cone method is used for a single moderator-based survey meter, an additional correction and the International Organization for Standardization standard 8529-2 for room-scattered neutrons should be considered.
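The shadow-cone scatter correction itself is a simple subtraction: the calibration factor relates the reference ambient dose equivalent rate to the net reading, i.e. the total reading minus the shadowed (scatter-only) reading. The numbers below are illustrative, not from the KAERI experiment:

```python
# Shadow-cone calibration sketch (hypothetical readings).
H_ref = 100.0      # reference H*(10) rate at the calibration point, uSv/h
M_total = 95.0     # reading with direct + room-scattered neutrons
M_shadow = 12.0    # reading with the direct beam blocked by the cone

# Net reading attributes only the direct component to the instrument.
cf_shadow = H_ref / (M_total - M_shadow)
print(f"shadow-cone calibration factor: {cf_shadow:.3f}")
```

The bias discussed in the abstract arises because a meter with an under-response to thermal neutrons reads the (thermalized) scattered component too low, so subtracting the shadowed reading removes less than the true scatter contribution.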

  15. Analysis of the neutrons dispersion in a semi-infinite medium based in transport theory and the Monte Carlo method

    International Nuclear Information System (INIS)

    Arreola V, G.; Vazquez R, R.; Guzman A, J. R.

    2012-10-01

In this work, a comparative analysis of results for neutron dispersion in a non-multiplying semi-infinite medium is presented. One boundary of this medium is located at the origin of coordinates, where there is also a neutron source in beam form, i.e., μ0 = 1. The neutron dispersion is studied with the statistical Monte Carlo method and through one-dimensional, one-group transport theory. The application of transport theory gives a semi-analytic solution for this problem, while the statistical solution for the flux was obtained with the MCNPX code. Dispersion in light water and heavy water was studied. A first notable result is that both methods locate the maximum of the neutron distribution at less than two transport mean free paths for heavy water, while for light water it is at less than ten transport mean free paths; the difference between the two methods is larger for light water. A second notable result is that the two distributions behave similarly at small distances, while at large distances the transport-theory solution tends to an asymptotic value and the statistical solution tends to zero. The existence of a low-energy neutron current flowing back toward the source is demonstrated, opposite in direction to the high-energy neutron current coming from the source itself. (Author)
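The statistical side of such a comparison can be sketched with a minimal one-group Monte Carlo model: particles start at the surface moving inward with μ = 1 (beam source), fly exponentially distributed path lengths (in units of the mean free path), scatter isotropically, and are tallied by collision depth. Absorption, energy dependence and the MCNPX machinery are all omitted; this is a toy model, not the paper's calculation:

```python
import math
import random

random.seed(5)
nbins = 20
hist = [0] * nbins                       # collisions per unit-depth bin
for _ in range(20000):                   # particle histories
    x, mu = 0.0, 1.0                     # start at surface, beam direction
    for _ in range(50):                  # cap collisions per history
        x += mu * (-math.log(random.random()))   # exponential free flight
        if x < 0:                        # escaped back through the surface
            break
        if x < nbins:
            hist[int(x)] += 1
        mu = 2 * random.random() - 1     # isotropic scattering
print("collisions in the first five depth bins:", hist[:5])
```

Even this toy tally reproduces the qualitative behaviour in the abstract: the collision density peaks within the first few mean free paths and decays with depth.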

  16. Frames and semi-frames

    International Nuclear Information System (INIS)

    Antoine, Jean-Pierre; Balazs, Peter

    2011-01-01

    Loosely speaking, a semi-frame is a generalized frame for which one of the frame bounds is absent. More precisely, given a total sequence in a Hilbert space, we speak of an upper (resp. lower) semi-frame if only the upper (resp. lower) frame bound is valid. Equivalently, for an upper semi-frame, the frame operator is bounded, but has an unbounded inverse, whereas a lower semi-frame has an unbounded frame operator, with a bounded inverse. We study mostly upper semi-frames, both in the continuous and discrete case, and give some remarks for the dual situation. In particular, we show that reconstruction is still possible in certain cases.
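In finite dimensions the frame operator of vectors {f_k} is S = Σ f_k f_k*, and the optimal frame bounds are its extreme eigenvalues. A discrete upper semi-frame has S bounded but with unbounded inverse; the sketch below mimics that by letting the smallest eigenvalue approach zero (the vectors and scaling are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
F = rng.normal(size=(5, 3))      # rows: 5 frame vectors in R^3
F[:, 2] *= 1e-6                  # nearly annihilate one direction

S = F.T @ F                      # frame operator S = sum_k f_k f_k^T
eig = np.linalg.eigvalsh(S)      # ascending eigenvalues
print(f"lower frame bound A = {eig[0]:.2e}, upper frame bound B = {eig[-1]:.2e}")
```

With A ≈ 0 and B finite, reconstruction via S⁻¹ becomes numerically unstable, which is the finite-dimensional shadow of the unbounded-inverse situation the paper studies in infinite dimensions.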

  17. CO2 forcing induces semi-direct effects with consequences for climate feedback interpretations

    Science.gov (United States)

    Andrews, Timothy; Forster, Piers M.

    2008-02-01

    Climate forcing and feedbacks are diagnosed from seven slab-ocean GCMs for 2 × CO2 using a regression method. Results are compared to those using conventional methodologies to derive a semi-direct forcing due to tropospheric adjustment, analogous to the semi-direct effect of absorbing aerosols. All models show a cloud semi-direct effect, indicating a rapid cloud response to CO2; cloud typically decreases, enhancing the warming. Similarly there is evidence of semi-direct effects from water-vapour, lapse-rate, ice and snow. Previous estimates of climate feedbacks are unlikely to have taken these semi-direct effects into account and so misinterpret processes as feedbacks that depend only on the forcing, but not the global surface temperature. We show that the actual cloud feedback is smaller than what previous methods suggest and that a significant part of the cloud response and the large spread between previous model estimates of cloud feedback is due to the semi-direct forcing.
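The regression method referred to above fits the global net top-of-atmosphere flux imbalance N against surface temperature change ΔT after an abrupt forcing step, N = F + λ·ΔT: the intercept is the effective forcing F (which absorbs the rapid semi-direct adjustments) and the slope is the feedback parameter λ. The numbers below are synthetic illustrations, not GCM output:

```python
import numpy as np

# Synthetic 150-year response to an abrupt 2xCO2-like step.
years = np.arange(1, 151)
F_true, lam_true = 3.7, -1.2                 # W/m^2 and W/m^2/K (illustrative)
T_eq = -F_true / lam_true                    # equilibrium warming
rng = np.random.default_rng(2)
dT = T_eq * (1 - np.exp(-years / 30))        # warming toward equilibrium
N = F_true + lam_true * dT + rng.normal(0, 0.2, years.size)  # noisy TOA flux

lam_hat, F_hat = np.polyfit(dT, N, 1)        # slope = feedback, intercept = forcing
print(f"forcing ~ {F_hat:.2f} W/m^2, feedback ~ {lam_hat:.2f} W/m^2/K")
```

Because the intercept captures everything that responds to CO2 directly rather than to ΔT, cloud or lapse-rate changes that occur before the surface warms show up in F, not λ, which is exactly the reinterpretation of "feedbacks" the abstract describes.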

  18. Teaching Empirical Software Engineering Using Expert Teams

    DEFF Research Database (Denmark)

    Kuhrmann, Marco

    2017-01-01

    Empirical software engineering aims at making software engineering claims measurable, i.e., to analyze and understand phenomena in software engineering and to evaluate software engineering approaches and solutions. Due to the involvement of humans and the multitude of fields for which software...... is crucial, software engineering is considered hard to teach. Yet, empirical software engineering increases this difficulty by adding the scientific method as extra dimension. In this paper, we present a Master-level course on empirical software engineering in which different empirical instruments...... an extra specific expertise that they offer as service to other teams, thus, fostering cross-team collaboration. The paper outlines the general course setup, topics addressed, and it provides initial lessons learned....

  19. Calibration of semi-stochastic procedure for simulating high-frequency ground motions

    Science.gov (United States)

    Seyhan, Emel; Stewart, Jonathan P.; Graves, Robert

    2013-01-01

    Broadband ground motion simulation procedures typically utilize physics-based modeling at low frequencies, coupled with semi-stochastic procedures at high frequencies. The high-frequency procedure considered here combines deterministic Fourier amplitude spectra (dependent on source, path, and site models) with random phase. Previous work showed that high-frequency intensity measures from this simulation methodology attenuate faster with distance and have lower intra-event dispersion than in empirical equations. We address these issues by increasing crustal damping (Q) to reduce distance attenuation bias and by introducing random site-to-site variations to Fourier amplitudes using a lognormal standard deviation ranging from 0.45 for Mw  100 km).
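    The core of such a semi-stochastic high-frequency simulator, combining a deterministic Fourier amplitude spectrum with random phase and a lognormal site-to-site factor, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the toy spectrum and all parameter values are assumptions:

```python
import numpy as np

def synthesize_high_freq(amplitude_spectrum, sigma_site=0.45, seed=0):
    """Random-phase synthesis: combine a deterministic one-sided
    Fourier amplitude spectrum with uniform random phase, scaled by
    a lognormal site-to-site factor, and invert to a real time series."""
    rng = np.random.default_rng(seed)
    site_factor = rng.lognormal(mean=0.0, sigma=sigma_site)
    phase = rng.uniform(0.0, 2.0 * np.pi, len(amplitude_spectrum))
    half = site_factor * amplitude_spectrum * np.exp(1j * phase)
    return np.fft.irfft(half)   # irfft enforces conjugate symmetry

# toy spectrum: flat to 10 Hz, exponential decay above (dt = 0.01 s)
freqs = np.fft.rfftfreq(1024, d=0.01)
amp = np.where(freqs <= 10.0, 1.0, np.exp(-(freqs - 10.0)))
acc = synthesize_high_freq(amp)
```

    The inverse real FFT guarantees a real-valued acceleration trace whose Fourier amplitude follows the prescribed spectrum while the phase is random, which is exactly the split the abstract describes.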

  20. Calibration and evaluation of the FAO56-Penman-Monteith, FAO24-radiation, and Priestly-Taylor reference evapotranspiration models using the spatially measured solar radiation across a large arid and semi-arid area in southern Iran

    Science.gov (United States)

    Didari, Shohreh; Ahmadi, Seyed Hamid

    2018-05-01

    Crop evapotranspiration (ET) is one of the main components in calculating the water balance in agricultural, hydrological, environmental, and climatological studies. Solar radiation (Rs) supplies the available energy for ET, and therefore, precise measurement of Rs is required for accurate ET estimation. However, measured Rs and ET are not available in many areas, and they must then be estimated indirectly by empirical methods. The Angström-Prescott (AP) model is the most popular method for estimating Rs in areas where there are no measured data. In addition, locally calibrated AP coefficients are not yet available in many locations, and instead, the default coefficients are used. In this study, we investigated different approaches for Rs and ET calculations. The daily measured Rs values at 14 stations across arid and semi-arid areas of Fars province in the south of Iran were used for calibrating the coefficients of the AP model. Results revealed that the calibrated AP coefficients were very different from, and higher than, the default values. In addition, the reference ET (ETo) was estimated by the FAO56 Penman-Monteith (FAO56 PM) and FAO24-radiation methods using the measured Rs and was then compared with the measured pan evaporation as an indication of the potential atmospheric demand. Interestingly, and unlike many previous studies, which have suggested the FAO56 PM as the standard method for calculating ETo, the FAO24-radiation method with the measured Rs showed better agreement with the mean pan evaporation. Therefore, the FAO24-radiation method with the measured Rs was used as the reference method for the study area, which was also confirmed by previous studies based on lysimeter data. Moreover, the accuracy of calibrated Rs in the estimation of ETo by the FAO56 PM and FAO24-radiation methods was investigated. Results showed that the calibrated Rs improved the accuracy of the estimated ETo by the FAO24-radiation compared with the FAO24-radiation using the measured
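    Calibrating the Angström-Prescott model Rs = Ra·(a + b·n/N) reduces to a linear fit of Rs/Ra against the relative sunshine duration n/N. A minimal sketch (synthetic data; the FAO default coefficients a = 0.25 and b = 0.50 are used only to generate the toy observations):

```python
import numpy as np

def calibrate_angstrom_prescott(Rs, Ra, n_over_N):
    """Fit Rs/Ra = a + b*(n/N) by least squares.
    Rs: measured solar radiation, Ra: extraterrestrial radiation,
    n_over_N: relative sunshine duration. Returns (a, b)."""
    b, a = np.polyfit(n_over_N, Rs / Ra, 1)
    return a, b

# synthetic observations generated with the FAO default coefficients
rel_sun = np.linspace(0.1, 0.9, 9)
Ra = np.full_like(rel_sun, 30.0)        # MJ m-2 day-1, illustrative
Rs = Ra * (0.25 + 0.50 * rel_sun)       # a = 0.25, b = 0.50
a, b = calibrate_angstrom_prescott(Rs, Ra, rel_sun)
```

    With real station data the fitted (a, b) would replace the defaults, which is the calibration the study performs for each of the 14 stations.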

  1. Semi-empirical model to determine pure β--emitters in closed waste packages using Bremsstrahlung radiation

    International Nuclear Information System (INIS)

    Takacs, S.; Hermanne, A.

    2001-01-01

    in the measured activity for a certain isotope. On the basis of the experience gathered, a semi-empirical model was set up to establish the eps*beta(E) detector efficiency function for measuring pure beta activity in sealed waste packages through Bremsstrahlung radiation. The model is based on the following criteria: only one type of β⁻-emitter isotope is allowed in one waste package; uniform activity distribution is assumed inside the waste drum; constant waste matrix composition is assumed for each waste drum; each component of the matrix material is assumed to be evenly distributed over the whole volume of the waste drum

  2. Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2014-01-01

    Empirical likelihood is a very popular method and has been widely used in the fields of artificial intelligence (AI) and data mining, as tablets, mobile applications, and social media dominate the technology landscape. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously, when the model is defined by moment restrictions of which some are possibly misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions and at the same time its estimator is as efficient as the empirical likelihood estimator obtained with all correct moment conditions. Moreover, unlike the GMM, our proposed method allows us to construct confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite-sample properties of the estimators. A real-life example is also presented to illustrate the new methodology.

  3. A semi-empirical model for the prediction of fouling in railway ballast using GPR

    Science.gov (United States)

    Bianchini Ciampoli, Luca; Tosti, Fabio; Benedetto, Andrea; Alani, Amir M.; Loizos, Andreas; D'Amico, Fabrizio; Calvi, Alessandro

    2016-04-01

    The first step in planning the renewal of a railway network consists in gathering information, as effectively as possible, about the state of the railway tracks. Nowadays, this activity is mostly carried out by digging trenches at regular intervals along the whole network, to evaluate both geometrical and geotechnical properties of the railway track bed. This raises issues mainly concerning the invasiveness of the operations, the impact on rail traffic, the high costs, and the low significance of such a discrete data set. Ground-penetrating radar (GPR) can be a useful technique for overcoming these issues, as it can be mounted directly onto a train crossing the railway and collect continuous information along the network. This study is aimed at defining an empirical model for the prediction of fouling in railway ballast by using GPR. With this purpose, a thorough laboratory campaign was implemented within the facilities of Roma Tre University. In more detail, a 1.47 m long × 1.47 m wide × 0.48 m high plexiglass framework, representing the domain of investigation, was laid over a perfect electric conductor and filled with several configurations of railway ballast and fouling material (clayey sand), thereby representing different levels of fouling. The set of fouling configurations was then surveyed with several GPR systems. In particular, a ground-coupled multi-channel radar (600 MHz and 1600 MHz center-frequency antennas) and three air-launched radar systems (1000 MHz and 2000 MHz center-frequency antennas) were employed for surveying the materials. By observing the results in both the time and frequency domains, interesting insights are highlighted, and an empirical model relating the shape of the frequency spectrum of the signal to the percentage of fouling characterizing the surveyed material is finally proposed. Acknowledgement: The Authors thank COST for funding the Action TU1208 "Civil

  4. Empirical comparison of four baseline covariate adjustment methods in analysis of continuous outcomes in randomized controlled trials

    Directory of Open Access Journals (Sweden)

    Zhang S

    2014-07-01

    Shiyuan Zhang,1 James Paul,2 Manyat Nantha-Aree,2 Norman Buckley,2 Uswa Shahzad,2 Ji Cheng,2 Justin DeBeer,5 Mitchell Winemaker,5 David Wismer,5 Dinshaw Punthakee,5 Victoria Avram,5 Lehana Thabane1–4 (1Department of Clinical Epidemiology and Biostatistics, 2Department of Anesthesia, McMaster University, Hamilton, ON, Canada; 3Biostatistics Unit/Centre for Evaluation of Medicines, St Joseph's Healthcare - Hamilton, Hamilton, ON, Canada; 4Population Health Research Institute, Hamilton Health Science/McMaster University; 5Department of Surgery, Division of Orthopaedics, McMaster University, Hamilton, ON, Canada) Background: Although seemingly straightforward, the statistical comparison of a continuous variable in a randomized controlled trial that has both a pre- and posttreatment score presents an interesting challenge for trialists. We present here an empirical application of four statistical methods (posttreatment scores with analysis of variance, analysis of covariance, change in scores, and percent change in scores), using data from a randomized controlled trial of postoperative pain in patients following total joint arthroplasty (the Morphine COnsumption in Joint Replacement Patients, With and Without GaBapentin Treatment, a RandomIzed ControlLEd Study [MOBILE] trial). Methods: Analysis of covariance (ANCOVA) was used to adjust for baseline measures and to provide an unbiased estimate of the mean group difference of the 1-year postoperative knee flexion scores in knee arthroplasty patients. Robustness tests were done by comparing ANCOVA with three comparative methods: the posttreatment scores, change in scores, and percentage change from baseline. Results: All four methods showed a similar direction of effect; however, ANCOVA (-3.9; 95% confidence interval [CI]: -9.5, 1.6; P=0.15) and the posttreatment score (-4.3; 95% CI: -9.8, 1.2; P=0.12) method provided the highest precision of estimate compared with the change score (-3.0; 95% CI: -9.9, 3.8; P=0
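    The four adjustments compared in the trial can be reproduced on synthetic data. The sketch below is illustrative only (the data-generating numbers are invented, not MOBILE data); ANCOVA is implemented as an ordinary least-squares fit of the posttreatment score on a group indicator and the baseline score:

```python
import numpy as np

def adjustment_estimates(group, pre, post):
    """Group-difference estimate of a continuous outcome under the four
    adjustments: posttreatment score, change score, percent change,
    and ANCOVA (post regressed on group and baseline). group is 0/1."""
    g = np.asarray(group, float)
    pre = np.asarray(pre, float)
    post = np.asarray(post, float)
    diff = lambda y: y[g == 1].mean() - y[g == 0].mean()
    est = {"post": diff(post),
           "change": diff(post - pre),
           "pct_change": diff(100.0 * (post - pre) / pre)}
    X = np.column_stack([np.ones_like(g), g, pre])   # ANCOVA design
    beta, *_ = np.linalg.lstsq(X, post, rcond=None)
    est["ancova"] = beta[1]                          # group coefficient
    return est

# synthetic trial: baseline ~ N(100, 10), true treatment effect = 5
rng = np.random.default_rng(1)
pre = rng.normal(100.0, 10.0, 200)
group = rng.integers(0, 2, 200)
post = pre + 5.0 * group + rng.normal(0.0, 2.0, 200)
est = adjustment_estimates(group, pre, post)
```

    On this toy data all four estimators target the same effect, but ANCOVA and the change score use the baseline and therefore estimate it with less noise, mirroring the precision ordering reported in the abstract.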

  5. Weak form implementation of the semi-analytical finite element (SAFE) method for a variety of elastodynamic waveguides

    Science.gov (United States)

    Hakoda, Christopher; Lissenden, Clifford; Rose, Joseph L.

    2018-04-01

    Dispersion curves are essential to any guided wave NDE project. The Semi-Analytical Finite Element (SAFE) method has significantly increased the ease by which these curves can be calculated. However, due to misconceptions regarding theory and fragmentation based on different finite-element software, the theory has stagnated, and adoption by researchers who are new to the field has been slow. This paper focuses on the relationship between the SAFE formulation and finite element theory, and the implementation of the SAFE method in a weak form for plates, pipes, layered waveguides/composites, curved waveguides, and arbitrary cross-sections is shown. The benefits of the weak form are briefly described, as is implementation in open-source and commercial finite element software.

  6. Semi-metallic polymers

    DEFF Research Database (Denmark)

    Bubnova, Olga; Khan, Zia Ullah; Wang, Hui

    2014-01-01

    Polymers are lightweight, flexible, solution-processable materials that are promising for low-cost printed electronics as well as for mass-produced and large-area applications. Previous studies demonstrated that they can possess insulating, semiconducting or metallic properties; here we report...... that polymers can also be semi-metallic. Semi-metals, exemplified by bismuth, graphite and telluride alloys, have no energy bandgap and a very low density of states at the Fermi level. Furthermore, they typically have a higher Seebeck coefficient and lower thermal conductivities compared with metals, thus being...... a Fermi glass to a semi-metal. The high Seebeck value, the metallic conductivity at room temperature and the absence of unpaired electron spins makes polymer semi-metals attractive for thermoelectrics and spintronics....

  7. A multistage, semi-automated procedure for analyzing the morphology of nanoparticles

    KAUST Repository

    Park, Chiwoo

    2012-07-01

    This article presents a multistage, semi-automated procedure that can expedite the morphology analysis of nanoparticles. Material scientists have long conjectured that the morphology of nanoparticles has a profound impact on the properties of the hosting material, but a bottleneck is the lack of a reliable and automated morphology analysis of the particles based on their image measurements. This article attempts to fill in this critical void. One particular challenge in nanomorphology analysis is how to analyze the overlapped nanoparticles, a problem not well addressed by the existing methods but effectively tackled by the method proposed in this article. This method entails multiple stages of operations, executed sequentially, and is considered semi-automated due to the inclusion of a semi-supervised clustering step. The proposed method is applied to several images of nanoparticles, producing the needed statistical characterization of their morphology. © 2012 "IIE".

  8. A multistage, semi-automated procedure for analyzing the morphology of nanoparticles

    KAUST Repository

    Park, Chiwoo; Huang, Jianhua Z.; Huitink, David; Kundu, Subrata; Mallick, Bani K.; Liang, Hong; Ding, Yu

    2012-01-01

    This article presents a multistage, semi-automated procedure that can expedite the morphology analysis of nanoparticles. Material scientists have long conjectured that the morphology of nanoparticles has a profound impact on the properties of the hosting material, but a bottleneck is the lack of a reliable and automated morphology analysis of the particles based on their image measurements. This article attempts to fill in this critical void. One particular challenge in nanomorphology analysis is how to analyze the overlapped nanoparticles, a problem not well addressed by the existing methods but effectively tackled by the method proposed in this article. This method entails multiple stages of operations, executed sequentially, and is considered semi-automated due to the inclusion of a semi-supervised clustering step. The proposed method is applied to several images of nanoparticles, producing the needed statistical characterization of their morphology. © 2012 "IIE".
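    The abstract attributes the "semi-automated" label to a semi-supervised clustering step but does not spell out the algorithm. As a generic, hypothetical illustration of seeded (semi-supervised) clustering, the sketch below initializes k-means centroids from a few labeled points; all names and data are invented for the example:

```python
import numpy as np

def seeded_kmeans(X, seed_points, seed_labels, k, n_iter=25):
    """Semi-supervised (seeded) k-means: centroids are initialized
    from labeled seed points instead of at random, so the few labels
    steer which cluster is which."""
    centroids = np.array([seed_points[seed_labels == j].mean(axis=0)
                          for j in range(k)])
    for _ in range(n_iter):
        # assign each point to the nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = X[assign == j].mean(axis=0)
    return assign, centroids

# two well-separated synthetic clusters, one labeled seed per cluster
rng = np.random.default_rng(3)
a = rng.normal([0.0, 0.0], 0.3, size=(50, 2))
b = rng.normal([3.0, 3.0], 0.3, size=(50, 2))
X = np.vstack([a, b])
seeds = np.array([[0.0, 0.0], [3.0, 3.0]])
assign, cents = seeded_kmeans(X, seeds, np.array([0, 1]), k=2)
```

    The seeds play the role of the small amount of supervision: they fix the label semantics that purely unsupervised clustering cannot provide.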

  9. Derivation of a semi-empirical formula for the quantum efficiency of forward secondary electron emission from γ-irradiated metals. 2

    International Nuclear Information System (INIS)

    Nakamura, Masamoto; Katoh, Yoh

    1994-01-01

    An empirical formula for the quantum efficiency of electrons irradiated with 60Co γ-rays was reported in a previous paper, but its physical meaning was not made clear. A simple model was therefore assumed, from which a formula for calculating the efficiency was theoretically derived. Some parameters in the formula were determined so that the calculated results would fit the experimental data. The earlier empirical formula was shown to be the same as the formula derived physically here. Results from the semi-empirical formula and experimental data for Al and Pb samples agreed within 5%. (author)

  10. A semi-automatic calibration method for seismic arrays applied to an Alaskan array

    Science.gov (United States)

    Lindquist, K. G.; Tibuleac, I. M.; Hansen, R. A.

    2001-12-01

    Well-calibrated, small (less than 22 km) aperture seismic arrays are of great importance for event location and characterization. We have implemented the cross-correlation method of Tibuleac and Herrin (Seis. Res. Lett. 1997) as a semi-automatic procedure applicable to any seismic array. With this we are able to process thousands of phases in several days of computer time on a Sun Blade 1000 workstation. Complicated geology beneath the array elements and elevation differences among the array stations made station corrections necessary. 328 core phases (including PcP, PKiKP, PKP, PKKP) were used to determine the static corrections. To demonstrate this application and method, we have analyzed P and PcP arrivals at the ILAR array (Eielson, Alaska) between 1995 and 2000. The arrivals were picked by the PIDC for events (mb>4.0) well located by the USGS. We calculated backazimuth and horizontal velocity residuals for all events and observed large backazimuth residuals for regional and near-regional phases. We discuss the possibility of a dipping Moho (strike E-W, dip N) beneath the array versus other local structure that would produce the residuals.
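    Once inter-station delays have been measured by cross-correlation, backazimuth and horizontal slowness follow from a least-squares plane-wave fit, and residuals are the differences from the values predicted for the catalog location. A hypothetical sketch of the plane-wave step (station geometry and delays are synthetic, not ILAR data):

```python
import numpy as np

def plane_wave_fit(xy, delays):
    """Least-squares slowness vector from relative arrival delays.
    xy: (n, 2) station offsets in km (east, north); delays: arrival
    delays in s relative to the array reference point.
    Returns (backazimuth in degrees, scalar slowness in s/km)."""
    s, *_ = np.linalg.lstsq(xy, delays, rcond=None)
    # the wave arrives FROM the direction opposite to propagation
    baz = np.degrees(np.arctan2(-s[0], -s[1])) % 360.0
    return baz, float(np.linalg.norm(s))

# synthetic plane wave: backazimuth 30 deg, slowness 0.05 s/km
xy = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [-4.0, 3.0]])
true_baz, p = np.radians(30.0), 0.05
prop = -np.array([np.sin(true_baz), np.cos(true_baz)])  # propagation dir
delays = xy @ (p * prop)
baz, slowness = plane_wave_fit(xy, delays)
```

    Station corrections enter as per-station delay terms subtracted before the fit; the abstract's backazimuth residuals are the fitted values minus those predicted from the USGS locations.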

  11. Numerical simulation on single bubble rising behavior in liquid metal using moving particle semi-implicit method

    International Nuclear Information System (INIS)

    Zuo Juanli; Tian Wenxi; Qiu Suizheng; Chen Ronghua; Su Guanghui

    2011-01-01

    The gas-lift pump in a liquid metal cooled fast reactor (LMFR) is an innovative conceptual design to enhance the natural circulation capability of the reactor core. The two-phase gas–liquid-metal flow significantly improves the natural circulation capacity and reactor safety. In the present study, the rising behavior of a single nitrogen bubble in five liquid metals (lead–bismuth alloy, potassium, sodium, potassium–sodium alloy and lithium–lead alloy) was numerically simulated using the moving particle semi-implicit (MPS) method. The whole growth process of a single nitrogen bubble in liquid metal was captured. The bubble shape and rising speed of the single nitrogen bubble in each liquid metal were compared. The comparison between simulation results using the MPS method and the Grace graphical correlation shows good agreement. (authors)

  12. Levels of reduction in van Manen's phenomenological hermeneutic method: an empirical example.

    Science.gov (United States)

    Heinonen, Kristiina

    2015-05-01

    To describe reduction as a method using van Manen's phenomenological hermeneutic research approach. Reduction involves several levels that can be distinguished for their methodological usefulness. Researchers can use reduction in different ways and dimensions according to their methodological needs. A study of Finnish multiple-birth families was conducted in which open interviews (n=38) were carried out with public health nurses, family care workers and parents of twins. A systematic literature and knowledge review showed that there were no articles on multiple-birth families using van Manen's method. Discussion: The phenomena of the 'lifeworlds' of multiple-birth families consist of three core essential themes as told by parents: 'a state of constant vigilance', 'ensuring that they can continue to cope' and 'opportunities to share with other people'. Reduction provides the opportunity to carry out in-depth phenomenological hermeneutic research and to understand people's lives. It helps to keep research stages separate but also enables a consolidated view. Social care and healthcare professionals have to hear parents' voices better to comprehensively understand their situation; they need further tools and training to be able to empower parents of twins. This paper adds an empirical example to the discussion of phenomenology, hermeneutic study and reduction as a method. It opens up reduction for researchers to exploit.

  13. Coupled Semi-Supervised Learning

    Science.gov (United States)

    2010-05-01

    Additionally, specify the expected category of each relation argument to enable type-checking. Subsystem components and the KI can benefit from methods that… confirm that our coupled semi-supervised learning approaches can scale to hundreds of predicates and can benefit from using a diverse set of… [The extracted fragment ends with rows of a category/instance table, e.g. "vegetable/food: carrots", "vehicle/item: airplanes".]

  14. Semi-Supervised Multi-View Ensemble Learning Based On Extracting Cross-View Correlation

    Directory of Open Access Journals (Sweden)

    ZALL, R.

    2016-05-01

    Correlated information between different views is useful for learning from multi-view data. Canonical correlation analysis (CCA) plays an important role in extracting this information. However, CCA only extracts the correlated information between paired data and cannot preserve the correlated information between within-class samples. In this paper, we propose a two-view semi-supervised learning method called semi-supervised random correlation ensemble based on spectral clustering (SS_RCE). SS_RCE uses a multi-view method based on spectral clustering which takes advantage of discriminative information in multiple views to estimate the labeling information of unlabeled samples. In order to enhance the discriminative power of CCA features, we incorporate the labeling information of both unlabeled and labeled samples into CCA. Then, we use random correlation between within-class samples across views to extract diverse correlated features for training component classifiers. Furthermore, we extend a general model, namely SSMV_RCE, to construct an ensemble method to tackle semi-supervised learning in the presence of multiple views. Finally, we compare the proposed methods with existing multi-view feature extraction methods using multi-view semi-supervised ensembles. Experimental results on various multi-view data sets are presented to demonstrate the effectiveness of the proposed methods.
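    CCA itself, the building block the method modifies, extracts maximally correlated projections of two views. A compact, generic sketch via whitened SVD (this is plain CCA, not the SS_RCE algorithm; the two-view toy data and the ridge term `reg` are assumptions):

```python
import numpy as np

def cca_first_pair(X, Y, reg=1e-8):
    """First canonical correlation between two views.
    Returns (rho, wx, wy): the correlation and projection directions.
    Canonical correlations are the singular values of
    Sxx^{-1/2} Sxy Syy^{-1/2}; reg is a small ridge for stability."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = len(X)
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    Wx, Wy = inv_sqrt(Sxx), inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(Wx @ Sxy @ Wy)
    return s[0], Wx @ U[:, 0], Wy @ Vt[0]

# two toy views sharing one latent signal plus independent noise dims
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X = np.hstack([z, rng.normal(size=(500, 2))])
Y = np.hstack([z + 0.1 * rng.normal(size=(500, 1)),
               rng.normal(size=(500, 2))])
rho, wx, wy = cca_first_pair(X, Y)
```

    The first canonical correlation is close to 1 here because both views contain the same latent signal; SS_RCE's contribution is to bias such projections with label information and within-class structure.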

  15. Semi-quantitative evaluation of gallium-67 scintigraphy in lupus nephritis

    Energy Technology Data Exchange (ETDEWEB)

    Lin Wanyu [Dept. of Nuclear Medicine, Taichung Veterans General Hospital, Taichung (Taiwan); Dept. of Radiological Technology, Chung-Tai College of Medical Technology, Taichung (Taiwan); Hsieh Jihfang [Section of Nuclear Medicine, Chi-Mei Foundation Hospital, Yunk Kang City, Tainan (Taiwan); Tsai Shihchuan [Dept. of Nuclear Medicine, Show Chwan Memorial Hospital, Changhua (Taiwan); Lan Joungliang [Dept. of Internal Medicine, Taichung Veterans General Hospital, Taichung (Taiwan); Cheng Kaiyuan [Dept. of Radiological Technology, Chung-Tai College of Medical Technology, Taichung (Taiwan); Wang Shyhjen [Dept. of Nuclear Medicine, Taichung Veterans General Hospital, Taichung (Taiwan)

    2000-11-01

    Within nuclear medicine there is a trend towards quantitative analysis. Gallium renal scan has been reported to be useful in monitoring the disease activity of lupus nephritis. However, only visual interpretation using a four-grade scale has been performed in previous studies, and this method is not sensitive enough for follow-up. In this study, we developed a semi-quantitative method for gallium renal scintigraphy to find a potential parameter for the evaluation of lupus nephritis. Forty-eight patients with lupus nephritis underwent renal biopsy to determine World Health Organization classification, activity index (AI) and chronicity index (CI). A delayed 48-h gallium scan was also performed and interpreted by visual and semi-quantitative methods. For semi-quantitative analysis of the gallium uptake in both kidneys, regions of interest (ROIs) were drawn over both kidneys, the right forearm and the adjacent spine. The uptake ratios between these ROIs were calculated and expressed as the "kidney/spine ratio" (K/S ratio) or the "kidney/arm ratio" (K/A ratio). Spearman's rank correlation test and Mann-Whitney U test were used for statistical analysis. Our data showed a good correlation between the semi-quantitative gallium scan and the results of visual interpretation. K/S ratios showed a better correlation with AI than did K/A ratios. Furthermore, the left K/S ratio displayed a better correlation with AI than did the right K/S ratio. In contrast, CI did not correlate well with the results of semi-quantitative gallium scan. In conclusion, semi-quantitative gallium renal scan is easy to perform and shows a good correlation with the results of visual interpretation and renal biopsy. The left K/S ratio from semi-quantitative renal gallium scintigraphy displays the best correlation with AI and is a useful parameter in evaluating the disease activity in lupus nephritis. (orig.)
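    The semi-quantitative step reduces to mean-count ratios between regions of interest. A hypothetical sketch with a toy planar image (the mask layout and count values are invented for illustration):

```python
import numpy as np

def uptake_ratios(image, roi_kidney, roi_spine, roi_arm):
    """Mean-count ratios between ROIs on a planar gallium scan.
    Each ROI is a boolean mask over the image; the returned K/S and
    K/A ratios are the semi-quantitative uptake parameters."""
    mean = lambda roi: image[roi].mean()
    return {"K/S": mean(roi_kidney) / mean(roi_spine),
            "K/A": mean(roi_kidney) / mean(roi_arm)}

# toy 8x8 "scan" with three disjoint rectangular ROIs
img = np.zeros((8, 8))
k = np.zeros((8, 8), bool); k[0:4, 0:4] = True   # kidney ROI
s = np.zeros((8, 8), bool); s[4:8, 0:4] = True   # spine ROI
a = np.zeros((8, 8), bool); a[4:8, 4:8] = True   # forearm ROI
img[k], img[s], img[a] = 30.0, 10.0, 15.0
r = uptake_ratios(img, k, s, a)
```

    On real scans the ROIs would be drawn over each kidney, the right forearm and the adjacent spine, and the ratios tracked against the biopsy activity index.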

  16. The optimization of essential oils supercritical CO2 extraction from Lavandula hybrida through static-dynamic steps procedure and semi-continuous technique using response surface method

    Science.gov (United States)

    Kamali, Hossein; Aminimoghadamfarouj, Noushin; Golmakani, Ebrahim; Nematollahi, Alireza

    2015-01-01

    Aim: The aim of this study was to examine and evaluate the crucial variables in the essential oil extraction process from Lavandula hybrida through static-dynamic and semi-continuous techniques using the response surface method. Materials and Methods: Essential oil components were extracted from Lavandula hybrida (Lavandin) flowers using supercritical carbon dioxide via a static-dynamic steps (SDS) procedure and a semi-continuous (SC) technique. Results: Using the response surface method, the optimum extraction yield (4.768%) was obtained via SDS at 108.7 bar, 48.5°C, 120 min (static: 8×15 min), 24 min (dynamic: 8×3 min), in contrast to the 4.620% extraction yield for the SC at 111.6 bar, 49.2°C, 14 min (static), 121.1 min (dynamic). Conclusion: The results indicated that a substantial reduction (81.56%) in solvent usage (kg CO2/g oil) is observed in the SDS method versus the conventional SC method. PMID:25598636

  17. Guiding modes of semi-infinite nanowire and their dispersion character

    International Nuclear Information System (INIS)

    Sun, Yuming; Su, Yuehua; Dai, Zhenhong; Wang, Weitian

    2014-01-01

    Conventionally, the optical properties of finite semiconductor nanowires have been understood and explained in terms of an infinite nanowire. This work describes the completely different photonic modes of a semi-infinite nanowire based on a rigorous theoretical method, and the implications for the finite one. First, the special eigenvalue problem characterized by the end results in a distinctive mode spectrum for the semi-infinite dielectric nanowire. The results also show hybrid degenerate modes away from the cutoff frequency, and transverse electric–transverse magnetic (TE–TM) degeneracy. Second, accompanying the different mode spectrum, a semi-infinite nanowire also shows a distinctive dispersion relation compared to an infinite nanowire. Taking a semi-infinite ZnO nanowire as an example, we find that the ℏω−k_z space is not continuous in the photon energy window of interest, implying that there is no uniform polariton dispersion relation for a semi-infinite nanowire. Our method is shown to be correct through a field reconstruction for a thin ZnO nanowire (55 nm in radius) and position determination of FP modes for a ZnO nanowire (200 nm in diameter). The results are of great significance for correctly understanding the guiding and lasing mechanisms of semiconductor nanowires. (paper)

  18. A fast semi-discrete Kansa method to solve the two-dimensional spatiotemporal fractional diffusion equation

    Science.gov (United States)

    Sun, HongGuang; Liu, Xiaoting; Zhang, Yong; Pang, Guofei; Garrard, Rhiannon

    2017-09-01

    Fractional-order diffusion equations (FDEs) extend classical diffusion equations by quantifying anomalous diffusion frequently observed in heterogeneous media. Real-world diffusion can be multi-dimensional, requiring efficient numerical solvers that can handle long-term memory embedded in mass transport. To address this challenge, a semi-discrete Kansa method is developed to approximate the two-dimensional spatiotemporal FDE, where the Kansa approach first discretizes the FDE, then the Gauss-Jacobi quadrature rule solves the corresponding matrix, and finally the Mittag-Leffler function provides an analytical solution for the resultant time-fractional ordinary differential equation. Numerical experiments are then conducted to check how the accuracy and convergence rate of the numerical solution are affected by the distribution mode and number of spatial discretization nodes. Applications further show that the numerical method can efficiently solve two-dimensional spatiotemporal FDE models with either a continuous or discrete mixing measure. Hence this study provides an efficient and fast computational method for modeling super-diffusive, sub-diffusive, and mixed diffusive processes in large, two-dimensional domains with irregular shapes.
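    The Mittag-Leffler step of the scheme has a simple illustration: E_α(−λt^α) solves the time-fractional relaxation equation D^α u = −λu with u(0) = 1, and a truncated power series suffices for moderate arguments. A sketch (the series length and test values are arbitrary choices, not from the paper):

```python
import math

def mittag_leffler(alpha, z, terms=80):
    """Truncated series E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1),
    adequate for moderate |z| since the Gamma growth dominates."""
    return sum(z**k / math.gamma(alpha * k + 1.0) for k in range(terms))

def fractional_relaxation(alpha, lam, t):
    """Analytical solution u(t) = E_alpha(-lam * t**alpha) of the
    time-fractional relaxation equation D^alpha u = -lam u, u(0) = 1."""
    return mittag_leffler(alpha, -lam * t**alpha)

# sanity check: alpha = 1 recovers ordinary exponential decay
u = fractional_relaxation(1.0, 2.0, 0.5)
```

    For α = 1 the Mittag-Leffler function reduces to the exponential, so u(0.5) = exp(−1); for 0 < α < 1 the same expression gives the slower, power-law-tailed decay characteristic of sub-diffusive relaxation.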

  19. Evaluation of evapotranspiration methods for model validation in a semi-arid watershed in northern China

    Directory of Open Access Journals (Sweden)

    K. Schneider

    2007-05-01

    This study evaluates the performance of four evapotranspiration methods (Priestley-Taylor, Penman-Monteith, Hargreaves and Makkink) of differing complexity in a semi-arid environment in north China. The results are compared to observed water vapour fluxes derived from eddy flux measurements. The analysis became necessary after discharge simulations using an automatically calibrated version of the Soil and Water Assessment Tool (SWAT) failed to reproduce runoff measurements. Although the study area receives most of the annual rainfall during the vegetation period, high temperatures can cause water scarcity. We investigate which evapotranspiration method is most suitable for this environment and whether the model performance of SWAT can be improved with the most adequate evapotranspiration method.

    The evapotranspiration models were tested in two consecutive years with different rainfall amounts. In general, the simple Hargreaves and Makkink equations outperform the more complex Priestley-Taylor and Penman-Monteith methods, although their performance depended on water availability. Effects on the quality of the SWAT runoff simulations, however, remained minor. Although evapotranspiration is an important process in the hydrology of this steppe environment, our analysis indicates that other driving factors still need to be identified to improve the SWAT simulations.
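    Two of the compared formulas are compact enough to sketch directly: the Hargreaves-Samani equation, which needs only temperature extremes and extraterrestrial radiation, and the Priestley-Taylor equation, which needs net radiation and temperature. The numerical inputs below are illustrative, not data from the study:

```python
import math

def hargreaves_et0(tmax, tmin, Ra):
    """Hargreaves-Samani reference ET (mm/day).
    Ra: extraterrestrial radiation expressed in mm/day of
    equivalent evaporation; temperatures in deg C."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * Ra * (tmean + 17.8) * math.sqrt(tmax - tmin)

def priestley_taylor_et(Rn, G, T, alpha=1.26, gamma=0.066):
    """Priestley-Taylor ET (mm/day). Rn, G in MJ m-2 day-1; T in deg C.
    Delta is the slope of the saturation vapour-pressure curve
    (FAO-56 form); lam is the latent heat of vaporization."""
    es = 0.6108 * math.exp(17.27 * T / (T + 237.3))
    delta = 4098.0 * es / (T + 237.3) ** 2
    lam = 2.45  # MJ/kg
    return alpha * delta / (delta + gamma) * (Rn - G) / lam

et_h = hargreaves_et0(32.0, 18.0, 16.0)
et_pt = priestley_taylor_et(Rn=15.0, G=0.0, T=25.0)
```

    The contrast in inputs is the point of the comparison: Hargreaves trades radiation data for a purely temperature-based estimate, which is why its good performance in a data-sparse semi-arid setting is notable.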

  20. Exactly energy conserving semi-implicit particle in cell formulation

    International Nuclear Information System (INIS)

    Lapenta, Giovanni

    2017-01-01

    We report a new particle in cell (PIC) method based on the semi-implicit approach. The novelty of the new method is that unlike any of its semi-implicit predecessors at the same time it retains the explicit computational cycle and conserves energy exactly. Recent research has presented fully implicit methods where energy conservation is obtained as part of a non-linear iteration procedure. The new method (referred to as Energy Conserving Semi-Implicit Method, ECSIM), instead, does not require any non-linear iteration and its computational cycle is similar to that of explicit PIC. The properties of the new method are: i) it conserves energy exactly to round-off for any time step or grid spacing; ii) it is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency and allowing the user to select any desired time step; iii) it eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length; iv) the particle mover has a computational complexity identical to that of the explicit PIC, only the field solver has an increased computational cost. The new ECSIM is tested in a number of benchmarks assessing accuracy and computational performance. - Highlights: • We present a new fully energy conserving semi-implicit particle in cell (PIC) method based on the implicit moment method (IMM). The new method is called Energy Conserving Implicit Moment Method (ECIMM). • The novelty of the new method is that unlike any of its predecessors at the same time it retains the explicit computational cycle and conserves energy exactly. • The new method is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency. • The new method eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length. • These
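    The exact energy conservation claimed for ECSIM rests on an implicit, time-centered discretization. The field-particle scheme itself is beyond a short sketch, but the same mechanism is visible in the implicit-midpoint update of a harmonic oscillator, which conserves the discrete energy to round-off at any time step, even one far beyond the explicit stability limit (a toy analogy, not the ECSIM algorithm):

```python
def implicit_midpoint_oscillator(x, v, omega, dt, steps):
    """Implicit-midpoint (Crank-Nicolson) update for the harmonic
    oscillator dx/dt = v, dv/dt = -omega**2 * x, solved in closed
    form. The update is a Cayley transform (an exact rotation), so
    the discrete energy E = (v**2 + omega**2 * x**2) / 2 is conserved
    to round-off for ANY time step."""
    c = (0.5 * dt * omega) ** 2
    for _ in range(steps):
        v_new = ((1.0 - c) * v - dt * omega**2 * x) / (1.0 + c)
        x = x + 0.5 * dt * (v + v_new)
        v = v_new
    return x, v

# deliberately large time step: omega*dt = 2.5 exceeds the explicit
# leapfrog stability limit, yet the implicit update stays exactly on
# the constant-energy ellipse
x0, v0, omega = 1.0, 0.0, 5.0
x1, v1 = implicit_midpoint_oscillator(x0, v0, omega, dt=0.5, steps=1000)
E0 = 0.5 * (v0**2 + omega**2 * x0**2)
E1 = 0.5 * (v1**2 + omega**2 * x1**2)
```

    The unconditional stability and exact quadratic-invariant conservation of the centered implicit update are the single-degree-of-freedom analogues of properties i) and ii) listed in the abstract.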

  1. Exactly energy conserving semi-implicit particle in cell formulation

    Energy Technology Data Exchange (ETDEWEB)

    Lapenta, Giovanni, E-mail: giovanni.lapenta@kuleuven.be

    2017-04-01

    We report a new particle in cell (PIC) method based on the semi-implicit approach. The novelty of the new method is that, unlike any of its semi-implicit predecessors, it simultaneously retains the explicit computational cycle and conserves energy exactly. Recent research has presented fully implicit methods where energy conservation is obtained as part of a non-linear iteration procedure. The new method (referred to as the Energy Conserving Semi-Implicit Method, ECSIM), instead, does not require any non-linear iteration, and its computational cycle is similar to that of explicit PIC. The properties of the new method are: i) it conserves energy exactly, to round-off, for any time step or grid spacing; ii) it is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency and allowing the user to select any desired time step; iii) it eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length; iv) the particle mover has a computational complexity identical to that of explicit PIC; only the field solver has an increased computational cost. The new ECSIM is assessed in a number of benchmarks testing accuracy and computational performance. - Highlights: • We present a new fully energy conserving semi-implicit particle in cell (PIC) method based on the implicit moment method (IMM). The new method is called the Energy Conserving Implicit Moment Method (ECIMM). • The novelty of the new method is that, unlike any of its predecessors, it simultaneously retains the explicit computational cycle and conserves energy exactly. • The new method is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency. • The new method eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length. • These
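
ECSIM's defining property, exact energy conservation without non-linear iteration, can be illustrated on a toy problem: for a linear system, the implicit midpoint update conserves the quadratic energy to round-off for any time step, just as property i) claims for the full scheme. The sketch below (plain Python, a harmonic oscillator standing in for the coupled field-particle system; this is not the actual ECSIM discretization) shows that the implicit step reduces to a small linear solve, here done in closed form:

```python
def midpoint_step(x, v, dt):
    """One implicit-midpoint step for x' = v, v' = -x.

    The implicit system is linear, so it is solved in closed form
    rather than by non-linear iteration (the analogous situation
    ECSIM exploits for its field equations).
    """
    a = dt / 2.0
    det = 1.0 + a * a
    x1 = ((1.0 - a * a) * x + 2.0 * a * v) / det
    v1 = ((1.0 - a * a) * v - 2.0 * a * x) / det
    return x1, v1

def energy(x, v):
    return 0.5 * (x * x + v * v)

x, v = 1.0, 0.0
e0 = energy(x, v)
for _ in range(1000):
    x, v = midpoint_step(x, v, dt=0.5)  # dt well above any explicit stability limit
drift = abs(energy(x, v) - e0)          # stays at round-off level
```

Running with a time step far larger than the oscillator period fraction an explicit scheme would tolerate, the energy drift remains at machine precision.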

  2. Empirical research through design

    NARCIS (Netherlands)

    Keyson, D.V.; Bruns, M.

    2009-01-01

    This paper describes the empirical research through design method (ERDM), which differs from current approaches to research through design by enforcing the need for the designer, after a series of pilot prototype-based studies, to develop a priori a number of testable interaction design hypotheses.

  3. Convergence Analysis of Semi-Implicit Euler Methods for Solving Stochastic Age-Dependent Capital System with Variable Delays and Random Jump Magnitudes

    Directory of Open Access Journals (Sweden)

    Qinghui Du

    2014-01-01

    Full Text Available We consider semi-implicit Euler methods for stochastic age-dependent capital system with variable delays and random jump magnitudes, and investigate the convergence of the numerical approximation. It is proved that the numerical approximate solutions converge to the analytical solutions in the mean-square sense under given conditions.
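
As a minimal illustration of the scheme family analysed above (not the capital system itself, which carries variable delays and random jump magnitudes), here is a drift-implicit, i.e. semi-implicit, Euler step for the scalar SDE dX = -λX dt + σ dW. Making only the drift implicit keeps each step an explicit formula for linear drift while improving stability for stiff λ:

```python
import random

def semi_implicit_euler_path(x0, lam, sigma, dt, n_steps, rng):
    """Drift-implicit Euler for dX = -lam*X dt + sigma dW.

    The update X_{n+1} = X_n - lam*X_{n+1}*dt + sigma*dW_n is implicit
    in the drift only; for linear drift it solves in closed form:
    X_{n+1} = (X_n + sigma*dW_n) / (1 + lam*dt).
    """
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, dt ** 0.5)
        x = (x + sigma * dw) / (1.0 + lam * dt)
    return x

rng = random.Random(42)
# The mean over many paths should approximate x0/(1+lam*dt)**n ~ exp(-lam*T),
# consistent with mean-square convergence to the analytical solution.
paths = [semi_implicit_euler_path(1.0, 1.0, 0.1, 0.01, 100, rng)
         for _ in range(10000)]
mean_xT = sum(paths) / len(paths)
```

With λ = 1, T = 1 and dt = 0.01, the sample mean lands near (1.01)^-100 ≈ 0.370, matching the deterministic decay.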

  4. Dispersions in Semi-Classical Dynamics

    International Nuclear Information System (INIS)

    Zielinska-Pfabe, M.; Gregoire, C.

    1987-01-01

    Dispersions around mean values of one-body observables are obtained by restoring classical many-body correlations in Vlasov and Landau-Vlasov dynamics. The method is applied to the calculation of fluctuations in mass, charge and linear momentum in heavy-ion collisions. Results are compared to those obtained by the Balian-Veneroni variational principle in semi-classical approximation

  5. Generative Adversarial Networks-Based Semi-Supervised Learning for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Zhi He

    2017-10-01

    Full Text Available Classification of hyperspectral images (HSI) is an important research topic in the remote sensing community. Significant efforts (e.g., deep learning) have been concentrated on this task. However, it is still an open issue to classify the high-dimensional HSI with a limited number of training samples. In this paper, we propose a semi-supervised HSI classification method inspired by generative adversarial networks (GANs). Unlike supervised methods, the proposed HSI classification method is semi-supervised, which can make full use of the limited labeled samples as well as the abundant unlabeled samples. The core ideas of the proposed method are twofold. First, the three-dimensional bilateral filter (3DBF) is adopted to extract spectral-spatial features by naturally treating the HSI as a volumetric dataset. The spatial information is integrated into the extracted features by the 3DBF, which is propitious to the subsequent classification step. Second, GANs are trained on the spectral-spatial features for semi-supervised learning. A GAN contains two neural networks (i.e., a generator and a discriminator) trained in opposition to one another. The semi-supervised learning is achieved by adding samples from the generator to the features and increasing the dimension of the classifier output. Experimental results obtained on three benchmark HSI datasets have confirmed the effectiveness of the proposed method, especially with a limited number of labeled samples.

  6. A semi-analytical method to evaluate the dielectric response of a tokamak plasma accounting for drift orbit effects

    Science.gov (United States)

    Van Eester, Dirk

    2005-03-01

    A semi-analytical method is proposed to evaluate the dielectric response of a plasma to electromagnetic waves in the ion cyclotron domain of frequencies in a D-shaped but axisymmetric toroidal geometry. The actual drift orbit of the particles is accounted for. The method hinges on subdividing the orbit into elementary segments in which the integrations can be performed analytically or by tabulation, and it relies on the local book-keeping of the relation between the toroidal angular momentum and the poloidal flux function. Depending on which variables are chosen, the method allows computation of elementary building blocks for either the wave or the Fokker-Planck equation, but the accent is mainly on the latter. Two types of tangent resonance are distinguished.

  7. The Socratic Method: Empirical Assessment of a Psychology Capstone Course

    Science.gov (United States)

    Burns, Lawrence R.; Stephenson, Paul L.; Bellamy, Katy

    2016-01-01

    Although students make some epistemological progress during college, most graduate without developing meaning-making strategies that reflect an understanding that knowledge is socially constructed. Using a pre-test-post-test design and a within-subjects 2 × 2 mixed-design ANOVA, this study reports on empirical findings which support the Socratic…

  8. Dynamic analysis of ultrasonically levitated droplet with moving particle semi-implicit and distributed point source method

    Science.gov (United States)

    Wada, Yuji; Yuge, Kohei; Nakamura, Ryohei; Tanaka, Hiroki; Nakamura, Kentaro

    2015-07-01

    Numerical analysis of an ultrasonically levitated droplet with a free surface boundary is discussed. The droplet is known to change its shape from sphere to spheroid when it is suspended in a standing wave owing to the acoustic radiation force. However, few studies on numerical simulation have been reported in association with this phenomenon including fluid dynamics inside the droplet. In this paper, coupled analysis using the distributed point source method (DPSM) and the moving particle semi-implicit (MPS) method, both of which do not require grids or meshes to handle the moving boundary with ease, is suggested. A droplet levitated in a plane standing wave field between a piston-vibrating ultrasonic transducer and a reflector is simulated with the DPSM-MPS coupled method. The dynamic change in the spheroidal shape of the droplet is successfully reproduced numerically, and the gravitational center and the change in the spheroidal aspect ratio are discussed and compared with the previous literature.

  9. Computerized breast cancer analysis system using three stage semi-supervised learning method.

    Science.gov (United States)

    Sun, Wenqing; Tseng, Tzu-Liang Bill; Zhang, Jianying; Qian, Wei

    2016-10-01

    A large amount of labeled medical image data is usually required to train a well-performing computer-aided detection (CAD) system. But the process of data labeling is time consuming, and potential ethical and logistical problems may also present complications. As a result, incorporating unlabeled data into a CAD system can be a feasible way to combat these obstacles. In this study we developed a three stage semi-supervised learning (SSL) scheme that combines a small amount of labeled data and a larger amount of unlabeled data. The scheme was built on our existing CAD system using the following three stages: data weighing, feature selection, and a newly proposed dividing co-training data labeling algorithm. Global density asymmetry features were incorporated into the feature pool to reduce the false positive rate. Area under the curve (AUC) and accuracy were computed using a 10-fold cross validation method to evaluate the performance of our CAD system. The image dataset includes mammograms from 400 women who underwent routine screening examinations, and each pair contains either two cranio-caudal (CC) or two mediolateral-oblique (MLO) view mammograms from the right and the left breasts. From these mammograms 512 regions were extracted and used in this study, and among them 90 regions were treated as labeled while the rest were treated as unlabeled. Using our proposed scheme, the highest AUC observed in our research was 0.841, which included the 90 labeled data and all the unlabeled data. It was 7.4% higher than using labeled data only. With the increasing amount of labeled data, the AUC difference between using mixed data and using labeled data only reached its peak when the amount of labeled data was around 60. This study demonstrated that our proposed three stage semi-supervised learning can improve the CAD performance by incorporating unlabeled data. Using unlabeled data is promising in computerized cancer research and may have a significant impact for future CAD systems.
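
The dividing co-training algorithm itself is specific to the paper, but the core SSL mechanic it builds on, training on labeled data and folding in confidently pseudo-labeled unlabeled samples, can be sketched with a nearest-centroid classifier. (Toy 2-D data invented for illustration, not the mammographic features; the margin threshold is likewise an assumption.)

```python
def centroid_fit(X, y):
    """Mean feature vector per class for a nearest-centroid classifier."""
    cents = {}
    for label in set(y):
        pts = [x for x, lbl in zip(X, y) if lbl == label]
        cents[label] = [sum(c) / len(pts) for c in zip(*pts)]
    return cents

def predict_with_margin(cents, x):
    """Return (label, margin): margin = runner-up distance - best distance."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5, lbl)
                   for lbl, c in cents.items())
    return dists[0][1], dists[1][0] - dists[0][0]

def self_train(X_lab, y_lab, X_unlab, margin_thresh=0.5, rounds=3):
    """Iteratively pseudo-label confident unlabeled points and refit."""
    X, y = list(X_lab), list(y_lab)
    pool = list(X_unlab)
    for _ in range(rounds):
        cents = centroid_fit(X, y)
        keep = []
        for x in pool:
            label, margin = predict_with_margin(cents, x)
            if margin > margin_thresh:   # confident: adopt the pseudo-label
                X.append(x)
                y.append(label)
            else:
                keep.append(x)           # undecided: revisit next round
        pool = keep
    return centroid_fit(X, y)

# Two well-separated clusters; one labeled point each, the rest unlabeled.
X_lab = [(0.0, 0.0), (4.0, 4.0)]
y_lab = [0, 1]
X_unlab = [(0.2, -0.1), (0.1, 0.3), (3.8, 4.2), (4.1, 3.9)]
model = self_train(X_lab, y_lab, X_unlab)
```

After self-training, the centroids have absorbed the unlabeled points, so new queries near either cluster classify correctly even though only one point per class was labeled.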

  10. Improved Formula for the Stress Intensity Factor of Semi-Elliptical Surface Cracks in Welded Joints under Bending Stress

    Science.gov (United States)

    Peng, Yang; Wu, Chao; Zheng, Yifu; Dong, Jun

    2017-01-01

    Welded joints are prone to fatigue cracking with the existence of welding defects and bending stress. Fracture mechanics is a useful approach in which the fatigue life of the welded joint can be predicted. The key challenge of such predictions using fracture mechanics is how to accurately calculate the stress intensity factor (SIF). An empirical formula for calculating the SIF of welded joints under bending stress was developed by Baik, Yamada and Ishikawa based on the hybrid method. However, when calculating the SIF of a semi-elliptical crack, this study found that the accuracy of the Baik-Yamada formula was poor when comparing the benchmark results, experimental data and numerical results. The reasons for the reduced accuracy of the Baik-Yamada formula were identified and discussed in this paper. Furthermore, a new correction factor was developed and added to the Baik-Yamada formula by using theoretical analysis and numerical regression. Finally, the predictions using the modified Baik-Yamada formula were compared with the benchmark results, experimental data and numerical results. It was found that the accuracy of the modified Baik-Yamada formula was greatly improved. Therefore, it is proposed that this modified formula is used to conveniently and accurately calculate the SIF of semi-elliptical cracks in welded joints under bending stress. PMID:28772527

  11. An optimized one-tube, semi-nested PCR assay for Paracoccidioides brasiliensis detection

    Directory of Open Access Journals (Sweden)

    Amanda de Faveri Pitz

    2013-12-01

    Full Text Available Introduction Herein, we report a one-tube, semi-nested polymerase chain reaction (OTsn-PCR) assay for the detection of Paracoccidioides brasiliensis. Methods We developed the OTsn-PCR assay for the detection of P. brasiliensis in clinical specimens and compared it with other PCR methods. Results The OTsn-PCR assay was positive for all clinical samples, and the detection limit was better than or equivalent to that of the other nested or semi-nested PCR methods for P. brasiliensis detection. Conclusions The OTsn-PCR assay described in this paper has a detection limit similar to other reactions for the molecular detection of P. brasiliensis, but this approach is faster and less prone to contamination than other conventional nested or semi-nested PCR assays.

  12. Semi-phenomenological method for applying microdosimetry in estimating biological response

    International Nuclear Information System (INIS)

    Higgins, P.D.; DeLuca, P.M. Jr.; Pearson, D.W.; Gould, M.N.

    1981-01-01

    A semi-phenomenological approach has been used to estimate cell survival on the basis of microdosimetrically obtained measurements of beam quality, together with determinations of the biological cytotoxic response parameters of V79 Chinese hamster cells. Cells were exposed to a field of minimally ionizing radiation and to fields at least partially comprised of high LET radiation. We show that for widely varying experimental conditions, we can predict, with good reliability, cell survival for any arbitrary known beam quality and with a minimum of biological input
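
The paper's response model combines measured microdosimetric beam-quality spectra with cytotoxicity parameters for V79 cells; as a generic stand-in for that kind of dose-response prediction, the widely used linear-quadratic cell-survival model is sketched below. (The α and β values are illustrative placeholders, not the parameters determined in the study.)

```python
import math

def lq_survival(dose_gy, alpha=0.2, beta=0.02):
    """Linear-quadratic model: surviving fraction S = exp(-alpha*D - beta*D^2).

    alpha captures single-track lethal events, beta the two-track
    component that dominates at high dose.
    """
    return math.exp(-alpha * dose_gy - beta * dose_gy ** 2)

# Survival drops monotonically with dose; the quadratic term steepens the tail.
curve = [lq_survival(d) for d in (0, 2, 4, 8)]
```

A high-LET field would be modelled by a larger effective α, which is the kind of beam-quality dependence the microdosimetric measurements supply.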

  13. A non-conventional watershed partitioning method for semi-distributed hydrological modelling: the package ALADHYN

    Science.gov (United States)

    Menduni, Giovanni; Pagani, Alessandro; Rulli, Maria Cristina; Rosso, Renzo

    2002-02-01

    The extraction of the river network from a digital elevation model (DEM) plays a fundamental role in modelling spatially distributed hydrological processes. The present paper deals with a new two-step procedure based on the preliminary identification of an ideal drainage network (IDN) from contour lines through a variable mesh size, and the further extraction of the actual drainage network (AND) from the IDN using land morphology. The steepest downslope direction search is used to identify individual channels, which are further merged into a network path draining to a given node of the IDN. The contributing area, peaks and saddles are determined by means of a steepest upslope direction search. The basin area is thus partitioned into physically based finite elements enclosed by irregular polygons. Different methods, i.e. the constant and variable threshold area methods, the contour line curvature method, and a topologic method descending from the Hortonian ordering scheme, are used to extract the ADN from the IDN. The contour line curvature method is shown to provide the most appropriate method from a comparison with field surveys. Using the ADN one can model the hydrological response of any sub-basin using a semi-distributed approach. The model presented here combines storm abstraction by the SCS-CN method with surface runoff routing as a geomorphological dispersion process. This is modelled using the gamma instantaneous unit hydrograph as parameterized by river geomorphology. The results are implemented using a project-oriented software facility for the Analysis of LAnd Digital HYdrological Networks (ALADHYN).
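
The SCS-CN abstraction step mentioned above has a simple closed form; a sketch follows (metric units, with the standard initial-abstraction ratio Ia = 0.2S, which the ALADHYN implementation may or may not adopt exactly):

```python
def scs_cn_runoff(p_mm, cn):
    """Direct runoff depth Q (mm) from storm rainfall P via the SCS-CN method.

    S is the potential maximum retention derived from the curve number CN;
    runoff begins once rainfall exceeds the initial abstraction Ia = 0.2*S:
        Q = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else Q = 0.
    """
    s = 25400.0 / cn - 254.0   # retention (mm); CN=100 means zero retention
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

q = scs_cn_runoff(100.0, 75)   # 100 mm storm on CN 75 terrain -> ~41 mm runoff
```

The runoff depth for each finite element would then be routed through the gamma instantaneous unit hydrograph to form the sub-basin response.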

  14. Standard Guide for Selection and Use of Mathematical Methods for Calculating Absorbed Dose in Radiation Processing Applications

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide describes different mathematical methods that may be used to calculate absorbed dose and criteria for their selection. Absorbed-dose calculations can determine the effectiveness of the radiation process, estimate the absorbed-dose distribution in product, or supplement or complement, or both, the measurement of absorbed dose. 1.2 Radiation processing is an evolving field and annotated examples are provided in Annex A6 to illustrate the applications where mathematical methods have been successfully applied. While not limited by the applications cited in these examples, applications specific to neutron transport, radiation therapy and shielding design are not addressed in this document. 1.3 This guide covers the calculation of radiation transport of electrons and photons with energies up to 25 MeV. 1.4 The mathematical methods described include Monte Carlo, point kernel, discrete ordinate, semi-empirical and empirical methods. 1.5 General purpose software packages are available for the calcul...
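
Of the method families the guide lists, the Monte Carlo approach is the easiest to sketch: sampling exponential free paths through a slab and scoring particles whose first interaction lies beyond it reproduces the analytic uncollided transmission exp(-μt). (A toy estimator for illustration only, not a dose calculation conforming to the standard.)

```python
import math
import random

def uncollided_fraction(mu_per_cm, thickness_cm, n_particles, rng):
    """Monte Carlo estimate of exp(-mu*t).

    The distance to first interaction is exponentially distributed with
    rate mu; a particle crosses uncollided if that distance exceeds the
    slab thickness.
    """
    crossed = 0
    for _ in range(n_particles):
        free_path = rng.expovariate(mu_per_cm)
        if free_path > thickness_cm:
            crossed += 1
    return crossed / n_particles

rng = random.Random(7)
estimate = uncollided_fraction(0.2, 5.0, 100000, rng)
analytic = math.exp(-0.2 * 5.0)   # exact uncollided transmission
```

With 10^5 histories the statistical uncertainty is about 0.15%, so the estimate agrees with the analytic value to a few parts per thousand.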

  15. Semi-analytical solutions of the Schnakenberg model of a reaction-diffusion cell with feedback

    Science.gov (United States)

    Al Noufaey, K. S.

    2018-06-01

    This paper considers the application of a semi-analytical method to the Schnakenberg model of a reaction-diffusion cell. The semi-analytical method is based on the Galerkin method which approximates the original governing partial differential equations as a system of ordinary differential equations. Steady-state curves, bifurcation diagrams and the region of parameter space in which Hopf bifurcations occur are presented for semi-analytical solutions and the numerical solution. The effect of feedback control, via altering various concentrations in the boundary reservoirs in response to concentrations in the cell centre, is examined. It is shown that increasing the magnitude of feedback leads to destabilization of the system, whereas decreasing this parameter to negative values of large magnitude stabilizes the system. The semi-analytical solutions agree well with numerical solutions of the governing equations.
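
The Galerkin step the abstract describes, projecting the governing PDE onto a few basis functions to obtain ODEs, can be shown on the simplest possible case: for the heat equation u_t = u_xx on [0, 1] with u(0) = u(1) = 0, a one-term expansion u ≈ a(t) sin(πx) yields a'(t) = -π²a(t), and the ODE prediction tracks a finite-difference solution of the full PDE. (The Schnakenberg system would use the same machinery with coupled two-species expansions; this toy is not that model.)

```python
import math

# Finite-difference solution of u_t = u_xx with u(x,0) = sin(pi*x).
nx, dt, t_end = 51, 1e-4, 0.1
dx = 1.0 / (nx - 1)                      # dt/dx^2 = 0.25, within stability
u = [math.sin(math.pi * i * dx) for i in range(nx)]
for _ in range(int(round(t_end / dt))):
    u = [0.0] + [u[i] + dt * (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
                 for i in range(1, nx - 1)] + [0.0]

# One-term Galerkin reduction: a'(t) = -pi^2 * a(t), a(0) = 1.
a_t = math.exp(-math.pi ** 2 * t_end)
galerkin_mid = a_t * math.sin(math.pi * 0.5)   # predicted u(0.5, t_end)
fd_mid = u[nx // 2]                            # PDE value at x = 0.5
```

For this linear problem the one-term projection is exact up to discretization error, so the two midpoint values agree closely; nonlinear kinetics such as Schnakenberg's make the reduced ODEs approximate, which is why the paper compares against full numerical solutions.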

  16. Synthesis and characterization of semi-IPNs based on PVP and PLLA; Sintese e caracterizacao de semi-IPNs envolvendo os homopolimeros PVP e PLLA

    Energy Technology Data Exchange (ETDEWEB)

    Camilo, A.P.R.; Mano, V., E-mail: mano@ufsj.edu.b [Universidade Federal de Sao Joao del Rei (UFSJ), MG (Brazil). Dept. de Ciencias Naturais; Felisberti, M.I. [Universidade Estadual de Campinas (IQ/UNICAMP), SP (Brazil). Inst. de Quimica

    2010-07-01

    The specific interest in the synthesis of semi-IPNs based on PLLA and PVP homopolymers due to the fact these are biodegradable and biocompatible, which allows us to infer applications in the medical field as sutures, implants, matrices for controlled release of drugs etc. The objective was to prepare a multicomponent material amphiphile in the form of semi-interpenetrating polymer networks, based on poly (L-lactide), PLLA, hydrophobic homopolymer, and poly (vinylpyrrolidone), PVP, hydrophilic component. The preparation of semi-IPN combined the polymerization and crosslinking of N-vinylpyrrolidone in the presence of poly (L-lactide). The products were characterized by spectroscopic and thermal methods. (author)

  17. Optimizing area under the ROC curve using semi-supervised learning.

    Science.gov (United States)

    Wang, Shijun; Li, Diana; Petrick, Nicholas; Sahiner, Berkman; Linguraru, Marius George; Summers, Ronald M

    2015-01-01

    Receiver operating characteristic (ROC) analysis is a standard methodology to evaluate the performance of a binary classification system. The area under the ROC curve (AUC) is a performance metric that summarizes how well a classifier separates two classes. Traditional AUC optimization techniques are supervised learning methods that utilize only labeled data (i.e., the true class is known for all data) to train the classifiers. In this work, inspired by semi-supervised and transductive learning, we propose two new AUC optimization algorithms hereby referred to as semi-supervised learning receiver operating characteristic (SSLROC) algorithms, which utilize unlabeled test samples in classifier training to maximize AUC. Unlabeled samples are incorporated into the AUC optimization process, and their ranking relationships to labeled positive and negative training samples are considered as optimization constraints. The introduced test samples will cause the learned decision boundary in a multidimensional feature space to adapt not only to the distribution of labeled training data, but also to the distribution of unlabeled test data. We formulate the semi-supervised AUC optimization problem as a semi-definite programming problem based on the margin maximization theory. The proposed methods SSLROC1 (1-norm) and SSLROC2 (2-norm) were evaluated using 34 (determined by power analysis) randomly selected datasets from the University of California, Irvine machine learning repository. Wilcoxon signed rank tests showed that the proposed methods achieved significant improvement compared with state-of-the-art methods. The proposed methods were also applied to a CT colonography dataset for colonic polyp classification and showed promising results.
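
The quantity being optimized, AUC, is itself just the normalized Wilcoxon rank-sum statistic: the fraction of (positive, negative) pairs the classifier orders correctly. A small sketch (toy scores; this is the plain pairwise definition, not the paper's semi-definite programming formulation):

```python
def auc(pos_scores, neg_scores):
    """AUC as the fraction of correctly ordered positive/negative pairs,
    counting ties as half-correct (equivalent to the Wilcoxon statistic)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

score = auc([0.9, 0.4], [0.7, 0.1])   # one of the four pairs is mis-ordered
```

The pairwise ranking constraints the SSLROC methods add for unlabeled samples act on exactly these orderings, which is why maximizing them in training can raise test AUC.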

  18. Elastic crack-tip stress field in a semi-strip

    Directory of Open Access Journals (Sweden)

    Victor Reut

    2018-04-01

    Full Text Available In this article the plane elasticity problem for a semi-strip with a transverse crack is investigated for different cases of the boundary conditions at the semi-strip's end. Unlike many works dedicated to this subject, the fixed singularities in the singular integral equation's kernel are considered. The integral transformations method is applied by the generalized scheme to reduce the initial problem to a one-dimensional problem. The one-dimensional problem is formulated as a vector boundary value problem, which is solved with the help of matrix differential calculus and Green's matrix apparatus. The solution of the problem is reduced to the solving of a system of three singular integral equations. Depending on the conditions given on the short edge of the semi-strip, the constructed singular integral equation can have one or two fixed singularities. A special method is applied to solve this equation with regard to the existence of the singularities. Hence the system of singular integral equations (SSIE) is solved with the help of the generalized method. The stress intensity factors (SIF) are investigated for different crack lengths. The novelty of this work lies in the application of a new approach allowing the consideration of fixed singularities in the problem of a transverse crack in an elastic semi-strip. The accuracy of the numerical results obtained with the different approaches to solving the SSIE is compared.

  19. Sparsity guided empirical wavelet transform for fault diagnosis of rolling element bearings

    Science.gov (United States)

    Wang, Dong; Zhao, Yang; Yi, Cai; Tsui, Kwok-Leung; Lin, Jianhui

    2018-02-01

    Rolling element bearings are widely used in various industrial machines, such as electric motors, generators, pumps, gearboxes, railway axles, turbines, and helicopter transmissions. Fault diagnosis of rolling element bearings is beneficial to preventing any unexpected accident and reducing economic loss. In the past years, many bearing fault detection methods have been developed. Recently, a new adaptive signal processing method called empirical wavelet transform attracts much attention from readers and engineers and its applications to bearing fault diagnosis have been reported. The main problem of empirical wavelet transform is that Fourier segments required in empirical wavelet transform are strongly dependent on the local maxima of the amplitudes of the Fourier spectrum of a signal, which connotes that Fourier segments are not always reliable and effective if the Fourier spectrum of the signal is complicated and overwhelmed by heavy noises and other strong vibration components. In this paper, sparsity guided empirical wavelet transform is proposed to automatically establish Fourier segments required in empirical wavelet transform for fault diagnosis of rolling element bearings. Industrial bearing fault signals caused by single and multiple railway axle bearing defects are used to verify the effectiveness of the proposed sparsity guided empirical wavelet transform. Results show that the proposed method can automatically discover Fourier segments required in empirical wavelet transform and reveal single and multiple railway axle bearing defects. Besides, some comparisons with three popular signal processing methods including ensemble empirical mode decomposition, the fast kurtogram and the fast spectral correlation are conducted to highlight the superiority of the proposed method.
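
The Fourier-segmentation step the paper improves on can be sketched directly: locate local maxima of the magnitude spectrum and place segment boundaries midway between adjacent dominant peaks. (A bare-bones version of the boundary detection only, on a clean two-tone signal; the wavelet filter bank and the proposed sparsity guidance are not shown, and on noisy spectra this naive peak picking is exactly what fails.)

```python
import cmath
import math

def dft_magnitude(x):
    """Naive DFT magnitude for the positive-frequency half (O(N^2), demo only)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def segment_boundaries(mag, n_peaks):
    """Midpoint bins between the n_peaks largest local maxima of the spectrum."""
    peaks = [k for k in range(1, len(mag) - 1)
             if mag[k] > mag[k - 1] and mag[k] > mag[k + 1]]
    peaks.sort(key=lambda k: mag[k], reverse=True)
    top = sorted(peaks[:n_peaks])
    return [(a + b) // 2 for a, b in zip(top, top[1:])]

fs, n = 200.0, 400
sig = [math.sin(2 * math.pi * 5 * t / fs) + math.sin(2 * math.pi * 40 * t / fs)
       for t in range(n)]
mag = dft_magnitude(sig)
bounds_hz = [b * fs / n for b in segment_boundaries(mag, 2)]
```

Each resulting band would then get one empirical wavelet filter; the sparsity-guided variant replaces the raw local-maxima criterion with a sparsity measure so that noise peaks do not spawn spurious segments.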

  20. Semi-parametric estimation for ARCH models

    Directory of Open Access Journals (Sweden)

    Raed Alzghool

    2018-03-01

    Full Text Available In this paper, we conduct semi-parametric estimation for the autoregressive conditional heteroscedasticity (ARCH) model with quasi-likelihood (QL) and asymptotic quasi-likelihood (AQL) estimation methods. The QL approach relaxes the distributional assumptions of ARCH processes. The AQL technique is obtained from the QL method when the process conditional variance is unknown. We present an application of the methods to a daily exchange rate series. Keywords: ARCH model, Quasi-likelihood (QL), Asymptotic quasi-likelihood (AQL), Martingale difference, Kernel estimator
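
A minimal version of the QL idea for an ARCH(1) process: the quasi-likelihood uses only the conditional variance structure σ_t² = a0 + a1·ε_{t-1}², not a full distributional assumption. The sketch below simulates ARCH(1) data and recovers the parameters by grid-searching the Gaussian quasi-log-likelihood. (A stand-in for the paper's estimators; the AQL case, where the conditional variance form is unknown and a kernel estimator is substituted, is not shown.)

```python
import math
import random

def simulate_arch1(a0, a1, n, rng):
    """Simulate eps_t = sigma_t * z_t with sigma_t^2 = a0 + a1 * eps_{t-1}^2."""
    eps, prev = [], 0.0
    for _ in range(n):
        sigma2 = a0 + a1 * prev ** 2
        prev = math.sqrt(sigma2) * rng.gauss(0.0, 1.0)
        eps.append(prev)
    return eps

def neg_quasi_loglik(eps, a0, a1):
    """Negative Gaussian quasi-log-likelihood built from conditional variances."""
    nll, prev = 0.0, 0.0
    for e in eps:
        sigma2 = a0 + a1 * prev ** 2
        nll += math.log(sigma2) + e ** 2 / sigma2
        prev = e
    return nll

rng = random.Random(0)
eps = simulate_arch1(0.5, 0.3, 5000, rng)
# Coarse grid search over (a0, a1); a real implementation would optimize.
grid = [(i / 20.0, j / 20.0) for i in range(2, 20) for j in range(0, 18)]
a0_hat, a1_hat = min(grid, key=lambda p: neg_quasi_loglik(eps, p[0], p[1]))
```

With 5000 observations the quasi-likelihood surface is sharp enough that even this coarse grid lands near the true (0.5, 0.3).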