Energy Technology Data Exchange (ETDEWEB)
Penna, Rodrigo [UNI-BH, Belo Horizonte, MG (Brazil). Dept. de Ciencias Biologicas, Ambientais e da Saude (DCBAS/DCET); Silva, Clemente Jose Gusmao Carneiro da [Universidade Estadual de Santa Cruz, UESC, Ilheus, BA (Brazil)]; Gomes, Paulo Mauricio Costa [Universidade FUMEC, Belo Horizonte, MG (Brazil)]
2008-07-01
The viability of building a nuclear wood densimeter based on Compton scattering of low-energy photons was studied using the Monte Carlo code MCNP-4C. A collimated 60 keV beam of gamma rays emitted by a {sup 241}Am source reaching wood blocks was simulated, and the radiation backscattered by these blocks was calculated. The scattered photons were correlated with blocks of different wood densities. The results showed a linear relationship between wood density and scattered photons, demonstrating the viability of this wood densimeter. (author)
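The linear counts-versus-density relationship reported above lends itself to a simple least-squares calibration. The sketch below uses made-up count data (purely illustrative, not the paper's measurements) to show how such a densimeter could be calibrated and then inverted to read wood density from a backscatter count:

```python
# Hypothetical calibration data: wood density (g/cm^3) vs. backscattered
# photon counts. Values are illustrative, not taken from the paper.
densities = [0.35, 0.50, 0.65, 0.80, 0.95]
counts = [1200.0, 1450.0, 1700.0, 1950.0, 2200.0]

def linear_fit(x, y):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    a = sxy / sxx
    b = my - a * mx
    return a, b

a, b = linear_fit(densities, counts)

def density_from_counts(c):
    """Invert the calibration line to estimate density from a count reading."""
    return (c - b) / a

print(round(density_from_counts(1825.0), 3))  # prints: 0.725
```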
Performance of the MTR core with MOX fuel using the MCNP4C2 code.
Shaaban, Ismail; Albarhoum, Mohamad
2016-08-01
The MCNP4C2 code was used to simulate the MTR-22 MW research reactor and perform the neutronic analysis for a new fuel, namely a MOX (U3O8&PuO2) fuel dispersed in an Al matrix, for One Neutronic Trap (ONT) and Three Neutronic Traps (TNTs) in its core. The new characteristics were compared to the original characteristics based on the U3O8-Al fuel. Experimental data for the neutronic parameters, including criticality, of the MTR-22 MW reactor with the original U3O8-Al fuel at nominal power were used to validate the calculated values and were found acceptable. The achieved results confirm that the use of MOX fuel in the MTR-22 MW will not degrade the safe operational conditions of the reactor. In addition, the use of MOX fuel in the MTR-22 MW core reduces the (235)U enrichment of the uranium fuel and the amount of (235)U loaded in the core by about 34.84% and 15.21% for the ONT and TNTs cases, respectively.
A simulation of a pebble bed reactor core by the MCNP-4C computer code
Directory of Open Access Journals (Sweden)
Bakhshayesh Moshkbar Khalil
2009-01-01
Full Text Available Lack of energy is a major crisis of our century; the irregular increase of fossil fuel costs has forced us to search for novel, cheaper, and safer sources of energy. Pebble bed reactors - an advanced new generation of reactors with specific advantages in safety and cost - might turn out to be the desired candidate for the role. The calculation of the critical height of a pebble bed reactor at room temperature, using the MCNP-4C computer code, is the main goal of this paper. In order to reduce the MCNP computing time compared to previously proposed schemes, we have devised a new simulation scheme. Different arrangements of kernels in fuel pebble simulations were investigated, and the best arrangement for decreasing the MCNP execution time (while keeping the accuracy of the results) was chosen. The neutron flux distribution and control rod worths, as well as their shadowing effects, have also been considered in this paper. All calculations done for the HTR-10 reactor core are in good agreement with experimental results.
Aldawahra Saadou; Khattab Kassem; Saba Gorge
2015-01-01
Comparative studies for conversion of the fuel from HEU to LEU in the miniature neutron source reactor (MNSR) have been performed using the MCNP4C code. The HEU fuel (UAl4-Al, 90% enriched, with Al clad) and LEU (UO2, 12.6% enriched, with zircaloy-4 alloy clad) cores have been analyzed in this study. The existing HEU core of the MNSR was analyzed to validate the neutronic model of the reactor, while the LEU core was studied to prove the possibility of fuel conversion of the existing HEU core. The propos...
Directory of Open Access Journals (Sweden)
Sedigheh Sina
2011-06-01
Full Text Available Introduction: Brachytherapy is a type of radiotherapy in which radioactive sources are used in proximity of tumors, normally for treatment of malignancies in the head, prostate and cervix. Materials and Methods: The Cs-137 Selectron source is a low-dose-rate (LDR) brachytherapy source used in a remote afterloading system for treatment of different cancers. This system uses active and inactive spherical sources of 2.5 mm diameter, which can be used in different configurations inside the applicator to obtain different dose distributions. In this study, the dose distribution at different distances from the source was first obtained around a single pellet inside the applicator in a water phantom using the MCNP4C Monte Carlo code. The simulations were then repeated for six active pellets in the applicator and for six point sources. Results: The anisotropy of the dose distribution due to the presence of the applicator was obtained by dividing the dose at each distance and angle by the dose at the same distance and an angle of 90 degrees. According to the results, the doses decreased towards the applicator tips. For example, for points at distances of 5 and 7 cm from the source and an angle of 165 degrees, such discrepancies reached 5.8% and 5.1%, respectively. By increasing the number of pellets to six, these values reached 30% for the angle of 5 degrees. Discussion and Conclusion: The results indicate that the presence of the applicator causes a significant dose decrease at the tip of the applicator compared with the dose in the transverse plane. However, treatment planning systems assume an isotropic dose distribution around the source, and this causes significant errors in treatment planning that are not negligible, especially for a large number of sources inside the applicator.
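The anisotropy calculation described in this abstract (dose at each angle divided by the dose at the same distance and 90 degrees) can be sketched as follows; the dose values below are invented for illustration and are not the study's data:

```python
# Illustrative dose readings (arbitrary units) at one fixed distance,
# indexed by polar angle in degrees. Values are made up for demonstration.
dose_at_angle = {5: 0.70, 30: 0.88, 60: 0.97, 90: 1.00, 120: 0.96, 165: 0.94}

def anisotropy_ratios(doses):
    """Divide the dose at each angle by the dose at 90 degrees,
    as described in the abstract."""
    ref = doses[90]
    return {angle: d / ref for angle, d in doses.items()}

ratios = anisotropy_ratios(dose_at_angle)
# A ratio of 0.94 at 165 degrees corresponds to a ~6% dose reduction
# toward the applicator tip.
print(round(100 * (1 - ratios[165]), 1))  # prints: 6.0
```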
Energy Technology Data Exchange (ETDEWEB)
Bagheri, Reza; Yousefinia, Hassan [Nuclear Fuel Cycle Research School (NFCRS), Nuclear Science and Technology Research Institute (NSTRI), Atomic Energy Organization of Iran, Tehran (Iran, Islamic Republic of); Moghaddam, Alireza Khorrami [Radiology Department, Paramedical Faculty, Mazandaran University of Medical Sciences, Sari (Iran, Islamic Republic of)
2017-02-15
In this work, the linear and mass attenuation coefficients, effective atomic number and electron density, mean free path, and half-value layer and tenth-value layer values of barium-bismuth-borosilicate glasses were obtained for 662 keV, 1,173 keV, and 1,332 keV gamma-ray energies using the MCNP-4C code and the XCOM program. The obtained data were then compared with available experimental data. The MCNP-4C code and XCOM program results were in good agreement with the experimental data. Barium-bismuth-borosilicate glasses show good gamma-ray shielding properties.
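The shielding quantities listed above follow from the linear attenuation coefficient through standard expressions (HVL = ln 2/μ, TVL = ln 10/μ, MFP = 1/μ). A minimal sketch, with an illustrative μ value rather than the paper's measured ones:

```python
import math

def shielding_parameters(mu_linear_per_cm):
    """Derive half-value layer (HVL), tenth-value layer (TVL) and mean
    free path (MFP), all in cm, from a linear attenuation coefficient
    given in cm^-1."""
    hvl = math.log(2) / mu_linear_per_cm
    tvl = math.log(10) / mu_linear_per_cm
    mfp = 1.0 / mu_linear_per_cm
    return hvl, tvl, mfp

# Illustrative value for a dense shielding glass at 662 keV (not the paper's data).
hvl, tvl, mfp = shielding_parameters(0.50)
print(round(hvl, 3), round(tvl, 3), round(mfp, 3))  # prints: 1.386 4.605 2.0
```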
NEPHTIS: 2D/3D validation elements using MCNP4c and TRIPOLI4 Monte-Carlo codes
Energy Technology Data Exchange (ETDEWEB)
Courau, T.; Girardi, E. [EDF R and D/SINETICS, 1av du General de Gaulle, F92141 Clamart CEDEX (France); Damian, F.; Moiron-Groizard, M. [DEN/DM2S/SERMA/LCA, CEA Saclay, F91191 Gif-sur-Yvette CEDEX (France)
2006-07-01
High Temperature Reactors (HTRs) appear as a promising concept for the next generation of nuclear power applications. The CEA, in collaboration with AREVA-NP and EDF, is developing a core modeling tool dedicated to the prismatic block-type reactor. NEPHTIS (Neutronics Process for HTR Innovating System) is a deterministic code system based on a standard two-step transport-diffusion approach (APOLLO2/CRONOS2). Validation of such deterministic schemes usually relies on Monte-Carlo (MC) codes used as a reference. However, when dealing with large HTR cores, fission source stabilization is rather poor with MC codes. In spite of this, it is shown in this paper that MC simulations may be used as a reference for a wide range of configurations. The first part of the paper is devoted to 2D and 3D MC calculations of a HTR core with control devices. Comparisons between the MCNP4c and TRIPOLI4 MC codes are performed and show very consistent results. Finally, the last part of the paper is devoted to the code-to-code validation of the NEPHTIS deterministic scheme. (authors)
Directory of Open Access Journals (Sweden)
Aldawahra Saadou
2015-06-01
Full Text Available Comparative studies for conversion of the fuel from HEU to LEU in the miniature neutron source reactor (MNSR) have been performed using the MCNP4C code. The HEU fuel (UAl4-Al, 90% enriched, with Al clad) and LEU (UO2, 12.6% enriched, with zircaloy-4 alloy clad) cores have been analyzed in this study. The existing HEU core of the MNSR was analyzed to validate the neutronic model of the reactor, while the LEU core was studied to prove the possibility of fuel conversion of the existing HEU core. The proposed LEU core contained the same number of fuel pins as the HEU core. All other structural materials and dimensions of the HEU and LEU cores were the same, except for an increase in the radius of the control rod material from 0.195 to 0.205 cm, keeping the outer diameter of the control rod unchanged in the LEU core. The effective multiplication factor (keff), excess reactivity (ρex), control rod worth (CRW), shutdown margin (SDM), safety reactivity factor (SRF), delayed neutron fraction (βeff) and the neutron fluxes in the irradiation tubes for the existing and the potential LEU fuel were investigated. The results showed that the safety parameters and the neutron fluxes in the irradiation tubes of the LEU fuel were in good agreement with the HEU results. Therefore, the LEU fuel was validated as a suitable choice for fuel conversion of the MNSR in the future.
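The safety parameters named in this abstract follow from the effective multiplication factor through standard definitions: ρ = (k - 1)/k, with the control rod worth taken as the reactivity difference between the rod-out and rod-in states. The sketch below uses hypothetical keff values, not the MNSR results:

```python
def reactivity(k_eff):
    """Reactivity in absolute units: rho = (k - 1) / k."""
    return (k_eff - 1.0) / k_eff

# Illustrative eigenvalues (not the paper's results): keff with the control
# rod withdrawn and with the rod fully inserted.
k_rods_out = 1.004
k_rod_in = 0.993

rho_excess = reactivity(k_rods_out)                   # excess reactivity
crw = reactivity(k_rods_out) - reactivity(k_rod_in)   # control rod worth
sdm = -reactivity(k_rod_in)                           # shutdown margin
srf = crw / rho_excess                                # safety reactivity factor

# The rod worth should exceed the excess reactivity for safe shutdown.
print(srf > 1.0)  # prints: True
```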
Khattab, K; Sulieman, I
2009-04-01
The MCNP-4C code, based on the probabilistic approach, was used to model the 3D configuration of the core of the Syrian miniature neutron source reactor (MNSR). The continuous energy neutron cross sections from the ENDF/B-VI library were used to calculate the thermal and fast neutron fluxes in the inner and outer irradiation sites of MNSR. The thermal fluxes in the MNSR inner irradiation sites were also measured experimentally by the multiple foil activation method ((197)Au (n, gamma) (198)Au and (59)Co (n, gamma) (60)Co). The foils were irradiated simultaneously in each of the five MNSR inner irradiation sites to measure the thermal neutron flux and the epithermal index in each site. The calculated and measured results agree well.
Energy Technology Data Exchange (ETDEWEB)
Coelho, T.S.; Yoriyaz, H. [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Fernandes, M.A.R. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Fac. de Medicina. Servico de Radioterapia; Louzada, M.J.Q. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Aracatuba, SP (Brazil). Curso de Medicina Veterinaria
2010-07-01
Although they are no longer manufactured, the {sup 90}Sr+{sup 90}Y applicators acquired in the 1990s are still in use, since {sup 90}Sr has a half-life of 28.5 years. These applicators have calibration certificates given by their manufacturers, but few have been recalibrated. It therefore becomes necessary to carry out thorough dosimetry of these applicators. This paper presents a dosimetric analysis of the radial dose profiles emitted by a {sup 90}Sr+{sup 90}Y beta therapy applicator, using the MCNP-4C code to simulate the radial dose profiles and radiochromic films to obtain them experimentally. The simulated values were compared with the results of the experimental measurements; both curves show similar behavior, which may validate the use of MCNP-4C and radiochromic films for this type of dosimetry. (author)
Energy Technology Data Exchange (ETDEWEB)
Coelho, Talita S.; Yoriyaz, Helio [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Fernandes, Marco A.R., E-mail: tasallesc@gmail.co [UNESP, Botucatu, SP (Brazil). Faculdade de Medicina. Servico de Radioterapia; Louzada, Mario J.Q. [UNESP, Aracatuba, SP (Brazil). Curso de Medicina Veterinaria
2011-07-01
Although they are no longer manufactured, the {sup 90}Sr + {sup 90}Y applicators acquired in the 1990s are still in use, since {sup 90}Sr has a half-life of 28.5 years. These applicators have calibration certificates given by their manufacturers, but few have been recalibrated. It therefore becomes necessary to carry out thorough dosimetry of these applicators. This paper presents a dosimetric analysis of the radial dose profiles emitted by a {sup 90}Sr + {sup 90}Y beta therapy applicator, using the MCNP-4C code to simulate the radial dose profiles and radiochromic films to obtain them experimentally. The simulated values were compared with the results of the experimental measurements; both curves show similar behavior, which may validate the use of MCNP-4C and radiochromic films for this type of dosimetry. (author)
Directory of Open Access Journals (Sweden)
Somayeh Gholami
2010-06-01
Full Text Available Introduction: Gamma Knife is an instrument specially designed for treating brain disorders. In the Gamma Knife, 201 narrow beams from cobalt-60 sources intersect at an isocenter point to treat brain tumors. The tumor is placed at the isocenter and is treated by the emitted gamma rays. Therefore, there is a high dose at this point, while a low dose is delivered to the normal tissue surrounding the tumor. Material and Method: In the current work, the MCNP simulation code was used to simulate the Gamma Knife. The calculated values were compared to the experimental ones and to previous works. Dose distributions were compared for different collimators in a water phantom and in the Zubal brain-equivalent phantom. The dose profiles were obtained along the x, y and z axes. Result: The evaluation of the developed code was performed using experimental data, and we found good agreement between our simulation and the experimental data. Discussion: Our results showed that the skull bone makes a high contribution to both scattered and absorbed dose. In other words, inserting the exact materials of the brain and other organs of the head into the digital phantom improves the quality of treatment planning. This work concerns the measurement of absorbed dose and the improvement of the treatment planning procedure in Gamma Knife radiosurgery of the brain.
Directory of Open Access Journals (Sweden)
Mehdi Zehtabian
2010-09-01
Full Text Available Introduction: Brachytherapy is the use of small encapsulated radioactive sources in close vicinity of tumors. Various methods are used to obtain the dose distribution around brachytherapy sources. TG-43 is a dosimetry protocol proposed by the AAPM for determining dose distributions around brachytherapy sources. The goal of this study is to update this protocol for the presence of bone and air inhomogeneities. Material and Methods: To update the dose rate constant parameter of the TG-43 formalism, MCNP4C simulations were performed in phantoms composed of water-bone and water-air combinations. The values of dose at different distances from the source in both homogeneous and inhomogeneous phantoms were estimated in spherical tally cells of 0.5 mm radius using the F6 tally. Results: The percentages of dose reduction in the presence of air and bone inhomogeneities for the Cs-137 source were found to be 4% and 10%, respectively. Therefore, the updated dose rate constant (Λ) will also decrease by the same percentages. Discussion and Conclusion: It can be easily concluded that such dose variations are more noticeable when using lower energy sources such as Pd-103 or I-125.
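The effect of a reduced dose rate constant can be illustrated with the TG-43 point-source approximation, D(r) = Sk · Λ · (r0/r)^2 · g(r) · φan. The numbers below are hypothetical; only the 10% bone reduction for Cs-137 is taken from the abstract:

```python
def dose_rate_point(S_k, Lambda, r_cm, g_r, phi_an=1.0):
    """TG-43 point-source approximation:
    D(r) = S_k * Lambda * (r0/r)^2 * g(r) * phi_an, with r0 = 1 cm."""
    r0 = 1.0
    return S_k * Lambda * (r0 / r_cm) ** 2 * g_r * phi_an

# Illustrative inputs (hypothetical, not the study's data). The dose rate
# constant is scaled down by the 10% bone reduction reported for Cs-137.
Lambda_water = 1.0
Lambda_bone = Lambda_water * (1 - 0.10)

d_water = dose_rate_point(S_k=1.0, Lambda=Lambda_water, r_cm=2.0, g_r=0.98)
d_bone = dose_rate_point(S_k=1.0, Lambda=Lambda_bone, r_cm=2.0, g_r=0.98)
print(round(1 - d_bone / d_water, 2))  # prints: 0.1
```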
Directory of Open Access Journals (Sweden)
Hammam Oktajianto
2014-12-01
Full Text Available The gas-cooled nuclear reactor is a Generation IV reactor which has been receiving significant attention due to many desired characteristics such as inherent safety, modularity, relatively low cost, short construction period, and easy financing. The high temperature reactor (HTR) pebble-bed, one type of gas-cooled reactor concept, is getting attention. In HTR pebble-bed design, the radius and enrichment of the fuel kernel are the key parameters that can be chosen freely to determine the desired value of criticality. This paper models a 10 MW HTR pebble-bed and determines effective enrichments and radii of the fuel kernel for reaching the criticality value of the reactor. The TRISO coated fuel particles were modelled explicitly and distributed in the fuelled region of the fuel pebbles using a simple-cubic (SC) lattice. The fuel pebbles and moderator balls were distributed in the core zone using a body-centred cubic lattice, assuming fresh fuel, with the fuel enrichment varied from 7-17% in 1% steps and the fuel kernel radius varied from 175-300 µm in 25 µm steps. The geometrical model of the full reactor was obtained by using the lattice and universe facilities provided by MCNP4C. The details of the model are discussed with the necessary simplifications. Criticality calculations were conducted with the Monte Carlo transport code MCNP4C and the continuous-energy nuclear data library ENDF/B-VI. From the calculation results it can be concluded that the effective combinations of enrichment and kernel radius for achieving a critical condition were: enrichments of 15-17% at a radius of 200 µm, 13-17% at 225 µm, 12-15% at 250 µm, 11-14% at 275 µm, and 10-13% at 300 µm; these effective enrichments and kernel radii can be considered for the 10 MW HTR. Keywords—MCNP4C, HTR, enrichment, radius, criticality
Zamani, M.; Kasesaz, Y.; Khalafi, H.; Pooya, S. M. Hosseini
Boron Neutron Capture Therapy (BNCT) is used for treatment of many diseases, including brain tumors, in many medical centers. In this method, a target area (e.g., the head of the patient) is irradiated by an optimized and suitable neutron field such as that of a research nuclear reactor. Aiming at protection of the healthy tissues located in the vicinity of the irradiated tissue, and based on the ALARA principle, it is required to prevent unnecessary exposure of these vital organs. In this study, using a numerical simulation method (the MCNP4C code), the absorbed dose in the target tissue and the equivalent dose in different sensitive tissues of a patient treated by BNCT are calculated. For this purpose, we have used the parameters of the MIRD standard phantom. The equivalent dose in 11 sensitive organs located in the vicinity of the target, and the total equivalent dose in the whole body, have been calculated. The results show that the absorbed doses in the tumor and in normal brain tissue equal 30.35 Gy and 0.19 Gy, respectively. Also, the total equivalent dose in the 11 sensitive organs, other than the tumor and normal brain tissue, is equal to 14 mGy. The maximum equivalent doses in organs other than the brain and tumor occur in the tissues of the lungs and thyroid and are equal to 7.35 mSv and 3.00 mSv, respectively.
Energy Technology Data Exchange (ETDEWEB)
Zehtabian, M; Zaker, N; Sina, S [Shiraz University, Shiraz, Fars (Iran, Islamic Republic of); Meigooni, A Soleimani [Comprehensive Cancer Center of Nevada, Las Vegas, Nevada (United States)
2015-06-15
Purpose: Different versions of the MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP code in the dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters, such as dose rate constant, radial dose function, and anisotropy function, of different brachytherapy sources, i.e. Pd-103, I-125, Ir-192, and Cs-137, were calculated in a water phantom. The results obtained by three versions of the Monte Carlo code (MCNP4C, MCNPX, MCNP5) were compared for low and high energy brachytherapy sources. Then the cross section library of the MCNP4C code was changed to ENDF/B-VI release 8, which is used in the MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code were compared with the other codes. Results: The results of these investigations indicate that for high energy sources, the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However, for low energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained by MCNP4C and the two other codes. The differences between the g(r) values calculated using MCNP4C and MCNP5 at a distance of 6 cm were found to be about 17% and 28% for I-125 and Pd-103, respectively. The results obtained with MCNP4C-revised and MCNPX were similar. However, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6 cm. Conclusion: The results indicate that using the MCNP4C code for dosimetry of low energy brachytherapy sources can cause large errors in the results. Therefore, it is recommended not to use this code for low energy sources unless its cross section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX lies in their cross section libraries.
Simulation of a nuclear densimeter using the Monte Carlo code MCNP-4C (Simulação de um densímetro nuclear utilizando o código Monte Carlo MCNP-4C)
Penna, Rodrigo; da Silva, Clemente José Gusmão Carneiro; Gomes, Paulo Maurício Costa
2008-01-01
The Monte Carlo code MCNP-4C was used to simulate a nuclear densimeter capable of measuring the density of wood at its surface. A low-energy (E = 60 keV) americium-241 source was used, which allows greater operational safety. The results showed that wood density can be measured from the radiation scattered by the Compton effect. The technique represents an advance over the current methodology.
Determination of {beta}{sub eff} using MCNP-4C2 and application to the CROCUS and PROTEUS reactors
Energy Technology Data Exchange (ETDEWEB)
Vollaire, J. [European Organization for Nuclear Research CERN, CH-1211 Geneve 23 (Switzerland); Plaschy, M.; Jatuff, F. [Paul Scherrer Institut PSI, CH-5232 Villigen PSI (Switzerland); Chawla, R. [Paul Scherrer Institut PSI, CH-5232 Villigen PSI (Switzerland); Ecole Polytechnique Federale de Lausanne EPFL, CH-1015 Lausanne (Switzerland)
2006-07-01
A new Monte Carlo method for the determination of {beta}{sub eff} has been recently developed and tested using appropriate models of the experimental reactors CROCUS and PROTEUS. The current paper describes the applied methodology and highlights the resulting improvements compared to the simplest MCNP approach, i.e. the 'prompt method' technique. In addition, the flexibility advantages of the developed method are presented. Specifically, the possibility to obtain the effective delayed neutron fraction {beta}{sub eff} per delayed neutron group, per fissioning nuclide and per reactor region is illustrated. Finally, the MCNP predictions of {beta}{sub eff} are compared to the results of deterministic calculations. (authors)
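The 'prompt method' mentioned above is commonly formulated as βeff ≈ 1 - kp/k, where kp is the eigenvalue of a criticality run with delayed neutron production suppressed and k is the eigenvalue of the full run. A minimal sketch with illustrative eigenvalues (not the CROCUS or PROTEUS results):

```python
def beta_eff_prompt_method(k_total, k_prompt):
    """'Prompt method' estimate of the effective delayed neutron fraction:
    beta_eff ~ 1 - k_prompt / k_total, where k_prompt comes from an
    eigenvalue run with delayed neutrons switched off."""
    return 1.0 - k_prompt / k_total

# Illustrative eigenvalues, invented for demonstration.
print(round(beta_eff_prompt_method(1.00000, 0.99240), 5))  # prints: 0.0076
```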
Pauzi, A. M.
2013-06-01
The neutron transport code Monte Carlo N-Particle (MCNP), well known as a gold standard for predicting nuclear reactions, was used to model the small nuclear reactor core called "U-batteryTM", which was developed by the University of Manchester and Delft University of Technology. The paper introduces the concept of modeling the small reactor core, a high temperature reactor (HTR) type with small coated TRISO fuel particles in a graphite matrix, using the MCNP4C software. The criticality of the core was calculated using the software and analysed by changing key parameters such as coolant type, fuel type and enrichment level, cladding materials, and control rod type. The criticality results from the simulation were validated against the SCALE 5.1 results of [1] M Ding and J L Kloosterman, 2010. The data produced from these analyses will be used as part of the process of proposing an initial core layout and a provisional list of materials for the newly designed reactor core. In the future, the criticality study will be continued with different core configurations and geometries.
Energy Technology Data Exchange (ETDEWEB)
Tada, A., E-mail: ariane.tada@gmail.co [Instituto de Pesquisas Energeticas e Nucleares (CEN/IPEN/CNEN-SP), Sao Paulo, SP (Brazil). Centro de Engenharia Nuclear; Instituto de Pesquisas Tecnologicas (IPT), Sao Paulo, SP (Brazil); Salles, T.; Yoriyaz, H., E-mail: hyoriyaz@ipen.b, E-mail: tasallesc@gmail.co [Instituto de Pesquisas Energeticas e Nucleares (CEN/IPEN/CNEN-SP), Sao Paulo, SP (Brazil). Centro de Engenharia Nuclear; Fernandes, M.A.R, E-mail: marfernandes@fmb.unesp.b [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Fac. de Medicina. Dept. de Dermatologia e Radioterapia
2010-07-01
The present work aimed to analyze the distribution profile of the therapeutic radiation dose produced by radioactive sources used in radiotherapy procedures for superficial lesions of the skin. The experimental measurements for dosimetric analysis of the radiation sources were compared with calculations obtained from a computer system based on the Monte Carlo method. The results obtained from the calculations using the MCNP-4C code showed good agreement with the experimental measurements. A comparison of different treatment modalities allows an indication of the most appropriate procedure for each clinical case. (author)
Energy Technology Data Exchange (ETDEWEB)
Nasrabadi, M.N., E-mail: mnnasrabadi@ast.ui.ac.ir [Department of Nuclear Engineering, Faculty of Advanced Sciences and Technologies, University of Isfahan, Isfahan 81746-73441 (Iran, Islamic Republic of); Bakhshi, F.; Jalali, M.; Mohammadi, A. [Department of Nuclear Engineering, Faculty of Advanced Sciences and Technologies, University of Isfahan, Isfahan 81746-73441 (Iran, Islamic Republic of)
2011-12-11
Nuclear-based explosive detection methods can detect explosives by identifying their elemental components, especially nitrogen. Thermal neutron capture reactions have been used to detect the 10.8 MeV prompt gamma ray emitted following radiative neutron capture by {sup 14}N nuclei. We aimed to study the feasibility of using field-portable prompt gamma neutron activation analysis (PGNAA) along with improved nuclear equipment to detect and identify explosives, illicit substances or landmines. A {sup 252}Cf radio-isotopic source was embedded in a cylinder made of high-density polyethylene (HDPE) and the cylinder was then placed in another cylindrical container filled with water. Measurements were performed on high nitrogen content compounds such as melamine (C{sub 3}H{sub 6}N{sub 6}). Melamine powder in a HDPE bottle was placed underneath the vessel containing the water and the neutron source. Gamma rays were detected using two NaI(Tl) crystals. The experimental setup was simulated with MCNP4c code calculations. The theoretical calculations and experimental measurements were in good agreement, indicating that this method can be used for detection of explosives and illicit drugs.
El-Khayatt, A. M.; Ali, A. M.; Singh, Vishwanath P.
2014-01-01
The mass attenuation coefficients, μ/ρ, total interaction cross-sections, σt, and mean free paths (MFP) of some Heavy Metal Oxide (HMO) glasses, with potential applications as gamma ray shielding materials, have been investigated using the MCNP-4C code. Appreciable variations are noted for all parameters when changing the photon energy and the chemical composition of the HMO glasses. The numerical simulation results are compared with experimental data wherever possible. Comparisons are also made with predictions from the XCOM program in the energy region from 1 keV to 100 MeV. The good agreement observed indicates that the chosen Monte Carlo method may be employed to make additional calculations on the photon attenuation characteristics of different glass systems, a capability particularly useful in cases where no analogous experimental data exist.
Hybrid codes: Methods and applications
Energy Technology Data Exchange (ETDEWEB)
Winske, D. (Los Alamos National Lab., NM (USA)); Omidi, N. (California Univ., San Diego, La Jolla, CA (USA))
1991-01-01
In this chapter we discuss "hybrid" algorithms used in the study of low frequency electromagnetic phenomena, where one or more ion species are treated kinetically via standard PIC methods used in particle codes and the electrons are treated as a single charge-neutralizing massless fluid. Other types of hybrid models are possible, as discussed in Winske and Quest, but hybrid codes with particle ions and massless fluid electrons have become the most common for simulating space plasma physics phenomena in the last decade, as we discuss in this paper.
An assessment of the MCNP4C weight window
Energy Technology Data Exchange (ETDEWEB)
Christopher N. Culbertson; John S. Hendricks
1999-12-01
A new, enhanced weight window generator suite has been developed for MCNP version 4C. The new generator correctly estimates importances in either a user-specified, geometry-independent, orthogonal grid or in MCNP geometric cells. The geometry-independent option alleviates the need to subdivide the MCNP cell geometry for variance reduction purposes. In addition, the new suite corrects several pathologies in the existing MCNP weight window generator. The new generator is applied in a set of five variance reduction problems. The improved generator is compared with the weight window generator applied in MCNP4B. The benefits of the new methodology are highlighted, along with a description of its limitations. The authors also provide recommendations for utilization of the weight window generator.
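The weight-window idea whose generator is assessed here (splitting overweight particles and rouletting underweight ones, both in a variance-neutral way) can be illustrated with a toy 1-D deep-penetration problem. This is a sketch of the general technique, not of MCNP's implementation; all parameters are invented:

```python
import random

def transmit(n_slabs=20, p_survive=0.7, w_low=0.25, w_up=4.0,
             n_hist=20000, seed=1):
    """Toy 1-D deep-penetration estimate of the slab transmission
    probability p_survive**n_slabs, using implicit capture plus a
    weight window: split above w_up, roulette below w_low. Both games
    preserve the expected weight, so the estimator stays unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_hist):
        bank = [1.0]                       # particle weights for this history
        for _ in range(n_slabs):
            survivors = []
            for w in bank:
                w *= p_survive             # implicit capture: reduce weight
                if w > w_up:               # split into n lighter copies
                    n = int(w / w_up) + 1
                    survivors.extend([w / n] * n)
                elif w < w_low:            # roulette: kill with prob 1 - w/w_low
                    if rng.random() < w / w_low:
                        survivors.append(w_low)
                else:
                    survivors.append(w)
            bank = survivors
        total += sum(bank)
    return total / n_hist

est = transmit()
exact = 0.7 ** 20
print(f"estimate {est:.2e} vs analytic {exact:.2e}")
```

Without the roulette floor, most histories would carry vanishing weights through all twenty slabs; the window concentrates work on particles that still matter, which is the same motivation as the MCNP generator's importance estimates.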
Measurement of the neutron spectrum by the multi-sphere method using a BF3 counter
Directory of Open Access Journals (Sweden)
Khabaz Rahim
2011-01-01
Full Text Available The multi-sphere method, a neutron detection technique, has been improved with a BF3 long cylindrical counter as a thermal detector located in the center of seven spheres with diameters ranging from 3.5 to 12 inches. The energy response functions of the system have been determined by applying the MCNP4C Monte Carlo code over the energy range 10^-8 MeV to 18 MeV. A new shadow cone has been designed to account for scattered neutrons. Although the newly designed shadow cone is shorter, its attenuation coefficient has been improved. To evaluate the system, the neutron spectrum of a 241Am-Be source has been measured.
Report on HOM experimental methods and code
Shinton, I R R; Flisgen, T
2013-01-01
Experimental methods and the various codes used are reported on, with the aim of understanding the signals picked up from the higher order modes in the third harmonic cavities within the ACC39 module at FLASH. Both commercial computer codes and codes written for the express purpose of understanding the sensitivity of the modal profiles to geometrical errors and other sources of experimental error have been used.
A mathematical method to calculate efficiency of BF3 detectors
Institute of Scientific and Technical Information of China (English)
SI Fenni; HU Qingyuan; PENG Taiping
2009-01-01
In order to calculate the absolute efficiency of the BF3 detector, the MCNP/4C code is first applied to calculate the relative efficiency of the detector, and the absolute efficiency is then obtained through mathematical techniques. Finally, an energy response curve of the BF3 detector for 1~20 MeV neutrons is derived. It turns out that the efficiency of the BF3 detector is relatively uniform for 2~16 MeV neutrons.
Fractal methods in image analysis and coding
Neary, David
2001-01-01
In this thesis we present an overview of image processing techniques which use fractal methods in some way. We show how these fields relate to each other, and examine various aspects of fractal methods in each area. The three principal fields of image processing and analysis that we examine are texture classification, image segmentation and image coding. In the area of texture classification, we examine fractal dimension estimators, comparing these methods to other methods in use, a...
Daures, J; Gouriou, J; Bordy, J M
2011-03-01
This work has been performed within the frame of the European Union ORAMED project (Optimisation of RAdiation protection for MEDical staff). The main goal of the project is to improve standards of protection for medical staff in procedures resulting in potentially high exposures, and to develop methodologies for better assessing and reducing exposures to medical staff. Work Package WP2 is involved in the development of practical eye-lens dosimetry in interventional radiology. This study is complementary to the part of the ENEA report concerning the calculation, with the MCNP-4C code, of the conversion factors related to the operational quantity H(p)(3). In this study, a set of energy- and angular-dependent conversion coefficients (H(p)(3)/K(a)), in the newly proposed square cylindrical phantom made of ICRU tissue, have been calculated with the Monte-Carlo codes PENELOPE and MCNP5. The H(p)(3) values have been determined in terms of absorbed dose, according to the definition of this quantity, and also with the kerma approximation, as formerly reported in ICRU reports. At low photon energies (up to 1 MeV), the results obtained with the two methods are consistent. Nevertheless, large differences appear at higher energies. This is mainly due to the lack of electronic equilibrium, especially for small-angle incidences. The values of the conversion coefficients obtained with the MCNP-4C code published by ENEA agree well with the kerma approximation calculations obtained with PENELOPE. We also performed the same calculations with the code MCNP5 with two types of tallies: F6 for the kerma approximation and *F8 for estimating the absorbed dose, which is, as known, due to secondary electrons. The PENELOPE and MCNP5 results agree for the kerma approximation and for the absorbed dose calculation of H(p)(3), and prove that, for photon energies larger than 1 MeV, the transport of the secondary electrons has to be taken into account.
Simulation of the BNCT of Brain Tumors Using MCNP Code: Beam Designing and Dose Evaluation
Directory of Open Access Journals (Sweden)
Fatemeh Sadat Rasouli
2012-09-01
Introduction: BNCT is an effective method to destroy brain tumor cells while sparing healthy tissue. The recommended epithermal neutron flux is 10⁹ n/cm²s, which is most effective for deep-seated tumors. In this paper, it is shown that using a D-T neutron source and optimizing the Beam Shaping Assembly (BSA) allows brain tumors to be treated in a reasonable time while meeting all IAEA recommended criteria. Materials and Methods: The proposed BSA, based on a D-T neutron generator, consists of a neutron multiplier system, moderators, a reflector, and a collimator. The simulated Snyder head phantom was used to evaluate dose profiles in tissue due to irradiation by the designed beam. The Monte Carlo code MCNP-4C was used to perform these calculations. Results: The neutron beam associated with the designed and optimized BSA has an adequate epithermal flux at the beam port, and neutron and gamma contamination are removed as far as possible. Moreover, it was shown that increasing J/Φ, a measure of beam directionality, improves beam performance and the survival of the healthy tissue surrounding the tumor. Conclusion: According to the simulation results, the proposed system based on a D-T neutron source, which is suitable for in-hospital installation, satisfies all in-air parameters. Moreover, the depth-dose curves demonstrate the proper performance of the designed beam in tissue. The results are comparable with the performance of other facilities.
Numerical method improvement for a subchannel code
Energy Technology Data Exchange (ETDEWEB)
Ding, W.J.; Gou, J.L.; Shan, J.Q. [Xi' an Jiaotong Univ., Shaanxi (China). School of Nuclear Science and Technology
2016-07-15
Previous studies showed that subchannel codes spend most of their CPU time solving the matrix formed by the conservation equations. Traditional matrix-solving methods such as Gaussian elimination and Gauss-Seidel iteration cannot meet the computational efficiency requirements. Therefore, a new algorithm for solving the block penta-diagonal matrix is designed based on Stone's incomplete LU (ILU) decomposition method. In the new algorithm, the original block penta-diagonal matrix is decomposed into a block upper triangular matrix and a block lower triangular matrix, as well as a small nonzero remainder matrix. After that, the LU algorithm is applied iteratively until convergence. To compare computational efficiency, the newly designed algorithm is applied to the ATHAS code in this paper. The calculation results show that more than 80% of the total CPU time can be saved with the new ILU algorithm for a 324-channel PWR assembly problem, compared with the original ATHAS code.
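The split-and-iterate idea can be sketched in miniature. Below is a scalar (not block) analogue with made-up coefficients: the pentadiagonal matrix A is split as A = M + R, where the tridiagonal part M is factorized exactly by the Thomas algorithm and the small remainder R (the outer diagonals) is lagged, iterating M x_{k+1} = b - R x_k until convergence. This only illustrates the decomposition-plus-iteration structure, not Stone's block ILU from the paper.

```python
# Scalar analogue of an incomplete-factorization solve for a
# pentadiagonal system: split A = M + R (M tridiagonal, R the two
# outer diagonals) and iterate M x_{k+1} = b - R x_k.

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a: sub-, b: main, c: super-diagonal."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n = 20
main = [10.0] * n            # strong diagonal, ensures convergence
near = [-1.0] * n            # 1st sub/super diagonals (kept in M)
far = [-0.5] * n             # 2nd sub/super diagonals (moved to R)
bvec = [1.0] * n

def apply_R(x):              # R x: only the two outer diagonals
    y = [0.0] * n
    for i in range(n):
        if i >= 2: y[i] += far[i] * x[i - 2]
        if i < n - 2: y[i] += far[i] * x[i + 2]
    return y

x = [0.0] * n
for _ in range(50):
    rx = apply_R(x)
    rhs = [bvec[i] - rx[i] for i in range(n)]   # b - R x_k
    x = thomas(near, main, near, rhs)           # exact solve with M

def apply_A(x):              # full pentadiagonal operator
    y = apply_R(x)
    for i in range(n):
        y[i] += main[i] * x[i]
        if i >= 1: y[i] += near[i] * x[i - 1]
        if i < n - 1: y[i] += near[i] * x[i + 1]
    return y

residual = max(abs(apply_A(x)[i] - bvec[i]) for i in range(n))
```

With a diagonally dominant matrix the lagged-remainder iteration contracts quickly; the block variant in the paper plays the same game with matrix-valued diagonal entries.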
Energy Technology Data Exchange (ETDEWEB)
Mendonca, Dalila; Neves, Lucio P.; Perini, Ana P., E-mail: anapaula.perini@ufu.br [Universidade Federal de Uberlandia (INFIS/UFU), Uberlandia, MG (Brazil). Instituto de Fisica; Santos, William S.; Caldas, Linda V.E. [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2015-07-01
A special pencil type ionization chamber, developed at the Instituto de Pesquisas Energeticas e Nucleares, was characterized by means of Monte Carlo simulation to determine the influence of its components on its response. The main differences between this ionization chamber and commercial ionization chambers are related to its configuration and constituent materials. The simulations were made employing the MCNP-4C Monte Carlo code. The highest influence was obtained for the body of PMMA: 7.0%. (author)
A FINE GRANULAR JOINT SOURCE CHANNEL CODING METHOD
Institute of Scientific and Technical Information of China (English)
Zhuo Li; Shen Lansun; Zhu Qing
2003-01-01
An improved FGS (Fine Granular Scalability) coding method is proposed in this letter, which is based on human visual characteristics. This method adjusts the FGS coding frame rate according to an evaluation of the video sequences, so as to improve the coding efficiency and the subjective perceived quality of the reconstructed images. Finally, a fine granular joint source channel coding is proposed based on the source coding method, which not only utilizes network resources efficiently, but also guarantees the reliable transmission of video information.
Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S
2016-03-01
Monte Carlo simulations are widely used for the calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-section library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracy of the final outcome of these simulations is very sensitive to the accuracy of the cross-section library. Several investigators have shown that inaccuracies in some of the cross-section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources calculated with three different versions of the MCNP code: MCNP4C, MCNP5, and MCNPX. In these simulations, for each source type the source and phantom geometries, as well as the number of photons, were kept identical, thus eliminating possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and the other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher-energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes. PACS number(s): 87.56.bg.
Three Methods for Occupation Coding Based on Statistical Learning
Directory of Open Access Journals (Sweden)
Gweon Hyukjun
2017-03-01
Occupation coding, an important task in official statistics, refers to coding a respondent's text answer into one of many hundreds of occupation codes. To date, occupation coding is still at least partially conducted manually, at great expense. We propose three methods for automatic coding: combining separate models for the detailed occupation codes and for aggregate occupation codes, a hybrid method that combines a duplicate-based approach with a statistical learning algorithm, and a modified nearest neighbor approach. Using data from the German General Social Survey (ALLBUS), we show that the proposed methods improve on both the coding accuracy of the underlying statistical learning algorithm and the coding accuracy of duplicates where duplicates exist. Further, we find that defining duplicates based on n-gram variables (a concept from text mining) is preferable to a definition based on exact string matches.
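A minimal nearest-neighbour coder over character n-grams can illustrate the flavour of the third method. The training answers and occupation codes below are invented for the example; the paper's actual algorithm and the ALLBUS data are richer.

```python
# Toy nearest-neighbour occupation coder: represent each text answer
# as a set of character trigrams and assign the code of the training
# answer with the highest Jaccard similarity. Codes are hypothetical.

def ngrams(text, n=3):
    text = " " + text.lower() + " "
    return {text[i:i + n] for i in range(len(text) - n + 1)}

training = [
    ("software developer", "2511"),      # invented occupation codes
    ("primary school teacher", "2341"),
    ("truck driver", "8332"),
]

def classify(answer):
    grams = ngrams(answer)
    def jaccard(other):
        g = ngrams(other)
        return len(grams & g) / len(grams | g)
    best = max(training, key=lambda tc: jaccard(tc[0]))
    return best[1]

code = classify("driver of a truck")     # paraphrase of a training answer
```

The n-gram representation is what lets a paraphrase like "driver of a truck" still land on the "truck driver" code even though the strings do not match exactly.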
A Subband Coding Method for HDTV
Chung, Wilson; Kossentini, Faouzi; Smith, Mark J. T.
1995-01-01
This paper introduces a new HDTV coder based on motion compensation, subband coding, and high order conditional entropy coding. The proposed coder exploits the temporal and spatial statistical dependencies inherent in the HDTV signal by using intra- and inter-subband conditioning for coding both the motion coordinates and the residual signal. The new framework provides an easy way to control the system complexity and performance, and inherently supports multiresolution transmission. Experimental results show that the coder outperforms MPEG-2, while still maintaining relatively low complexity.
A Fast Fractal Image Compression Coding Method
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
Fast algorithms for reducing the encoding complexity of fractal image coding have recently been an important research topic. The search for the best-matched domain block is the most computation-intensive part of the fractal encoding process. In this paper, a fast fractal approximation coding scheme, implemented on a personal computer and based on matching within a range block's neighbourhood, is presented. Experimental results show that the proposed algorithm is very simple to implement, fast in encoding time and high in compression ratio, while the PSNR is almost the same as that of Barnsley's fractal block coding.
A Rate-Distortion Optimized Coding Method for Region of Interest in Scalable Video Coding
Directory of Open Access Journals (Sweden)
Hongtao Wang
2015-01-01
original ones is also considered during rate-distortion optimization so that a reasonable trade-off between coding efficiency and decoding drift can be made. Besides, a new Lagrange multiplier derivation method is developed for further coding performance improvement. Experimental results demonstrate that the proposed method achieves significant bitrate saving compared to existing methods.
Code Verification by the Method of Manufactured Solutions
Energy Technology Data Exchange (ETDEWEB)
SALARI,KAMBIZ; KNUPP,PATRICK
2000-06-01
A procedure for code verification by the Method of Manufactured Solutions (MMS) is presented. Although the procedure requires a certain amount of creativity and skill, we show that MMS can be applied to a variety of engineering codes which numerically solve partial differential equations. This is illustrated by detailed examples from computational fluid dynamics. The strength of the MMS procedure is that it can identify any coding mistake that affects the order of accuracy of the numerical method. A set of examples using a blind-test protocol demonstrates the kinds of coding mistakes that can (and cannot) be exposed via the MMS code verification procedure. The principal advantage of the MMS procedure over traditional methods of code verification is that code capabilities are tested in full generality. The procedure thus results in a high degree of confidence that all coding mistakes which prevent the equations from being solved correctly have been identified.
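The essence of MMS can be shown in a few lines (a toy ODE example, not one of the report's CFD cases): manufacture an exact solution, derive the forcing term it implies, run the solver at two resolutions, and check the observed order of accuracy against the theoretical one.

```python
import math

# Manufactured solution y(t) = cos(t) for the ODE y'(t) = g(t):
# choosing g(t) = -sin(t) makes cos(t) the exact solution, so the
# solver's error is exactly measurable and its observed order of
# accuracy can be compared with the expected one (1 for Euler).

def euler(g, y0, T, steps):
    h, y, t = T / steps, y0, 0.0
    for _ in range(steps):
        y += h * g(t)
        t += h
    return y

g = lambda t: -math.sin(t)
T = 1.0
e1 = abs(euler(g, 1.0, T, 200) - math.cos(T))   # error at coarse step
e2 = abs(euler(g, 1.0, T, 400) - math.cos(T))   # error at half the step
order = math.log(e1 / e2, 2)                    # observed order, ~1
```

A bug that silently degrades the scheme (say, evaluating g at the wrong time level) would show up here as an observed order below the theoretical value, which is exactly the kind of mistake MMS is designed to expose.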
Totally Coded Method for Signal Flow Graph Algorithm
Institute of Scientific and Technical Information of China (English)
XU Jing-bo; ZHOU Mei-hua
2002-01-01
After a code-table has been established by means of node association information from the signal flow graph, the totally coded method (TCM) is applied purely in the domain of code operations, without any figure-searching algorithm. The code-series (CS) have a holo-information nature, so that both the content and the sign of each gain-term can be determined via the coded method. The principle of this method is simple and it is suited for computer programming. The capability of computer-aided analysis for switched current networks (SIN) can thus be enhanced.
Recent Developments in the MCNP-POLIMI Postprocessing Code
Energy Technology Data Exchange (ETDEWEB)
Pozzi, S.A.
2004-12-17
The design and analysis of measurements performed with organic scintillators rely on the use of Monte Carlo codes to simulate the interaction of neutrons and photons, originating from fission and other reactions, with the materials present in the system and the radiation detectors. MCNP-PoliMi is a modification of the MCNP-4C code that realistically models the physics of secondary particle emission from fission and other processes. This characteristic allows for the simulation of the higher moments of the distribution of the number of neutrons and photons in a multiplying system. The present report describes recent additions to the MCNP-PoliMi post-processing code. These include the simulation of detector dead time, multiplicity, and third-order statistics.
Permutation Matrix Method for Dense Coding Using GHZ States
Institute of Scientific and Technical Information of China (English)
JIN Rui-Bo; CHEN Li-Bing; WANG Fa-Qiang; SU Zhi-Kun
2008-01-01
We present a new method, called the permutation matrix method, to perform dense coding using Greenberger-Horne-Zeilinger (GHZ) states. We show that this method makes the study of dense coding systematic and regular. It also has high potential to be realized physically.
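For background, the standard dense-coding argument can be checked numerically. This sketch uses a two-qubit Bell pair for brevity, not the three-qubit GHZ states of the paper, and ordinary matrices rather than its permutation-matrix formalism: applying one of I, X, Z, XZ to the sender's qubit yields four mutually orthogonal states, which is what lets two classical bits travel on one transmitted qubit.

```python
import itertools, math

# Dense coding in vectors: start from the Bell state (|00>+|11>)/sqrt(2)
# and apply I, X, Z or XZ to the first qubit. The four resulting states
# are mutually orthogonal, so the receiver can recover two classical
# bits from the one transmitted qubit plus the shared pair.

s = 1 / math.sqrt(2)
bell = [s, 0.0, 0.0, s]          # amplitudes over |00>, |01>, |10>, |11>

def apply_first(gate, v):
    """Apply a 2x2 gate to the first qubit of a 2-qubit state vector."""
    out = [0.0] * 4
    for a in range(2):
        for b in range(2):
            for ap in range(2):
                out[2 * a + b] += gate[a][ap] * v[2 * ap + b]
    return out

I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]
XZ = [[0, -1], [1, 0]]           # product of X and Z

states = [apply_first(G, bell) for G in (I, X, Z, XZ)]
dots = [sum(u[i] * v[i] for i in range(4))
        for u, v in itertools.combinations(states, 2)]   # all ~0
```

The four outputs are exactly the Bell basis, so a Bell measurement distinguishes them perfectly; the GHZ construction extends the same orthogonality argument to more parties.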
New Methods for Lossless Image Compression Using Arithmetic Coding.
Howard, Paul G.; Vitter, Jeffrey Scott
1992-01-01
Identifies four components of a good predictive lossless image compression method: (1) pixel sequence, (2) image modeling and prediction, (3) error modeling, and (4) error coding. Highlights include the Laplace distribution and a comparison of the multilevel progressive method for image coding with the prediction by partial matching (PPM) method.…
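The interplay of components (2)-(4) can be made concrete with a toy predictor (an illustration, not the authors' coder): predicting each sample from its left neighbour concentrates the residuals near zero, which is exactly what makes them cheap to entropy-code.

```python
import math
from collections import Counter

# Toy predictive stage of a lossless coder: predict each sample from
# its left neighbour and code the residual. On smooth data the
# residual distribution is sharply peaked (Laplace-like), so its
# empirical entropy is far below that of the raw samples.

def entropy(values):
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

row = [min(255, 2 * i) for i in range(128)]        # smooth ramp "scanline"
residuals = [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

h_raw = entropy(row)          # raw samples: many distinct values
h_res = entropy(residuals)    # residuals: almost all equal to 2

# the transform is lossless: prefix sums recover the original row
rebuilt, acc = [], 0
for r in residuals:
    acc += r
    rebuilt.append(acc)
```

An arithmetic coder driven by a model of the residual distribution would then approach h_res bits per sample instead of h_raw.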
Calibration Methods for Reliability-Based Design Codes
DEFF Research Database (Denmark)
Gayton, N.; Mohamed, A.; Sørensen, John Dalsgaard
2004-01-01
The calibration methods are applied to define the optimal code format according to some target safety levels. The calibration procedure can be seen as a specific optimization process where the control variables are the partial factors of the code. Different methods are available in the literature...
Lattice Boltzmann method fundamentals and engineering applications with computer codes
Mohamad, A A
2014-01-01
Introducing the Lattice Boltzmann Method in a readable manner, this book provides detailed examples with complete computer codes. It avoids the most complicated mathematics and physics without sacrificing the basic fundamentals of the method.
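As a taste of how compact such codes can be (a minimal sketch in the book's spirit, not code from the book): the D1Q2 lattice Boltzmann scheme for 1-D diffusion needs just two populations per node and a collide-then-stream loop.

```python
# Minimal D1Q2 lattice Boltzmann solver for 1-D diffusion.
# Two populations per node, f1 moving right and f2 moving left;
# collision relaxes both toward the equilibrium rho/2, then
# streaming shifts them one node with periodic boundaries.

n, steps, omega = 50, 200, 1.0
f1 = [0.0] * n
f2 = [0.0] * n
f1[n // 2] = f2[n // 2] = 0.5        # initial unit-mass spike

for _ in range(steps):
    rho = [f1[i] + f2[i] for i in range(n)]
    # collision step: relax toward equilibrium feq = rho/2
    f1 = [f + omega * (r / 2 - f) for f, r in zip(f1, rho)]
    f2 = [f + omega * (r / 2 - f) for f, r in zip(f2, rho)]
    # streaming step
    f1 = [f1[(i - 1) % n] for i in range(n)]
    f2 = [f2[(i + 1) % n] for i in range(n)]

final_rho = [f1[i] + f2[i] for i in range(n)]
mass = sum(final_rho)                # conserved: stays 1.0
```

Mass is conserved exactly by both collision and streaming, and the initial spike spreads into a diffusion profile; richer lattices (D2Q9, etc.) follow the same collide-and-stream pattern.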
A novel method of generating and remembering international morse codes
Digital Repository Service at National Institute of Oceanography (India)
Charyulu, R.J.K.
A novel method of generating and remembering International Morse Code is presented in this paper. The method requires only memorizing 4 key sentences and knowledge of how to write the binary equivalents of the decimal numerals 1 to 16. However much...
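The paper's four key sentences are not reproduced in the abstract, but the binary connection it mentions can be illustrated. The mapping below (bit 0 as dot, bit 1 as dash) is an assumption for illustration only, not necessarily the author's exact convention.

```python
# Illustration of the binary-to-Morse connection (not the paper's
# specific mnemonic): interpret the bits of a number as dot (0) and
# dash (1), so the sixteen 4-bit values enumerate all sixteen
# length-4 Morse-style patterns.

def bits_to_pattern(value, length):
    bits = format(value, "0{}b".format(length))
    return "".join("." if b == "0" else "-" for b in bits)

patterns = [bits_to_pattern(v, 4) for v in range(16)]
# e.g. 0 -> "....", 15 -> "----"
```

Shorter codes fall out the same way from the 1-, 2- and 3-bit values, which is presumably why remembering small binary numbers is enough to regenerate the whole table.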
An Overview of the Monte Carlo Methods, Codes, & Applications Group
Energy Technology Data Exchange (ETDEWEB)
Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-08-30
This report sketches the work of the Group to deliver first-principle Monte Carlo methods, production quality codes, and radiation transport-based computational and experimental assessments using the codes MCNP and MCATK for such applications as criticality safety, non-proliferation, nuclear energy, nuclear threat reduction and response, radiation detection and measurement, radiation health protection, and stockpile stewardship.
A New Video Coding Method Based on Improving Detail Regions
Institute of Scientific and Technical Information of China (English)
Anonymous
2002-01-01
The Moving Pictures Expert Group (MPEG) and H.263 standard coding methods are widely used in video compression. However, in applications such as conference telephony or videophone, the visual quality of detail regions such as the eyes and mouth at the decoder does not satisfy viewers. A new coding method based on improving detail regions is presented in this paper. Experimental results show that this method can improve the visual quality at the decoder.
PhyloCSF: a comparative genomics method to distinguish protein coding and non-coding regions.
Lin, Michael F; Jungreis, Irwin; Kellis, Manolis
2011-07-01
As high-throughput transcriptome sequencing provides evidence for novel transcripts in many species, there is a renewed need for accurate methods to classify small genomic regions as protein coding or non-coding. We present PhyloCSF, a novel comparative genomics method that analyzes a multispecies nucleotide sequence alignment to determine whether it is likely to represent a conserved protein-coding region, based on a formal statistical comparison of phylogenetic codon models. We show that PhyloCSF's classification performance in 12-species Drosophila genome alignments exceeds all other methods we compared in a previous study. We anticipate that this method will be widely applicable as the transcriptomes of many additional species, tissues and subcellular compartments are sequenced, particularly in the context of ENCODE and modENCODE, and as interest grows in long non-coding RNAs, often initially recognized by their lack of protein-coding potential rather than conserved RNA secondary structures. The Objective Caml source code and executables for GNU/Linux and Mac OS X are freely available at http://compbio.mit.edu/PhyloCSF. Contact: mlin@mit.edu; manoli@mit.edu.
Directory of Open Access Journals (Sweden)
Lida Gholamkar
2016-09-01
Introduction: One of the best methods for the diagnosis and control of breast cancer is mammography. The importance of mammography is directly related to its value in the detection of breast cancer in the early stages, which leads to more effective treatment. The purpose of this article was to calculate the X-ray spectrum of a mammography system with the Monte Carlo codes MCNPX and MCNP5. Materials and Methods: The device simulated using the MCNP code was a Planmed Nuance digital mammography device (Planmed Oy, Finland), equipped with an amorphous selenium detector. Different anode/filter materials, such as molybdenum-rhodium (Mo-Rh), molybdenum-molybdenum (Mo-Mo), tungsten-tin (W-Sn), tungsten-silver (W-Ag), tungsten-palladium (W-Pd), tungsten-aluminum (W-Al), tungsten-molybdenum (W-Mo), molybdenum-aluminum (Mo-Al), tungsten-rhodium (W-Rh), rhodium-aluminum (Rh-Al), and rhodium-rhodium (Rh-Rh), were simulated in this study. The voltage range of the X-ray tube was between 24 and 34 kV with a 2 kV interval. Results: Plots of photon flux versus energy were produced for the different anode/filter combinations. Comparison with the findings reported by others indicated acceptable consistency. Also, the X-ray spectra obtained from the MCNP5 and MCNPX codes for the W-Ag and W-Rh combinations were compared. We compared the present results with the reported data of MCNP4C and IPEM report No. 78 for the Mo-Mo, Mo-Rh, and W-Al combinations. Conclusion: The MCNPX calculation outcomes showed acceptable results in the low-energy X-ray beam range (10-35 keV). The simulated spectra obtained for the different anode/filter combinations were in good conformity with the findings of previous research.
A modified phase-coding method for absolute phase retrieval
Xing, Y.; Quan, C.; Tay, C. J.
2016-12-01
Fringe projection is one of the most robust techniques for three-dimensional (3D) shape measurement. Various fringe projection methods have been proposed to address different issues in profilometry, and phase coding is one such technique, employed to determine fringe orders for absolute phase retrieval. However, this method is prone to fringe-order errors when dealing with high-frequency fringes. This paper studies the phase error introduced by system non-linearity in phase coding and provides a mathematical model for the maximum number of achievable codewords in a given scheme. In addition, a modified phase-coding method is proposed for phase error compensation. An experimental study validates the theoretical analysis of the maximum number of achievable codewords and illustrates the performance of the modified phase-coding method.
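The role of the codeword in absolute phase retrieval can be sketched as follows. The stair-step normalization used here is a simplified illustration, not the authors' exact scheme: the phase-coded pattern yields a codeword that decodes to the fringe order k, and the absolute phase is then recovered from the wrapped phase phi as Phi = phi + 2*pi*k.

```python
import math

# Simplified absolute-phase recovery by phase coding: a stair-step
# codeword encodes the fringe order k; the wrapped phase phi in
# [-pi, pi) is then unwrapped as Phi = phi + 2*pi*k.

def recover(phi_wrapped, codeword, n_orders):
    k = round(codeword * (n_orders - 1))     # decode fringe order
    return phi_wrapped + 2 * math.pi * k

n_orders = 8
true_phase = 20.0                            # some absolute phase (rad)
k_true = math.floor((true_phase + math.pi) / (2 * math.pi))
phi = true_phase - 2 * math.pi * k_true      # wrapped into [-pi, pi)
codeword = k_true / (n_orders - 1)           # normalized stair value

recovered = recover(phi, codeword, n_orders)
```

The rounding step is where non-linearity bites: if noise or gamma distortion shifts the codeword by more than half a stair, k jumps by one, which is the fringe-order error the paper's model and compensation target.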
Combined backscatter and transmission method for nuclear density gauge
Directory of Open Access Journals (Sweden)
Golgoun Seyed Mohammad
2015-01-01
Nowadays, the use of nuclear density gauges is very common, owing to their ability to work in harsh industrial environments. In this study, to reduce the error in continuous density (ρ) measurement, backscatter and transmission are used in combination simultaneously. For this purpose, a 137Cs source, for which Compton scattering dominates, and two detectors were simulated with the MCNP4C code to measure the density of three materials. Important advantages of this combined radiometric gauge are the diminished influence of the attenuation coefficient μ and, therefore, improved linearity of the regression.
Beam neutron energy optimization for boron neutron capture therapy using Monte Carlo method
Directory of Open Access Journals (Sweden)
Ali Pazirandeh
2006-06-01
In the last two decades, the optimal neutron energy for the treatment of deep-seated tumors in boron neutron capture therapy has been under thorough study, from the viewpoints of neutron physics and of the chemical compounds of the boron carrier. Although the neutron absorption cross section of boron is high (3836 b), the treatment of deep-seated tumors such as glioblastoma multiforme (GBM) requires a beam of neutrons of higher energy that can penetrate deeply into the brain and thermalize in the proximity of the tumor. The dose from recoil protons associated with fast neutrons, however, poses some constraints on the maximum neutron energy that can be used in the treatment. For this reason, neutrons in the epithermal energy range of 10 eV-10 keV are generally considered the most appropriate. The simulations were carried out by Monte Carlo methods using the MCBNCT and MCNP4C codes, along with a cross-section library in 290 groups extracted from the ENDF/B6 main library. The optimal neutron energy for deep-seated tumors depends on the size and depth of the tumor. Our estimated optimized energy for a tumor 5 cm wide and 1-2 cm thick, located at a depth of 5 cm, is in the range of 3-5 keV.
Direct GPS P-Code Acquisition Method Based on FFT
Institute of Scientific and Technical Information of China (English)
LI Hong; LU Mingquan; FENG Zhenming
2008-01-01
Recently, direct acquisition of the GPS P-code has received considerable attention as a means to enhance the anti-jamming and anti-spoofing capabilities of GPS receivers. This paper describes a P-code acquisition method that uses block searches with large-scale FFTs to search code phases and carrier frequency offsets in parallel. To limit memory use, especially when implemented in hardware, only the largest correlation result, with its position information, is preserved after searching a block of resolution cells in both the time and frequency domains. A second search is used to solve the code phase slip problem induced by the code frequency offset. Simulation results demonstrate that the probability of detection is above 0.99 for carrier-to-noise density ratios in excess of 40 dB-Hz when the predetection integration time is 0.8 ms and 6 non-coherent integrations are used in the analysis.
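The workhorse of such parallel code-phase searching is circular correlation computed via the FFT: one transform pass tests every code phase at once. A toy-scale sketch (64-chip random code, single block, no carrier-frequency dimension):

```python
import cmath, random

# FFT-based parallel code-phase search (toy scale): correlate the
# received signal against all cyclic shifts of a +/-1 spreading code
# in one pass, via corr = IFFT( FFT(rx) * conj(FFT(code)) ).

def fft(x):
    n = len(x)
    if n == 1:
        return x[:]
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

def ifft(x):
    n = len(x)
    y = fft([v.conjugate() for v in x])
    return [v.conjugate() / n for v in y]

random.seed(1)
n = 64
code = [random.choice([-1.0, 1.0]) for _ in range(n)]
delay = 23
rx = [code[(i - delay) % n] for i in range(n)]      # delayed replica

spec = [a * b.conjugate() for a, b in zip(fft(rx), fft(code))]
corr = ifft(spec)
peak = max(range(n), key=lambda i: abs(corr[i]))    # estimated code phase
```

A real P-code acquirer repeats this over frequency bins and much longer blocks, keeping only the largest peak per block, which is the memory-saving strategy the paper describes.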
He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2015-03-01
Coded exposure photography makes motion deblurring a well-posed problem. In coded exposure, the integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter's frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is significant for coded exposure. In this paper, an improved criterion for the optimal code search is proposed by analyzing the relationship between the code length and the number of ones in the code, considering the effect of noise on code selection with an affine noise model. The optimal code is then obtained using a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time consumed in searching for the optimal code decreases with the presented method, and the restored image has better subjective quality and superior objective evaluation values.
Kasesaz, Y; Khalafi, H; Rahmani, F
2013-12-01
Optimization of the Beam Shaping Assembly (BSA) has been performed using the MCNP4C Monte Carlo code to shape the 2.45 MeV neutrons that are produced in a D-D neutron generator. The optimal design of the BSA, chosen by considering in-air figures of merit (FOM), consists of 70 cm of Fluental as a moderator, 30 cm of Pb as a reflector, 2 mm of (6)Li as a thermal neutron filter and 2 mm of Pb as a gamma filter. The neutron beam can be evaluated by in-phantom parameters, from which the therapeutic gain can be derived. Direct evaluation of both sets of FOMs (in-air and in-phantom) is very time-consuming. In this paper, a Response Matrix (RM) method is suggested to reduce the computing time. This method is based on considering the neutron spectrum at the beam exit and calculating the contribution of the various dose components in the phantom to build the Response Matrix. Results show good agreement between direct calculation and the RM method.
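The bookkeeping behind a response matrix can be sketched with invented numbers: each column holds the in-phantom dose components produced by a unit fluence in one energy bin of the beam-exit spectrum, so evaluating a candidate beam reduces to a matrix-vector product instead of a fresh transport run.

```python
# Response-matrix dose estimate (illustrative numbers only): R[i][j]
# is the dose of component i (e.g. boron, gamma, fast-neutron) per
# unit fluence in energy bin j, precomputed once by transport runs;
# a candidate beam spectrum phi then yields the component doses via
# a single matrix-vector product.

R = [
    [2.0, 5.0, 1.0],     # hypothetical boron-dose responses per bin
    [0.5, 0.8, 1.5],     # hypothetical gamma-dose responses
    [0.1, 0.4, 3.0],     # hypothetical fast-neutron responses
]
phi = [0.2, 0.7, 0.1]    # candidate beam spectrum over 3 energy bins

doses = [sum(R[i][j] * phi[j] for j in range(3)) for i in range(3)]
```

Once R is tabulated, scanning many BSA variants only requires recomputing phi and this product, which is where the claimed time saving comes from.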
A GPU code for analytic continuation through a sampling method
Nordström, Johan; Schött, Johan; Locht, Inka L. M.; Di Marco, Igor
We here present a code for performing analytic continuation of fermionic Green's functions and self-energies as well as bosonic susceptibilities on a graphics processing unit (GPU). The code is based on the sampling method introduced by Mishchenko et al. (2000), and is written for the widely used CUDA platform from NVidia. Detailed scaling tests are presented, for two different GPUs, in order to highlight the advantages of this code with respect to standard CPU computations. Finally, as an example of possible applications, we provide the analytic continuation of model Gaussian functions, as well as more realistic test cases from many-body physics.
2D arc-PIC code description: methods and documentation
Timko, Helga
2011-01-01
Vacuum discharges are one of the main limiting factors for future linear collider designs such as that of the Compact LInear Collider. To optimize machine efficiency, maintaining the highest feasible accelerating gradient below a certain breakdown rate is desirable; understanding breakdowns can therefore help us to achieve this goal. As a part of ongoing theoretical research on vacuum discharges at the Helsinki Institute of Physics, the build-up of plasma can be investigated through the particle-in-cell method. For this purpose, we have developed the 2D Arc-PIC code introduced here. We present an exhaustive description of the 2D Arc-PIC code in two parts. In the first part, we introduce the particle-in-cell method in general and detail the techniques used in the code. In the second part, we provide a documentation and derivation of the key equations occurring in the code. The code is original work of the author, written in 2010, and is therefore under the copyright of the author. The development of the code h...
DETERMINISTIC TRANSPORT METHODS AND CODES AT LOS ALAMOS
Energy Technology Data Exchange (ETDEWEB)
J. E. MOREL
1999-06-01
The purposes of this paper are to: Present a brief history of deterministic transport methods development at Los Alamos National Laboratory from the 1950's to the present; Discuss the current status and capabilities of deterministic transport codes at Los Alamos; and Discuss future transport needs and possible future research directions. Our discussion of methods research necessarily includes only a small fraction of the total research actually done. The works that have been included represent a very subjective choice on the part of the author that was strongly influenced by his personal knowledge and experience. The remainder of this paper is organized in four sections: the first relates to deterministic methods research performed at Los Alamos, the second relates to production codes developed at Los Alamos, the third relates to the current status of transport codes at Los Alamos, and the fourth relates to future research directions at Los Alamos.
An Empirical Evaluation of Coding Methods for Multi-Symbol Alphabets.
Moffat, Alistair; And Others
1994-01-01
Evaluates the performance of different methods of data compression coding in several situations. Huffman's code, arithmetic coding, fixed codes, fast approximations to arithmetic coding, and splay coding are discussed in terms of their speed, memory requirements, and proximity to optimal performance. Recommendations for the best methods of…
Methods and computer codes for nuclear systems calculations
Indian Academy of Sciences (India)
B P Kochurov; A P Knyazev; A Yu Kwaretzkheli
2007-02-01
Some numerical methods for reactor cell, sub-critical systems and 3D models of nuclear reactors are presented. The methods are developed for steady states and space–time calculations. Computer code TRIFON solves space-energy problem in (, ) systems of finite height and calculates heterogeneous few-group matrix parameters of reactor cells. These parameters are used as input data in the computer code SHERHAN solving the 3D heterogeneous reactor equation for steady states and 3D space–time neutron processes simulation. Modification of TRIFON was developed for the simulation of space–time processes in sub-critical systems with external sources. An option of SHERHAN code for the system with external sources is under development.
A Method of Coding and Decoding in Underwater Image Transmission
Institute of Scientific and Technical Information of China (English)
程恩
2001-01-01
A new method of coding and decoding in a system for underwater image transmission is introduced, including a rapid digital frequency synthesizer for multiple frequency shift keying, an image data generator, an image grayscale decoder with an intelligent fuzzy algorithm, and image restoration and display on a microcomputer.
P-code enhanced method for processing encrypted GPS signals without knowledge of the encryption code
Meehan, Thomas K. (Inventor); Thomas, Jr., Jess Brooks (Inventor); Young, Lawrence E. (Inventor)
2000-01-01
In the preferred embodiment, an encrypted GPS signal is down-converted from RF to baseband to generate two quadrature components for each RF signal (L1 and L2). Separately and independently for each RF signal and each quadrature component, the four down-converted signals are counter-rotated with a respective model phase, correlated with a respective model P code, and then successively summed and dumped over presum intervals substantially coincident with chips of the respective encryption code. Without knowledge of the encryption-code signs, the effect of encryption-code sign flips is then substantially reduced by selected combinations of the resulting presums between associated quadrature components for each RF signal, separately and independently for the L1 and L2 signals. The resulting combined presums are then summed and dumped over longer intervals and further processed to extract amplitude, phase and delay for each RF signal. Precision of the resulting phase and delay values is approximately four times better than that obtained from straight cross-correlation of L1 and L2. This improved method provides the following options: separate and independent tracking of the L1-Y and L2-Y channels; separate and independent measurement of amplitude, phase and delay L1-Y channel; and removal of the half-cycle ambiguity in L1-Y and L2-Y carrier phase.
Research on coding and decoding method for digital levels.
Tu, Li-fen; Zhong, Si-dong
2011-01-20
A new coding and decoding method for digital levels is proposed. It is based on an area-array CCD sensor and adopts mixed coding technology. By taking advantage of redundant information in the digital image signal, the contradiction whereby the field of view and the image resolution restrict each other in digital level measurement is overcome, and geodetic leveling becomes easier. The experimental results demonstrate that the uncertainty of measurement is 1 mm when the measuring range is between 2 m and 100 m, which can meet practical needs.
Institute of Scientific and Technical Information of China (English)
[No author listed]
2002-01-01
Based on studies of Reed-Solomon codes and orthogonal space-time block codes over the Rayleigh fading channel, a theoretical method for estimating the performance of Reed-Solomon codes concatenated with orthogonal space-time block codes is presented in this paper, and an upper bound on the bit error rate is also obtained. Computer simulations show that the required signal-to-noise ratio is reduced by about 15 dB or more at a bit error rate of 10^-4 when orthogonal space-time block codes are concatenated with Reed-Solomon (15,6) codes over the Rayleigh fading channel.
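As a concrete example of an orthogonal space-time block code, the two-antenna Alamouti scheme (chosen here as an assumption; the abstract does not name the specific OSTBC) can be written down and its defining orthogonality, which enables simple linear decoding at the receiver, checked numerically:

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Alamouti 2x2 orthogonal space-time block:
    rows are time slots, columns are transmit antennas."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

# Orthogonality: S^H S = (|s1|^2 + |s2|^2) * I, so the two symbols can be
# separated at the receiver with simple linear combining.
S = alamouti_encode(1 + 1j, 2 - 1j)
gram = S.conj().T @ S
```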
Energy Technology Data Exchange (ETDEWEB)
Takeda, Mauro Noriaki
2006-07-01
The present work describes a new methodology for modelling the behaviour of the activity in a 4{pi}{beta}-{gamma} coincidence system. The detection efficiency for electrons in the proportional counter and for gamma radiation in the NaI(Tl) detector was calculated using the Monte Carlo program MCNP4C. Another Monte Carlo code was developed which follows the path in the disintegration scheme from the initial state of the precursor radionuclide to the ground state of the daughter nucleus. Every step of the disintegration scheme is sorted by random numbers, taking into account the probabilities of all {beta}{sup -} branches, electron-capture branches, transition probabilities and internal conversion coefficients. Once the final state is reached, beta and electron-capture events and gamma transitions are accounted for in three spectra: beta, gamma and coincidence. Variation of the beta efficiency was performed by simulating an energy cut-off or the use of absorbers (Collodion). The radionuclides selected for simulation were: {sup 134}Cs and {sup 72}Ga, which disintegrate by {beta}{sup -} transition; {sup 133}Ba, which disintegrates by electron capture; and {sup 35}S, which is a pure beta emitter. For the latter, the Efficiency Tracing technique was simulated. The extrapolation curves obtained by Monte Carlo were fitted to the experimental points by the Least Squares Method, and the results were compared to the Linear Extrapolation method. (author)
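The random-number sorting of disintegration branches can be sketched as inverse-transform sampling over branch probabilities. The two-branch scheme below is hypothetical, not an actual decay scheme:

```python
import random

def pick_branch(branches, rng):
    """Select one decay branch from {label: probability} by comparing a
    uniform random number against the cumulative branch probabilities."""
    u = rng.random()
    cum = 0.0
    for label, prob in branches.items():
        cum += prob
        if u < cum:
            return label
    return label  # guard against floating-point rounding when u ~ 1.0

# Hypothetical scheme: 70% beta- to an excited level, 30% direct to ground.
rng = random.Random(42)
branches = {"beta_excited": 0.7, "beta_ground": 0.3}
counts = {"beta_excited": 0, "beta_ground": 0}
for _ in range(100_000):
    counts[pick_branch(branches, rng)] += 1
```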
CNC LATHE MACHINE PRODUCING NC CODE BY USING DIALOG METHOD
Directory of Open Access Journals (Sweden)
Yakup TURGUT
2004-03-01
Full Text Available In this study, an NC code generation program utilising the Dialog Method was developed for turning centres. Initially, CNC lathe turning methods and tool path development techniques were reviewed briefly. By using geometric definition methods, the tool path was generated and a CNC part program was developed for a FANUC control unit. The developed program makes the CNC part program generation process easy. The program was developed using the BASIC 6.0 programming language, while the material and cutting tool databases were created and supported with the help of ACCESS 7.0.
Quantization Skipping Method for H.264/AVC Video Coding
Institute of Scientific and Technical Information of China (English)
Won-seon SONG; Min-cheol HONG
2010-01-01
This paper presents a quantization skipping method for the H.264/AVC video coding standard. In order to reduce the computational cost of the quantization process following the integer discrete cosine transform of H.264/AVC, a quantization skipping condition is derived from an analysis of the integer transform and quantization procedures. The experimental results show that the proposed algorithm can reduce the computational cost by about 10% to 25%.
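A skipping condition of this general kind can be illustrated with a uniform dead-zone quantizer: a block may bypass quantization entirely whenever no coefficient is large enough to produce a nonzero level. The threshold below is a simplified sketch, not the paper's exact derivation for H.264/AVC:

```python
import math

def quantize_block(coeffs, qstep, f=1.0 / 6.0):
    """Uniform dead-zone quantizer (conceptually like the H.264/AVC inter
    form): level = floor(|c| / qstep + f) with the sign restored."""
    return [int(math.floor(abs(c) / qstep + f)) * (1 if c >= 0 else -1)
            for c in coeffs]

def quantize_block_with_skip(coeffs, qstep, f=1.0 / 6.0):
    """Skip per-coefficient quantization entirely when nothing can survive:
    floor(|c|/qstep + f) == 0 whenever |c| < qstep * (1 - f)."""
    if max(abs(c) for c in coeffs) < qstep * (1.0 - f):
        return [0] * len(coeffs)  # skipped: the block is known to be all-zero
    return quantize_block(coeffs, qstep, f)
```

The skip path replaces one division and rounding per coefficient with a single comparison against the block maximum, which is the flavor of saving the paper reports.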
Bolewski, A; Ciechanowski, M; Dydejczyk, A; Kreft, A
2008-04-01
The effect of the detector characteristics on the performance of an isotopic neutron source device for measuring thermal neutron absorption cross section (Sigma) has been examined by means of Monte Carlo simulations. Three specific experimental arrangements, alternately with BF(3) counters and (3)He counters of the same sizes, have been modelled using the MCNP-4C code. Results of Monte Carlo calculations show that devices with BF(3) counters are more sensitive to Sigma, but high-pressure (3)He counters offer faster assays.
A coded VEP method to measure interhemispheric transfer time (IHTT).
Li, Yun; Bin, Guangyu; Hong, Bo; Gao, Xiaorong
2010-03-19
Interhemispheric transfer time (IHTT) is an important parameter for research on the information conduction time across the corpus callosum between the two hemispheres. There are several traditional methods used to estimate the IHTT, including the reaction time (RT) method, the evoked potential (EP) method and the measure based on the transcranial magnetic stimulation (TMS). The present study proposes a novel coded VEP method to estimate the IHTT based on the specific properties of the m-sequence. These properties include good signal-to-noise ratio (SNR) and high noise tolerance. Additionally, calculation of the circular cross-correlation function is sensitive to the phase difference. The method presented in this paper estimates the IHTT using the m-sequence to encode the visual stimulus and also compares the results with the traditional flash VEP method. Furthermore, with the phase difference of the two responses calculated using the circular cross-correlation technique, the coded VEP method could obtain IHTT results, which does not require the selection of the utilized component.
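A minimal numpy sketch of the core idea — encode the stimulus with an m-sequence and read the response lag off the peak of a circular cross-correlation — might look as follows (the LFSR taps, noise model and 3-chip lag are illustrative assumptions):

```python
import numpy as np

def m_sequence(taps, nbits, length):
    """Generate a +/-1 m-sequence from a Fibonacci LFSR with the given taps."""
    state = [1] * nbits
    seq = []
    for _ in range(length):
        seq.append(1.0 if state[-1] else -1.0)
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return np.array(seq)

# 63-chip m-sequence from a 6-bit LFSR (taps giving the recurrence
# b_n = b_{n-5} ^ b_{n-6}, i.e. the primitive polynomial x^6 + x + 1).
code = m_sequence(taps=(5, 4), nbits=6, length=63)

def circular_lag(reference, response):
    """Delay of `response` relative to `reference`, from the peak of their
    circular cross-correlation (computed via FFT)."""
    xcorr = np.fft.ifft(np.fft.fft(response) * np.conj(np.fft.fft(reference))).real
    return int(np.argmax(xcorr))

# Simulate two hemispheric responses: the "far" one lags by 3 chips.
rng = np.random.default_rng(1)
left = code + 0.3 * rng.standard_normal(63)
right = np.roll(code, 3) + 0.3 * rng.standard_normal(63)
lag_left = circular_lag(code, left)
lag_right = circular_lag(code, right)
transfer_chips = (lag_right - lag_left) % 63
```

The sharp, two-valued autocorrelation of the m-sequence is what makes the peak (and hence the phase difference) easy to localize even in noise.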
Comparison of a laboratory spectrum of Eu-152 with results of simulation using the MCNP code
Energy Technology Data Exchange (ETDEWEB)
Rodenas, J. [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain); Gallardo, S. [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain)], E-mail: sergalbe@iqn.upv.es; Ortiz, J. [Laboratorio de Radiactividad Ambiental, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain)
2007-09-21
Detectors used for gamma spectrometry must be calibrated for each geometry considered in environmental radioactivity laboratories. This calibration is performed using a standard solution containing gamma emitter sources. Nevertheless, the efficiency curves obtained are periodically checked using a source such as {sup 152}Eu, which emits many gamma rays covering a wide energy range (20-1500 keV). {sup 152}Eu presents a problem because many of its peaks are affected by True Coincidence Summing (TCS). Two experimental measurements have been performed, placing the source (a Marinelli beaker) at 0 and 10 cm from the detector. Both spectra are simulated by the MCNP 4C code, in which TCS is not reproduced. Therefore, the comparison between experimental and simulated peak net areas permits one to choose the most convenient peaks to check the efficiency curves of the detector.
Sparse coding based feature representation method for remote sensing images
Oguslu, Ender
In this dissertation, we study a sparse coding based feature representation method for the classification of multispectral and hyperspectral images (HSI). The existing feature representation systems based on the sparse signal model are computationally expensive, requiring the solution of a convex optimization problem to learn a dictionary. A sparse coding feature representation framework for the classification of HSI is presented that alleviates the complexity of sparse coding through sub-band construction, dictionary learning, and encoding steps. In the framework, we construct the dictionary based upon the extracted sub-bands from the spectral representation of a pixel. In the encoding step, we utilize a soft threshold function to obtain sparse feature representations for HSI. Experimental results showed that a randomly selected dictionary could be as effective as a dictionary learned from optimization. The new representation usually has a very high dimensionality, requiring a lot of computational resources. In addition, the spatial information of the HSI data has not been included in the representation. Thus, we modify the framework by incorporating the spatial information of the HSI pixels and reducing the dimension of the new sparse representations. The enhanced model, called sparse coding based dense feature representation (SC-DFR), is integrated with a linear support vector machine (SVM) and a composite kernels SVM (CKSVM) classifier to discriminate different types of land cover. We evaluated the proposed algorithm on three well known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit (SOMP) and image fusion and recursive filtering (IFRF). The results from the experiments showed that the proposed method can achieve better overall and average classification accuracies with a much more compact representation, leading to more efficient sparse models for HSI classification. To further
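The soft-threshold encoding step has a one-line closed form. The sketch below, with a random stand-in dictionary and pixel vector, shows how a single matrix product followed by soft thresholding yields a sparse code:

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding operator: shrink toward zero, zeroing |x| <= t.
    This closed form replaces an iterative sparse-coding optimization."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Sparse code of a (hypothetical) pixel's sub-band responses against a
# random dictionary: one matrix product, then the threshold.
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 16))        # dictionary: 16 atoms of dimension 8
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
pixel = rng.standard_normal(8)
code = soft_threshold(D.T @ pixel, t=1.0)
sparsity = float(np.mean(code == 0.0))  # fraction of zeroed coefficients
```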
PIPI: PTM-Invariant Peptide Identification Using Coding Method.
Yu, Fengchao; Li, Ning; Yu, Weichuan
2016-12-02
In computational proteomics, the identification of peptides with an unlimited number of post-translational modification (PTM) types is a challenging task. The computational cost associated with database search increases exponentially with respect to the number of modified amino acids and linearly with respect to the number of potential PTM types at each amino acid. The problem becomes intractable very quickly if we want to enumerate all possible PTM patterns. To address this issue, one group of methods named restricted tools (including Mascot, Comet, and MS-GF+) only allows a small number of PTM types in the database search process. Alternatively, the other group of methods named unrestricted tools (including MS-Alignment, ProteinProspector, and MODa) avoids enumerating PTM patterns with an alignment-based approach to localizing and characterizing modified amino acids. However, because of the large search space and the PTM localization issue, the sensitivity of these unrestricted tools is low. This paper proposes a novel method named PIPI to achieve PTM-invariant peptide identification. PIPI belongs to the category of unrestricted tools. It first codes peptide sequences into Boolean vectors and codes experimental spectra into real-valued vectors. For each coded spectrum, it then searches the coded sequence database to find the top scored peptide sequences as candidates. After that, PIPI uses dynamic programming to localize and characterize modified amino acids in each candidate. We used simulation experiments and real data experiments to evaluate the performance in comparison with restricted tools (i.e., Mascot, Comet, and MS-GF+) and unrestricted tools (i.e., Mascot with error tolerant search, MS-Alignment, ProteinProspector, and MODa). Comparison with restricted tools shows that PIPI has comparable sensitivity and running speed. Comparison with unrestricted tools shows that PIPI has the highest sensitivity except for Mascot with error tolerant search and ProteinProspector.
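A toy version of the coding step — peptides as Boolean vectors of fragment-mass bins, spectra as real-valued intensity vectors, compared by dot product — could look like this (the masses, bin width and b-ion-only fragmentation are simplifying assumptions; PIPI's actual coding is more elaborate):

```python
import numpy as np

# Monoisotopic residue masses for a few amino acids (Da)
AMINO_MASS = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
              "V": 99.06841, "L": 113.08406, "K": 128.09496, "R": 156.10111}

BIN_WIDTH = 1.0005  # typical unit-resolution m/z bin width

def peptide_to_bool_vector(seq, n_bins=2000):
    """Code a peptide as a Boolean vector with ones at its b-ion mass bins
    (a simplification: real coding also handles y-ions, charge states, and
    the mass shifts that make the scheme PTM-invariant)."""
    vec = np.zeros(n_bins, dtype=bool)
    mass = 1.00794  # b-ion offset, approximately one hydrogen
    for aa in seq[:-1]:
        mass += AMINO_MASS[aa]
        vec[int(mass / BIN_WIDTH)] = True
    return vec

def spectrum_to_vector(peaks, n_bins=2000):
    """Code an experimental spectrum as a real-valued intensity vector."""
    vec = np.zeros(n_bins)
    for mz, intensity in peaks:
        vec[int(mz / BIN_WIDTH)] += intensity
    return vec

def score(spec_vec, pep_vec):
    """Dot product between coded spectrum and coded peptide."""
    return float(spec_vec @ pep_vec)

# Synthetic spectrum built near the b-ion masses of "GASK"
spec = spectrum_to_vector([(58.05, 1.0), (129.1, 1.0), (216.1, 1.0)])
match = score(spec, peptide_to_bool_vector("GASK"))
mismatch = score(spec, peptide_to_bool_vector("VLKR"))
```

The vector coding turns candidate retrieval into cheap inner products, deferring the expensive PTM localization (dynamic programming in PIPI) to the short candidate list.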
Institute of Scientific and Technical Information of China (English)
XIONG ChengYi; TIAN JinWen; LIU Jian
2008-01-01
This paper introduces a novel high-performance algorithm and VLSI architectures for bit plane coding (BPC) in word-level sequential and parallel modes. The proposed BPC algorithm adopts coding pass prediction and parallel & pipeline techniques to reduce the number of memory accesses and to increase the system's capacity for concurrent processing, so that all the coefficient bits of a code block can be coded in only one scan. A new parallel bit plane architecture (PA) is proposed to achieve word-level sequential coding. Moreover, an efficient high-speed architecture (HA) is presented to achieve multi-word parallel coding. Compared to the state of the art, the proposed PA reduces hardware cost more efficiently, though the throughput remains one coefficient coded per clock. The proposed HA can code 4 coefficients belonging to a stripe column in one intra-clock cycle, so that coding an N×N code block can be completed in approximately N^2/4 intra-clock cycles. Theoretical analysis and experimental results demonstrate that the proposed designs achieve a high throughput rate with good performance in terms of speedup to cost, making them good alternatives for low-power applications.
Local coding based matching kernel method for image classification.
Directory of Open Access Journals (Sweden)
Yan Song
Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which the local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution, which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.
A CLASS OF LDPC CODE'S CONSTRUCTION BASED ON AN ITERATIVE RANDOM METHOD
Institute of Scientific and Technical Information of China (English)
Huang Zhonghu; Shen Lianfeng
2006-01-01
This letter gives a random construction for Low Density Parity Check (LDPC) codes, which uses an iterative algorithm to avoid short cycles in the Tanner graph. The construction method allows flexible choice of LDPC code parameters, including code length, code rate, the least girth of the graph, and the column and row weights of the parity-check matrix. The method can be applied to both irregular and strictly regular LDPC codes. Systematic codes have many applications in digital communication, so this letter also proposes a construction of the generator matrix of systematic LDPC codes from the parity-check matrix. Simulations show that the method performs well with iterative decoding.
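The cycle-avoidance idea can be sketched as an iterative random construction that redraws any column sharing two or more check nodes with an earlier column, which guarantees a Tanner-graph girth of at least 6 (a simplified sketch, not the letter's exact algorithm):

```python
import random

def build_ldpc_columns(n_cols, n_rows, col_weight, rng, max_tries=1000):
    """Iteratively build a parity-check matrix, represented as one set of
    row indices per column. A candidate column is rejected (and redrawn)
    if it shares two or more rows with any existing column, since two
    shared rows form a length-4 cycle in the Tanner graph."""
    cols = []
    for _ in range(n_cols):
        for _ in range(max_tries):
            cand = set(rng.sample(range(n_rows), col_weight))
            if all(len(cand & c) <= 1 for c in cols):
                cols.append(cand)
                break
        else:
            raise RuntimeError("could not place column; relax the parameters")
    return cols

rng = random.Random(7)
H_cols = build_ldpc_columns(n_cols=20, n_rows=15, col_weight=3, rng=rng)
```

Real constructions also balance the row weights and can enforce larger girths; the rejection test above is the girth-6 special case.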
A robust fusion method for multiview distributed video coding
DEFF Research Database (Denmark)
Salmistraro, Matteo; Ascenso, Joao; Brites, Catarina;
2014-01-01
Distributed video coding (DVC) is a coding paradigm which exploits the redundancy of the source (video) at the decoder side, as opposed to predictive coding, where the encoder leverages the redundancy. To exploit the correlation between views, multiview predictive video codecs require the encoder...
Coupling of partitioned physics codes with quasi-Newton methods
CSIR Research Space (South Africa)
Haelterman, R
2017-03-01
Full Text Available Many physics problems can only be studied by coupling various numerical codes, each modeling a subaspect of the physics problem that is addressed. Often, each of these codes needs to be considered as a black box, either because the codes were...
Improved Fast Fourier Transform Based Method for Code Accuracy Quantification
Energy Technology Data Exchange (ETDEWEB)
Ha, Tae Wook; Jeong, Jae Jun [Pusan National University, Busan (Korea, Republic of); Choi, Ki Yong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-10-15
Among the methods for evaluating code uncertainty or accuracy, the fast Fourier transform based method (FFTBM), introduced in 1990, has been widely used. Prosek et al. (2008) identified its drawbacks, the so-called 'edge effect', and to overcome them an improved FFTBM by signal mirroring (FFTBM-SM) was proposed, which has been used up to now. In spite of the improvement, the FFTBM-SM still yields different accuracy depending on the frequency components of a parameter, such as pressure, temperature and mass flow rate. It is therefore necessary to reduce the frequency dependence of the FFTBMs. In this study, the limitations of the FFTBM were analyzed: it produces quantitatively different results due to its frequency dependence, and the problem is intensified when a signal includes many high-frequency components. A new method using a reduced cut-off frequency was therefore proposed, and its capability is discussed. The results of the proposed method show that the shortcomings of the FFTBM are considerably relieved.
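A minimal sketch of an FFTBM-style accuracy figure with an optional cut-off frequency (the exact FFTBM definitions differ; this is a simplified reading of the idea):

```python
import numpy as np

def fftbm_average_amplitude(exp, calc, dt, f_cut=None):
    """FFT-based accuracy figure: spectral magnitude of the code-experiment
    discrepancy, normalized by the experimental spectrum (smaller is more
    accurate). An optional cut-off frequency discards high-frequency
    components -- a simplified version of the reduced cut-off idea."""
    err_spec = np.abs(np.fft.rfft(calc - exp))
    exp_spec = np.abs(np.fft.rfft(exp))
    freqs = np.fft.rfftfreq(len(exp), d=dt)
    if f_cut is not None:
        keep = freqs <= f_cut
        err_spec, exp_spec = err_spec[keep], exp_spec[keep]
    return err_spec.sum() / exp_spec.sum()

# Synthetic demo: the "calculation" matches a slow experimental trend but
# adds a small high-frequency component, which inflates the full-band figure.
t = np.linspace(0.0, 10.0, 1001)
exp = 1.0 + 0.5 * np.sin(2 * np.pi * 0.4 * t)
calc = exp + 0.05 * np.sin(2 * np.pi * 20.0 * t)
aa_full = fftbm_average_amplitude(exp, calc, dt=0.01)
aa_cut = fftbm_average_amplitude(exp, calc, dt=0.01, f_cut=5.0)
```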
Proposed Arabic Text Steganography Method Based on New Coding Technique
Directory of Open Access Journals (Sweden)
Assist. prof. Dr. Suhad M. Kadhem
2016-09-01
Full Text Available Steganography is one of the important fields of information security that depends on hiding secret information in a cover medium (video, image, audio, text) such that an unauthorized person fails to realize its existence. One of the lossless data compression techniques used for a file that contains much redundant data is run length encoding (RLE). Sometimes the RLE output is expanded rather than compressed, and this is the main problem of RLE. In this paper we use a new coding method whose output contains sequences of ones with few zeros, so the modified RLE that we propose is suitable for compression. Finally, we employ the modified RLE output for steganography, based on Unicode and non-printed characters, to hide the secret information in an Arabic text.
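For reference, plain RLE and its expansion problem can be shown in a few lines; this is standard run-length encoding, not the paper's modified coding:

```python
def rle_encode(data):
    """Classic run-length encoding: a list of (symbol, run_length) pairs."""
    if not data:
        return []
    runs = []
    current, count = data[0], 1
    for ch in data[1:]:
        if ch == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = ch, 1
    runs.append((current, count))
    return runs

def rle_decode(runs):
    """Invert rle_encode by repeating each symbol by its run length."""
    return "".join(ch * count for ch, count in runs)

runs = rle_encode("aaabbc")
```

Note the failure mode the paper targets: for input with no repeats, e.g. `"abcdef"`, RLE emits one pair per symbol and the output is larger than the input.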
Method for Viterbi decoding of large constraint length convolutional codes
Hsu, In-Shek; Truong, Trieu-Kie; Reed, Irving S.; Jing, Sun
1988-05-01
A new method of Viterbi decoding of convolutional codes lends itself to a pipeline VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is an integer and K is the constraint length. The selected path at the end of each NK interval is then read from the last entry in the array. A trace-back method is used for returning to the beginning of the selected path, i.e., to the first time unit of the interval NK, to read out the stored branch metrics of the selected path, which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message it is not necessary to provide a large memory to store the trellis-derived information until the end of the message before selecting the path to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
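The add-compare-select core of Viterbi decoding can be sketched compactly for a rate-1/2, K=3 code. For brevity this keeps whole survivor paths and traces back once at the end, instead of the patent's block-wise trace-back every NK time units:

```python
def conv_encode(bits):
    """Rate-1/2, K=3 convolutional encoder with generators (7, 5) octal."""
    s1 = s2 = 0
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)  # generator 111
        out.append(b ^ s2)       # generator 101
        s1, s2 = b, s1
    return out

def viterbi_decode(rx):
    """Viterbi decoding with hard-decision Hamming branch metrics. Each of
    the 4 states keeps its full survivor path; a memory-limited design
    would instead trace back every NK steps as in the patent."""
    INF = float("inf")
    metric = [0, INF, INF, INF]          # state = (s1 << 1) | s2, start at 0
    paths = [[], [], [], []]
    for t in range(0, len(rx), 2):
        r0, r1 = rx[t], rx[t + 1]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            s1, s2 = s >> 1, s & 1
            for b in (0, 1):             # add-compare-select for each branch
                branch = (b ^ s1 ^ s2 != r0) + (b ^ s2 != r1)
                ns = (b << 1) | s1
                m = metric[s] + branch
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(4), key=metric.__getitem__)]

message = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
coded = conv_encode(message + [0, 0])    # two tail bits flush the encoder
decoded_clean = viterbi_decode(coded)
corrupted = list(coded)
corrupted[4] ^= 1                        # single channel bit error
decoded_err = viterbi_decode(corrupted)
```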
Directory of Open Access Journals (Sweden)
Ai-bing Zhang
Full Text Available Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish and two representing non-coding ITS barcodes (rust fungi and brown algae. Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ and Maximum likelihood (ML methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also out-performed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI of 99.75-100%. The new methods also obtained a 96.29% success rate (95%CI: 91.62-98.40% for 484 rust fungi queries and a 98.50% success rate (95%CI: 96.60-99.37% for 1094 brown algae queries, both using ITS barcodes.
Improved Methods For Generating Quasi-Gray Codes
Jansens, Dana; Carmi, Paz; Maheshwari, Anil; Morin, Pat; Smid, Michiel
2010-01-01
Consider a sequence of bit strings of length d, such that each string differs from the next in a constant number of bits. We call this sequence a quasi-Gray code. We examine the problem of efficiently generating such codes, by considering the number of bits read and written at each generating step, the average number of bits read while generating the entire code, and the number of strings generated in the code. Our results give a trade-off between these constraints, and present algorithms that do less work on average than previous results, and that increase the number of bit strings generated.
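The baseline these results are measured against is the standard reflected Gray code, where consecutive strings differ in exactly one bit — a quasi-Gray code with the constant equal to 1:

```python
def gray_code(i):
    """Standard reflected binary Gray code: consecutive values differ in
    exactly one bit (the strictest, constant = 1, quasi-Gray case)."""
    return i ^ (i >> 1)

def bits_changed(a, b):
    """Number of bit positions in which a and b differ."""
    return bin(a ^ b).count("1")

codes = [gray_code(i) for i in range(16)]  # full 4-bit cycle
```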
RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1
Energy Technology Data Exchange (ETDEWEB)
NONE
1995-08-01
The RELAP5 code has been developed for best estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes.
Directory of Open Access Journals (Sweden)
Ms. Ashvini Kute
2015-01-01
Full Text Available Phishing is an attempt by an individual or a group to steal personal confidential information, such as passwords and credit card information, from unsuspecting victims for identity theft, financial gain and other fraudulent activities. Here an image based (QR code) authentication using Visual Cryptography (VC) is used. Visual cryptography is explored to convert the QR code into two shares, both of which can then be transmitted separately. One Time Passwords (OTP) are passwords which are valid only for a session, to validate the user within a specified amount of time. In this paper we present a new authentication scheme for secure OTP distribution in phishing website detection through VC and QR codes.
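The share-splitting step can be sketched for a textbook (2,2) visual cryptography scheme with 2-subpixel expansion, used here as an assumption about the paper's construction (a real QR code would simply be a larger binary pixel array):

```python
import random

def make_shares(pixels, rng):
    """(2,2) visual cryptography with 2-subpixel expansion: each secret
    pixel becomes a random subpixel pattern in share 1; share 2 repeats the
    pattern for a white pixel and complements it for a black pixel. Each
    share alone is a uniformly random pattern, revealing nothing."""
    share1, share2 = [], []
    for p in pixels:  # p: 0 = white, 1 = black
        pattern = rng.choice([(0, 1), (1, 0)])
        share1.append(pattern)
        share2.append(pattern if p == 0 else (1 - pattern[0], 1 - pattern[1]))
    return share1, share2

def stack(share1, share2):
    """Overlaying the two transparencies acts as a per-subpixel OR."""
    return [(a0 | b0, a1 | b1) for (a0, a1), (b0, b1) in zip(share1, share2)]

rng = random.Random(3)
secret = [1, 0, 1, 1, 0]  # tiny stand-in for a QR code's pixel row
s1, s2 = make_shares(secret, rng)
revealed = stack(s1, s2)  # black pixels stack to fully dark (1, 1)
```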
Firstenberg, H.
1971-01-01
The statistics of the Monte Carlo method are considered relative to the interpretation of the NUGAM2 and NUGAM3 computer code results. A numerical experiment using the NUGAM2 code is presented and the results are statistically interpreted.
An Efficient Method for Verifying Gyrokinetic Microstability Codes
Bravenec, R.; Candy, J.; Dorland, W.; Holland, C.
2009-11-01
Benchmarks for gyrokinetic microstability codes can be developed through successful ``apples-to-apples'' comparisons among them. Unlike previous efforts, we perform the comparisons for actual discharges, rendering the verification efforts relevant to existing experiments and future devices (ITER). The process requires i) assembling the experimental analyses at multiple times, radii, discharges, and devices, ii) creating the input files ensuring that the input parameters are faithfully translated code-to-code, iii) running the codes, and iv) comparing the results, all in an organized fashion. The purpose of this work is to automate this process as much as possible: At present, a python routine is used to generate and organize GYRO input files from TRANSP or ONETWO analyses. Another routine translates the GYRO input files into GS2 input files. (Translation software for other codes has not yet been written.) Other python codes submit the multiple GYRO and GS2 jobs, organize the results, and collect them into a table suitable for plotting. (These separate python routines could easily be consolidated.) An example of the process -- a linear comparison between GYRO and GS2 for a DIII-D discharge at multiple radii -- will be presented.
A code for hadrontherapy treatment planning with the voxelscan method.
Berga, S; Bourhaleb, F; Cirio, R; Derkaoui, J; Gallice, B; Hamal, M; Marchetto, F; Rolando, V; Viscomi, S
2000-11-01
A code for the implementation of treatment plannings in hadrontherapy with an active scan beam is presented. The package can determine the fluence and energy of the beams for several thousand voxels in a few minutes. The performances of the program have been tested with a full simulation.
Energy Technology Data Exchange (ETDEWEB)
Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl (and others)
2008-10-15
The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, which is an integral type pressurized water reactor. It can be applied to the analysis of design basis accidents of the SMART plant, including non-LOCA events and LOCA (loss of coolant accident). The TASS/SMR code can be applied to any plant regardless of the structural characteristics of a reactor, since the code solves the same governing equations for both the primary and secondary systems. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, the input processing, and the processes of steady state and transient calculations. In addition, the basic differential equations, finite difference equations, state relationships, and constitutive models are described in the report. First, the conservation equations, the discretization process for numerical analysis, and the search method for state relationships are described. Then, the core power model, heat transfer models, physical models for various components, and control and trip models are explained.
Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W; Imel, Zac E; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C
2015-02-01
The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. Copyright © 2015 Elsevier Inc. All rights reserved.
Zeng, Xiaoming; Bell, Paul D
2011-01-01
In this study, we report on a qualitative method known as the Delphi method, used in the first part of a research study for improving the accuracy and reliability of ICD-9-CM coding. A panel of independent coding experts interacted methodically to determine that the three criteria to identify a problematic ICD-9-CM subcategory for further study were cost, volume, and level of coding confusion caused. The Medicare Provider Analysis and Review (MEDPAR) 2007 fiscal year data set as well as suggestions from the experts were used to identify coding subcategories based on cost and volume data. Next, the panelists performed two rounds of independent ranking before identifying Excisional Debridement as the subcategory that causes the most confusion among coders. As a result, they recommended it for further study aimed at improving coding accuracy and variation. This framework can be adopted at different levels for similar studies in need of a schema for determining problematic subcategories of code sets.
Interleaver Design Method for Turbo Codes Based on Genetic Algorithm
Institute of Scientific and Technical Information of China (English)
Tan Ying; Sun Hong; Zhou Huai-bei
2004-01-01
This paper describes a new interleaver construction technique for turbo codes. The technique searches as many pseudo-random interleaving patterns as possible under a certain condition using genetic algorithms (GAs). The new interleavers retain the superiority of S-random interleavers, and this construction technique reduces the time taken to generate pseudo-random interleaving patterns under a given condition. The results obtained indicate that the new interleavers yield equal or better performance than S-random interleavers. Compared to the S-random interleaver, this design requires a lower level of computational complexity.
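The S-random property itself is easy to state in code. The shuffle-and-repair search below is far cruder than the paper's GA, but generates valid patterns for small block lengths:

```python
import random

def s_random_interleaver(n, S, rng, max_restarts=100):
    """Generate an S-random permutation of 0..n-1: any two indices that end
    up within S positions of each other in the output must differ by more
    than S. Simple greedy search with restarts (rule of thumb: feasible
    when S is below roughly sqrt(n / 2))."""
    for _ in range(max_restarts):
        pool = list(range(n))
        rng.shuffle(pool)
        perm = []
        while pool:
            for k, cand in enumerate(pool):
                # Check the candidate against the last S placed indices
                if all(abs(cand - p) > S for p in perm[-S:]):
                    perm.append(pool.pop(k))
                    break
            else:
                break  # dead end: restart with a fresh shuffle
        if len(perm) == n:
            return perm
    raise RuntimeError("no S-random permutation found; reduce S")

perm = s_random_interleaver(n=64, S=4, rng=random.Random(11))
```

The spreading constraint is what breaks up low-weight input patterns in a turbo code; the GA in the paper searches the same constraint space more efficiently.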
Directory of Open Access Journals (Sweden)
OL Ahmadi
2015-12-01
Full Text Available Introduction: 103Pd is a low energy source used in brachytherapy. According to the standards of the American Association of Physicists in Medicine, determination of the dosimetric parameters of brachytherapy sources before clinical application is considered significantly important. Therefore, the present study aimed to compare the dosimetric parameters of the target source using a water phantom and soft tissue. Methods: According to the TG-43U1 protocol, the dosimetric parameters were compared around the 103Pd source in a water phantom with a density of 0.998 g/cm3 and in soft tissue with a density of 1.04 g/cm3, on the longitudinal and transverse axes, using the MCNP4C code, and the relative differences between the two conditions were compared. Results: The simulation results indicated that for the dosimetric parameters depending on the radial dose function and the anisotropy function, the use of the water phantom instead of soft tissue showed good consistency up to a distance of 1.5 cm. With increasing distance the difference increased, so that within 6 cm from the source this difference rose to 4%. Conclusions: The results of the soft tissue phantom compared with those of the water phantom indicated a 4% relative difference at a distance of 6 cm from the source. Therefore, the results of the water phantom, with a maximum error of 4%, can be used in practical applications instead of soft tissue. Moreover, the differences obtained at each distance when using the soft tissue phantom could be corrected.
Energy Technology Data Exchange (ETDEWEB)
Perfetti, Christopher M [ORNL; Martin, William R [University of Michigan; Rearden, Bradley T [ORNL; Williams, Mark L [ORNL
2012-01-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the Shift Monte Carlo code within the SCALE code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.
Energy Technology Data Exchange (ETDEWEB)
Costa, Priscila
2014-07-01
The Cuno filter is part of the water processing circuit of the IEA-R1 reactor; when saturated, it is replaced and becomes radioactive waste, which must be managed. In this work, the primary characterization of the Cuno filter of the IEA-R1 nuclear reactor at IPEN was carried out using gamma spectrometry associated with the Monte Carlo method. The gamma spectrometry was performed using a hyperpure germanium (HPGe) detector. The germanium crystal represents the active detection volume of the HPGe detector, which has a region called the dead (or inactive) layer. A difference between theoretical and experimental values when obtaining the efficiency curve of these detectors has been reported in the literature. In this study we used the MCNP-4C code to obtain the detector calibration efficiency for the geometry of the Cuno filter, and the influence of the dead layer and of the cascade summing effect in the HPGe detector was studied. The dead layer values were corrected by varying the thickness and the radius of the germanium crystal. The detector has 75.83 cm{sup 3} of active detection volume, according to information provided by the manufacturer. Nevertheless, the results showed that the actual active volume is smaller than specified, with the dead layer representing 16% of the total volume of the crystal. Analysis of the Cuno filter by gamma spectrometry enabled the identification of energy peaks, from which three radionuclides were identified in the filter: {sup 108m}Ag, {sup 110m}Ag and {sup 60}Co. From the calibration efficiency obtained by the Monte Carlo method, the activity estimated for these radionuclides is of the order of MBq. (author)
Deblurring, Localization and Geometry Correction of 2D QR Bar Codes Using Richardson Lucy Method
Directory of Open Access Journals (Sweden)
Manpreet Kaur
2014-09-01
Full Text Available This paper addresses the recognition of 2D QR bar codes, describing their deblurring, localization, and geometry correction. Captured images are blurred due to motion between the image and the camera, so an image containing a QR barcode cannot be read by a QR reader. To make the barcode readable, the images need to be deblurred. The Lucy-Richardson method and the Wiener deconvolution method are used to deblur and localize the bar code. Of the two, the Lucy-Richardson method is preferable because it takes less execution time. A Simulink model is used for the geometry correction of the QR bar code. In future work, we would like to investigate generalizing our algorithm to handle more complicated motion blur.
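The Richardson-Lucy iteration used above can be sketched in one dimension. This is a minimal illustration with circular boundary handling; real QR images are 2-D, and the signal, PSF, and iteration count below are assumptions chosen for the example:

```python
def conv_circ(x, k):
    """Circular convolution of signal x with a centered kernel k."""
    n, m = len(x), len(k)
    h = m // 2
    return [sum(k[j] * x[(i + j - h) % n] for j in range(m)) for i in range(n)]

def richardson_lucy(blurred, psf, iters=30):
    """1-D Richardson-Lucy deconvolution (multiplicative update).

    Each iteration re-blurs the estimate, compares it with the observed
    data, and corrects the estimate by the back-projected ratio.
    """
    psf_flip = psf[::-1]
    est = [1.0] * len(blurred)                       # flat initial estimate
    for _ in range(iters):
        pred = conv_circ(est, psf)                   # forward model
        ratio = [b / max(p, 1e-12) for b, p in zip(blurred, pred)]
        corr = conv_circ(ratio, psf_flip)            # back-projection
        est = [e * c for e, c in zip(est, corr)]
    return est
```

A useful property visible in this sketch: with a normalized PSF, the iteration preserves the total intensity of the observed data.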
A System Call Randomization Based Method for Countering Code-Injection Attacks
Directory of Open Access Journals (Sweden)
Zhaohui Liang
2009-10-01
Full Text Available Code-injection attacks pose a serious threat to today's Internet. Existing defenses against code-injection attacks have deficiencies in performance overhead and effectiveness. To this end, we propose a method that uses system-call randomization to counter code-injection attacks, based on the idea of instruction-set randomization. System calls must be used when injected code performs its actions. By randomizing the system calls of the target process, an attacker who does not know the key to the randomization algorithm will inject code that is not randomized like the target process and is therefore invalid for the corresponding de-randomizing module; the injected code fails to execute because it cannot invoke system calls correctly. Moreover, with an extended compiler, our method randomizes source code during compilation and randomizes binary executable files by feature matching. Experiments on a prototype show that our method can effectively counter a variety of code-injection attacks with low overhead.
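The core idea, a keyed, invertible remapping of system-call numbers, can be sketched as a toy model. Everything here (function names, the use of a seeded permutation as the "randomization algorithm") is an illustrative assumption, not the paper's implementation:

```python
import random

def make_syscall_maps(n_syscalls, key):
    """Build a keyed permutation of system-call numbers and its inverse.

    Toy model: the protected process is rewritten to issue perm[n]
    instead of n, and a trusted de-randomizing layer applies inv before
    dispatch. Injected code without the key issues raw numbers that
    de-randomize to the wrong (hence invalid) calls.
    """
    rng = random.Random(key)               # the key seeds the permutation
    perm = list(range(n_syscalls))
    rng.shuffle(perm)
    inv = {r: n for n, r in enumerate(perm)}
    return perm, inv
```

Legitimate code issues `perm[n]` and the kernel-side shim recovers `n` via `inv`; an attacker's raw `n` is mapped to an unpredictable call.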
A study of transonic aerodynamic analysis methods for use with a hypersonic aircraft synthesis code
Sandlin, Doral R.; Davis, Paul Christopher
1992-01-01
A means of performing routine transonic lift, drag, and moment analyses on hypersonic all-body and wing-body configurations was studied. The analysis method is to be used in conjunction with the Hypersonic Vehicle Optimization Code (HAVOC). A review of existing techniques is presented, after which three methods, chosen to represent a spectrum of capabilities, are tested and the results are compared with experimental data. The three methods consist of a wave drag code, a full potential code, and a Navier-Stokes code. The wave drag code, representing the empirical approach, has very fast CPU times but very limited and sporadic results. The full potential code provides results that compare favorably to the wind tunnel data, but with a dramatic increase in computational time. Even more extreme is the Navier-Stokes code, which provides the most favorable and complete results, but with a very large turnaround time. The full potential code, TRANAIR, is used for additional analyses because of the superior results it can provide over empirical and semi-empirical methods, and because of its automated grid generation. TRANAIR analyses include an all-body hypersonic cruise configuration and an oblique flying wing supersonic transport.
A decoding method for an n-length binary BCH code through an (n + 1)n-length binary cyclic code
Directory of Open Access Journals (Sweden)
TARIQ SHAH
2013-09-01
Full Text Available For a given binary BCH code Cn of length n = 2^s - 1 generated by a polynomial of degree r, there is no binary BCH code of length (n + 1)n generated by a generalized polynomial of degree 2r. However, there does exist a binary cyclic code C(n+1)n of length (n + 1)n such that the binary BCH code Cn is embedded in C(n+1)n. Accordingly, a higher code rate is attained through the binary cyclic code C(n+1)n for a binary BCH code Cn. Furthermore, a proposed algorithm facilitates the decoding of a binary BCH code Cn through the decoding of the binary cyclic code C(n+1)n, while the codes Cn and C(n+1)n have the same minimum Hamming distance.
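The standard cyclic-code machinery underlying both Cn and C(n+1)n is polynomial division over GF(2). The sketch below shows systematic encoding for the familiar (7,4) BCH (Hamming) code; it illustrates the general mechanism only, not the paper's specific (n + 1)n embedding:

```python
def poly_mod2(dividend, divisor):
    """Remainder of GF(2) polynomial division; bit lists, MSB first."""
    rem = list(dividend)
    for i in range(len(rem) - len(divisor) + 1):
        if rem[i]:
            for j, d in enumerate(divisor):
                rem[i + j] ^= d
    return rem[-(len(divisor) - 1):]

def cyclic_encode(msg, gen):
    """Systematic encoding for a binary cyclic (e.g. BCH) code.

    Codeword polynomial c(x) = m(x) x^r + (m(x) x^r mod g(x)), which is
    divisible by the generator g(x) by construction.
    """
    shifted = msg + [0] * (len(gen) - 1)   # multiply m(x) by x^r
    return msg + poly_mod2(shifted, gen)   # append parity bits
```

Every valid codeword leaves remainder zero under the generator, which is also the basis of syndrome decoding.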
Compatibility of global environmental assessment methods of buildings with an Egyptian energy code
Directory of Open Access Journals (Sweden)
Amal Kamal Mohamed Shamseldin
2017-04-01
Full Text Available Several environmental assessment methods for buildings have emerged around the world to set environmental classifications for buildings, such as the American method "Leadership in Energy and Environmental Design" (LEED), the most widespread one. Several countries have decided to create their own assessment methods to catch up with this orientation, among them Egypt. The main goal of the Egyptian method was to promote the voluntary local energy efficiency codes. A local survey clearly showed that many construction practitioners in Egypt do not even know the local method, and those interested in the environmental assessment of buildings seek to apply LEED rather than anything else. Several questions therefore arise about the American method's compatibility with the Egyptian energy codes, which contain the most exact characteristics and requirements and give the most credible energy-efficiency results for buildings in Egypt, and about the possibility of finding another global method that gives results closer to those of the Egyptian codes, especially given the great variety of energy-efficiency measurement approaches used among the different assessment methods. The researcher is thus trying to determine the compatibility of non-local assessment methods with the local energy-efficiency codes. If the results are not compatible, the Egyptian government should take steps to increase the local building sector's awareness of the Egyptian method so that these codes are applied, and it should begin to enforce it within building permits after proper guidance and feedback.
Source Code Plagiarism Detection Method Using Protégé Built Ontologies
Directory of Open Access Journals (Sweden)
Ion SMEUREANU
2013-01-01
Full Text Available Software plagiarism is a growing and serious problem that affects computer science universities in particular and the quality of education in general. More and more students tend to copy the software for their theses from older theses or internet databases. Checking source code manually to detect whether files are similar or identical is a laborious and time-consuming job, perhaps even impossible given the existence of large digital repositories. An ontology is a way of describing a document's semantics, so it can easily be applied to source code files too. The OWL Web Ontology Language can describe both the vocabulary and the taxonomy of a programming language's source code. SPARQL is a query language, based on SQL, that extracts stated or inferred information from ontologies. Our paper proposes a source code plagiarism detection method, based on ontologies created with the Protégé editor, which can be applied to scanning the software source code of students' theses.
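The ontology-plus-SPARQL workflow can be miniaturized as code facts stored as (subject, predicate, object) triples and queried with wildcards. This is a deliberately simplified stand-in for OWL/SPARQL; the predicates and program names are invented for illustration:

```python
def match(triples, pattern):
    """Query a tiny in-memory triple store, SPARQL-style.

    `pattern` is a (subject, predicate, object) tuple where None acts
    as a wildcard variable, analogous to ?s ?p ?o in SPARQL.
    """
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]
```

For example, asking "which programs define a method named sortList?" becomes a single wildcard query over the extracted facts.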
Hydrodynamic Optimization Method and Design Code for Stall-Regulated Hydrokinetic Turbine Rotors
Energy Technology Data Exchange (ETDEWEB)
Sale, D.; Jonkman, J.; Musial, W.
2009-08-01
This report describes the adaptation of a wind turbine performance code for use in the development of a general use design code and optimization method for stall-regulated horizontal-axis hydrokinetic turbine rotors. This rotor optimization code couples a modern genetic algorithm and blade-element momentum performance code in a user-friendly graphical user interface (GUI) that allows for rapid and intuitive design of optimal stall-regulated rotors. This optimization method calculates the optimal chord, twist, and hydrofoil distributions which maximize the hydrodynamic efficiency and ensure that the rotor produces an ideal power curve and avoids cavitation. Optimizing a rotor for maximum efficiency does not necessarily create a turbine with the lowest cost of energy, but maximizing the efficiency is an excellent criterion to use as a first pass in the design process. To test the capabilities of this optimization method, two conceptual rotors were designed which successfully met the design objectives.
Ntsama, Eloundou Pascal; Colince, Welba; Ele, Pierre
2016-01-01
In this article, we present a comparative study of a new compression approach based on the discrete cosine transform (DCT) and the discrete wavelet transform (DWT). We seek the transform best suited to vector quantization for compressing EMG signals. To do this, we first combined vector quantization with the DCT, then vector quantization with the DWT. The coding phase uses SPIHT coding (set partitioning in hierarchical trees) combined with arithmetic coding. The method is demonstrated and evaluated on actual EMG data. Objective performance metrics are presented: compression factor, percentage root-mean-square difference, and signal-to-noise ratio. The results show that the DWT-based method is more efficient than the DCT-based method.
Energy Technology Data Exchange (ETDEWEB)
Perfetti, C.; Martin, W. [Univ. of Michigan, Dept. of Nuclear Engineering and Radiological Sciences, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109-2104 (United States); Rearden, B.; Williams, M. [Oak Ridge National Laboratory, Reactor and Nuclear Systems Div., Bldg. 5700, P.O. Box 2008, Oak Ridge, TN 37831-6170 (United States)
2012-07-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the Shift Monte Carlo code within the SCALE code package. The methods were used for two small-scale test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods. (authors)
Adaptive bit truncation and compensation method for EZW image coding
Dai, Sheng-Kui; Zhu, Guangxi; Wang, Yao
2003-09-01
The embedded zerotree wavelet algorithm (EZW) is widely adopted to compress the wavelet coefficients of images, with the property that the bit stream can be truncated anywhere. The lower bit planes of the wavelet coefficients are verified to be less important than the higher bit planes, so they can be truncated and left unencoded. Based on experiments, a generalized function is deduced in this paper that guides the EZW encoder in deciding how many low bit planes to truncate. In the EZW decoder, a simple method is presented to compensate for the truncated wavelet coefficients; surprisingly, it enhances the quality of the reconstructed image at scarcely any additional cost.
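The truncation and decoder-side compensation steps can be sketched on a single nonnegative coefficient. The paper's generalized function for choosing the number of dropped planes is not reproduced; the mid-rise offset below is the standard choice, shown as an assumption:

```python
def truncate_bits(coeff, k):
    """Drop the k lowest bit planes of a nonnegative integer coefficient."""
    return (coeff >> k) << k

def compensate(coeff_trunc, k):
    """Decoder-side compensation for truncated bit planes.

    The decoder cannot know the dropped bits, so it adds 2^(k-1) (the
    midpoint of the dropped range) to every nonzero truncated
    coefficient, halving the worst-case reconstruction error.
    """
    return coeff_trunc + (1 << (k - 1)) if coeff_trunc and k > 0 else coeff_trunc
```

With k = 3, the raw truncation error can be as large as 7, while the compensated error is at most 4.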
Status of SFR Codes and Methods QA Implementation
Energy Technology Data Exchange (ETDEWEB)
Brunett, Acacia J. [Argonne National Lab. (ANL), Argonne, IL (United States); Briggs, Laural L. [Argonne National Lab. (ANL), Argonne, IL (United States); Fanning, Thomas H. [Argonne National Lab. (ANL), Argonne, IL (United States)
2017-01-31
This report details development of the SAS4A/SASSYS-1 SQA Program and describes the initial stages of Program implementation planning. The provisional Program structure, which is largely focused on the establishment of compliant SQA documentation, is outlined in detail, and Program compliance with the appropriate SQA requirements is highlighted. Additional program activities, such as improvements to testing methods and Program surveillance, are also described in this report. Given that the programmatic resources currently granted to development of the SAS4A/SASSYS-1 SQA Program framework are not sufficient to adequately address all SQA requirements (e.g. NQA-1, NUREG/BR-0167, etc.), this report also provides an overview of the gaps that remain in the SQA program and highlights recommendations on a path forward to resolving these issues. One key finding of this effort is the identification of the need for an SQA program that is sustainable over multiple years within DOE annual R&D funding constraints.
Energy Technology Data Exchange (ETDEWEB)
Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)
2014-05-01
A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. The task of, for instance, nuclear regulators to verify the accuracy of such translated files can therefore be very difficult and cumbersome. Translation errors may then go unnoticed, with potentially disastrous consequences later on when a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore greatly ease the translation and verification process. It also removes human error from the process, which may significantly enhance its accuracy and reliability. The developed algorithm additionally creates a verification log file that permanently records the name and value of each variable used, as well as the list of meanings of all possible values. This should greatly facilitate reactor licensing applications.
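The translate-and-log pattern described above can be sketched generically. The variable names, mapping, and log format below are invented for illustration and are unrelated to the actual VSOP input formats:

```python
def translate(old_model, mapping, log_path=None):
    """Translate an input model (old name -> value) to new variable names.

    Generic sketch of the semi-automatic translation idea: `mapping`
    pairs each old variable name with its new name, and every
    translated name/value pair is recorded in a verification log so a
    reviewer can audit the translation entry by entry.
    """
    new_model, log = {}, []
    for old_name, value in old_model.items():
        new_name = mapping.get(old_name, old_name)   # unmapped names pass through
        new_model[new_name] = value
        log.append(f"{old_name} -> {new_name} = {value}")
    if log_path:
        with open(log_path, "w") as fh:              # persist the audit trail
            fh.write("\n".join(log))
    return new_model, log
```

The log is the key verification artifact: it lets a regulator check each translated entry without reading thousands of raw numbers.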
DCT domain filtering method for multi-antenna code acquisition
Institute of Scientific and Technical Information of China (English)
Xiaojie Li; Luping Xu; Shibin Song; Hua Zhang
2013-01-01
For global navigation satellite system (GNSS) signals in Gaussian and Rayleigh fading channels, a novel signal detection algorithm is proposed. In the low-frequency-uncertainty case, after performing a discrete cosine transform (DCT) on the outputs of the partial matched filter (PMF) for every antenna, the high-order components in the transform domain are filtered out, and equal-gain (EG) combination of the signal reconstructed by the inverse discrete cosine transform (IDCT) is then performed. Owing to the different frequency-distribution characteristics of the noise and the signal, after EG combination the signal energy suffers almost no loss while the noise energy is greatly reduced. Theoretical analysis and simulation results show that the detection algorithm can effectively improve the signal-to-noise ratio of the captured signal and increase the probability of detection under the same false-alarm probability. In addition, this method can also be applied to Rayleigh fading channels with a moving antenna.
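The DCT-filter-IDCT step can be demonstrated with a direct O(n^2) orthonormal transform pair. The signals and the number of retained components are illustrative assumptions; a real receiver would use a fast transform on PMF outputs:

```python
import math

def dct(x):
    """Orthonormal DCT-II (direct O(n^2) form, for illustration)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def idct(X):
    """Inverse of the orthonormal DCT-II above (i.e. DCT-III)."""
    n = len(X)
    return [sum((math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n))
                * X[k] * math.cos(math.pi * (i + 0.5) * k / n)
                for k in range(n))
            for i in range(n)]

def lowpass_dct(x, keep):
    """Zero out high-order DCT components, then reconstruct via IDCT."""
    X = dct(x)
    X = [v if k < keep else 0.0 for k, v in enumerate(X)]
    return idct(X)
```

Because the wanted correlation peak concentrates in low-order components while wideband noise spreads across all of them, discarding high-order components removes noise energy with little signal loss.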
On the efficiency and accuracy of interpolation methods for spectral codes
Hinsberg, van M.A.T.; Thije Boonkkamp, ten J.H.M.; Toschi, F.; Clercx, H.J.H.
2012-01-01
In this paper a general theory for interpolation methods on a rectangular grid is introduced. By the use of this theory an efficient B-spline-based interpolation method for spectral codes is presented. The theory links the order of the interpolation method with its spectral properties. In this way m
A Low Complexity VCS Method for PAPR Reduction in Multicarrier Code Division Multiple Access
Institute of Scientific and Technical Information of China (English)
Si-Si Liu; Yue Xiao; Qing-Song Wen; Shao-Qian Li
2007-01-01
This paper investigates a peak-to-average power ratio (PAPR) reduction method for multicarrier code division multiple access (MC-CDMA) systems. Variable code sets (VCS), a spreading-code selection scheme, can improve the PAPR property of MC-CDMA signals, but the technique requires an exhaustive search over combinations of spreading code sets, whose complexity grows exponentially with the number of active users. Based on this fact, we propose a low-complexity VCS (LC-VCS) method that reduces the computational complexity. The basic idea of LC-VCS is to derive new signals from the relationships between candidate signals. Simulation results show that the proposed approach reduces PAPR with lower computational complexity. In addition, it can be received blindly, without any side information.
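The quantity that VCS-style schemes try to minimize can be computed directly from the subcarrier symbols. This sketch only defines PAPR for a multicarrier signal; it performs no spreading-code search, and the symbol vectors are illustrative:

```python
import cmath, math

def papr_db(symbols):
    """PAPR (in dB) of the time-domain multicarrier signal.

    Each entry of `symbols` modulates one subcarrier; the inverse DFT
    (computed directly here) gives the transmitted time samples, and
    PAPR is the peak instantaneous power over the mean power.
    """
    n = len(symbols)
    t = [sum(symbols[k] * cmath.exp(2j * math.pi * k * i / n) for k in range(n)) / n
         for i in range(n)]
    powers = [abs(v) ** 2 for v in t]
    return 10 * math.log10(max(powers) / (sum(powers) / n))
```

The worst case, all subcarriers adding coherently, gives 10*log10(n) dB, which is the baseline that code-set selection tries to beat.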
Hierarchical Symbolic Analysis of Large Analog Circuits with Totally Coded Method
Institute of Scientific and Technical Information of China (English)
XU Jing-bo
2006-01-01
Symbolic analysis has many applications in the design of analog circuits. Existing approaches rely on two forms of symbolic-expression representation: expanded sum-of-product form and arbitrarily nested form. The expanded form suffers from the problem that the number of product terms grows exponentially with the size of the circuit; the nested form is neither canonical nor amenable to symbolic manipulation. In this paper, we present a new approach to exact and canonical symbolic analysis that exploits the sparsity and sharing of product terms. The algorithm, called the totally coded method (TCM), represents the symbolic determinant of a circuit matrix by code series and performs symbolic analysis by code manipulation. We describe an efficient code-ordering heuristic and prove that it is optimal for ladder-structured circuits. For practical analog circuits, TCM not only retains all the advantages of the determinant decision diagram (DDD) algorithm but is simpler and more efficient than the DDD method.
Coupling methods for parallel running RELAPSim codes in nuclear power plant simulation
Energy Technology Data Exchange (ETDEWEB)
Li, Yankai; Lin, Meng, E-mail: linmeng@sjtu.edu.cn; Yang, Yanhua
2016-02-15
When a plant is modeled in detail for high precision, it is hard for a single RELAP5 instance to achieve real-time calculation in a large-scale simulation. To improve the speed while preserving the precision of the simulation, coupling methods for parallel running RELAPSim codes are proposed in this study. An explicit coupling method via coupling boundaries was realized based on a data-exchange and procedure-control environment. The synchronization frequency was chosen as a compromise to improve the precision of the simulation while guaranteeing real-time execution. The coupling methods were assessed using both single-phase and two-phase flow models, and good agreement was obtained between the splitting-coupling models and the integrated model. The mitigation of an SGTR was performed as an integral application of the coupling models. A large-scope NPP simulator was developed adopting six splitting-coupling models of RELAPSim and other simulation codes. The coupling models improve the speed of simulation significantly and make real-time calculation possible. In this paper, the coupling of the models in the engineering simulator is taken as an example to expound the coupling methods, i.e., coupling between parallel running RELAPSim codes and coupling between the RELAPSim code and other types of simulation codes. The coupling methods are, however, also applicable in other simulators, for example one employing ATHLET instead of RELAP5, or other logic code instead of SIMULINK. We believe the coupling method is generally applicable to NPP simulators regardless of the specific codes chosen in this paper.
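The explicit boundary-coupling idea can be shown on a toy 1-D diffusion problem split between two "solvers" that exchange interface values every synchronization step. This is a structural sketch only (a stand-in for coupled RELAP5 instances); with exchange every step the split run reproduces the monolithic run exactly:

```python
def step_heat(u, left, right, r=0.25):
    """One explicit FTCS diffusion step with prescribed boundary values."""
    full = [left] + u + [right]
    return [full[i] + r * (full[i - 1] - 2 * full[i] + full[i + 1])
            for i in range(1, len(full) - 1)]

def run_coupled(u0, steps, split, r=0.25):
    """Advance two sub-domain models coupled explicitly at an interface.

    Each step, the two solvers exchange coupling-boundary data (here,
    synchronization frequency = every step), then advance in parallel
    from the same old state.
    """
    a, b = u0[:split], u0[split:]
    for _ in range(steps):
        a_new = step_heat(a, 0.0, b[0], r)   # right ghost = neighbor's first cell
        b_new = step_heat(b, a[-1], 0.0, r)  # left ghost = neighbor's last cell
        a, b = a_new, b_new
    return a + b

def run_monolithic(u0, steps, r=0.25):
    """Reference: the same problem solved without splitting."""
    u = u0
    for _ in range(steps):
        u = step_heat(u, 0.0, 0.0, r)
    return u
```

Lowering the synchronization frequency would trade accuracy at the interface for less data exchange, which is the compromise the abstract refers to.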
Chau, H F
2009-01-01
Many entanglement distillation schemes use either universal random hashing or breeding as their final step to obtain shared almost-perfect EPR pairs. Both methods involve random stabilizer quantum error-correcting codes whose syndromes can be measured using simple and efficient quantum circuits. When applied to high-fidelity Werner states, the highest-yield protocol among those using local Bell measurements and local unitary operations is one that uses a certain breeding method, and the random hashing method loses to breeding only by a thin margin. In spite of their high yield, the hardness of decoding a random linear code makes random hashing and breeding infeasible in practice. In this pilot study, we analyze the performance of the recurrence method, a well-known entanglement distillation scheme, by replacing the final random hashing or breeding procedure with various efficiently decodable quantum codes. We find that among all the replacements we have investigated, the one using a certain adaptive quant...
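The recurrence method's fidelity gain per round can be sketched with the standard textbook map for Werner states (the BBPSSW-style single-round update), which is shown here as general background, not as the modified protocol analyzed in the paper:

```python
def recurrence_step(f):
    """One successful round of the standard recurrence protocol.

    Fidelity map for Werner states: with e = (1 - f) / 3,
    f' = (f^2 + e^2) / (f^2 + 2 f e + 5 e^2).
    For f > 1/2, each round increases the fidelity (at the cost of
    consuming one of every two pairs).
    """
    e = (1.0 - f) / 3.0
    num = f * f + e * e
    den = f * f + 2.0 * f * e + 5.0 * e * e
    return num / den

def distill(f, rounds):
    """Iterate the recurrence map over several successful rounds."""
    for _ in range(rounds):
        f = recurrence_step(f)
    return f
```

Schemes like the one in the abstract switch from this slow-but-simple iteration to a code-based step once the fidelity is high enough.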
Bombardelli, F. A.; Zamani, K.
2014-12-01
We introduce and discuss an open-source, user-friendly numerical post-processing package for assessing the reliability of modeling results from environmental fluid mechanics codes. Verification and Validation, Uncertainty Quantification (VAVUQ) is a toolkit developed in Matlab for general V&V purposes. In this work, the VAVUQ implementation of V&V techniques and its user interfaces are discussed. VAVUQ can read Excel, Matlab, ASCII, and binary files, and it produces a log of the results in txt format. Each capability of the code is then illustrated through an example: the first example is code verification, via the method of exact solutions (MES), of a sediment transport code developed with the finite volume method. The second example is solution verification, via MES, of a groundwater-flow code developed with the boundary element method. The third example is solution verification, via the method of manufactured solutions (MMS), of a mixed-order compact-difference heat transfer code. The fourth example is solution verification, via complete Richardson extrapolation, of a 2-D finite-difference floodplain-analysis code. Application of VAVUQ in quantitative model skill assessment (validation) of environmental codes is then given through two examples: validation of a two-phase computational model of air entrainment in a free-surface flow against laboratory measurements, and of heat transfer modeling at the earth's surface against field measurements. We close with practical considerations and common pitfalls in the interpretation of V&V results.
Yan, Jingwen; Chen, Jiazhen
2007-03-01
A new hyperspectral image compression method combining spectral-feature-classification vector quantization (SFCVQ) and the embedded zerotree wavelet (EZW) algorithm, based on the Karhunen-Loeve transform (KLT) and an integer wavelet transform, is presented. Compared with other methods, this method not only retains high compression ratios and easy real-time transmission but also has the advantage of high computation speed. After lifting-based integer wavelets and SFCVQ coding are introduced, a system for nearly lossless compression of hyperspectral images is designed. The KLT is used as a one-dimensional (1D) linear transform to remove spectral redundancy, and SFCVQ coding is applied to raise the compression ratio. The two-dimensional (2D) integer wavelet transform is adopted to decorrelate 2D spatial redundancy, and EZW coding is applied to compress the data in the wavelet domain. Experimental results show that, compared with the wavelet SFCVQ (WSFCVQ) method, the improved bi-block zerotree coding (IBBZTC) method, and the feature spectral vector quantization (FSVQ) method, the peak signal-to-noise ratio (PSNR) of this method improves by over 9 dB, and the total compression performance is greatly enhanced.
Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code
Taherkhani, Ahmad; Malmi, Lauri
2013-01-01
In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concept of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…
How could the replica method improve accuracy of performance assessment of channel coding?
Kabashima, Yoshiyuki
2009-12-01
We explore the relation between the techniques of statistical mechanics and information theory for assessing the performance of channel coding. We base our study on a framework developed by Gallager in IEEE Trans. Inform. Theory IT-11, 3 (1965), where the minimum decoding error probability is upper-bounded by an average of a generalized Chernoff bound over a code ensemble. We show that the resulting bound in this framework can be assessed directly by the replica method, developed in the statistical mechanics of disordered systems, whereas Gallager's original methodology requires a further replacement by another bound using Jensen's inequality. Our approach associates a seemingly ad hoc restriction on an adjustable parameter for optimizing the bound with a phase transition between two replica-symmetric solutions, and can improve the accuracy of performance assessments of general code ensembles, including low-density parity-check codes, although its mathematical justification remains open.
Methods, algorithms and computer codes for calculation of electron-impact excitation parameters
Bogdanovich, P; Stonys, D
2015-01-01
We describe the computer codes, developed at Vilnius University, for the calculation of electron-impact excitation cross sections, collision strengths, and excitation rates in the plane-wave Born approximation. These codes utilize multireference atomic wavefunctions, which are also adopted to calculate radiative transition parameters of complex many-electron ions. This leads to consistent data sets suitable for plasma modelling codes. Two versions of the electron scattering codes are considered in the present work, both employing the configuration interaction method for the inclusion of correlation effects and the Breit-Pauli approximation to account for relativistic effects. The versions differ only in the one-electron radial orbitals: the first employs non-relativistic numerical radial orbitals, while the other uses quasirelativistic radial orbitals. The accuracy of the produced results is assessed by comparing radiative transition and electron-impact excitation data for neutral hydrogen, helium...
Effective wavelet-based compression method with adaptive quantization threshold and zerotree coding
Przelaskowski, Artur; Kazubek, Marian; Jamrogiewicz, Tomasz
1997-10-01
An efficient image compression technique, intended especially for medical applications, is presented. Dyadic wavelet decomposition using the Antonini and Villasenor filter banks is followed by adaptive space-frequency quantization and zerotree-based entropy coding of the wavelet coefficients. Threshold selection and uniform quantization are based on a spatial variance estimate built on the lowest-frequency subband data set. The threshold value for each coefficient is evaluated as a linear function of a 9th-order binary context. After quantization, zerotree construction, pruning, and arithmetic coding are applied for efficient lossless data coding. The presented compression method is less complex than the most effective EZW-based techniques but achieves comparable compression efficiency. Specifically, our method has efficiency similar to SPIHT for MR image compression, slightly better for CT images, and significantly better for US image compression. Thus the compression efficiency of the presented method is competitive with the best published algorithms in the literature across diverse classes of medical images.
WASTK: A Weighted Abstract Syntax Tree Kernel Method for Source Code Plagiarism Detection
Directory of Open Access Journals (Sweden)
Deqiang Fu
2017-01-01
Full Text Available In this paper, we introduce a source code plagiarism detection method named WASTK (Weighted Abstract Syntax Tree Kernel) for computer science education. Unlike other plagiarism detection methods, WASTK takes aspects beyond raw program similarity into account. WASTK first converts the source code of a program into an abstract syntax tree and then obtains the similarity by calculating the tree kernel of two abstract syntax trees. To avoid misjudgment caused by trivial code snippets or frameworks given by instructors, an idea similar to TF-IDF (Term Frequency-Inverse Document Frequency) from the field of information retrieval is applied: each node in an abstract syntax tree is assigned a TF-IDF weight. WASTK is evaluated on different datasets and, as a result, performs much better than other popular methods such as Sim and JPlag.
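The abstract does not give WASTK's weighting formula, but the idea of assigning TF-IDF weights to AST node types can be sketched in a few lines of Python using the standard `ast` module. The node-type granularity and the `+1` smoothing below are illustrative assumptions, not WASTK's actual definitions:

```python
import ast
import math
from collections import Counter

def ast_node_counts(source):
    """Count occurrences of each AST node type in a Python source string."""
    tree = ast.parse(source)
    return Counter(type(node).__name__ for node in ast.walk(tree))

def tfidf_weights(programs):
    """Assign a TF-IDF weight to each AST node type of each program.

    Node types that appear in every submission (e.g. boilerplate given by
    the instructor) receive low weights, so they contribute little to a
    similarity score built on top of these weights.
    """
    counts = [ast_node_counts(src) for src in programs]
    n = len(programs)
    # document frequency: in how many programs does each node type occur?
    df = Counter()
    for c in counts:
        df.update(c.keys())
    weights = []
    for c in counts:
        total = sum(c.values())
        weights.append({t: (c[t] / total) * math.log((n + 1) / (df[t] + 1))
                        for t in c})
    return weights
```

Node types shared by all submissions (such as the top-level `Module`) get weight zero, which is exactly the effect the paper wants for instructor-provided scaffolding.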
Kobayashi, Tetsuji
Education in information and communication technologies is important for engineering; it covers terminals, communication media, transmission, switching, software, communication protocols, coding, and related topics. The proposed teaching method for protocols is based on the HDLC (High-level Data Link Control) procedures, using our newly developed software "HDLC trainer", and includes extensions for understanding other protocols such as TCP/IP. For teaching the coding theory applied to error control in protocols, we use both a mathematical programming language and a general-purpose programming language. We have practiced and evaluated the proposed teaching method in our college, and the results show that the method markedly improves understanding of the fundamental technology of protocols and coding.
An Approach to a Method of Construction of (F, K, 1) Optical Orthogonal Codes from Block Design
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
(F, K, 1) optical orthogonal codes (OOC) are among the best address codes for optical code division multiple access (OCDMA) communication systems, but their construction is very complex. In this paper, a method of constructing the OOC from block designs is discussed and a computer-aided design method is presented, with which a desired (F, K, 1) OOC can be constructed easily.
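Whatever construction is used, the defining (F, K, 1) property can be verified directly: every codeword has weight K, and all cyclic auto- and cross-correlations are at most 1. A small checker (the example codeword in the test is the standard (7, 3, 1) difference-set construction; function and variable names are ours):

```python
from itertools import combinations

def cyclic_correlation(a, b, F):
    """Maximum cyclic correlation between two codewords given as sets of
    mark positions; the zero shift is excluded when a == b (autocorrelation
    peak)."""
    shifts = range(1, F) if a == b else range(F)
    return max(len(a & {(p + s) % F for p in b}) for s in shifts)

def is_ooc(codewords, F, K, lam=1):
    """Check the (F, K, lam) OOC property: every codeword has weight K,
    off-peak autocorrelation <= lam, and pairwise cross-correlation <= lam."""
    if any(len(c) != K for c in codewords):
        return False
    if any(cyclic_correlation(c, c, F) > lam for c in codewords):
        return False
    return all(cyclic_correlation(a, b, F) <= lam
               for a, b in combinations(codewords, 2))
```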
Introduction into scientific work methods-a necessity when performance-based codes are introduced
DEFF Research Database (Denmark)
Dederichs, Anne; Sørensen, Lars Schiøtt
The introduction of performance-based codes in Denmark in 2004 requires new competences from people working with different aspects of fire safety in industry and the public sector. This abstract presents an attempt to reduce problems with handling and analysing the mathematical methods...... and CFD models when applying performance-based codes. This is done within the educational program "Master of Fire Safety Engineering" at the Department of Civil Engineering at the Technical University of Denmark. It was found that the students had general problems with academic methods. Therefore, a new...
Domain and range decomposition methods for coded aperture x-ray coherent scatter imaging
Odinaka, Ikenna; Kaganovsky, Yan; O'Sullivan, Joseph A.; Politte, David G.; Holmgren, Andrew D.; Greenberg, Joel A.; Carin, Lawrence; Brady, David J.
2016-05-01
Coded aperture X-ray coherent scatter imaging is a novel modality for ascertaining the molecular structure of an object. Measurements from different spatial locations and spectral channels in the object are multiplexed through a radiopaque material (the coded aperture) onto the detectors. Iterative algorithms such as penalized expectation maximization (EM) and fully separable spectrally-grouped edge-preserving reconstruction have been proposed to recover the spatially-dependent coherent scatter spectral image from the multiplexed measurements. Such image recovery methods fall into the category of domain decomposition methods, since they recover independent pieces of the image at a time. Ordered subsets has also been utilized in conjunction with penalized EM to accelerate its convergence; it is a range decomposition method because it uses parts of the measurements at a time to recover the image. In this paper, we analyze domain and range decomposition methods as they apply to coded aperture X-ray coherent scatter imaging using a spectrally-grouped edge-preserving regularizer, and discuss the implications of the increased availability of parallel computational architectures for the choice of decomposition method. We present results of applying the decomposition methods to experimental coded aperture X-ray coherent scatter measurements. Based on the results, an underlying observation is that updating different parts of the image or using different parts of the measurements in parallel decreases the rate of convergence, whereas using the parts sequentially can accelerate it.
Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes
Mainardi, E; Donahue, R J
2002-01-01
The calculations presented compare the performance of three Monte Carlo codes, PENELOPE-1999, MCNP-4C and PITS, for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is a water cylinder, identical for the three codes, but with a different internal scoring geometry: hollow cylinders for PENELOPE and MCNP, whereas spheres are used for the PITS code. A cylindrical cell geometry with scoring volumes shaped as hollow cylinders was initially selected for PENELOPE and MCNP because it better represents the actual shape and dimensions of a cell and improves computer-time efficiency compared with spherical internal volumes. Some of the transfer points and energy transfers that constitute a radiation track may actually fall in the space between spheres, which would be outside the spherical scoring volume. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time when using ...
Improved DCT-based image coding and decoding methods for low-bit-rate applications
Jung, Sung-Hwan; Mitra, Sanjit K.
1994-05-01
The discrete cosine transform (DCT) is well known for highly efficient coding performance and is widely used in many image compression applications. However, in low-bit-rate coding, it produces undesirable block artifacts that are visually displeasing. In addition, in many applications, faster compression and easier VLSI implementation of DCT coefficients are also important issues. The removal of the block artifacts and faster DCT computation are therefore of practical interest. In this paper, we outline a modified DCT computation scheme that provides a simple and efficient way to reduce the block artifacts while achieving faster computation. We also derive a similar solution for the efficient computation of the inverse DCT. We have applied the new approach to the low-bit-rate coding and decoding of images. Initial simulation results on real images have verified the improved performance of the proposed method over the standard JPEG method.
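As background for the abstract above, the textbook orthonormal 1-D DCT-II and its inverse (DCT-III) can be written directly from their defining sums. This is the standard transform that JPEG-style coders build on, not the authors' modified scheme:

```python
import math

def dct2(x):
    """Naive 1-D DCT-II with orthonormal scaling."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

def idct2(X):
    """Inverse (DCT-III) of the orthonormal DCT-II above."""
    N = len(X)
    out = []
    for n in range(N):
        s = X[0] / math.sqrt(N)
        s += sum(math.sqrt(2 / N) * X[k] *
                 math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                 for k in range(1, N))
        out.append(s)
    return out
```

This direct evaluation is O(N^2) per transform; fast algorithms (and the paper's modified scheme) exist precisely to avoid that cost.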
Energy Technology Data Exchange (ETDEWEB)
Kwon, Hyuk; Kim, S. J.; Park, J. P.; Hwang, D. H. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-05-15
A Krylov subspace method was implemented to perform an efficient whole-core calculation of SMART with a pin-by-pin subchannel model without lumping channels. The SMART core consists of 57 fuel assemblies in 17 by 17 arrays, each with 264 fuel rods and 25 guide tubes, for a total of 15,048 fuel rods and 16,780 subchannels. The restarted GMRES and BiCGStab methods were selected among the Krylov subspace methods. To verify the implementation of the Krylov method, a whole-core problem is considered under normal operating conditions. In this problem, a linear system Ax = b is solved, where A is nearly symmetric and the system is preconditioned with incomplete LU factorization (ILU). Preconditioners based on incomplete LU factorization are among the most effective for solving the general large, sparse linear systems arising in practical engineering problems. The Krylov subspace method is expected to improve the calculation effectiveness of the MATRA code compared with direct methods and stationary iterative methods such as Gauss elimination and SOR. The present study describes the implementation of Krylov subspace methods with ILU into the MATRA code. In this paper, we explore the improved performance of the MATRA code on the SMART whole-core problems obtained with the Krylov subspace methods. For this purpose, two preconditioned Krylov subspace methods, GMRES and BiCGStab, are implemented into the subchannel code MATRA, with a typical ILU method used as the preconditioner. The numerical problems examined in this study indicate that the Krylov subspace method shows outstanding improvement in calculation speed and robust convergence.
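The skeleton of a preconditioned Krylov iteration of the kind described can be illustrated with a Jacobi-preconditioned conjugate gradient in plain Python. This is a deliberately simpler stand-in: MATRA uses restarted GMRES and BiCGStab with an ILU preconditioner, but the structure, repeated preconditioned matrix-vector products building a Krylov subspace, is the same:

```python
def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned conjugate gradient for a dense SPD matrix A
    (list of rows). M_inv_diag holds the inverse diagonal of A, i.e. the
    Jacobi preconditioner; production codes substitute an ILU solve here."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual b - A x0, x0 = 0
    z = [M_inv_diag[i] * r[i] for i in range(n)]  # preconditioned residual
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if max(abs(v) for v in r) < tol:
            break
        z = [M_inv_diag[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

CG applies only to symmetric positive-definite systems; the nearly symmetric, non-symmetric systems mentioned in the abstract are why the authors reach for GMRES and BiCGStab instead.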
GPU-accelerated 3D neutron diffusion code based on finite difference method
Energy Technology Data Exchange (ETDEWEB)
Xu, Q.; Yu, G.; Wang, K. [Dept. of Engineering Physics, Tsinghua Univ. (China)
2012-07-01
The finite difference method, a traditional numerical solution to the neutron diffusion equation, although considered simpler and more precise than coarse-mesh nodal methods, faces a bottleneck to wide application because of the huge memory and prohibitive computation time it requires. In recent years, the concept of general-purpose computation on GPUs has provided a powerful computational engine for scientific research. In this study, a GPU-accelerated multi-group 3D neutron diffusion code based on the finite difference method was developed. First, a clean-sheet neutron diffusion code (3DFD-CPU) was written in C++ on the CPU architecture, and later ported to GPUs under NVIDIA's CUDA platform (3DFD-GPU). The IAEA 3D PWR benchmark problem was calculated in the numerical test, where three different codes, including the original CPU-based sequential code, a HYPRE (High Performance Preconditioners)-based diffusion code and CITATION, were used as counterpoints to test the efficiency and accuracy of the GPU-based program. The results demonstrate both high efficiency and adequate accuracy of the GPU implementation for the neutron diffusion equation. A speedup factor of about 46 was obtained using an NVIDIA GeForce GTX470 GPU card against a 2.50 GHz Intel Quad Q9300 CPU processor. Compared with the HYPRE-based code running in parallel on an 8-core tower server, a speedup of about 2 could still be observed. More encouragingly, without any mathematical acceleration technology, the GPU implementation ran about 5 times faster than CITATION, which was itself sped up by the SOR method and the Chebyshev extrapolation technique. (authors)
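A minimal sketch of the underlying finite-difference discretization, assuming a one-group, one-dimensional fixed-source model with zero-flux boundaries (the actual code is multi-group and 3D; the mesh size and cross sections below are made up). The Gauss-Seidel sweep is the sequential analogue of the per-point updates a GPU kernel would parallelize:

```python
def solve_diffusion_1d(n, D, sigma_a, source, tol=1e-10):
    """Solve -D phi'' + sigma_a * phi = source on (0, 1) with phi = 0 at
    both boundaries, discretized on n interior mesh points and iterated
    with Gauss-Seidel until the sweep-to-sweep change falls below tol."""
    h = 1.0 / (n + 1)
    diag = 2 * D / h**2 + sigma_a   # diagonal of the tridiagonal operator
    off = -D / h**2                 # off-diagonal coupling to neighbours
    phi = [0.0] * n
    while True:
        delta = 0.0
        for i in range(n):
            left = phi[i - 1] if i > 0 else 0.0
            right = phi[i + 1] if i < n - 1 else 0.0
            new = (source - off * (left + right)) / diag
            delta = max(delta, abs(new - phi[i]))
            phi[i] = new
        if delta < tol:
            return phi
```

The 3D multi-group version replaces the tridiagonal stencil with a seven-point stencil per group, which is where the memory and time bottleneck the abstract mentions comes from.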
2D ArcPIC Code Description: Description of Methods and User / Developer Manual (second edition)
Sjobak, Kyrre Ness
2014-01-01
Vacuum discharges are one of the main limiting factors for future linear collider designs such as that of the Compact LInear Collider (CLIC). To optimize machine efficiency, maintaining the highest feasible accelerating gradient below a certain breakdown rate is desirable; understanding breakdowns can therefore help us to achieve this goal. As a part of ongoing theoretical research on vacuum discharges at the Helsinki Institute of Physics, the build-up of plasma can be investigated through the particle-in-cell method. For this purpose, we have developed the 2D ArcPIC code introduced here. We present an exhaustive description of the 2D ArcPIC code in several parts. In the first chapter, we introduce the particle-in-cell method in general and detail the techniques used in the code. In the second chapter, we describe the code and provide a documentation and derivation of the key equations occurring in it. In the third chapter, we describe utilities for running the code and analyzing the results. The last chapter...
Li, Xujing; Zheng, Weiying
2016-10-01
A new parallel code based on the discontinuous Galerkin (DG) method for hyperbolic conservation laws on three-dimensional unstructured meshes has recently been developed. This code can be used for simulations of the MHD equations, which are very important in magnetically confined plasma research. The main challenges in MHD simulations for fusion include the complex geometry of the configurations, such as plasma in tokamaks, possibly discontinuous solutions, and large-scale computing. Our newly developed code is based on three-dimensional unstructured meshes, i.e., tetrahedra, which makes the code flexible with respect to arbitrary geometries. Second-order polynomials are used on each element and an HWENO-type limiter is applied. Accuracy tests show that our scheme reaches the desired third-order accuracy, and a nonlinear shock test demonstrates that our code can capture sharp shock transitions. Moreover, one of the advantages of DG compared with classical finite element methods is that the matrices solved are localized on each element, making parallelization easy. Several simulations, including kink instabilities in toroidal geometry, are presented here. Chinese National Magnetic Confinement Fusion Science Program 2015GB110003.
Coarse mesh methods for the transport calculation in the CRONOS reactor code
Energy Technology Data Exchange (ETDEWEB)
Fedon-Magnaud, C.; Lautard, J.J.; Akherraz, B.; Wu, G.J. [Commissariat a l`Energie Atomique, Gif sur Yvette (France)
1995-12-31
Homogeneous transport methods have been recently implemented in the kinetic code CRONOS dedicated mainly to PWR calculations. Two different methods are presented. The first one is based on the even parity flux formalism and uses finite element spatial discretization and a discrete ordinates angular approximation; the treatment of the anisotropic scattering is described in detail. The second method uses the odd flux as the main unknown, it is closely connected to nodal methods. This method is used to solve two different problems, the simplified PN equations and the exact transport equation using an angular PN expansion. Numerical results are presented for some standard benchmarks and the methods are compared.
Source reconstruction for neutron coded-aperture imaging: A sparse method.
Wang, Dongming; Hu, Huasi; Zhang, Fengna; Jia, Qinggang
2017-08-01
Neutron coded-aperture imaging has been developed as an important diagnostic for inertial fusion studies in recent decades. It is used to measure the distribution of neutrons produced in deuterium-tritium plasma. Source reconstruction is an essential part of coded-aperture imaging. In this paper, we applied a sparse reconstruction method to neutron source reconstruction. This method takes advantage of the sparsity of the source image. Monte Carlo neutron transport simulations were performed to obtain the system response. An interpolation method was used when obtaining the spatially variant point spread functions at each point of the source, in order to reduce the number of point spread functions that need to be calculated by the Monte Carlo method. Source reconstructions from simulated images show that the sparse reconstruction method can achieve a higher signal-to-noise ratio and less distortion at relatively high statistical noise levels.
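The abstract does not name its sparse solver; a standard choice for sparsity-exploiting reconstruction of this kind is iterative shrinkage-thresholding (ISTA), sketched here for a small dense system matrix. This is a generic stand-in for illustration, not necessarily the authors' algorithm:

```python
def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return [max(abs(x) - t, 0.0) * (1 if x >= 0 else -1) for x in v]

def ista(A, y, lam, step, n_iter=500):
    """ISTA for min_x (1/2)||A x - y||^2 + lam * ||x||_1.
    A is a dense matrix (list of rows), y the measurement vector; step
    must be below 1 / ||A^T A|| for convergence."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(n_iter):
        # gradient of the quadratic term: A^T (A x - y)
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by shrinkage
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], step * lam)
    return x
```

In the imaging problem, A would be built from the interpolated point spread functions and x would be the (sparse) source image, flattened to a vector.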
Comparison of different methods used in integral codes to model coagulation of aerosols
Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.
2013-09-01
The methods for calculating coagulation of particles in the carrying phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module used in the ASTEC code are the most efficient ones for carrying out calculations.
SQA of finite element method (FEM) codes used for analyses of pit storage/transport packages
Energy Technology Data Exchange (ETDEWEB)
Russel, E. [Lawrence Livermore National Lab., CA (United States)
1997-11-01
This report contains viewgraphs on the software quality assurance of finite element method codes used for analyses of pit storage and transport projects. This methodology utilizes the ISO 9000-3: Guideline for application of 9001 to the development, supply, and maintenance of software, for establishing well-defined software engineering processes to consistently maintain high quality management approaches.
A Codebook Training Algorithm
Institute of Scientific and Technical Information of China (English)
徐军; 叶澄清
2000-01-01
This paper proposes a new codebook training method for vector quantization (VQ). After discussing various VQ schemes, it establishes a mathematical model and presents a training algorithm. Experimental results for image encoding with this algorithm demonstrate the efficiency of the training.
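Codebook training for VQ is commonly done with the generalized Lloyd (LBG, essentially k-means) algorithm: alternately partition the training vectors by nearest codeword and recompute each codeword as its cell's centroid. A minimal sketch of that baseline (the paper's specific refinements are not reproduced here):

```python
def train_codebook(vectors, k, n_iter=20):
    """Generalized Lloyd / LBG codebook training for vector quantization.
    vectors: list of equal-length lists of floats; k: codebook size."""
    # initialize the codebook with the first k training vectors
    codebook = [list(v) for v in vectors[:k]]
    dim = len(vectors[0])
    for _ in range(n_iter):
        # nearest-neighbour partition of the training set
        cells = [[] for _ in range(k)]
        for v in vectors:
            dists = [sum((v[d] - c[d]) ** 2 for d in range(dim))
                     for c in codebook]
            cells[dists.index(min(dists))].append(v)
        # centroid update; an empty cell keeps its old codeword
        for i, cell in enumerate(cells):
            if cell:
                codebook[i] = [sum(v[d] for v in cell) / len(cell)
                               for d in range(dim)]
    return codebook
```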
A lossless compression method for medical image sequences using JPEG-LS and interframe coding.
Miaou, Shaou-Gang; Ke, Fu-Sheng; Chen, Shu-Ching
2009-09-01
Hospitals and medical centers produce an enormous number of digital medical images every day, especially in the form of image sequences, which require considerable storage space. One solution could be the application of lossless compression. Among available methods, JPEG-LS has excellent coding performance. However, it compresses only a single picture with intracoding and does not exploit the interframe correlation among pictures. Therefore, this paper proposes a method that combines JPEG-LS with interframe coding using motion vectors to improve on the compression performance of JPEG-LS alone. Since the interframe correlation between two adjacent images in a medical image sequence is usually not as high as that in a general video sequence, the interframe coding is activated only when the interframe correlation is high enough. On six capsule endoscope image sequences under test, the proposed method achieves average compression gains of 13.3% and 26.3% over using JPEG-LS and JPEG2000 alone, respectively. Similarly, for an MRI image sequence, coding gains of 77.5% and 86.5% are correspondingly obtained.
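The correlation-gated mode decision described above can be sketched as follows. The Pearson-correlation similarity measure and the 0.9 threshold are our illustrative assumptions (the paper's exact gate is not given in the abstract), and the entropy coding itself, JPEG-LS, is omitted:

```python
def corr(a, b):
    """Pearson correlation between two equally sized frames (flat lists)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((a[i] - ma) * (b[i] - mb) for i in range(n))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((x - mb) ** 2 for x in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def choose_mode(prev_frame, frame, threshold=0.9):
    """Gate interframe coding on correlation: code the residual against the
    previous frame only when the frames are similar enough, otherwise fall
    back to intra (JPEG-LS-style) coding of the frame itself."""
    if prev_frame is not None and corr(prev_frame, frame) >= threshold:
        residual = [frame[i] - prev_frame[i] for i in range(len(frame))]
        return "inter", residual
    return "intra", frame
```

In inter mode the residual is typically small and peaked around zero, which is what makes it cheaper to entropy-code than the raw frame.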
Building America Guidance for Identifying and Overcoming Code, Standard, and Rating Method Barriers
Energy Technology Data Exchange (ETDEWEB)
Cole, P. C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Halverson, M. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2013-09-01
This guidance document was prepared using the input from the meeting summarized in the draft CSI Roadmap to provide Building America research teams and partners with specific information and approaches to identifying and overcoming potential barriers to Building America innovations arising in and/or stemming from codes, standards, and rating methods.
Institute of Scientific and Technical Information of China (English)
Wang Na; Zhang Li; Zhou Xiao'an; Jia Chuanying; Li Xia
2005-01-01
This letter exploits fundamental characteristics of the wavelet transform of an image to form a progressive, octave-based spatial resolution. Each wavelet subband is coded with a zeroblock and quadtree partitioning ordering scheme with a memory optimization technique. The method proposed in this letter is of low complexity and is efficient for Internet plug-in software.
Performance evaluation of moment-method codes on an Intel iPSC/860 hypercube computer
Energy Technology Data Exchange (ETDEWEB)
Klimkowski, K.; Ling, H. (Texas Univ., Austin (United States))
1993-09-01
An analytical evaluation is conducted of the performance of a moment-method code on a parallel computer, treating algorithmic complexity costs within the framework of matrix size and the 'subblock-size' matrix-partitioning parameter. A scaled-efficiencies analysis is conducted for the measured computation times of the matrix-fill operation and LU decomposition. 6 refs.
Yang, Chunwei; Yao, Junping; Sun, Dawei; Wang, Shicheng; Liu, Huaping
2016-05-01
Automatic target recognition in infrared imagery is a challenging problem. In this paper, a kernel sparse coding method for infrared target recognition using a covariance descriptor is proposed. First, a covariance descriptor combining the gray intensity and gradient information of the infrared target is extracted as a feature representation. Then, because the covariance descriptor lies on a non-Euclidean manifold, kernel sparse coding theory is used to address this issue. We verify the efficacy of the proposed algorithm in terms of confusion matrices on real images consisting of seven categories of infrared vehicle targets.
Development of Variational Data Assimilation Methods for the MoSST Geodynamo Code
Egbert, G. D.; Erofeeva, S.; Kuang, W.; Tangborn, A.; Dimitrova, L. L.
2013-12-01
A range of different approaches to data assimilation for Earth's geodynamo are now being pursued, from sequential schemes based on approximate covariances of various degrees of sophistication, to variational methods for models of varying degrees of physical completeness. While variational methods require development of adjoint (and possibly tangent-linear) variants of the forward code---a challenging programming task for a fully self-consistent modern dynamo code---this approach may ultimately offer significant advantages. For example, adjoint-based variational approaches allow initial, boundary, and forcing terms to be explicitly adjusted to combine data from modern and historical eras into dynamically consistent maps of core state, including flow, buoyancy and magnetic fields. Here we describe development of tangent-linear and adjoint codes for the Modular Scalable Self-consistent Three-dimensional (MoSST) geodynamo simulator, and present initial results from simple synthetic data assimilation experiments. Our approach has been to develop the exact linearization and adjoint of the actual discrete functions represented by the computer code. To do this we use a divide-and-conquer approach: the code is decomposed as the sequential action of a series of linear and non-linear procedures on specified inputs. Non-linear procedures are first linearized about a pre-computed input background state (derived by running the non-linear forward model), and a tangent-linear time-step code is developed. For small perturbations of the initial state the linearization appears to remain valid for times comparable to the secular variation time-scale. Adjoints for each linear (or linearized) procedure were then developed and tested separately (for symmetry), and then merged into adjoint procedures of increasing complexity. We have completed development of the adjoint for a serial version of the MoSST code, explored the time limits of forward-operator linearization, and discuss next steps
Carels, Nicolas; Frías, Diego
2013-01-01
In this study, we investigated the modalities of coding open reading frame (cORF) classification of expressed sequence tags (EST) by using the universal feature method (UFM). The UFM algorithm is based on the scoring of purine bias (Rrr) and stop codon frequencies. UFM classifies ORFs as coding or non-coding through a score based on 5 factors: (i) stop codon frequency; (ii) the product of the probabilities of purines occurring in the three positions of nucleotide triplets; (iii) the product of the probabilities of Cytosine (C), Guanine (G), and Adenine (A) occurring in the 1st, 2nd, and 3rd positions of triplets, respectively; (iv) the probabilities of a G occurring in the 1st and 2nd positions of triplets; and (v) the probabilities of a T occurring in the 1st and an A in the 2nd position of triplets. Because UFM is based on primary determinants of coding sequences that are conserved throughout the biosphere, it is suitable for cORF classification of any sequence in eukaryote transcriptomes without prior knowledge. Considering the protein sequences of the Protein Data Bank (RCSB PDB or more simply PDB) as a reference, we found that UFM classifies cORFs of ≥200 bp (if the coding strand is known) and cORFs of ≥300 bp (if the coding strand is unknown), and releases them in their coding strand and coding frame, which allows their automatic translation into protein sequences with a success rate equal to or higher than 95%. We first established the statistical parameters of UFM using ESTs from Plasmodium falciparum, Arabidopsis thaliana, Oryza sativa, Zea mays, Drosophila melanogaster, Homo sapiens and Chlamydomonas reinhardtii in reference to the protein sequences of PDB. Second, we showed that the success rate of cORF classification using UFM is expected to apply to approximately 95% of higher eukaryote genes that encode for proteins. Third, we used UFM in combination with CAP3 to assemble large EST samples into cORFs that we used to analyze transcriptome
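Two of the five UFM factors, stop-codon frequency and positional purine bias, can be computed directly from a nucleotide string. The frame-selection function below is a toy illustration combining just these two factors, not the full UFM classifier with its calibrated thresholds:

```python
def stop_codon_freq(seq, frame):
    """Fraction of codons in the given reading frame (0, 1 or 2) that are
    stop codons (TAA, TAG, TGA)."""
    stops = {"TAA", "TAG", "TGA"}
    codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
    return sum(c in stops for c in codons) / len(codons) if codons else 0.0

def purine_bias(seq, frame):
    """Product of purine (A/G) frequencies at the three codon positions;
    coding frames tend to show a characteristic purine bias (the Rrr
    factor of UFM)."""
    codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
    prod = 1.0
    for pos in range(3):
        prod *= sum(c[pos] in "AG" for c in codons) / len(codons)
    return prod

def best_frame(seq):
    """Pick the frame with the fewest stops, breaking ties by purine bias."""
    return min(range(3),
               key=lambda f: (stop_codon_freq(seq, f), -purine_bias(seq, f)))
```

The real UFM additionally scores C/G/A positional probabilities and two further positional terms, and calibrates its decision against reference protein sequences, as the abstract describes.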
Directory of Open Access Journals (Sweden)
Deng Sen
2015-04-01
Full Text Available Impulse components in vibration signals are important fault features of complex machines. The sparse coding (SC) algorithm has been introduced as an impulse feature extraction method, but it cannot guarantee satisfactory performance in processing vibration signals with heavy background noise. In this paper, a method based on fusion sparse coding (FSC) and online dictionary learning is proposed to extract impulses efficiently. First, a fusion scheme for different sparse coding algorithms is presented to ensure higher reconstruction accuracy. Then, an improved online dictionary learning method using the FSC scheme is established to obtain a redundant dictionary that can capture the specific features of training samples and reconstruct a sparse approximation of the vibration signals. Simulations show that this method performs well in solving for sparse coefficients and training the redundant dictionary compared with other methods. Finally, the proposed method is applied to processing aircraft engine rotor vibration signals. Compared with other feature extraction approaches, our method can extract impulse features accurately and efficiently from heavily noisy vibration signals, which provides significant support for machinery fault detection and diagnosis.
Building America Guidance for Identifying and Overcoming Code, Standard, and Rating Method Barriers
Energy Technology Data Exchange (ETDEWEB)
Cole, Pamala C.; Halverson, Mark A.
2013-09-01
The U.S. Department of Energy’s (DOE) Building America program implemented a new Codes and Standards Innovation (CSI) Team in 2013. The Team’s mission is to assist Building America (BA) research teams and partners in identifying and resolving conflicts between Building America innovations and the various codes and standards that govern the construction of residences. A CSI Roadmap was completed in September, 2013. This guidance document was prepared using the information in the CSI Roadmap to provide BA research teams and partners with specific information and approaches to identifying and overcoming potential barriers to Building America (BA) innovations arising in and/or stemming from codes, standards, and rating methods. For more information on the BA CSI team, please email: CSITeam@pnnl.gov
Development of improved methods for the LWR lattice physics code EPRI-CELL
Energy Technology Data Exchange (ETDEWEB)
Williams, M.L.; Wright, R.Q.; Barhen, J.
1982-07-01
A number of improvements have been made by ORNL to the lattice physics code EPRI-CELL (E-C) which is widely used by utilities for analysis of power reactors. The code modifications were made mainly in the thermal and epithermal routines and resulted in improved reactor physics approximations and more efficient running times. The improvements in the thermal flux calculation included implementation of a group-dependent rebalance procedure to accelerate the iterative process and a more rigorous calculation of interval-to-interval collision probabilities. The epithermal resonance shielding methods used in the code have been extensively studied to determine its major approximations and to examine the sensitivity of computed results to these approximations. The study has resulted in several improvements in the original methodology.
Two-Level Bregman Method for MRI Reconstruction with Graph Regularized Sparse Coding
Institute of Scientific and Technical Information of China (English)
刘且根; 卢红阳; 张明辉
2016-01-01
In this paper, a two-level Bregman method with graph-regularized sparse coding is presented for highly undersampled magnetic resonance image reconstruction. The graph-regularized sparse coding is incorporated into the two-level Bregman iterative procedure, which enforces the sampled-data constraints in the outer level and updates the dictionary and sparse representation in the inner level. Graph-regularized sparse coding and simple dictionary updating applied in the inner minimization make the proposed algorithm converge within a relatively small number of iterations. Experimental results demonstrate that the proposed algorithm can consistently reconstruct both simulated MR images and real MR data efficiently, and it outperforms current state-of-the-art approaches in terms of visual comparisons and quantitative measures.
Energy Technology Data Exchange (ETDEWEB)
Poole, G.; Heroux, M. [Engineering Applications Group, Eagan, MN (United States)
1994-12-31
This paper focuses on recent work with iterative methods in two widely used industrial applications codes. The ANSYS program, a general-purpose finite element code widely used in structural analysis applications, has now added an iterative solver option. Some results are given from real applications comparing performance with the traditional parallel/vector frontal solver used in ANSYS. The discussion of the applicability of iterative solvers as general-purpose solvers covers robustness as well as memory requirements and CPU performance. The FIDAP program is a widely used CFD code that uses iterative solvers routinely. A brief description of the preconditioners used and some performance enhancements for CRAY parallel/vector systems is given. The solution of large-scale applications in structures and CFD includes examples from industry problems solved on CRAY systems.
Artificial viscosity method for the design of supercritical airfoils. [Analysis code H
Energy Technology Data Exchange (ETDEWEB)
McFadden, G.B.
1979-07-01
The need for increased efficiency in the use of our energy resources has stimulated applied research in many areas. Recently progress has been made in the field of aerodynamics, where the development of the supercritical wing promises significant savings in the fuel consumption of aircraft operating near the speed of sound. Computational transonic aerodynamics has proved to be a useful tool in the design and evaluation of these wings. A numerical technique for the design of two-dimensional supercritical wing sections with low wave drag is presented. The method is actually a design mode of the analysis code H developed by Bauer, Garabedian, and Korn. This analysis code gives excellent agreement with experimental results and is used widely by the aircraft industry. The addition of a conceptually simple design version should make this code even more useful to the engineering public.
An efficient simulation method of a cyclotron sector-focusing magnet using 2D Poisson code
Energy Technology Data Exchange (ETDEWEB)
Gad Elmowla, Khaled Mohamed M; Chai, Jong Seo, E-mail: jschai@skku.edu; Yeon, Yeong H; Kim, Sangbum; Ghergherehchi, Mitra
2016-10-01
In this paper we discuss design simulations of a spiral magnet using a 2D Poisson code. The Independent Layers Method (ILM) is a new technique developed to enable the use of a two-dimensional simulation code to calculate a non-symmetric three-dimensional magnetic field. In ILM, the magnet pole is divided into successive independent layers, and the hill-and-valley shape around the azimuthal direction is implemented using a reference magnet. Normalizing the magnetic field in the reference magnet produces a profile that can be multiplied by the maximum magnetic field in the hill magnet, which is a dipole magnet made of the hills at the same radius. Both magnets are then calculated using the 2D Poisson SUPERFISH code. A fully three-dimensional magnetic field is then produced using TOSCA for the original spiral magnet, and the comparison of the 2D and 3D results shows good agreement between the two.
Moving object detection method using H.263 video coded data for remote surveillance systems
Kohno, Atsushi; Hata, Toshihiko; Ozaki, Minoru
1998-12-01
This paper describes a moving object detection method using H.263 coded data. For video surveillance systems, it is necessary to detect unusual states because there are a lot of cameras in the system and video surveillance is tedious in normal states. We examine the information extracted from H.263 coded data and propose a method of detecting alarm events from that information. Our method consists of two steps. In the first step, using motion vector information, a moving object can be detected based on the vector's size and the similarities between the vectors in one frame and the two adjoining frames. In the second step, using DCT coefficients, the detection errors caused by the change of the luminous intensity can be eliminated based on the characteristics of the H.263's DCT coefficients. Thus moving objects are detected by analyzing the motion vectors and DCT coefficients, and we present some experimental results that show the effectiveness of our method.
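The two-step detection idea above (vector magnitude plus similarity with the vectors in the two adjoining frames) can be sketched as follows. This is our reading of the abstract, not the authors' implementation; all thresholds and names are illustrative:

```python
import numpy as np

def frame_has_motion(mv_frames, t, mag_thresh=2.0, count_thresh=5, sim_thresh=0.8):
    """Flag frame t as containing a moving object when enough motion vectors
    are both large and directionally consistent with the same block's vectors
    in the previous and next frames.
    mv_frames: array of shape (T, N, 2) of per-block motion vectors."""
    prev_f, cur, nxt = mv_frames[t - 1], mv_frames[t], mv_frames[t + 1]
    mags = np.linalg.norm(cur, axis=1)
    big = mags > mag_thresh

    def cos_sim(a, b):
        denom = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-9
        return (a * b).sum(axis=1) / denom

    consistent = (cos_sim(cur, prev_f) > sim_thresh) & (cos_sim(cur, nxt) > sim_thresh)
    return int((big & consistent).sum()) >= count_thresh
```

The second step in the paper, rejecting luminance-change false alarms via DCT coefficients, would be applied only to frames this first test flags.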
Institute of Scientific and Technical Information of China (English)
Zhu Liping
1996-01-01
Through analysis of the Walsh modulation and demodulation process, this article advances an adaptive error-limiting method suitable for Walsh-code keyed multiplexing in mine monitoring systems. Theoretical analysis and circuit experiments prove that the method is easy to implement and can improve the quality of information transmission while meeting the system's patrol-test timing requirement, without increasing system investment.
Introduction into scientific work methods-a necessity when performance-based codes are introduced
DEFF Research Database (Denmark)
Dederichs, Anne; Sørensen, Lars Schiøtt
The introduction of performance-based codes in Denmark in 2004 requires new competences from people working with different aspects of fire safety in industry and the public sector. This abstract presents an attempt at reducing problems with handling and analysing the mathematical methods and CFD models when applying performance-based codes. This is done within the educational program "Master of Fire Safety Engineering" at the Department of Civil Engineering at the Technical University of Denmark. It was found that the students had general problems with academic methods. Therefore, a new educational module is introduced as a result of this investigation. The course is positioned in the program prior to the work on the final project. In the course, a mini project is worked out, which provides the students with extra training in academic methods.
Coding Methods for the NMF Approach to Speech Recognition and Vocabulary Acquisition
Directory of Open Access Journals (Sweden)
Meng Sun
2012-12-01
This paper aims at improving the accuracy of the non-negative matrix factorization approach to word learning and recognition of spoken utterances. We propose and compare three coding methods to alleviate quantization errors involved in the vector quantization (VQ) of speech spectra: multi-codebooks, soft VQ, and adaptive VQ. We evaluate on the task of spotting a vocabulary of 50 keywords in continuous speech. The error rates of multi-codebooks decreased with an increasing number of codebooks, but the accuracy leveled off around 5 to 10 codebooks. Soft VQ and adaptive VQ made a better trade-off between the required memory and the accuracy. The best of the proposed methods reduces the error rate to 1.2% from the 1.9% obtained with a single codebook. The coding methods and the model framework may also prove useful for applications such as topic discovery/detection and mining of sequential patterns.
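Soft VQ, one of the three coding methods compared above, can be sketched in a few lines: instead of activating only the nearest codeword (hard VQ), each spectral frame activates every codeword with a weight that decays with distance. This is a generic illustration under our own naming, not the authors' exact weighting rule:

```python
import numpy as np

def soft_vq(frame, codebook, beta=1.0):
    """Soft vector quantization: return a normalized activation for every
    codeword, decaying with squared distance to the input frame. The soft
    activations reduce the quantization error fed into the NMF model."""
    d2 = ((codebook - frame) ** 2).sum(axis=1)   # squared distance to each codeword
    w = np.exp(-beta * d2)
    return w / w.sum()                           # normalized activation vector
```

A hard-VQ frame would instead be the one-hot vector `np.argmax`-selected from these weights; the soft version preserves information about near-ties between codewords.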
A WYNER-ZIV VIDEO CODING METHOD UTILIZING MIXTURE CORRELATION NOISE MODEL
Institute of Scientific and Technical Information of China (English)
Hu Xiaofei; Zhu Xiuchang
2012-01-01
In Wyner-Ziv (WZ) Distributed Video Coding (DVC), a correlation noise model is often used to describe the error distribution between the WZ frame and the side information. The accuracy of the model directly influences the performance of the video coder. A mixture correlation noise model in the Discrete Cosine Transform (DCT) domain for WZ video coding is established in this paper. Different correlation noise estimation methods are used for the direct-current and alternating-current coefficients. A parameter estimation method based on the expectation-maximization algorithm is used to estimate the Laplace distribution center of the direct-current frequency band, and a Mixture Laplace-Uniform Distribution Model (MLUDM) is established for the alternating-current coefficients. Experimental results suggest that the proposed mixture correlation noise model can accurately describe the heavy tail and sudden changes of the noise at high rate, and that it significantly improves coding efficiency compared with the noise model presented by DIStributed COding for Video sERvices (DISCOVER).
Investigate Methods to Decrease Compilation Time-AX-Program Code Group Computer Science R& D Project
Energy Technology Data Exchange (ETDEWEB)
Cottom, T
2003-06-11
Large simulation codes can take on the order of hours to compile from scratch. In Kull, which uses generic programming techniques, a significant portion of the time is spent generating and compiling template instantiations. I would like to investigate methods that would decrease the overall compilation time for large codes. These would be methods which could then be applied, hopefully, as standard practice to any large code. Success is measured by the overall decrease in wall-clock time a developer spends waiting for an executable. Analyzing the make system of a slow-to-build project can benefit all developers on the project. Taking the time to analyze the number of processors used over the life of the build, and restructuring the system to maximize parallelization, can significantly reduce build times. Distributing the build across multiple machines with the same configuration can increase the number of available processors and help evenly balance the load. Becoming familiar with compiler options can have its benefits as well. The combined time savings can be significant. Initial compilation time for Kull on OSF1 was approximately 3 hours; the final time after completion is 16 minutes. Initial compilation time for Kull on AIX was approximately 2 hours; the final time after completion is 25 minutes. Developers now spend 3 hours less waiting for a Kull executable on OSF1, and 2 hours less on AIX platforms. In the eyes of many Kull code developers, the project was a huge success.
Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes
Energy Technology Data Exchange (ETDEWEB)
Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.
2002-09-11
The calculations presented compare the performance of three Monte Carlo codes, PENELOPE-1999, MCNP-4C, and PITS, for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is a water cylinder, equivalent for the three codes, but with a different internal scoring geometry: hollow cylinders for PENELOPE and MCNP, whereas spheres are used for the PITS code. A cylindrical cell geometry with scoring volumes shaped as hollow cylinders was initially selected for PENELOPE and MCNP because it better represents the actual shape and dimensions of a cell and improves computer-time efficiency compared to spherical internal volumes. Some of the transfer points and energy transfers that constitute a radiation track may actually fall in the space between spheres, outside the spherical scoring volume. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time when using this code compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important for addressing dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation, even with such small geometries and low energies involved, which are far below the normal use for which the code was created. Further work (initiated in Summer 2002) is still needed, however, to create a user-code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes, and also microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.
Tanana, Michael; Hallgren, Kevin A; Imel, Zac E; Atkins, David C; Srikumar, Vivek
2016-06-01
Motivational interviewing (MI) is an efficacious treatment for substance use disorders and other problem behaviors. Studies on MI fidelity and mechanisms of change typically use human raters to code therapy sessions, which requires considerable time, training, and financial costs. Natural language processing techniques have recently been utilized for coding MI sessions using machine learning techniques, rather than human coders, and preliminary results have suggested these methods hold promise. The current study extends this previous work by introducing two natural language processing models for automatically coding MI sessions via computer. The two models differ in the way they semantically represent session content, utilizing either 1) simple discrete sentence features (DSF model) and 2) more complex recursive neural networks (RNN model). Utterance- and session-level predictions from these models were compared to ratings provided by human coders using a large sample of MI sessions (N=341 sessions; 78,977 clinician and client talk turns) from 6 MI studies. Results show that the DSF model generally had slightly better performance compared to the RNN model. The DSF model had "good" or higher utterance-level agreement with human coders (Cohen's kappa>0.60) for open and closed questions, affirm, giving information, and follow/neutral (all therapist codes); considerably higher agreement was obtained for session-level indices, and many estimates were competitive with human-to-human agreement. However, there was poor agreement for client change talk, client sustain talk, and therapist MI-inconsistent behaviors. Natural language processing methods provide accurate representations of human derived behavioral codes and could offer substantial improvements to the efficiency and scale in which MI mechanisms of change research and fidelity monitoring are conducted.
A novel quantum LSB-based steganography method using the Gray code for colored quantum images
Heidari, Shahrokh; Farzadnia, Ehsan
2017-10-01
As one of the prevalent data-hiding techniques, steganography is defined as the act of imperceptibly concealing secret information in a cover multimedia encompassing text, image, video and audio, in order to perform interaction between the sender and the receiver in which nobody except the receiver can figure out the secret data. In this approach, a quantum LSB-based steganography method utilizing the Gray code for quantum RGB images is investigated. The method uses the Gray code to accommodate two secret qubits in the 3 LSBs of each pixel simultaneously, according to reference tables. Experimental results, analyzed in the MATLAB environment, show that the present scheme performs well and is more secure and applicable than previous schemes found in the literature.
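A classical analogue of the Gray-code LSB idea can be sketched as follows: the secret bits are written into a pixel's low-order bits via their binary-reflected Gray code. This is only a loose classical illustration of the abstract's quantum scheme (which uses reference tables over 3 LSBs); the 2-bit embedding here is our simplification:

```python
def to_gray(n):
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def from_gray(g):
    """Inverse of the binary-reflected Gray code."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def embed_2bits(pixel, bits):
    """Hide a 2-bit secret in a pixel channel by replacing its two LSBs
    with the Gray code of the secret (classical sketch, not the paper's
    quantum circuit)."""
    return (pixel & ~0b11) | to_gray(bits)

def extract_2bits(pixel):
    """Recover the 2-bit secret from the pixel's two LSBs."""
    return from_gray(pixel & 0b11)
```

Because adjacent Gray codewords differ in one bit, an embedding error of one LSB changes the recovered secret by at most one codeword step, which is part of the appeal of Gray coding in steganography.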
A method for detecting code security vulnerability based on variables tracking with validated-tree
Institute of Scientific and Technical Information of China (English)
2008-01-01
SQL injection poses a major threat to the application-level security of databases, and there is no systematic solution to these attacks. Different from traditional run-time security strategies such as IDS and firewalls, this paper focuses on a solution at the outset: it presents a method to find vulnerabilities by analyzing the source code. The concept of a validated tree is developed to track variables referenced by database operations in scripts. By checking whether these variables are influenced by outside inputs, the database operations are proved to be secure or not. This method has the advantages of high accuracy and efficiency as well as low cost, and it is applicable to any type of web application platform. It is implemented in the software code vulnerabilities of SQL injection detector (CVSID). Its validity and efficiency are demonstrated with an example.
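The core idea, tracking whether variables that reach a database operation derive from outside input without passing through validation, can be sketched with a toy taint analysis. This is our illustration of the general technique, not the paper's validated-tree algorithm; the string-matching rules below are deliberately simplistic:

```python
def find_sql_injection(assignments, sinks, sanitizers=("escape",)):
    """Toy source-code taint check: a variable is tainted if it derives from
    external input (here, anything referencing 'request.') and no sanitizer
    call appears in its defining expression.
    assignments: list of (var, source_expr) in program order.
    sinks: variable names used in database operations."""
    tainted = set()
    for var, expr in assignments:
        if any(s in expr for s in sanitizers):
            tainted.discard(var)                  # validated -> trusted
        elif "request." in expr or any(t in expr for t in tainted):
            tainted.add(var)                      # external input or propagation
        else:
            tainted.discard(var)                  # reassigned from a clean source
    return [v for v in sinks if v in tainted]
```

A real analysis would work on an AST and track control flow, but the pass structure is the same: mark sources, propagate through assignments, clear on validation, and report tainted variables reaching SQL sinks.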
A novel coding method for gene mutation correction during protein translation process.
Zhang, Lei; Tian, Fengchun; Wang, Shiyuan; Liu, Xiao
2012-03-07
In gene expression, gene mutations often negatively affect protein translation in prokaryotic organisms. With consideration of the influences produced by gene mutation, a novel method based on error-correction coding theory is proposed in this paper for modeling and detection of translation initiation. In the proposed method, combined with a one-dimensional codebook from block coding, a decoding method based on the minimum Hamming distance is designed for the analysis of translation efficiency. The results show that the proposed method can effectively recognize biologically significant regions such as the Shine-Dalgarno region within the mRNA leader sequences. Also, a global analysis of single-base and multiple-base mutations of the Shine-Dalgarno sequences is established. Compared with other published experimental methods for mutation analysis, translation initiation cannot be disturbed by multiple-base mutations when using the proposed method, which shows the effectiveness of this method in improving translation efficiency and its biological relevance for genetic regulatory systems.
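Minimum-Hamming-distance decoding, the decoding rule named above, maps a (possibly mutated) sequence to the nearest codeword in the codebook. A minimal sketch, with an illustrative Shine-Dalgarno-like codebook of our own choosing:

```python
def hamming_distance(a, b):
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def decode_min_hamming(received, codebook):
    """Minimum-Hamming-distance decoding: return the codeword closest to
    the received sequence, so that a few base mutations are 'corrected'
    back to the intended codeword."""
    return min(codebook, key=lambda c: hamming_distance(received, c))
```

In the paper's framing, a leader sequence region decodes to a functional codeword (e.g. a Shine-Dalgarno consensus) as long as the number of mutated bases stays below the codebook's error-correction radius.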
Institute of Scientific and Technical Information of China (English)
Anonymous
2005-01-01
The known design criteria for Space-Time Trellis Codes (STTC) on the slow Rayleigh fading channel are the rank, determinant, and trace criteria. These criteria are disadvantageous both in computation and in performance. By classifying the error events of STTC, a new criterion is presented for slow Rayleigh fading channels. Based on this criterion, an effective and straightforward multi-step method is proposed to construct codes with better performance. This method greatly reduces the search computation. Simulation results show that the computer-searched codes have the same or even better performance than previously reported codes.
Hybrid parallel code acceleration methods in full-core reactor physics calculations
Energy Technology Data Exchange (ETDEWEB)
Courau, T.; Plagne, L.; Ponicot, A. [EDF R and D, 1, Avenue du General de Gaulle, 92141 Clamart Cedex (France); Sjoden, G. [Nuclear and Radiological Engineering, Georgia Inst. of Technology, Atlanta, GA 30332 (United States)
2012-07-01
When dealing with nuclear reactor calculation schemes, the need for three dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses a 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the k{sub eff}, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)
The piecewise-linear predictor-corrector code - A Lagrangian-remap method for astrophysical flows
Lufkin, Eric A.; Hawley, John F.
1993-01-01
We describe a time-explicit finite-difference algorithm for solving the nonlinear fluid equations. The method is similar to existing Eulerian schemes in its use of operator-splitting and artificial viscosity, except that we solve the Lagrangian equations of motion with a predictor-corrector and then remap onto a fixed Eulerian grid. The remap is formulated to eliminate errors associated with coordinate singularities, with a general prescription for remaps of arbitrary order. We perform a comprehensive series of tests on standard problems. Self-convergence tests show that the code has a second-order rate of convergence in smooth, two-dimensional flow, with pressure forces, gravity, and curvilinear geometry included. While not as accurate on idealized problems as high-order Riemann-solving schemes, the predictor-corrector Lagrangian-remap code has great flexibility for application to a variety of astrophysical problems.
Implementation of discrete transfer radiation method into swift computational fluid dynamics code
Directory of Open Access Journals (Sweden)
Baburić Mario
2004-01-01
Computational Fluid Dynamics (CFD) has developed into a powerful tool widely used in science, technology, and industrial design applications whenever fluid flow, heat transfer, combustion, or other complicated physical processes are involved. During decades of development of CFD codes, scientists were writing their own codes, which had to include not only the model of the processes of interest, but also a whole spectrum of necessary CFD procedures, numerical techniques, pre-processing, and post-processing. That arrested much of the scientists' effort in work that has been copied many times over and was not actually producing added value. The arrival of commercial CFD codes brought relief to many engineers, who could now use the user-function approach for modelling purposes, entrusting the application to do the rest of the work. This paper shows the implementation of the Discrete Transfer Radiation Method into AVL's commercial CFD code SWIFT with the help of user-defined functions. A few standard verification test cases were performed first, in order to check the implementation of the radiation method itself, where comparisons with available analytic solutions could be made. Afterwards, validation was done by simulating the combustion in the experimental furnace at IJmuiden (Netherlands), for which experimental measurements were available. The importance of radiation prediction in such real-size furnaces is proved again to be substantial, as radiation itself takes the major fraction of the overall heat transfer. The oil-combustion model used in the simulations was a semi-empirical one developed at the Power Engineering Department, which is suitable for a wide range of typical oil flames.
Pretreatment Method of Quick Response Code
Institute of Scientific and Technical Information of China (English)
杨佳丽; 高美凤
2011-01-01
Aiming at the problem that Quick Response (QR) codes captured by a camera suffer from uneven illumination, rotation, and distortion, this paper proposes an adaptive threshold method. Combining the Roberts operator with wavelet modulus maxima, a new edge detection algorithm overcomes the noise sensitivity of traditional algorithms and accurately extracts the edge information of the QR code. The QR code is located according to the principle that the four vertices of the quadrilateral have the shortest distance to lines parallel to the diagonal, and a bilinear interpolation algorithm is used to correct the distorted QR code. Experimental results show that the method is reliable.
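The binarization step for unevenly lit QR images can be illustrated with a simple local-mean adaptive threshold: each pixel is compared against the mean of its surrounding block rather than a single global value. This is a generic sketch of adaptive thresholding under our own parameter names, not the paper's exact algorithm:

```python
import numpy as np

def adaptive_threshold(img, block=16, offset=5):
    """Binarize a grayscale image by comparing each pixel to the mean of its
    local block, minus a small offset; local means tolerate the uneven
    illumination that defeats a single global threshold."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=np.uint8)
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = img[i:i+block, j:j+block]
            out[i:i+block, j:j+block] = (tile > tile.mean() - offset) * 255
    return out
```

With a global threshold, a QR code half in shadow loses its dark-module/light-module contrast on one side; the per-block mean restores it because each block is thresholded relative to its own illumination level.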
Pressure vessels design methods using the codes, fracture mechanics and multiaxial fatigue
Directory of Open Access Journals (Sweden)
Fatima Majid
2016-10-01
This paper gives a highlight of pressure vessel (PV) design methods, to initiate new engineers and new researchers into the basics and to summarize the know-how of PV design. This understanding will help them select the appropriate method. There are several types of tanks, distinguished by the operating pressure, the temperature, and the safety system to predict. The selection of one or the other of these tanks depends on environmental regulations, the geographic location, and the materials used. The design theory of PVs is detailed in various codes and standards, such as ASME, API, and CODAP, as well as in material-selection standards such as EN 10025 or EN 10028. While designing a PV, we must assess the fatigue of its material through the different methods and theories found in the literature and in specific codes. In this work, the fatigue-lifetime calculation through fracture mechanics theory and the different methods found in ASME VIII Div. 2, API 579-1, and EN 13445-3, Annex B, will be detailed, with a comparison between these methods. In many articles in the literature, uniaxial fatigue has been covered in great detail, while the multiaxial effect has not been considered as it should be. In this paper we discuss biaxial fatigue due to cyclic pressure in thick-walled PVs, and an overview of multiaxial fatigue in PVs is also given.
Institute of Scientific and Technical Information of China (English)
Khalid H. Sayhood; Wu Lenan
2003-01-01
The multilevel modulation techniques of M-Differential Amplitude Phase Shift Keying (DAPSK) have been proposed in combination with a Turbo code scheme for digital radio broadcasting bands below 30 MHz. A comparison of this modulation method with channel coding in Additive White Gaussian Noise (AWGN) and multi-path fading channels is presented. The analysis provides an iterative decoding of the Turbo code.
A New Region-of-interest Coding Method to Control the Relative Quality of Progressive Decoded Images
Institute of Scientific and Technical Information of China (English)
LI Ji-liang; FANG Xiang-zhong; ZHANG Dong-dong
2007-01-01
Based on the ideas of controlling relative quality and rearranging bitplanes, a new ROI coding method for JPEG2000 is proposed, which shifts and rearranges bitplanes in units of bitplane groups. It can code arbitrarily shaped ROIs without shape coding and can reserve almost any percentage of background information. It can also control the relative quality of progressively decoded images. In addition, it is easy to implement and has low computational cost.
Ki, Dae Wook; Kim, Jae Ho
2013-07-01
We propose a fast new multiple run_before decoding method for context-adaptive variable length coding (CAVLC). The transform coefficients are coded using CAVLC, in which the run_before symbols are generated for a 4×4 block input. To speed up CAVLC decoding, the run_before symbols need to be decoded in parallel. We implemented a new CAVLC table for simultaneous decoding of up to three run_befores. The simulation results show a total speed-up factor of 205% to 144% over various resolutions and quantization steps.
Directory of Open Access Journals (Sweden)
Thomine O.
2013-12-01
The present work deals with an optimization procedure developed in the full-f global GYrokinetic SEmi-LAgrangian code (GYSELA). Optimizing the writing of the restart files is necessary to reduce the computational impact of crashes. These files require a very large memory space, particularly so for very large mesh sizes. The limited bandwidth of the data pipe between the computing nodes and the storage system induces a non-scalable part in the GYSELA code, which increases with the mesh size; indeed, the transfer time from RAM to disk depends linearly on the file size. A non-synchronized file-writing procedure is therefore crucial. A new GYSELA module has been developed. This asynchronous procedure allows frequent writing of the restart files, while preventing a severe slowdown due to the limited writing bandwidth. The method has been improved to generate a checksum control of the restart files, and to automatically rerun the code in case of a crash from any cause.
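The asynchronous-write-plus-checksum pattern described above can be sketched with a background writer thread. This is a minimal generic illustration, not GYSELA's HPC implementation (which targets parallel file systems); all class and file names are ours:

```python
import hashlib
import os
import queue
import tempfile
import threading

class AsyncCheckpointWriter:
    """Hand checkpoint bytes to a background thread so the compute loop is
    not blocked by disk bandwidth; a sidecar checksum file lets a restart
    detect a checkpoint corrupted by a mid-write crash."""
    def __init__(self):
        self.q = queue.Queue()
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def _run(self):
        while True:
            item = self.q.get()
            if item is None:          # shutdown sentinel
                break
            path, data = item
            with open(path, "wb") as f:
                f.write(data)
            with open(path + ".sha256", "w") as f:
                f.write(hashlib.sha256(data).hexdigest())

    def write(self, path, data):
        """Queue a checkpoint; returns immediately."""
        self.q.put((path, data))

    def close(self):
        """Flush queued writes and stop the worker."""
        self.q.put(None)
        self.worker.join()

def verify(path):
    """True if the checkpoint matches its recorded checksum."""
    data = open(path, "rb").read()
    return hashlib.sha256(data).hexdigest() == open(path + ".sha256").read()
```

The compute loop calls `write()` and continues; only `close()` (or a restart's `verify()`) synchronizes with the disk.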
Energy Technology Data Exchange (ETDEWEB)
Kida, Takashi; Umeda, Miki; Sugikawa, Susumu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
2003-03-01
MOX dissolution using silver-mediated electrochemical method will be employed for the preparation of plutonium nitrate solution in the criticality safety experiments in the Nuclear Fuel Cycle Safety Engineering Research Facility (NUCEF). A simulation code for the MOX dissolution has been developed for the operating support. The present report describes the outline of the simulation code, a comparison with the experimental data and a parameter study on the MOX dissolution. The principle of this code is based on the Zundelevich's model for PuO{sub 2} dissolution using Ag(II). The influence of nitrous acid on the material balance of Ag(II) is taken into consideration and the surface area of MOX powder is evaluated by particle size distribution in this model. The comparison with experimental data was carried out to confirm the validity of this model. It was confirmed that the behavior of MOX dissolution could adequately be simulated using an appropriate MOX dissolution rate constant. It was found from the result of parameter studies that MOX particle size was major governing factor on the dissolution rate. (author)
Coding Model and Mapping Method of Spherical Diamond Discrete Grids Based on Icosahedron
Directory of Open Access Journals (Sweden)
LIN Bingxian
2016-12-01
A Discrete Global Grid (DGG) provides a fundamental environment for organizing and managing global-scale spatial data. A DGG's encoding scheme, which avoids coordinate transformation between different coordinate reference frames and reduces the complexity of spatial analysis, contributes a lot to the multi-scale expression and unified modeling of spatial data. Compared with other kinds of DGGs, the Diamond Discrete Global Grid (DDGG) based on the icosahedron benefits the integration and expression of spherical spatial data thanks to its much better geometric properties. However, its structure is more complicated than that of a DDGG on the octahedron, because the edges of its initial diamonds do not fit the meridians and parallels. New challenges are therefore posed for the construction of a hierarchical encoding system and of a mapping relationship with geographic coordinates. On this issue, this paper presents a DDGG coding system based on the Hilbert curve and designs conversion methods between codes and geographic coordinates. The results indicate that this Hilbert-curve-based encoding system can express scale and location information implicitly, carries the similarity between the DDGG and a planar grid into practice, and balances the efficiency and accuracy of conversion between codes and geographic coordinates, in order to support the modeling, integrated management, and spatial analysis of massive global spatial data.
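The planar building block of such a Hilbert-curve encoding is the conversion between a one-dimensional Hilbert index and 2-D grid coordinates. The standard iterative algorithm is sketched below; the spherical diamond-grid mapping in the paper layers its own geometry on top of this, which we do not attempt here:

```python
def hilbert_d2xy(order, d):
    """Convert a 1-D Hilbert-curve index d into (x, y) coordinates on a
    2**order x 2**order grid. Consecutive indices map to adjacent cells,
    which is why Hilbert codes preserve spatial locality."""
    x = y = 0
    s = 1
    while s < (1 << order):
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx                      # move into the chosen quadrant
        y += s * ry
        d //= 4
        s *= 2
    return x, y
```

The inverse (coordinates to index) runs the same quadrant logic in reverse; together the pair gives the code-to-coordinate conversion that a DGG encoding system needs.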
Parallel processing method for two-dimensional Sn transport code DOT3.5
Energy Technology Data Exchange (ETDEWEB)
Uematsu, Mikio [Toshiba Corp., Kawasaki, Kanagawa (Japan)
1998-03-01
A parallel processing method for the two-dimensional Sn transport code DOT3.5 has been developed to achieve drastic reduction of computation time. In the proposed method, parallelization is made with angular domain decomposition and/or space domain decomposition. Calculational speedup for parallel processing by angular domain decomposition is achieved by minimizing frequency of communications between processing elements. As for parallel processing by space domain decomposition, two-step rescaling method consisting of segmentwise rescaling and the ordinary pointwise rescaling have been developed to accelerate convergence, which will otherwise be degraded because of discontinuity at the segment boundaries. The developed method was examined with a Sun workstation using the PVM message-passing library, and sufficient speedup was observed. (author)
Simple PSF based method for pupil phase mask's optimization in wavefront coding system
Institute of Scientific and Technical Information of China (English)
ZHANG Wen-zi; CHEN Yan-ping; ZHAO Ting-yu; YE Zi; YU Fei-hong
2007-01-01
By applying the wavefront coding technique to an optical system, the depth of focus can be greatly increased. Several complicated methods, such as the Fisher-information-based method, have already been used to optimize for the best pupil phase mask under ideal conditions. Here, a simple point spread function (PSF) based method, in which only the standard deviation is used to evaluate the PSF stability over the depth of focus, is used to optimize the coefficients of the pupil phase mask in practical optical systems. Results of imaging simulations for optical systems with and without the pupil phase mask are presented, and the sharpness of the images is calculated for comparison. The optimized system shows better and much more stable imaging quality than the original system, without changing the position of the image plane.
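The standard-deviation stability criterion described above reduces to a very small objective function: stack the PSFs computed at several defocus positions and measure how much each pixel varies across the stack. This is our reading of the abstract, not the authors' exact merit function:

```python
import numpy as np

def psf_stability(psf_stack):
    """Stability metric for a wavefront-coded system: the per-pixel standard
    deviation of the PSF across defocus positions, summed over the PSF
    support. A mask whose PSF barely changes through focus scores near zero,
    so the optimizer minimizes this value over the mask coefficients.
    psf_stack: array of shape (n_defocus, h, w)."""
    return float(psf_stack.std(axis=0).sum())
```

An optimizer would evaluate `psf_stability` for each candidate set of phase-mask coefficients (recomputing the PSF stack through the optical model each time) and keep the coefficients that minimize it.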
Implementation of the probability table method in a continuous-energy Monte Carlo code system
Energy Technology Data Exchange (ETDEWEB)
Sutton, T.M.; Brown, F.B. [Lockheed Martin Corp., Schenectady, NY (United States)
1998-10-01
RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5.
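The probability-table lookup at the heart of the URR treatment can be illustrated with a minimal sampler: at a given energy point, a table of cumulative band probabilities and band cross sections is consulted, and one band is selected per random number. This is a schematic of the general method with made-up numbers, not RACER's implementation:

```python
# Hypothetical probability table for one URR energy point: each entry is
# (cumulative probability, total cross section in barns for that band).
TABLE = [(0.2, 10.0), (0.7, 35.0), (1.0, 120.0)]

def sample_band_xs(table, xi):
    # Select the cross-section band whose cumulative probability brackets
    # the random number xi in [0, 1).
    for cum_p, xs in table:
        if xi <= cum_p:
            return xs
    return table[-1][1]

def mean_xs(table):
    # Dilute-average cross section implied by the table -- what a smooth
    # representation would use instead of band sampling.
    prev, avg = 0.0, 0.0
    for cum_p, xs in table:
        avg += (cum_p - prev) * xs
        prev = cum_p
    return avg
```

Sampling preserves the self-shielding structure that the dilute average discards, which is the effect the companion paper quantifies.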
Resin Matrix/Fiber Reinforced Composite Material, Ⅱ: Method of Solution and Computer Code
Institute of Scientific and Technical Information of China (English)
Li Chensha(李辰砂); Jiao Caishan; Liu Ying; Wang Zhengping; Wang Hongjie; Cao Maosheng
2003-01-01
According to a mathematical model describing the curing process of composites constructed from continuous fiber-reinforced, thermosetting resin matrix prepreg materials, and their consolidation, a solution method for the model is developed and implemented in a computer code. For flat-plate composites cured by a specified cure cycle, the code provides the variation of the temperature distribution, the cure reaction process in the resin, the resin flow and fiber stress inside the composite, the void variation, and the residual stress distribution.
Apparatus, Method, and Computer Program for a Resolution-Enhanced Pseudo-Noise Code Technique
Li, Steven X. (Inventor)
2015-01-01
An apparatus, method, and computer program for a resolution-enhanced pseudo-noise coding technique for 3D imaging is provided. In one embodiment, a pattern generator may generate a plurality of unique patterns for a return-to-zero signal. A plurality of laser diodes may be configured such that each laser diode transmits the return-to-zero signal to an object. Each return-to-zero signal includes one unique pattern from the plurality of unique patterns to distinguish the transmitted return-to-zero signals from one another.
Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction
Oliver, A. Brandon; Amar, Adam J.
2016-01-01
Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.
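The inverse problem described above can be illustrated with a toy linear case: a 1D rod with an unknown constant surface heat flux, observed at an interior sensor. Because the forward model is linear in the flux, the least-squares estimate reduces to a projection onto the unit-flux response. This is a hedged sketch under assumed material parameters, not the CHAR code's algorithm:

```python
import numpy as np

def forward(q, nt=500, nx=21, alpha=1e-5, dx=1e-3, dt=0.01, k=1.0):
    # Explicit 1D conduction in a rod: constant heat flux q applied at x=0,
    # insulated far end. Returns the temperature history, shape (nt, nx).
    r = alpha * dt / dx**2            # explicit stability requires r <= 0.5
    T = np.zeros(nx)
    hist = np.zeros((nt, nx))
    for n in range(nt):
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + r * (T[2:] - 2*T[1:-1] + T[:-2])
        Tn[0] = T[0] + r * (2*T[1] - 2*T[0] + 2*dx*q/k)   # flux BC via ghost node
        Tn[-1] = T[-1] + r * (2*T[-2] - 2*T[-1])          # insulated end
        T = Tn
        hist[n] = T
    return hist

def recover_flux(y_meas, sensor=5, **kw):
    # The forward model is linear in q, so the least-squares flux estimate
    # is the projection of the measurements onto the unit-flux response g.
    g = forward(1.0, **kw)[:, sensor]
    return float(g @ y_meas / (g @ g))
```

Real reconstructions must contend with noise amplification (the ill-posedness mentioned above), which is where regularized or hybrid methods come in.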
Tominaga, Nozomu; Blinnikov, Sergei I
2015-01-01
We develop a time-dependent multi-group multidimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM) that evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with a ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering to the publicly available SHDOM code. Our code adopts a mixed frame approach; the source function is evaluated in the comoving frame whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated with various test problems and comparisons with results of a relativistic Monte Carlo code. These validations confirm that the code ...
A Design Method of Code Correlation Reference Waveform in GNSS Based on Least-Squares Fitting.
Xu, Chengtao; Liu, Zhe; Tang, Xiaomei; Wang, Feixue
2016-07-29
The multipath effect is one of the main error sources in Global Navigation Satellite Systems (GNSSs). The code correlation reference waveform (CCRW) technique is an effective multipath mitigation algorithm for the binary phase shift keying (BPSK) signal. However, it encounters a false lock problem in code tracking when applied to binary offset carrier (BOC) signals. A least-squares approximation method for the CCRW design is proposed, utilizing the truncated singular value decomposition method. The algorithm was applied to the BPSK, BOC(1,1), BOC(2,1), BOC(6,1), and BOC(7,1) signals, and the resulting CCRW approximations are presented. Furthermore, the performance of the approximations is analyzed in terms of the multipath error envelope and the tracking jitter. The results show that the proposed method realizes coherent and non-coherent CCRW discriminators without false lock points. Generally, there is some degradation in tracking jitter compared to the original CCRW discriminator. However, the improvement in the multipath error envelope for the BOC(1,1) and BPSK signals makes the discriminator attractive, and it can be applied to high-order BOC signals.
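The truncated-SVD least-squares step underlying the design scheme can be sketched generically: keep only the largest singular values when forming the pseudo-inverse, which regularizes ill-conditioned fitting problems. This is the standard TSVD solver, not the paper's specific CCRW formulation:

```python
import numpy as np

def tsvd_solve(A, b, k):
    # Least-squares solution of A x ~ b keeping only the k largest singular
    # values; truncation discards directions dominated by noise or
    # near-singularity, at the cost of a small bias.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])
```

With `k` equal to the full rank, this coincides with the ordinary least-squares solution; smaller `k` trades fidelity for stability.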
Application of Fast Multipole Methods to the NASA Fast Scattering Code
Dunn, Mark H.; Tinetti, Ana F.
2008-01-01
The NASA Fast Scattering Code (FSC) is a versatile noise prediction program designed for aeroacoustic noise reduction studies. The equivalent source method is used to solve an exterior Helmholtz boundary value problem with an impedance-type boundary condition. The solution process in FSC v2.0 requires direct manipulation of a large, dense system of linear equations, limiting the applicability of the code to small scales and/or moderate excitation frequencies. Recent advances in the use of Fast Multipole Methods (FMM) for solving scattering problems, coupled with sparse linear algebra techniques, suggest that a substantial reduction in computer resource utilization over conventional solution approaches can be obtained. Implementation of the single-level FMM (SLFMM) and a variant of the Conjugate Gradient Method (CGM) into the FSC is discussed in this paper. The culmination of this effort, FSC v3.0, was used to generate solutions for three configurations of interest. Benchmarking against previously obtained simulations indicates that a twenty-fold reduction in computational memory and up to a four-fold reduction in computer time have been achieved on a single processor.
A Design Method of Code Correlation Reference Waveform in GNSS Based on Least-Squares Fitting
Xu, Chengtao; Liu, Zhe; Tang, Xiaomei; Wang, Feixue
2016-01-01
The multipath effect is one of the main error sources in Global Navigation Satellite Systems (GNSSs). The code correlation reference waveform (CCRW) technique is an effective multipath mitigation algorithm for the binary phase shift keying (BPSK) signal. However, it encounters a false lock problem in code tracking when applied to binary offset carrier (BOC) signals. A least-squares approximation method for the CCRW design is proposed, utilizing the truncated singular value decomposition method. The algorithm was applied to the BPSK, BOC(1,1), BOC(2,1), BOC(6,1), and BOC(7,1) signals, and the resulting CCRW approximations are presented. Furthermore, the performance of the approximations is analyzed in terms of the multipath error envelope and the tracking jitter. The results show that the proposed method realizes coherent and non-coherent CCRW discriminators without false lock points. Generally, there is some degradation in tracking jitter compared to the original CCRW discriminator. However, the improvement in the multipath error envelope for the BOC(1,1) and BPSK signals makes the discriminator attractive, and it can be applied to high-order BOC signals. PMID:27483275
Ioan, M.-R.
2016-08-01
In experiments involving ionizing radiation, precise knowledge of the beam parameters is a very important task. Some of these experiments involve electromagnetic ionizing radiation such as gamma rays and X-rays; others use energetic charged or uncharged particles such as protons, electrons, and neutrons, or, in other cases, larger accelerated particles such as helium or deuterium nuclei. In all these cases, the beam used to irradiate a target must first be collimated and precisely characterized. In this paper, a novel method for determining the distribution of the collimated beam using Matlab code is proposed. The method was implemented by placing Pyrex glass test samples in the beam whose distribution and dimensions are to be determined, taking high-quality pictures of them, and then digitally processing the resulting images. The method also yields information on the doses absorbed in the volume of the exposed samples.
Simple Strehl ratio based method for pupil phase mask's optimization in wavefront coding system
Institute of Scientific and Technical Information of China (English)
Wenzi Zhang; Yanping Chen; Tingyu Zhao; Zi Ye; Feihong Yu
2006-01-01
By applying the wavefront coding technique to an optical system, the depth of focus can be greatly increased. Several complicated methods have already been used to optimize for the best pupil phase mask under ideal conditions. Here a simple Strehl-ratio-based method, using only the standard deviation to evaluate the Strehl ratio stability over the depth of focus, is applied to optimize the coefficients of the pupil phase mask in practical optical systems. Results of imaging simulations for optical systems with and without the pupil phase mask are presented, and image sharpness is calculated for comparison. The optimized pupil phase mask shows good results in extending the depth of focus.
Energy Technology Data Exchange (ETDEWEB)
Yoo, S.; Henderson, D.L. [Dept. of Medical Physics, Madison, WI (United States); Thomadsen, B.R. [Dept. of Medical Physics and Dept. of Human Oncology, Madison (United States)
2001-07-01
Interstitial brachytherapy is a form of radiation therapy in which radioactive sources are implanted directly into cancerous tissue. Determination of the dose delivered to tissue by photons emitted from the implanted seeds is an important step in the treatment planning process. In this paper we investigate the use of the discrete ordinates method and the adjoint method to calculate the absorbed dose in the regions of interest. MIP (mixed-integer programming) is used to determine the optimal seed distribution that conforms the prescribed dose to the tumor and delivers minimal dose to the sensitive structures. The patient treatment procedure consists of three steps: (1) image acquisition with transrectal ultrasound (TRUS) and assessment of the region of interest, (2) adjoint flux computation with a discrete ordinates code for inverse dose calculation, and (3) optimization with the MIP branch-and-bound method.
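The seed-selection step can be illustrated with a tiny exhaustive stand-in for the MIP branch-and-bound: pick a fixed number of candidate positions so every tumor voxel receives at least the prescribed dose and every sensitive-structure voxel stays under its limit, minimizing the dose to the sensitive structure. The dose matrices below are invented toy data, not clinical values:

```python
import itertools
import numpy as np

def select_seeds(D_tumor, D_oar, d_min, d_max, n_seeds):
    # Exhaustive stand-in for MIP branch-and-bound: D_tumor[i, v] is the dose
    # candidate seed i delivers to tumor voxel v, D_oar likewise for the
    # sensitive structure. Feasible = every tumor voxel >= d_min and every
    # OAR voxel <= d_max; among feasible sets, minimize total OAR dose.
    n = D_tumor.shape[0]
    best, best_cost = None, float("inf")
    for combo in itertools.combinations(range(n), n_seeds):
        idx = list(combo)
        t = D_tumor[idx].sum(axis=0)
        o = D_oar[idx].sum(axis=0)
        if (t >= d_min).all() and (o <= d_max).all():
            cost = o.sum()
            if cost < best_cost:
                best, best_cost = combo, cost
    return best, best_cost
```

A real MIP solver explores the same feasible set with branch-and-bound pruning instead of enumeration, which is what makes clinically sized problems tractable.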
Non-coding RNA detection methods combined to improve usability, reproducibility and precision
Directory of Open Access Journals (Sweden)
Kreikemeyer Bernd
2010-09-01
Full Text Available Abstract Background Non-coding RNAs are gaining more attention as their diverse roles in many cellular processes are discovered. At the same time, the need for efficient computational prediction of ncRNAs increases with the pace of sequencing technology. Existing tools are based on various approaches and techniques, but none of them provides a reliable ncRNA detector yet. Consequently, a natural approach is to combine existing tools. Due to a lack of standard input and output formats, combination and comparison of existing tools is difficult. Also, for genomic scans they often need to be incorporated into detection workflows using custom scripts, which decreases transparency and reproducibility. Results We developed a Java-based framework to integrate existing tools and methods for ncRNA detection. This framework enables users to construct transparent detection workflows and to combine and compare different methods efficiently. We demonstrate the effectiveness of combining detection methods in case studies with the small genomes of Escherichia coli, Listeria monocytogenes and Streptococcus pyogenes. With the combined method, we gained 10% to 20% precision for sensitivities from 30% to 80%. Further, we investigated Streptococcus pyogenes for novel ncRNAs. Using multiple methods, integrated by our framework, we determined four highly probable candidates. We verified all four candidates experimentally using RT-PCR. Conclusions We have created an extensible framework for practical, transparent and reproducible combination and comparison of ncRNA detection methods. We have proven the effectiveness of this approach in tests and by guiding experiments to find new ncRNAs. The software is freely available under the GNU General Public License (GPL), version 3, at http://www.sbi.uni-rostock.de/moses along with source code, screen shots, examples and tutorial material.
Non-coding RNA detection methods combined to improve usability, reproducibility and precision.
Raasch, Peter; Schmitz, Ulf; Patenge, Nadja; Vera, Julio; Kreikemeyer, Bernd; Wolkenhauer, Olaf
2010-09-29
Non-coding RNAs are gaining more attention as their diverse roles in many cellular processes are discovered. At the same time, the need for efficient computational prediction of ncRNAs increases with the pace of sequencing technology. Existing tools are based on various approaches and techniques, but none of them provides a reliable ncRNA detector yet. Consequently, a natural approach is to combine existing tools. Due to a lack of standard input and output formats, combination and comparison of existing tools is difficult. Also, for genomic scans they often need to be incorporated into detection workflows using custom scripts, which decreases transparency and reproducibility. We developed a Java-based framework to integrate existing tools and methods for ncRNA detection. This framework enables users to construct transparent detection workflows and to combine and compare different methods efficiently. We demonstrate the effectiveness of combining detection methods in case studies with the small genomes of Escherichia coli, Listeria monocytogenes and Streptococcus pyogenes. With the combined method, we gained 10% to 20% precision for sensitivities from 30% to 80%. Further, we investigated Streptococcus pyogenes for novel ncRNAs. Using multiple methods, integrated by our framework, we determined four highly probable candidates. We verified all four candidates experimentally using RT-PCR. We have created an extensible framework for practical, transparent and reproducible combination and comparison of ncRNA detection methods. We have proven the effectiveness of this approach in tests and by guiding experiments to find new ncRNAs. The software is freely available under the GNU General Public License (GPL), version 3, at http://www.sbi.uni-rostock.de/moses along with source code, screen shots, examples and tutorial material.
A Simple Method for Guaranteeing ECG Quality in Real-Time Wavelet Lossy Coding
Directory of Open Access Journals (Sweden)
Alesanco Álvaro
2007-01-01
Full Text Available Guaranteeing ECG signal quality in wavelet lossy compression methods is essential for the clinical acceptability of reconstructed signals. In this paper, we present a simple and efficient method for guaranteeing reconstruction quality, measured using the new distortion index wavelet-weighted PRD (WWPRD), which reflects more accurately the real clinical distortion of the compressed signal. The method is based on the wavelet transform and its subsequent coding using the set partitioning in hierarchical trees (SPIHT) algorithm. By thresholding the WWPRD in the wavelet transform domain, a very precise reconstruction error can be achieved, enabling clinically useful reconstructed signals. Because of its computational efficiency, the method is suitable for real-time operation and is thus very useful for real-time telecardiology systems. The method is extensively tested using two different ECG databases. The results led to an excellent conclusion: the method controls the quality very accurately, not only in mean value but also with a low standard deviation. The effects of ECG baseline wandering as well as noise on compression are also discussed. Baseline wandering has negative effects when the WWPRD index is used to guarantee quality, because the index is normalized by the signal energy; it is therefore better to remove it before compression. On the other hand, noise causes an increase in signal energy, provoking an artificial increase of the coded signal bit rate. Clinical validation by cardiologists showed that a WWPRD value of 10% preserves the signal quality, and they therefore recommend this value for use in the compression system.
A Simple Method for Guaranteeing ECG Quality in Real-Time Wavelet Lossy Coding
Directory of Open Access Journals (Sweden)
José García
2007-01-01
Full Text Available Guaranteeing ECG signal quality in wavelet lossy compression methods is essential for the clinical acceptability of reconstructed signals. In this paper, we present a simple and efficient method for guaranteeing reconstruction quality, measured using the new distortion index wavelet-weighted PRD (WWPRD), which reflects more accurately the real clinical distortion of the compressed signal. The method is based on the wavelet transform and its subsequent coding using the set partitioning in hierarchical trees (SPIHT) algorithm. By thresholding the WWPRD in the wavelet transform domain, a very precise reconstruction error can be achieved, enabling clinically useful reconstructed signals. Because of its computational efficiency, the method is suitable for real-time operation and is thus very useful for real-time telecardiology systems. The method is extensively tested using two different ECG databases. The results led to an excellent conclusion: the method controls the quality very accurately, not only in mean value but also with a low standard deviation. The effects of ECG baseline wandering as well as noise on compression are also discussed. Baseline wandering has negative effects when the WWPRD index is used to guarantee quality, because the index is normalized by the signal energy; it is therefore better to remove it before compression. On the other hand, noise causes an increase in signal energy, provoking an artificial increase of the coded signal bit rate. Clinical validation by cardiologists showed that a WWPRD value of 10% preserves the signal quality, and they therefore recommend this value for use in the compression system.
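The wavelet-weighted distortion idea above can be sketched in a simplified form: decompose original and reconstructed signals into subbands, compute a PRD per subband, and weight each by that subband's share of the signal energy. This uses a hand-rolled one-level Haar transform per stage and a simplified reading of the WWPRD weighting; it is not the paper's exact index or the SPIHT coder:

```python
import numpy as np

def haar_dwt(x):
    # One level of the Haar transform: approximation and detail coefficients.
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wwprd(orig, recon, levels=3):
    # Wavelet-weighted PRD sketch: per-subband PRD, weighted by the share of
    # original-signal energy in that subband (signal length must be a
    # multiple of 2**levels).
    a = np.asarray(orig, float).copy()
    r = np.asarray(recon, float).copy()
    subbands = []
    for _ in range(levels):
        a, d = haar_dwt(a)
        r, e = haar_dwt(r)
        subbands.append((d, e))
    subbands.append((a, r))
    total_e = sum((s**2).sum() for s, _ in subbands)
    out = 0.0
    for s, t in subbands:
        w = (s**2).sum() / total_e
        prd = 100.0 * np.sqrt(((s - t)**2).sum() / max((s**2).sum(), 1e-12))
        out += w * prd
    return out
```

A quality-guaranteed coder would keep refining coefficients until this index drops below the chosen threshold.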
A molecular method for a qualitative analysis of potentially coding sequences of DNA
Directory of Open Access Journals (Sweden)
M. L. Christoffersen
Full Text Available Total sequence phylogenies have low information content. Ordinary misconceptions are that character quality can be ignored and that relying on computer algorithms is enough. Despite widespread preference for a posteriori methods of character evaluation, a priori methods are necessary to produce transformation series that are independent of tree topologies. We propose a stepwise qualitative method for analyzing protein sequences. Informative codons are selected, alternative amino acid transformation series are analyzed, and most parsimonious transformations are hypothesized. We conduct four phylogenetic analyses of philodryanine snakes. The tree based on all nucleotides produces least resolution. Trees based on the exclusion of third positions, on an asymmetric step matrix, and on our protocol, produce similar results. Our method eliminates noise by hypothesizing explicit transformation series for each informative protein-coding amino acid. This approaches qualitative methods for morphological data, in which only characters successfully interpreted in a phylogenetic context are used in cladistic analyses. The method allows utilizing character information contained in the original sequence alignment and, therefore, has higher resolution in inferring a phylogenetic tree than some traditional methods (such as distance methods.
Three-dimensional surface reconstruction via a robust binary shape-coded structured light method
Tang, Suming; Zhang, Xu; Song, Zhan; Jiang, Hualie; Nie, Lei
2017-01-01
A binary shape-coded structured light method for single-shot three-dimensional reconstruction is presented. The projected structured pattern is composed with eight geometrical shapes with a coding window size of 2×2. The pattern element is designed as rhombic with embedded geometrical shapes. The pattern feature point is defined as the intersection of two adjacent rhombic shapes, and a multitemplate-based feature detector is presented for its robust detection and precise localization. Based on the extracted grid-points, a topological structure is constructed to separate the pattern elements from the obtained image. In the decoding stage, a training dataset is first established from training samples that are collected from a variety of target surfaces. Then, the deep neural network technique is applied for the classification of pattern elements. Finally, an error correction algorithm is introduced based on the epipolar and neighboring constraints to refine the decoding results. The experimental results show that the proposed method not only owns high measurement precision but also has strong robustness to surface color and texture.
Treiber, David A.; Muilenburg, Dennis A.
1995-01-01
The viability of applying a state-of-the-art Euler code to calculate the aerodynamic forces and moments through maximum lift coefficient for a generic sharp-edge configuration is assessed. The OVERFLOW code, a method employing overset (Chimera) grids, was used to conduct mesh refinement studies, a wind-tunnel wall sensitivity study, and a 22-run computational matrix of flow conditions, including sideslip runs and geometry variations. The subject configuration was a generic wing-body-tail geometry with chined forebody, swept wing leading-edge, and deflected part-span leading-edge flap. The analysis showed that the Euler method is adequate for capturing some of the non-linear aerodynamic effects resulting from leading-edge and forebody vortices produced at high angle-of-attack through C(sub Lmax). Computed forces and moments, as well as surface pressures, match well enough useful preliminary design information to be extracted. Vortex burst effects and vortex interactions with the configuration are also investigated.
Advanced Error-Control Coding Methods Enhance Reliability of Transmission and Storage Data Systems
Directory of Open Access Journals (Sweden)
K. Vlcek
2003-04-01
Full Text Available Iterative coding systems are currently being proposed and accepted for many future systems such as next-generation wireless transmission and storage systems. The text gives an overview of the state of the art in iteratively decoded FEC (Forward Error-Correction) error-control systems. Such systems can typically achieve capacity to within a fraction of a dB at unprecedentedly low complexities. Using a single code requires very long code words, and consequently a very complex coding system. One way around the problem of achieving very low error probabilities is the application of turbo coding (TC). A general model of a concatenated coding system is shown - an algorithm of turbo codes is given in this paper.
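Turbo codes themselves are beyond a short snippet, but the FEC principle the abstract builds on can be shown with a classical single-error-correcting block code, Hamming(7,4): four data bits gain three parity bits, and any single bit error in a codeword is located by the syndrome and corrected. This is a standard textbook construction, offered only as a minimal FEC illustration:

```python
import numpy as np

# Hamming(7,4): codeword = [d1 d2 d3 d4 p1 p2 p3] with
# p1=d1+d2+d4, p2=d1+d3+d4, p3=d2+d3+d4 (mod 2).
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(d):
    # 4 data bits -> 7-bit codeword.
    return d @ G % 2

def decode(c):
    # Syndrome decoding: a nonzero syndrome matches exactly one column of H,
    # pinpointing the single flipped bit.
    s = H @ c % 2
    if s.any():
        err = int(np.where((H.T == s).all(axis=1))[0][0])
        c = c.copy()
        c[err] ^= 1
    return c[:4]
```

Iterative schemes such as turbo codes replace this hard algebraic decoding with soft-decision message passing between component decoders, which is what buys the near-capacity performance described above.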
Algorithmic complexity for psychology: a user-friendly implementation of the coding theorem method.
Gauvrit, Nicolas; Singmann, Henrik; Soler-Toscano, Fernando; Zenil, Hector
2016-03-01
Kolmogorov-Chaitin complexity has long been believed to be impossible to approximate when it comes to short sequences (e.g. of length 5-50). However, with the newly developed coding theorem method the complexity of strings of length 2-11 can now be numerically estimated. We present the theoretical basis of algorithmic complexity for short strings (ACSS) and describe an R-package providing functions based on ACSS that will cover psychologists' needs and improve upon previous methods in three ways: (1) ACSS is now available not only for binary strings, but for strings based on up to 9 different symbols, (2) ACSS no longer requires time-consuming computing, and (3) a new approach based on ACSS gives access to an estimation of the complexity of strings of any length. Finally, three illustrative examples show how these tools can be applied to psychology.
A morphology screen coding anti-counterfeiting method based on visual characteristics
Institute of Scientific and Technical Information of China (English)
ZHAO Li-long; GU Ze-cang; FANG Zhi-liang
2008-01-01
A paper information anti-counterfeiting and tamper-proofing method based on human visual characteristics and morphology screen coding technology is proposed. By controlling the distribution of the mathematical morphology of the screen dot matrix, a warning mark and information are hidden in the background texture. Because of the differences between human vision and the duplication characteristics of copy machines, a warning mark that cannot be discriminated by human eyes will emerge after copying. Tampered or fake certificates can be verified by comparing the embedded information extracted from a scanned image of the certificate with the plain text printed on it. This method has been applied in many bills and certificates. Experimental results show that the identification accuracy is above 98%.
A New Method Of Gene Coding For A Genetic Algorithm Designed For Parametric Optimization
Directory of Open Access Journals (Sweden)
Radu BELEA
2003-12-01
Full Text Available In a parametric optimization problem, the genes code the real parameters of the fitness function. There are two coding techniques, known as binary-coded genes and real-coded genes. The comparison between these two has been a controversial subject since the first papers on parametric optimization appeared. An objective analysis of the advantages and disadvantages of the two coding techniques is difficult while information in different formats is being compared. The present paper suggests a gene coding technique that uses the same format for both binary-coded and real-coded genes. After unifying the representation of the real parameters, the following criterion is applied: the differences between the two techniques are measured statistically by the effect of the genetic operators on some randomly generated individuals.
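The two coding techniques being compared can be sketched concretely: a binary-coded gene maps a bit string onto a real interval, while a real-coded gene stores the value directly and mutates it with a continuous perturbation. The helpers below are generic illustrations, not the paper's unified format:

```python
import random

def decode_binary(bits, lo, hi):
    # Binary-coded gene: map a bit string onto the real interval [lo, hi].
    v = int("".join(map(str, bits)), 2)
    return lo + (hi - lo) * v / (2**len(bits) - 1)

def encode_binary(x, lo, hi, n_bits):
    # Inverse mapping (quantized), so both techniques can share one
    # parameter representation.
    v = round((x - lo) / (hi - lo) * (2**n_bits - 1))
    return [(v >> i) & 1 for i in reversed(range(n_bits))]

def mutate_real(x, lo, hi, sigma=0.1, rng=random):
    # Real-coded gene: Gaussian perturbation clipped to the domain, in
    # contrast to the bit-flip mutation used on binary-coded genes.
    return min(hi, max(lo, x + rng.gauss(0.0, sigma)))
```

The paper's point is that once both representations round-trip through one format, the effect of each genetic operator can be measured on equal footing.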
Directory of Open Access Journals (Sweden)
Ronesh Sharma
2013-06-01
Full Text Available An automatic container code recognition system is of great importance to logistic supply chain management. Techniques have been proposed and implemented for ISO container code region identification and recognition; however, those systems have limitations regarding the type of container image, owing to illumination and to marks left on the container by handling in harsh environmental conditions. Moreover, research has been limited in differentiating between different code formats and code character colors. In this paper, an object clustering method is first proposed to localize each line of the container code region. Second, the localization algorithm is implemented with OpenCV and Visual Studio to perform localization and then recognition. Thus, for real-time application, the implemented system has the added advantage of being easily integrated with other web applications to increase the efficiency of supply chain management. The experimental results and the application demonstrate the effectiveness of the proposed system for practical use.
The FLUKA code for application of Monte Carlo methods to promote high precision ion beam therapy
Parodi, K; Cerutti, F; Ferrari, A; Mairani, A; Paganetti, H; Sommerer, F
2010-01-01
Monte Carlo (MC) methods are increasingly being utilized to support several aspects of commissioning and clinical operation of ion beam therapy facilities. In this contribution two emerging areas of MC application are outlined. The value of MC modeling for accurate treatment planning is addressed via examples of application of the FLUKA code to proton and carbon ion therapy at the Heidelberg Ion Beam Therapy Center in Heidelberg, Germany, and at the Proton Therapy Center of Massachusetts General Hospital (MGH), Boston, USA. These include generation of basic data for input into the treatment planning system (TPS) and validation of the TPS analytical pencil-beam dose computations. Moreover, we review the implementation of PET/CT (Positron Emission Tomography / Computed Tomography) imaging for in-vivo verification of proton therapy at MGH. Here, MC is used to calculate irradiation-induced positron-emitter production in tissue for comparison with the measured β+ activity in order to infer indirect infor...
Solution of the neutronics code dynamic benchmark by finite element method
Avvakumov, A. V.; Vabishchevich, P. N.; Vasilev, A. O.; Strizhov, V. F.
2016-10-01
The objective is to analyze the dynamic benchmark developed by Atomic Energy Research for the verification of best-estimate neutronics codes. The benchmark scenario includes the asymmetrical ejection of a control rod in a water-type hexagonal reactor at hot zero power. A simple Doppler feedback mechanism assuming adiabatic fuel temperature heating is proposed. The finite element method on triangular calculation grids is used to solve the three-dimensional neutron kinetics problem. The software has been developed using the engineering and scientific calculation library FEniCS. The matrix spectral problem is solved using the scalable and flexible toolkit SLEPc. The solution accuracy of the dynamic benchmark is analyzed by refining the calculation grid and varying the degree of the finite elements.
Research on Differential Coding Method for Satellite Remote Sensing Data Compression
Lin, Z. J.; Yao, N.; Deng, B.; Wang, C. Z.; Wang, J. H.
2012-07-01
In the transmission of satellite Earth observation data, data compression is of great concern for improving transmission efficiency. The information amounts inherent to remote sensing images provide a foundation for data compression in terms of information theory; in particular, the distinct degrees of uncertainty inherent to distinct land covers result in different information amounts. This paper first proposes a lossless differential encoding method to improve compression rates. A district forecast differential encoding method is then proposed to improve them further. Considering that stereo measurements in modern photogrammetry are basically accomplished by automatic stereo image matching, an edge protection operator is finally utilized to filter out high-frequency noise, which helps magnify the signals and further improve the compression rates. The three steps were applied to a Landsat TM multispectral image and a set of SPOT-5 panchromatic images of four typical land cover types (urban areas, farmland, mountain areas, and water bodies). Results revealed that the average code lengths obtained by the differential encoding method were closer to the information amounts inherent to remote sensing images than those of Huffman encoding, and the compression rates were improved to some extent. Furthermore, the compression rates of the four land cover images obtained by the district forecast differential encoding method were nearly doubled. For images with edge features preserved, the compression rates are on average four times those of the original images.
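Why differencing shortens average code length can be shown with a first-order entropy comparison: a smooth scan line has many distinct values (high entropy), while its deltas concentrate in a narrow histogram, lowering the bound any memoryless coder such as Huffman coding can reach. The toy 1D signal below stands in for image data; this is the basic differential-encoding principle, not the paper's district forecast scheme:

```python
import numpy as np

def entropy_bits(symbols):
    # First-order (Shannon) entropy: a lower bound on the average code
    # length, in bits per symbol, for a memoryless coder such as Huffman.
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# A smooth "scan line" stands in for remote-sensing image data.
row = (100 * np.sin(np.linspace(0.0, 3.0, 256))).astype(int)
residuals = np.diff(row)   # lossless differential encoding: code the deltas

h_raw = entropy_bits(row)        # many distinct values -> high entropy
h_diff = entropy_bits(residuals) # deltas cluster near zero -> low entropy
```

The original row is recovered exactly from the first sample and the residuals by a cumulative sum, so the scheme is lossless.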
Calibration Method for IATS and Application in Multi-Target Monitoring Using Coded Targets
Zhou, Yueyin; Wagner, Andreas; Wunderlich, Thomas; Wasmeier, Peter
2017-06-01
The technique of Image Assisted Total Stations (IATS) has been studied for over ten years and comprises two major parts: a calibration procedure that relates the camera system to the theodolite system, and automatic target detection in the image by various methods from photogrammetry or computer vision. Several calibration methods have been developed, mostly using prototypes with an add-on camera rigidly mounted on the total station; however, these prototypes are not commercially available. This paper proposes a calibration method based on the Leica MS50, which has two built-in cameras, each with a resolution of 2560 × 1920 px: an overview camera and a telescope (on-axis) camera. Our work in this paper is based on the on-axis camera, which uses the 30× magnification of the telescope. The calibration involves 7 parameters to estimate. We use coded targets, which are common tools for orientation in photogrammetry, to detect different targets in IATS images instead of prisms and traditional ATR functions. We test and verify the efficiency and stability of this monitoring method with multiple targets.
Li, Hanshan; Lei, Zhiyong
2013-01-01
To improve projectile coordinate measurement precision in a fire measurement system, this paper introduces the optical fiber coding fire measurement method and its principle, sets up its measurement model, and analyzes coordinate errors using the differential method. To study the projectile coordinate position distribution, statistical hypothesis testing was used to analyze the distribution law, and the firing dispersion and the probability of a projectile hitting the object center were studied. The results show that, at the given significance level, the exponential distribution is a reasonable model for the projectile position distribution. Experiments and calculations show that the optical fiber coding fire measurement method is scientific and feasible, and can yield accurate projectile coordinate positions.
Embedded 3D shape measurement system based on a novel spatio-temporal coding method
Xu, Bin; Tian, Jindong; Tian, Yong; Li, Dong
2016-11-01
Structured light measurement has been widely used since the 1970s in industrial component inspection, reverse engineering, 3D modeling, robot navigation, medicine and many other fields. To satisfy the demand for high-speed, high-precision and high-resolution 3D measurement on embedded systems, new patterns combining binary and Gray coding principles in space are designed and projected onto the object surface in sequence. Each pixel corresponds to the designed sequence of gray values in the time domain, which is treated as a feature vector. The unique gray vector is then dimensionally reduced to a scalar that serves as characteristic information for binocular matching. In this method, the number of projected structured-light patterns is reduced, and the time-consuming phase unwrapping of traditional phase-shift methods is avoided. The algorithm is implemented on a DM3730 embedded system for 3D measurement, which consists of an ARM and a DSP core and has a strong digital signal processing capability. Experimental results demonstrate the feasibility of the proposed method.
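The binary/Gray pattern idea above can be sketched in a few lines: each projected pattern encodes one bit of the (Gray-coded) stripe index, and a pixel's bit sequence over time identifies its projector column. This is a generic Gray-code structured-light sketch, not the authors' specific spatio-temporal combination.

```python
def gray_encode(n):
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code by cascading XORs."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def column_patterns(width, bits):
    """One projected pattern per bit (MSB first): the stripe value seen
    at projector column x.  The per-pixel time sequence of these bits is
    the feature vector the abstract describes, here reduced to an integer."""
    return [[(gray_encode(x) >> b) & 1 for x in range(width)]
            for b in reversed(range(bits))]
```

Gray coding is preferred over plain binary because adjacent columns differ in exactly one bit, so a decoding error at a stripe boundary displaces the match by at most one column.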
Atmospheric Cluster Dynamics Code: a flexible method for solution of the birth-death equations
Directory of Open Access Journals (Sweden)
M. J. McGrath
2012-03-01
Full Text Available The Atmospheric Cluster Dynamics Code (ACDC) is presented and explored. This program was created to study the first steps of atmospheric new particle formation by examining the formation of molecular clusters from atmospherically relevant molecules. The program models the cluster kinetics by explicit solution of the birth–death equations, using an efficient computer script for their generation and the MATLAB ode15s routine for their solution. Through the use of evaporation rate coefficients derived from formation free energies calculated by quantum chemical methods for clusters containing dimethylamine or ammonia and sulfuric acid, we have explored the effect of changing various parameters at atmospherically relevant monomer concentrations. We have included in our model clusters with 0–4 base molecules and 0–4 sulfuric acid molecules for which we have commensurable quantum chemical data. The tests demonstrate that large effects can be seen for even small changes in different parameters, due to the non-linearity of the system. In particular, changing the temperature had a significant impact on the steady-state concentrations of all clusters, while the boundary effects (allowing clusters to grow to sizes beyond the largest cluster that the code keeps track of, or forbidding such processes), coagulation sink terms, non-monomer collisions, sticking probabilities and monomer concentrations did not show as large effects under the conditions studied. Removal of coagulation sink terms prevented the system from reaching the steady state when all the initial cluster concentrations were set to the default value of 1 m^{−3}, which is probably an effect caused by studying only relatively small cluster sizes.
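The birth–death structure the abstract describes can be sketched for a toy one-dimensional cluster ladder: collision with a monomer grows a cluster by one unit, evaporation shrinks it. The rate coefficients below are hypothetical, and where ACDC uses a stiff solver (MATLAB's ode15s), this sketch uses plain explicit Euler for brevity.

```python
def birth_death_step(c, beta, gamma, dt):
    """One explicit Euler step of the birth-death equations for a short
    cluster ladder c[0..N-1]; c[0] is the monomer.  Rate coefficients
    beta (collision) and gamma (evaporation) are hypothetical."""
    n = len(c)
    dc = [0.0] * n
    for i in range(n - 1):
        flux = beta * c[0] * c[i] - gamma * c[i + 1]  # net growth i -> i+1
        dc[i] -= flux
        dc[i + 1] += flux
        dc[0] -= flux  # each growth step also consumes one monomer
    return [ci + dt * dci for ci, dci in zip(c, dc)]

def integrate(c0, beta=1e-3, gamma=1e-2, dt=0.01, steps=5000):
    """March the toy system forward in time."""
    c = list(c0)
    for _ in range(steps):
        c = birth_death_step(c, beta, gamma, dt)
    return c
```

Because every growth flux moves exactly one monomer mass up the ladder, the total monomer-equivalent mass sum((i+1)*c[i]) is conserved, which is a useful sanity check on any birth-death implementation.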
Application of computational fluid dynamics methods to improve thermal hydraulic code analysis
Sentell, Dennis Shannon, Jr.
A computational fluid dynamics code is used to model the primary natural circulation loop of a proposed small modular reactor for comparison to experimental data and best-estimate thermal-hydraulic code results. Recent advances in computational fluid dynamics code modeling capabilities make them attractive alternatives to the current conservative approach of coupled best-estimate thermal hydraulic codes and uncertainty evaluations. The results from a computational fluid dynamics analysis are benchmarked against the experimental test results of a 1:3 length, 1:254 volume, full pressure and full temperature scale small modular reactor during steady-state power operations and during a depressurization transient. A comparative evaluation of the experimental data, the thermal hydraulic code results and the computational fluid dynamics code results provides an opportunity to validate the best-estimate thermal hydraulic code's treatment of a natural circulation loop and provide insights into expanded use of the computational fluid dynamics code in future designs and operations. Additionally, a sensitivity analysis is conducted to determine those physical phenomena most impactful on operations of the proposed reactor's natural circulation loop. The combination of the comparative evaluation and sensitivity analysis provides the resources for increased confidence in model developments for natural circulation loops and provides for reliability improvements of the thermal hydraulic code.
A NEW DESIGN METHOD OF CDMA SPREADING CODES BASED ON MULTI-RATE UNITARY FILTER BANK
Institute of Scientific and Technical Information of China (English)
Bi Jianxin; Wang Yingmin; Yi Kechu
2001-01-01
It is well known that multi-valued CDMA spreading codes can be designed by means of a pair of mirror multi-rate filter banks based on some optimization criterion. This paper shows that there exists a theoretical bound on the performance of its circulating correlation property, which is given by an explicit expression. Based on this analysis, a criterion of maximizing entropy is proposed to design such codes. Computer simulation results suggest that the resulting codes outperform the conventional binary balanced Gold codes for an asynchronous CDMA system.
Nakamura, Yusuke; Hoshizawa, Taku
2016-09-01
Two methods for increasing the data capacity of a holographic data storage system (HDSS) were developed. The first method is called "run-length-limited (RLL) high-density recording". An RLL modulation has the same effect as enlarging the pixel pitch; namely, it optically reduces the hologram size. Accordingly, the method doubles the raw-data recording density. The second method is called "RLL turbo signal processing". The RLL turbo code consists of RLL(1,∞) trellis modulation and an optimized convolutional code. The remarkable point of the developed turbo code is that it employs the RLL modulator and demodulator as parts of the error-correction process. The turbo code improves the error-correction capability beyond that of a conventional LDPC code, even when interpixel interference is present. These two methods increase the data density 1.78-fold. Moreover, a data density of 2.4 Tbit/in.² is confirmed by simulation and experiment.
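The RLL(1,∞) constraint mentioned above simply requires at least one 0 between any two 1s. A sketch of the constraint check, plus a brute-force count of valid words, makes the rate cost of the constraint concrete; this illustrates the constraint itself, not the paper's trellis modulation or turbo decoder.

```python
def satisfies_rll(bits, d=1):
    """True if every pair of 1s is separated by at least d zeros,
    i.e. the (d, infinity) run-length-limited constraint."""
    last_one = None
    for i, b in enumerate(bits):
        if b == 1:
            if last_one is not None and i - last_one - 1 < d:
                return False
            last_one = i
    return True

def count_rll_words(n, d=1):
    """Number of length-n binary words meeting the constraint; for d=1
    this follows the Fibonacci sequence, which bounds the code rate."""
    return sum(1 for w in range(2 ** n)
               if satisfies_rll([(w >> i) & 1 for i in range(n)], d))
```

For d=1 the count grows like the Fibonacci numbers, so the asymptotic capacity is log2((1+√5)/2) ≈ 0.69 constrained bits per channel bit; a practical modulator trades part of that rate for the optical benefit of the wider effective pixel pitch.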
Novel methods in the Particle-In-Cell accelerator Code-Framework Warp
Energy Technology Data Exchange (ETDEWEB)
Vay, J-L [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Grote, D. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Cohen, R. H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Friedman, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2012-12-26
The Particle-In-Cell (PIC) Code-Framework Warp is being developed by the Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL) to guide the development of accelerators that can deliver beams suitable for high-energy density experiments and implosion of inertial fusion capsules. It is also applied in various areas outside the Heavy Ion Fusion program to the study and design of existing and next-generation high-energy accelerators, including the study of electron cloud effects and laser wakefield acceleration for example. This study presents an overview of Warp's capabilities, summarizing recent original numerical methods that were developed by the HIFS-VNL (including PIC with adaptive mesh refinement, a large-timestep 'drift-Lorentz' mover for arbitrarily magnetized species, a relativistic Lorentz invariant leapfrog particle pusher, simulations in Lorentz-boosted frames, an electromagnetic solver with tunable numerical dispersion and efficient stride-based digital filtering), with special emphasis on the description of the mesh refinement capability. In addition, selected examples of the applications of the methods to the abovementioned fields are given.
Statistical Method of Embedded Code Coverage
Institute of Scientific and Technical Information of China (English)
周雷
2014-01-01
In this paper we explain how to use GCOV and LCOV, the code coverage tools that accompany GCC, to gather coverage statistics for embedded C code. The method provides measurable indicators of the completeness of embedded code testing, and an effective data basis for improving the quality of embedded code.
Sam, Ann; Reszka, Stephanie; Odom, Samuel; Hume, Kara; Boyd, Brian
2015-01-01
Momentary time sampling, partial-interval recording, and event coding are observational coding methods commonly used to examine the social and challenging behaviors of children at risk for or with developmental delays or disabilities. Yet there is limited research comparing the accuracy of and relationship between these three coding methods. By…
Energy Technology Data Exchange (ETDEWEB)
Frichet, A.; Mollard, P.; Gentet, G.; Lippert, H. J.; Curva-Tivig, F.; Cole, S.; Garner, N.
2014-07-01
For three decades, AREVA has been incrementally implementing upgrades in BWR and PWR fuel designs and codes and methods, leading to ever greater fuel efficiency and easier licensing. For PWRs, AREVA is implementing upgraded versions of its HTP™ and AFA 3G technologies called HTP™-I and AFA3G-I. These fuel assemblies feature improved robustness and dimensional stability through the ultimate optimization of their hold-down system, the use of Q12 (the AREVA advanced quaternary alloy) for the guide tubes, an increase in guide-tube wall thickness, and a stiffened spacer-to-guide-tube connection. An even bigger step forward has been achieved as AREVA has successfully developed and introduced to the market the GAIA product, which maintains the resistance to grid-to-rod fretting (GTRF) of the HTP™ product while providing additional thermal-hydraulic margin and high resistance to fuel assembly bow. (Author)
Good Codes From Generalised Algebraic Geometry Codes
Jibril, Mubarak; Ahmed, Mohammed Zaki; Tjhai, Cen
2010-01-01
Algebraic geometry codes, or Goppa codes, are defined with places of degree one. In constructing generalised algebraic geometry codes, places of higher degree are used. In this paper we present 41 new codes over GF(16) which improve on the best known codes of the same length and rate. The construction method uses places of small degree with a technique originally published over 10 years ago for the construction of generalised algebraic geometry codes.
A Kind of Quasi-Orthogonal Space-Time Block Codes and its Decoding Methods
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
It is well known that complex orthogonal space-time block codes with full diversity and full rate cannot have more than two transmit antennas, while non-orthogonal designs lose the simplicity of maximum likelihood decoding at the receiver. In this paper, we propose a new quasi-orthogonal space-time block code. The code is quasi-orthogonal and can reduce the decoding complexity significantly by employing zero-forcing and minimum mean-squared-error criteria. This paper also presents simulation results for two examples with three and four transmit antennas, respectively.
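The abstract does not give the authors' code matrix, but the quasi-orthogonal idea can be illustrated with the well-known construction that nests Alamouti 2×2 blocks into a 4×4 code: most column pairs stay orthogonal, which is what lets the detector split the symbols into small groups instead of a full joint search.

```python
import numpy as np

def alamouti(s1, s2):
    """2x2 orthogonal space-time block code (Alamouti)."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def quasi_orthogonal(s1, s2, s3, s4):
    """4x4 quasi-orthogonal STBC built from Alamouti sub-blocks
    (the classical Jafarkhani-style construction, for illustration)."""
    A = alamouti(s1, s2)
    B = alamouti(s3, s4)
    return np.block([[A, B],
                     [-np.conj(B), np.conj(A)]])
```

The Gram matrix C^H C of this code is not diagonal (hence "quasi"-orthogonal): columns 1 and 4 (and 2 and 3) interfere with each other, so symbols are detected in pairs, while every other column pair remains orthogonal.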
Two Phase Flow Models and Numerical Methods of the Commercial CFD Codes
Energy Technology Data Exchange (ETDEWEB)
Bae, Sung Won; Jeong, Jae Jun; Chang, Seok Kyu; Cho, Hyung Kyu
2007-11-15
The use of commercial CFD codes extends to various fields of engineering. Thermal-hydraulic analysis is one of the promising engineering fields for application of CFD codes. Up to now, the main application of commercial CFD codes has focused on single-phase, single-composition fluid dynamics. Nuclear thermal hydraulics, however, deals with abrupt pressure changes, high heat fluxes, and phase-change heat transfer. In order to overcome these CFD limitations and to extend the capability of nuclear thermal-hydraulic analysis, research efforts are being made to combine CFD and nuclear thermal hydraulics. To achieve this final goal, the models and correlations currently used in commercial CFD codes should be reviewed and investigated. This report summarizes the constitutive relationships used in FLUENT, STAR-CD, and CFX. Brief information on the solution technologies is also included.
Directory of Open Access Journals (Sweden)
Cheng-Yu Yeh
2012-01-01
With the wide availability of protein interaction networks and microarray data, identifying linear paths of biological significance in search of a potential pathway is a challenging issue. We proposed a color-coding method based on the characteristics of biological network topology and applied heuristic search to speed up the color-coding method. In the experiments, we tested our methods on two datasets: yeast and human prostate cancer networks with gene expression data. Comparisons of our method with other existing methods on known yeast MAPK pathways in terms of precision and recall show that we can find the maximum number of proteins and perform comparably well. Moreover, our method is more efficient than previous ones, detecting paths of length 10 within 40 seconds on a 1.73 GHz Intel CPU with 1 GB of main memory under the Windows operating system.
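The color-coding technique the abstract builds on (Alon, Yuster and Zwick) can be sketched directly: randomly color the vertices with k colors, then search only for "colorful" paths, which are automatically simple. This is the generic randomized algorithm, without the paper's biology-driven heuristics.

```python
import random

def color_coding_path(adj, k, trials=200, seed=0):
    """Randomized color-coding search for a simple path on k vertices.
    Each trial colors vertices with k colors and runs a DP over sets of
    colors; a path using k distinct colors cannot repeat a vertex."""
    rng = random.Random(seed)
    vertices = list(adj)
    for _ in range(trials):
        color = {v: rng.randrange(k) for v in vertices}
        # dp[v]: color sets of colorful paths currently ending at v
        dp = {v: {frozenset([color[v]])} for v in vertices}
        for _ in range(k - 1):
            new = {v: set() for v in vertices}
            for v in vertices:
                for u in adj[v]:
                    for cs in dp[u]:
                        if color[v] not in cs:
                            new[v].add(cs | {color[v]})
            dp = new
        if any(len(cs) == k for s in dp.values() for cs in s):
            return True
    return False
```

Each trial succeeds with probability at least k!/k^k when a k-vertex path exists, so a few hundred trials detect it with overwhelming probability; the DP state is over color sets (2^k of them), not vertex sets, which is what makes the method tractable for paths of length ~10.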
Dakota Uncertainty Quantification Methods Applied to the CFD code Nek5000
Energy Technology Data Exchange (ETDEWEB)
Delchini, Marc-Olivier [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Popov, Emilian L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Pointer, William David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division
2016-04-29
This report presents the state of advancement of a Nuclear Energy Advanced Modeling and Simulation (NEAMS) project to characterize the uncertainty of the computational fluid dynamics (CFD) code Nek5000 using the Dakota package for flows encountered in the nuclear engineering industry. Nek5000 is a high-order spectral element CFD code developed at Argonne National Laboratory for high-resolution spectral-filtered large eddy simulations (LESs) and unsteady Reynolds-averaged Navier-Stokes (URANS) simulations.
Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)
2016-05-01
the proposition of a weight for averaging CDMA codes. This weighting function is referred to in this discussion as the probability of the code matrix...Given a likelihood function of a multivariate Gaussian stochastic process (12), one can assume the values L and U and try to estimate the parameters...such as the average of the exponential functions were formulated. Averaging over a weight that depends on the TSC behaves as a filtering process where
Dattoli, Giuseppe
2005-01-01
Coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. A code devoted to the analysis of this type of problem should be fast and reliable: conditions that are usually hard to achieve at the same time. In the past, codes based on Lie algebraic techniques have been very efficient for treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treating CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of non-linear contributions due to wake field effects. The proposed solution method exploits an algebraic technique, using exponential operators implemented numerically in C++. We show that the integration procedure is capable of reproducing the onset of an instability and effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, parametric studies a...
Prediction Method for Image Coding Quality Based on Differential Information Entropy
Directory of Open Access Journals (Sweden)
Xin Tian
2014-02-01
To meet the requirements of quality-based image coding, an approach to predicting image coding quality based on differential information entropy is proposed. First, some typical prediction approaches are introduced, and the differential information entropy is reviewed. Taking JPEG2000 as an example, the relationship between differential information entropy and the objective assessment indicator PSNR at a fixed compression ratio is established via data fitting, with the fit constrained to minimize the average error. Next, the relationship among differential information entropy, compression ratio and PSNR at various compression ratios is constructed, and this relationship is used as an indicator to predict image coding quality. Finally, the proposed approach is compared with some traditional approaches. The experiments show that differential information entropy has a better linear relationship with image coding quality than image activity does. It can therefore be concluded that the proposed approach is capable of predicting image coding quality at low compression ratios with small errors, and can be widely applied in a variety of real-time space image coding systems owing to its simplicity.
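A minimal sketch of the two ingredients the abstract combines: an entropy of the differential (pixel-difference) image, and a least-squares fit of a linear entropy-to-PSNR relationship. The exact definition of the paper's differential information entropy is not given in the abstract, so the horizontal-difference entropy below is an assumption, as is the purely linear fit.

```python
import numpy as np

def differential_entropy(img):
    """Entropy (bits/pixel) of the horizontal pixel differences; assumed
    stand-in for the paper's 'differential information entropy'."""
    diff = np.diff(np.asarray(img, dtype=np.int32), axis=1).ravel()
    _, counts = np.unique(diff, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def fit_quality_predictor(entropies, psnrs):
    """Least-squares linear fit PSNR ~ a*H + b at a fixed compression
    ratio, mirroring the abstract's data-fitting step."""
    a, b = np.polyfit(entropies, psnrs, 1)
    return a, b
```

Once (a, b) are fitted on training images, the predicted PSNR for a new image is just a*differential_entropy(img) + b, which is cheap enough for the real-time systems the abstract targets.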
Mueller, B; Dimmelmeier, H
2010-01-01
We present a new general relativistic (GR) code for hydrodynamic supernova simulations with neutrino transport in spherical and azimuthal symmetry (1D/2D). The code is a combination of the CoCoNuT hydro module, which is a Riemann-solver based, high-resolution shock-capturing method, and the three-flavor, energy-dependent neutrino transport scheme VERTEX. VERTEX integrates the neutrino moment equations with a variable Eddington factor closure computed from a model Boltzmann equation and uses the ray-by-ray plus approximation in 2D, assuming the neutrino distribution to be axially symmetric around the radial direction, and thus the neutrino flux to be radial. Our spacetime treatment employs the ADM 3+1 formalism with the conformal flatness condition for the spatial three-metric. This approach is exact in 1D and has been shown to yield very accurate results also for rotational stellar collapse. We introduce new formulations of the energy equation to improve total energy conservation in relativistic and Newtonian...
Directory of Open Access Journals (Sweden)
Jochen Gläser
2013-03-01
Qualitative research aimed at "mechanismic" explanations poses specific challenges to qualitative data analysis because it must integrate existing theory with patterns identified in the data. We explore the utilization of two methods—coding and qualitative content analysis—for the first steps in the data analysis process, namely "cleaning" and organizing qualitative data. Both methods produce an information base that is structured by categories and can be used in the subsequent search for patterns in the data and integration of these patterns into a systematic, theoretically embedded explanation. Used as a stand-alone method outside the grounded theory approach, coding leads to an indexed text, i.e. both the original text and the index (the system of codes describing the content of text segments are subjected to further analysis. Qualitative content analysis extracts the relevant information, i.e. separates it from the original text, and processes only this information. We suggest that qualitative content analysis has advantages compared to coding whenever the research question is embedded in prior theory and can be answered without processing knowledge about the form of statements and their position in the text, which usually is the case in the search for "mechanismic" explanations. Coding outperforms qualitative content analysis in research that needs this information in later stages of the analysis, e.g. the exploration of meaning or the study of the construction of narratives. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs130254
Zhu, Debin; Tang, Yabing; Xing, Da; Chen, Wei R
2008-05-15
A bio bar code assay based on oligonucleotide-modified gold nanoparticles (Au-NPs) provides a PCR-free method for quantitative detection of nucleic acid targets. However, the current bio bar code assay requires lengthy experimental procedures including the preparation and release of bar code DNA probes from the target-nanoparticle complex and immobilization and hybridization of the probes for quantification. Herein, we report a novel PCR-free electrochemiluminescence (ECL)-based bio bar code assay for the quantitative detection of genetically modified organism (GMO) from raw materials. It consists of tris-(2,2'-bipyridyl) ruthenium (TBR)-labeled bar code DNA, nucleic acid hybridization using Au-NPs and biotin-labeled probes, and selective capture of the hybridization complex by streptavidin-coated paramagnetic beads. The detection of target DNA is realized by direct measurement of ECL emission of TBR. It can quantitatively detect target nucleic acids with high speed and sensitivity. This method can be used to quantitatively detect GMO fragments from real GMO products.
Micromechanics Analysis Code With Generalized Method of Cells (MAC/GMC): User Guide. Version 3
Arnold, S. M.; Bednarcyk, B. A.; Wilt, T. E.; Trowbridge, D.
1999-01-01
The ability to accurately predict the thermomechanical deformation response of advanced composite materials continues to play an important role in the development of these strategic materials. Analytical models that predict the effective behavior of composites are used not only by engineers performing structural analysis of large-scale composite components but also by material scientists developing new material systems. For an analytical model to fulfill these two distinct functions it must be based on a micromechanics approach which utilizes physically based deformation and life constitutive models and allows one to generate the average (macro) response of a composite material given the properties of the individual constituents and their geometric arrangement. Here we describe the user guide for the recently developed, computationally efficient and comprehensive micromechanics analysis code, MAC, whose predictive capability rests entirely upon the fully analytical generalized method of cells (GMC) micromechanics model. MAC/GMC is a versatile form of research software that "drives" the doubly or triply periodic micromechanics constitutive models based upon GMC. MAC/GMC enhances the basic capabilities of GMC by providing a modular framework wherein 1) various thermal, mechanical (stress or strain control) and thermomechanical load histories can be imposed, 2) different integration algorithms may be selected, 3) a variety of material constitutive models (both deformation and life) may be utilized and/or implemented, 4) a variety of fiber architectures (unidirectional, laminate and woven) may be easily accessed through their corresponding representative volume elements contained within the supplied library of RVEs or input directly by the user, and 5) graphical post-processing of the macro and/or micro field quantities is available.
Stan, L. C.; Călimănescu, I.; Velcea, D. D.
2016-08-01
The production of oil and gas from offshore oil fields is nowadays more and more important. As a result of the increasing demand for oil, and with shallow-water reserves no longer sufficient, the industry is pushed to develop and exploit more difficult fields in deeper waters. In this paper, the new DNV 2012 design code is deployed to check an offshore pipeline for compliance with the requirements of this new construction code, using Bentley AutoPIPE V8i. The August 2012 revision of the DNV offshore standard DNV-OS-F101, Submarine Pipeline Systems, is supported by AutoPIPE version 9.6. This paper provides a quick walk-through for entering input data, analyzing, and generating code compliance reports for a model with the piping code set to DNV Offshore 2012. As seen in the present paper, the simulations comprise a geometrically complex pipeline subjected to various and variable loading conditions. At the end of the design process the engineer has to answer a simple question: is the pipeline safe or not? The pipeline taken as an example has some sections that do not comply, in terms of size and strength, with the DNV 2012 offshore pipeline code. Obviously those sections have to be redesigned to meet those conditions.
U.S. Sodium Fast Reactor Codes and Methods: Current Capabilities and Path Forward
Energy Technology Data Exchange (ETDEWEB)
Brunett, A. J.; Fanning, T. H.
2017-06-26
The United States has extensive experience with the design, construction, and operation of sodium-cooled fast reactors (SFRs) over the last six decades. Despite the closure of various facilities, the U.S. continues to dedicate research and development (R&D) efforts to the design of innovative experimental, prototype, and commercial facilities. Accordingly, in support of this rich operating history and ongoing design efforts, the U.S. has been developing and maintaining a series of tools with capabilities that envelop all facets of SFR design and safety analyses. This paper provides an overview of the current U.S. SFR analysis toolset, including codes such as SAS4A/SASSYS-1, MC2-3, SE2-ANL, PERSENT, NUBOW-3D, and LIFE-METAL, as well as the higher-fidelity tools (e.g., PROTEUS) being integrated into the toolset. Current capabilities of the codes are described and key ongoing development efforts are highlighted for some codes.
An Efficient Segmental Bus-Invert Coding Method for Instruction Memory Data Bus Switching Reduction
Directory of Open Access Journals (Sweden)
Gu Ji
2009-01-01
This paper presents a bus coding methodology for instruction memory data bus switching reduction. Compared to the existing state-of-the-art multiway partial bus-invert (MPBI) coding, which relies on data bit correlation, our approach is very effective in reducing the switching activity of instruction data buses, since little bit correlation can be observed in instruction data. Our experiments demonstrate that the proposed encoding can reduce up to 42% of the switching activity, with an average reduction of 30%, while MPBI achieves just a 17.6% reduction in switching activity.
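The family of techniques this record belongs to starts from classic bus-invert coding (Stan and Burleson): if sending a word would toggle more than half of the bus lines, send its complement and raise one extra "invert" line. The sketch below shows that baseline idea, not the paper's segmental scheme or MPBI.

```python
def bus_invert(words, width=8):
    """Bus-invert coding: for each word, if its Hamming distance from the
    previous bus value exceeds width/2, transmit the complement and set
    the extra invert line, bounding per-cycle switching at width/2 + 1."""
    prev, out = 0, []
    for w in words:
        h = bin((prev ^ w) & ((1 << width) - 1)).count("1")
        if h > width // 2:
            w ^= (1 << width) - 1  # send the complement
            out.append((w, 1))
        else:
            out.append((w, 0))
        prev = w
    return out

def transitions(words, width=8):
    """Total number of bus-line transitions for a word sequence."""
    prev, total = 0, 0
    for w in words:
        total += bin((prev ^ w) & ((1 << width) - 1)).count("1")
        prev = w
    return total
```

On an adversarial sequence like 0x00, 0xFF, 0x00 the plain bus toggles all 8 lines twice (16 transitions), while the encoded bus toggles only the invert line (2 transitions); the paper's segmental variant applies the same idea per bus segment.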
Method to determine the strength of a neutron source
Energy Technology Data Exchange (ETDEWEB)
Vega C, H.R.; Manzanares A, E.; Hernandez D, V.M.; Chacon R, A.; Mercado, G.A. [UAZ, A.P. 336, 98000 Zacatecas (Mexico); Gallego, E.; Lorente, A. [Depto. Ingenieria Nuclear, Universidad Politecnica de Madrid, (Spain)
2006-07-01
The use of a gamma-ray spectrometer with a 3 φ × 3 NaI(Tl) detector and a moderator sphere has been studied with the aim of measuring the neutron fluence rate and determining the source strength. Moderators with a large amount of hydrogen are able to slow down and thermalize neutrons; once thermalized, there is a probability that a thermal neutron will be captured by hydrogen, producing a 2.22 MeV prompt gamma ray. The pulse-height spectrum collected in a multichannel analyzer shows a photopeak around 2.22 MeV whose net area is proportional to the total neutron fluence rate and to the neutron source strength. The characteristics of this system were determined by a Monte Carlo study using the MCNP 4C code, in which a detailed model of the NaI(Tl) detector was utilized. Spheres of 3, 5, and 10 inches in diameter were used as moderators, and the response was calculated for monoenergetic and isotopic neutron sources. (Author)
Directory of Open Access Journals (Sweden)
Mohammadnia Meysam
2013-01-01
The flux expansion nodal method is a suitable method for considering nodalization effects in node corners. In this paper we used this method to solve the intra-nodal flux analytically. A computer code, named MA.CODE, was then developed in the C# programming language. The code is capable of reactor core calculations for hexagonal geometries in two energy groups and three dimensions. MA.CODE imports two-group constants from the WIMS code and calculates the effective multiplication factor, thermal and fast neutron fluxes in three dimensions, power density, reactivity, and the power peaking factor of each fuel assembly. Among the code's merits are low calculation time and a user-friendly interface. MA.CODE results showed good agreement with IAEA benchmarks, i.e., AER-FCM-101 and AER-FCM-001.
Jackson, Suzanne F.; Kolla, Gillian
2012-01-01
In attempting to use a realistic evaluation approach to explore the role of Community Parents in early parenting programs in Toronto, a novel technique was developed to analyze the links between contexts (C), mechanisms (M) and outcomes (O) directly from experienced practitioner interviews. Rather than coding the interviews into themes in terms of…
Ivanov, Anisoara; Neacsu, Andrei
2011-01-01
This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open-source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and furthermore to…
Systematic analysis of coding and noncoding DNA sequences using methods of statistical linguistics
Mantegna, R. N.; Buldyrev, S. V.; Goldberger, A. L.; Havlin, S.; Peng, C. K.; Simons, M.; Stanley, H. E.
1995-01-01
We compare the statistical properties of coding and noncoding regions in eukaryotic and viral DNA sequences by adapting two tests developed for the analysis of natural languages and symbolic sequences. The data set comprises all 30 sequences of length above 50 000 base pairs in GenBank Release No. 81.0, as well as the recently published sequences of C. elegans chromosome III (2.2 Mbp) and yeast chromosome XI (661 Kbp). We find that for the three chromosomes we studied, the statistical properties of noncoding regions appear to be closer to those observed in natural languages than those of coding regions. In particular, (i) an n-tuple Zipf analysis of noncoding regions reveals a regime close to power-law behavior while the coding regions show logarithmic behavior over a wide interval, and (ii) an n-gram entropy measurement shows that the noncoding regions have a lower n-gram entropy (and hence a larger "n-gram redundancy") than the coding regions. In contrast to the three chromosomes, we find that for vertebrates such as primates and rodents and for viral DNA, the difference between the statistical properties of coding and noncoding regions is not pronounced, and therefore the results of the analyses of the investigated sequences are less conclusive. After noting the intrinsic limitations of the n-gram redundancy analysis, we also briefly discuss the failure of the zeroth- and first-order Markovian models or simple nucleotide repeats to account fully for these "linguistic" features of DNA. Finally, we emphasize that our results by no means prove the existence of a "language" in noncoding DNA.
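As a rough illustration of the n-gram entropy and redundancy comparison described in this abstract, the following Python sketch computes both quantities for toy symbol sequences (the actual study used GenBank sequences, not these examples; function names are ours):

```python
from collections import Counter
from math import log2

def ngram_entropy(seq, n):
    """Shannon entropy (bits) of the n-gram distribution of a symbol sequence."""
    grams = [seq[i:i + n] for i in range(len(seq) - n + 1)]
    counts = Counter(grams)
    total = len(grams)
    return -sum(c / total * log2(c / total) for c in counts.values())

def ngram_redundancy(seq, n, alphabet_size=4):
    """Redundancy: 1 minus the entropy relative to its maximum n*log2(alphabet)."""
    hmax = n * log2(alphabet_size)
    return 1.0 - ngram_entropy(seq, n) / hmax
```

A more repetitive sequence has lower n-gram entropy and hence higher redundancy, which is the direction of the coding/noncoding contrast reported above.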
Improved intra-block copy and motion search methods for screen content coding
Rapaka, Krishna; Pang, Chao; Sole, Joel; Karczewicz, Marta; Li, Bin; Xu, Jizheng
2015-09-01
Screen content video coding extension of HEVC (SCC) is being developed by the Joint Collaborative Team on Video Coding (JCT-VC) of ISO/IEC MPEG and ITU-T VCEG. Screen content usually features a mix of camera-captured content and a significant proportion of rendered graphics, text, or animation. These two types of content exhibit distinct characteristics, requiring different compression schemes to achieve better coding efficiency. This paper presents efficient block-matching schemes for coding screen content that better capture its spatial and temporal characteristics. The proposed schemes are categorized as a) hash-based global-region block matching for intra block copy, b) selective-search local-region block matching for inter-frame prediction, and c) hash-based global-region block matching for inter-frame prediction. In the first part, a hash-based full-frame block matching algorithm is designed for intra block copy to handle repeating patterns and large motions where the reference picture consists of already decoded samples of the current picture. In the second part, a selective local-area block matching algorithm is designed for inter motion estimation to handle sharp edges, high spatial frequencies, and non-monotonic error surfaces. In the third part, a hash-based full-frame block matching algorithm is designed for inter motion estimation to handle repeating patterns and large motions across the temporal reference picture. The proposed schemes are compared against HM-13.0+RExt-6.0, the state-of-the-art screen content coding. The first part provides luma BD-rate gains of -26.6%, -15.6%, -11.4% for the AI, RA, and LD TGM configurations. The second part provides luma BD-rate gains of -10.1%, -12.3% for the RA and LD TGM configurations. The third part provides luma BD-rate gains of -12.2%, -11.5% for the RA and LD TGM configurations.
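The hash-then-verify idea behind the full-frame block matching can be sketched as follows. This is an illustrative Python toy, not the HM/SCC implementation: blocks are hashed into a table once per frame, and candidate source positions are verified by exact comparison (all names here are ours):

```python
def block_at(frame, y, x, bs):
    """Extract the bs x bs block with top-left corner (y, x) as a hashable tuple."""
    return tuple(tuple(frame[y + dy][x + dx] for dx in range(bs)) for dy in range(bs))

def block_hashes(frame, bs=4):
    """Map hash -> list of top-left positions of every bs x bs block in the frame."""
    table = {}
    h, w = len(frame), len(frame[0])
    for y in range(h - bs + 1):
        for x in range(w - bs + 1):
            table.setdefault(hash(block_at(frame, y, x, bs)), []).append((y, x))
    return table

def find_copy_candidates(frame, y, x, bs=4, table=None):
    """Positions whose block content exactly matches the block at (y, x)."""
    if table is None:
        table = block_hashes(frame, bs)
    block = block_at(frame, y, x, bs)
    # Hash lookup narrows the search; exact comparison rejects hash collisions.
    return [p for p in table.get(hash(block), [])
            if p != (y, x) and block_at(frame, *p, bs) == block]
```

On a frame full of repeating patterns (the screen-content case), the table lookup replaces an exhaustive search over all positions.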
Research on universal combinatorial coding.
Lu, Jun; Zhang, Zhuo; Mo, Juan
2014-01-01
The concept of universal combinatorial coding is proposed. Relations exist, to a greater or lesser degree, among many coding methods, which suggests that a universal coding method objectively exists and can serve as a bridge connecting them. Universal combinatorial coding is lossless and is based on combinatorics; its combinational and exhaustive properties make it closely related to existing coding methods. It does not depend on the probability statistics of the information source and has characteristics spanning three coding branches. We analyze the relationship between universal combinatorial coding and a variety of coding methods and investigate several application technologies of this coding method. In addition, the efficiency of universal combinatorial coding is analyzed theoretically. The multiple characteristics and applications of universal combinatorial coding are unique among existing coding methods; it has both theoretical research and practical application value.
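The abstract does not specify the construction, so as a stand-in the following sketch shows a classical instance of lossless combinatorial (enumerative) coding: a fixed-length binary string with a known number of ones is replaced by its lexicographic rank among all such strings, and recovered exactly from that rank. This is a standard textbook technique, offered only to make the idea of "coding by combinatorics" concrete:

```python
from math import comb

def rank(bits):
    """Lexicographic rank of a 0/1 list among all lists with the same
    length and the same number of ones (enumerative coding)."""
    r, ones = 0, bits.count(1)
    n = len(bits)
    for i, b in enumerate(bits):
        if b == 1:
            # strings that place a 0 here instead precede this one
            r += comb(n - i - 1, ones)
            ones -= 1
    return r

def unrank(r, n, ones):
    """Inverse of rank: recover the bit list from its index."""
    bits = []
    for i in range(n):
        c = comb(n - i - 1, ones)
        if ones and r >= c:
            bits.append(1)
            r -= c
            ones -= 1
        else:
            bits.append(0)
    return bits
```

Since there are C(n, k) such strings, the rank needs only ceil(log2 C(n, k)) bits, which is optimal for this source and entirely probability-free, echoing the abstract's point that the method does not depend on source statistics.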
Correcting for telluric absorption: Methods, case studies, and release of the TelFit code
Energy Technology Data Exchange (ETDEWEB)
Gullikson, Kevin; Kraus, Adam [Department of Astronomy, University of Texas, 2515 Speedway, Stop C1400, Austin, TX 78712 (United States); Dodson-Robinson, Sarah [Department of Physics and Astronomy, 217 Sharp Lab, Newark, DE 19716 (United States)
2014-09-01
Ground-based astronomical spectra are contaminated by the Earth's atmosphere to varying degrees in all spectral regions. We present a Python code that can accurately fit a model to the telluric absorption spectrum present in astronomical data, with residuals of ∼3%-5% of the continuum for moderately strong lines. We demonstrate the quality of the correction by fitting the telluric spectrum in a nearly featureless A0V star, HIP 20264, as well as to a series of dwarf M star spectra near the 819 nm sodium doublet. We directly compare the results to an empirical telluric correction of HIP 20264 and find that our model-fitting procedure is at least as good and sometimes more accurate. The telluric correction code, which we make freely available to the astronomical community, can be used as a replacement for telluric standard star observations for many purposes.
Mandrekas, John
2004-08-01
GTNEUT is a two-dimensional code for the calculation of the transport of neutral particles in fusion plasmas. It is based on the Transmission and Escape Probabilities (TEP) method and can be considered a computationally efficient alternative to traditional Monte Carlo methods. The code has been benchmarked extensively against Monte Carlo and has been used to model the distribution of neutrals in fusion experiments.
Program summary
Title of program: GTNEUT
Catalogue identifier: ADTX
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTX
Computer for which the program is designed and others on which it has been tested: The program was developed on a SUN Ultra 10 workstation and has been tested on other Unix workstations and PCs.
Operating systems or monitors under which the program has been tested: Solaris 8, 9, HP-UX 11i, Linux Red Hat v8.0, Windows NT/2000/XP.
Programming language used: Fortran 77
Memory required to execute with typical data: 6 219 388 bytes
No. of bits in a word: 32
No. of processors used: 1
Has the code been vectorized or parallelized?: No
No. of bytes in distributed program, including test data, etc.: 300 709
No. of lines in distributed program, including test data, etc.: 17 365
Distribution format: compressed tar gzip file
Keywords: Neutral transport in plasmas, Escape probability methods
Nature of physical problem: This code calculates the transport of neutral particles in thermonuclear plasmas in two-dimensional geometric configurations.
Method of solution: The code is based on the Transmission and Escape Probability (TEP) methodology [1], which is part of the family of integral transport methods for neutral particles and neutrons. The resulting linear system of equations is solved by standard direct linear system solvers (sparse and non-sparse versions are included).
Restrictions on the complexity of the problem: The current version of the code can
The Digital Encryption Method of the Webpage Code%网页代码数字加密法
Institute of Scientific and Technical Information of China (English)
瞿波
2013-01-01
This paper presents a method that uses a JavaScript function to convert webpage source code into digit codes, i.e., a digital encryption method for webpage code. The method both ensures that the page still displays normally in the browser and cleverly protects the page's source code, so it has considerable practical value. After explaining the principle of the method in detail, the paper gives the source program of the encryption method.
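The paper's actual JavaScript routine is not reproduced in the abstract, so the following is only a hypothetical Python analogue of the digit-encoding idea: each character of the page source becomes its zero-padded code point, and a decoder reverses the mapping so the original markup can be restored for display (all names are ours):

```python
def encode_to_digits(source):
    """Encode text as a digit string: each char becomes its 5-digit code point."""
    return "".join(f"{ord(ch):05d}" for ch in source)

def decode_from_digits(digits):
    """Recover the original text from consecutive 5-digit groups."""
    return "".join(chr(int(digits[i:i + 5])) for i in range(0, len(digits), 5))
```

Note this is obfuscation rather than cryptographic encryption: anyone with the decoder recovers the source, which matches the paper's goal of deterring casual copying while keeping the page renderable.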
Directory of Open Access Journals (Sweden)
Ken Takahashi
2015-08-01
Full Text Available Considerable research has been conducted on systems that collect real-world information using numerous energy-harvesting wireless sensors. The sensors need to be tiny, cheap, and consume ultra-low energy. However, such sensors have some functional limits, including restrictions on wireless transmission. Therefore, when more than one sensor transmits information simultaneously in these systems, the receiver may not be able to demodulate the signals if the sensors cannot support multiple access. To solve this problem, a number of proposals based on spread-spectrum technologies have been made for resistance to interference. In this paper, we point out some problems regarding the application of such sensors and explain the assumptions behind spread-code assignment based on passive radio-frequency identification (RFID) communication. During spread-code assignment the system cannot work, so an efficient assignment method is desirable. We consider two assignment methods and assess them in terms of total assignment time through an experiment. The results show the total assignment time in the case of Electronic Product Code (EPC) Global Class-1 Generation-2, an international standard for wireless protocols, and the relationship between the ratio of the time taken by the read/write command and the ratio of the total assignment times of the two methods. This implies that more efficient methods can be obtained by considering the time ratio of the read/write command.
Method for computing self-consistent solution in a gun code
Nelson, Eric M
2014-09-23
Complex gun code computations can be made to converge more quickly based on a selection of one or more relaxation parameters. An eigenvalue analysis is applied to error residuals to identify two error eigenvalues that are associated with respective error residuals. Relaxation values can be selected based on these eigenvalues so that error residuals associated with each can be alternately reduced in successive iterations. In some examples, relaxation values that would be unstable if used alone can be used.
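The patent abstract states the principle but not the algorithm, so here is a minimal linear illustration of it (our construction, not the gun code's): in a relaxed fixed-point iteration, each error eigenvalue lam has per-step factor 1 - w + w*lam, so choosing w = 1/(1 - lam) annihilates that mode; alternating two such relaxation values reduces each residual in successive iterations, even when either value alone would be unstable:

```python
import numpy as np

def alternating_relaxation(G, x0, lams, iters=10):
    """Relaxed fixed-point iteration x <- x + w*(G(x) - x), cycling through
    relaxation factors w_i = 1/(1 - lam_i) built from error eigenvalues lam_i."""
    x = np.asarray(x0, dtype=float)
    ws = [1.0 / (1.0 - lam) for lam in lams]
    for k in range(iters):
        w = ws[k % len(ws)]
        x = x + w * (G(x) - x)
    return x
```

For a linear map with error eigenvalues 0.9 and -2.0, the factor w = 10 (targeting 0.9) is wildly unstable on the -2.0 mode by itself, yet alternating it with w = 1/3 (targeting -2.0) zeroes both modes in two steps, which is exactly the behavior the abstract describes.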
Source Code Plagiarism Detection Method Using Protégé Built Ontologies
Ion SMEUREANU; Bogdan IANCU
2013-01-01
Software plagiarism is a growing and serious problem that affects computer science universities in particular and the quality of education in general. More and more students tend to copy the software for their theses from older theses or internet databases. Checking source codes manually to detect whether they are similar or identical is a laborious and time-consuming job, perhaps even impossible given the existence of large digital repositories. Ontology is a way of describing a document's semantics, so it ...
Finley, Dennis B.
1995-01-01
This report documents results from the Euler Technology Assessment program. The objective was to evaluate the efficacy of Euler computational fluid dynamics (CFD) codes for use in preliminary aircraft design. Both the accuracy of the predictions and the rapidity of calculations were to be assessed. This portion of the study was conducted by Lockheed Fort Worth Company, using a recently developed in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages for this study, including ease of volume grid generation and reduced number of cells compared to other grid schemes. SPLITFLOW also includes grid adaptation of the volume grid during the solution convergence to resolve high-gradient flow regions. This proved beneficial in resolving the large vortical structures in the flow for several configurations examined in the present study. The SPLITFLOW code predictions of the configuration forces and moments are shown to be adequate for preliminary design analysis, including predictions of sideslip effects and the effects of geometry variations at low and high angles of attack. The time required to generate the results from initial surface definition is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.
Portable implementation of implicit methods for the UEDGE and BOUT codes on parallel computers
Energy Technology Data Exchange (ETDEWEB)
Rognlien, T D; Xu, X Q
1999-02-17
A description is given of the parallelization algorithms and results for two codes used extensively to model edge plasmas in magnetic fusion energy devices. The codes are UEDGE, which calculates two-dimensional plasma and neutral gas profiles, and BOUT, which calculates three-dimensional plasma turbulence using experimental or UEDGE profiles. Both codes describe the plasma behavior using fluid equations. A domain decomposition model is used for parallelization by dividing the global spatial simulation region into a set of domains. This approach allows the use of two recently developed LLNL Newton-Krylov numerical solvers, PVODE and KINSOL. Results show an order-of-magnitude speedup in execution time for the plasma equations with UEDGE. A problem identified for UEDGE is the solution of the fluid gas equations on a highly anisotropic mesh. The speedup of BOUT is closer to two orders of magnitude, especially if one includes the initial improvement from switching to the fully implicit Newton-Krylov solver. The turbulent transport coefficients obtained from BOUT guide the use of anomalous transport models within UEDGE, with the eventual goal of a self-consistent coupling.
Mu, Y.; Sheng, G. M.; Sun, P. N.
2017-05-01
Real-time fault diagnosis technology for nuclear power plants (NPPs) is of great significance for improving reactor safety and economy. Failure samples from nuclear power plants are difficult to obtain, and the support vector machine is an effective algorithm for small-sample problems. An NPP is a very complex system, so in practice many types of failures may occur. The ECOC is constructed from a Hadamard error-correcting code, and the decoding method is the Hamming distance method. The base models are established with the lib-SVM algorithm. The results show that this method can diagnose NPP faults effectively.
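The ECOC construction and Hamming-distance decoding described here can be sketched independently of the SVM base models. In this illustrative Python version (our simplification: the binary classifier outputs are supplied directly rather than produced by lib-SVM), class codewords are rows of a Sylvester Hadamard matrix with the constant first column dropped:

```python
import numpy as np

def hadamard(n):
    """Sylvester Hadamard matrix of order n (n a power of two), entries +/-1."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def ecoc_codebook(n_classes, n=8):
    """Hadamard rows in 0/1 form as ECOC codewords; the all-ones first
    column carries no information and is dropped."""
    H = (hadamard(n) > 0).astype(int)[:, 1:]
    return H[:n_classes]

def ecoc_decode(outputs, codebook):
    """Assign the class whose codeword is nearest in Hamming distance
    to the vector of binary classifier outputs."""
    dists = (codebook != np.asarray(outputs)).sum(axis=1)
    return int(np.argmin(dists))
```

With order-8 Hadamard codewords the minimum pairwise Hamming distance is 4, so the decoder tolerates one base classifier being wrong, which is the point of using an error-correcting code for multi-class fault diagnosis.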
Directory of Open Access Journals (Sweden)
Wei-I Lee
2016-12-01
Full Text Available The New Taipei City Government developed a Code-checking System (CCS using Building Information Modeling (BIM technology to facilitate an architectural design review in 2014. This system was intended to solve problems caused by cognitive gaps between designer and reviewer in the design review process. Along with considering information technology, the most important issue for the system’s development has been the logicalization of literal building codes. Therefore, to enhance the reliability and performance of the CCS, this study uses the Fuzzy Delphi Method (FDM on the basis of design thinking and communication theory to investigate the semantic difference and cognitive gaps among participants in the design review process and to propose the direction of system development. Our empirical results lead us to recommend grouping multi-stage screening and weighted assisted logicalization of non-quantitative building codes to improve the operability of CCS. Furthermore, CCS should integrate the Expert Evaluation System (EES to evaluate the design value under qualitative building codes.
A Method to Assess Robustness of GPS C/A Code in Presence of CW Interferences
Directory of Open Access Journals (Sweden)
Beatrice Motella
2010-01-01
Full Text Available Navigation/positioning platforms integrated with wireless communication systems are being used in a rapidly growing number of new applications. The mutual benefits they can obtain from each other are intrinsically related to the interoperability level and to a properly designed coexistence. In this paper a new family of curves, called Interference Error Envelope (IEE, is used to assess the impact of possible interference due to other systems (e.g., communications transmitting in close bandwidths to Global Navigation Satellite System (GNSS signals. The focus is on the analysis of the GPS C/A code robustness against Continuous Wave (CW interference.
Research on Precoding Method in Raptor Code
Institute of Scientific and Technical Information of China (English)
孟庆春; 王晓京
2007-01-01
Building on an introduction to LT codes, this paper further discusses Raptor codes. Precoding is the core technique adopted by Raptor codes; it overcomes the drawback that the decoding cost of LT codes is not fixed. In view of this, the paper analyzes the multi-layer parity-check precoding technique and, on that basis, proposes an improved method based on RS codes. The method has advantages such as a high decoding rate and is suitable for solving security problems in network transmission.
Efficient image coding method based on adaptive Gabor discrete cosine transforms
Wang, Hang; Yan, Hong
1993-01-01
The Gabor transform is very useful for image compression, but its implementation is very complicated and time consuming because the Gabor elementary functions are not mutually orthogonal. An efficient algorithm that combines the successive overrelaxation iteration and the look-up table techniques can be used to carry out the Gabor transform. The performance of the Gabor transform can be improved by using a modified transform, a Gabor discrete cosine transform (DCT). We present an adaptive Gabor DCT image coding algorithm. Experimental results show that a better performance can be achieved with the adaptive Gabor DCT than with the Gabor DCT.
A fast blind recognition method for RS codes
Institute of Scientific and Technical Information of China (English)
戚林; 郝士琦; 王磊; 王勇
2011-01-01
A fast blind recognition method for RS codes is proposed. The method exploits the cyclic-shift property of the equivalent binary block codes of RS codes. Using the Euclidean algorithm on an RS codeword and its cyclic-shift codeword, the greatest common divisor is obtained. The code length is estimated, and erroneous codewords are eliminated quickly, from the correlation of the exponents of the greatest common divisors. The primitive polynomial and generator polynomial of the RS code are then recognized using the Galois Field Fourier Transform (GFFT). Simulation experiments show that the proposed method has low complexity and little computation, and its recognition probability for RS codes exceeds 90% at a bit error rate of 1×10-3.
Energy Technology Data Exchange (ETDEWEB)
Girardi, E.; Guerin, P. [Electricite de France - RandD, 1 av. du General de Gaulle, 92141, Clamart (France); Dulla, S.; Nervo, M.; Ravetto, P. [Dipartimento di Energetica, Politecnico di Torino, 24, c.so Duca degli Abruzzi, 10129, Torino (Italy)
2012-07-01
Quasi-Static (QS) methods are quite popular in the reactor physics community and exhibit two main advantages. First, these methods overcome both the limits of the Point Kinetic (PK) approach and the computational effort of directly discretizing the time-dependent neutron transport equation. Second, QS methods can be implemented in such a way that they can be easily coupled to very different external spatial solvers. In this paper, the results of the coupling between the QS methods developed by Politecnico di Torino and the EDF R and D core code COCAGNE are presented. The goal of these activities is to evaluate the performance of QS methods (in terms of computational cost and precision) with respect to the direct kinetic solver (e.g. the {theta}-scheme) already available in COCAGNE. Additionally, they allow an extensive cross-validation of different kinetic models (QS and direct methods) to be performed. (authors)
Comparison of methods for auto-coding causation of injury narratives.
Bertke, S J; Meyers, A R; Wurzelbacher, S J; Measure, A; Lampl, M P; Robins, D
2016-03-01
Manually reading free-text narratives in large databases to identify the cause of an injury can be very time consuming, and recently there has been much work in automating this process. In particular, variations of the naïve Bayes model have been used to successfully auto-code free-text narratives describing the event/exposure leading to the injury in a workers' compensation claim. This paper compares the naïve Bayes model with an alternative logistic model and finds that the new model outperforms the naïve Bayes model. Further modest improvements were found through the addition of sequences of keywords in the models, as opposed to consideration of only single keywords. The programs and weights used in this paper are available upon request to researchers without a training set who wish to automatically assign event codes to large data sets of text narratives. The utility of sharing this program was tested on an outside set of injury narratives provided by the Bureau of Labor Statistics, with promising results.
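The naïve Bayes baseline the paper starts from can be sketched in a few lines. This toy Python version uses a multinomial model with add-one smoothing over single keywords (the narratives and event codes below are invented for illustration, not from the paper's data):

```python
from collections import Counter, defaultdict
from math import log

def train_nb(narratives, labels):
    """Multinomial naive Bayes: per-class word counts with a shared vocabulary."""
    word_counts = defaultdict(Counter)   # label -> word -> count
    label_counts = Counter(labels)
    vocab = set()
    for text, y in zip(narratives, labels):
        words = text.lower().split()
        word_counts[y].update(words)
        vocab.update(words)
    return word_counts, label_counts, vocab

def classify_nb(text, model):
    """Pick the label maximizing log P(label) + sum log P(word | label),
    with add-one (Laplace) smoothing for unseen words."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for y, ny in label_counts.items():
        lp = log(ny / total)
        denom = sum(word_counts[y].values()) + len(vocab)
        for w in text.lower().split():
            lp += log((word_counts[y][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = y, lp
    return best
```

The paper's improvements (a logistic model, and keyword sequences instead of single keywords) would replace the per-word independence assumption made here.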
Phase transfer function based method to alleviate image artifacts in wavefront coding imaging system
Mo, Xutao; Wang, Jinjiang
2013-09-01
The wavefront coding technique can extend the depth of field (DOF) of an incoherent imaging system. Several rectangularly separable phase masks (cubic, exponential, logarithmic, sinusoidal, rational, et al.) have been proposed and discussed, because they can extend the DOF up to ten times that of an ordinary imaging system. But research on them has shown that the images are damaged by artifacts, which usually arise from differences between the non-linear phase transfer function (PTF) used in the image restoration filter and the PTF of the real imaging condition. In order to alleviate these image artifacts in wavefront coding imaging systems, an optimization model based on the PTF is proposed to make the PTF invariant with defocus. An image restoration filter based on the average PTF over the designed depth of field is then introduced alongside the PTF-based optimization. The combination of the proposed optimization and image restoration can alleviate the artifacts, as confirmed by an imaging simulation of a spoke target. The cubic phase mask (CPM) and exponential phase mask (EPM) are discussed as examples.
Energy Technology Data Exchange (ETDEWEB)
Ukai, Shigeharu [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center
1995-03-01
On the assumption of fuel pin failure, the breached-pin performance analysis code SAFFRON was developed to evaluate fuel pin behavior, in relation to the delayed neutron signal response, during operation beyond cladding failure. The following characteristic behaviors of a breached fuel pin are modeled with a 3-dimensional finite element method: pellet swelling by the fuel-sodium reaction, fuel temperature change, and the resultant cladding breach extension and release of delayed neutron precursors into the coolant. In particular, a practical numerical algorithm within the finite element method was originally developed to solve the 3-dimensional non-linear contact problem between the pellet, swollen by the fuel-sodium reaction, and the breached cladding. (author).
Institute of Scientific and Technical Information of China (English)
Waleej Haider; Seema Ansari; Muhammad Nouman Durrani
2009-01-01
A method for allocating Walsh codes by group in a CDMA (Code Division Multiple Access) cellular system is disclosed. The proposed system provides a method for grouping, allocating, removing, and detecting the minimum-traffic group to minimize the time for allocating a call or transmitted data to an idle Walsh code, thereby improving the performance of the system and reducing the time required to set up the call. The new concept of CGIWC is presented to handle the allocation of calls or data to, and their removal from, the Walsh codes. Preferably, these steps are performed by a BCS (Base station Call control Processor) at a CDMA base station. Moreover, a comparison with previous work is given in support of this work, and, in closing, future directions in which the related work can be employed are highlighted.
Yeh, Pen-Shu (Inventor)
1998-01-01
A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
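The double-difference construction described in this abstract (an adjacent-delta calculation applied to a cross-delta data set) is simple to state in code. The following Python sketch is our reading of that description, with an inverse showing that the pre-coding is lossless given one anchor value:

```python
def cross_delta(a, b):
    """Element-wise difference between two correlated data sets of M members."""
    return [y - x for x, y in zip(a, b)]

def adjacent_delta(a):
    """Difference between consecutive members of one data set."""
    return [a[i + 1] - a[i] for i in range(len(a) - 1)]

def double_difference(a, b):
    """Adjacent-delta of the cross-delta: the compression-efficiency-enhancing
    double-difference data set. Values shrink when a and b are correlated."""
    return adjacent_delta(cross_delta(a, b))

def reconstruct(a, b0, dd):
    """Post-decoding: recover data set b from set a, b's first member,
    and the double-difference set."""
    cd = [b0 - a[0]]                # rebuild the cross-delta by cumulative sum
    for d in dd:
        cd.append(cd[-1] + d)
    return [x + c for x, c in zip(a, cd)]
```

Because the double-difference values cluster near zero for correlated bands, the subsequent entropy coder mentioned in the abstract sees a lower-entropy symbol stream.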
Energy Technology Data Exchange (ETDEWEB)
Zielinska, M. [CEA Saclay, IRFU/SPhN, Gif-sur-Yvette (France); Gaffney, L.P. [KU Leuven, Instituut voor Kern- en Stralingsfysica, Leuven (Belgium); University of the West of Scotland, School of Engineering, Paisley (United Kingdom); Wrzosek-Lipska, K. [KU Leuven, Instituut voor Kern- en Stralingsfysica, Leuven (Belgium); University of Warsaw, Heavy Ion Laboratory, Warsaw (Poland); Clement, E. [GANIL, Caen Cedex (France); Grahn, T.; Pakarinen, J. [University of Jyvaskylae, Department of Physics, Jyvaskylae (Finland); University of Helsinki, Helsinki Institute of Physics, Helsinki (Finland); Kesteloot, N. [KU Leuven, Instituut voor Kern- en Stralingsfysica, Leuven (Belgium); SCK-CEN, Belgian Nuclear Research Centre, Mol (Belgium); Napiorkowski, P. [University of Warsaw, Heavy Ion Laboratory, Warsaw (Poland); Duppen, P. van [KU Leuven, Instituut voor Kern- en Stralingsfysica, Leuven (Belgium); Warr, N. [Technische Universitaet Darmstadt, Institut fuer Kernphysik, Darmstadt (Germany)
2016-04-15
With the recent advances in radioactive ion beam technology, Coulomb excitation at safe energies becomes an important experimental tool in nuclear-structure physics. The usefulness of the technique to extract key information on the electromagnetic properties of nuclei has been demonstrated since the 1960s with stable beam and target combinations. New challenges present themselves when studying exotic nuclei with this technique, including dealing with low statistics or number of data points, absolute and relative normalisation of the measured cross-sections and a lack of complementary experimental data, such as excited-state lifetimes and branching ratios. This paper addresses some of these common issues and presents analysis techniques to extract transition strengths and quadrupole moments utilising the least-squares fit code, GOSIA. (orig.)
Jiang, Jun; Zhou, Zongtan; Yin, Erwei; Yu, Yang; Liu, Yadong; Hu, Dewen
2015-11-01
Motor imagery (MI)-based brain-computer interfaces (BCIs) allow disabled individuals to control external devices voluntarily, helping to restore lost motor functions. However, the number of control commands available in MI-based BCIs remains small, which limits the usability of BCI systems in control applications involving multiple degrees of freedom (DOF), such as control of a robot arm. To address this problem, we developed a novel Morse code-inspired method for MI-based BCI design to increase the number of output commands. Using this method, brain activities are modulated by sequences of MI (sMI) tasks, which are constructed by alternately imagining movements of the left or right hand or no motion. The code of each sMI task was detected from EEG signals and mapped to a specific command. According to permutation theory, sMI tasks of length N allow 2 × (2^N − 1) possible commands with the left and right MI tasks under self-paced conditions. To verify its feasibility, the new method was used to construct a six-class BCI system to control the arm of a humanoid robot. Four subjects participated in our experiment, and the average accuracy of the six-class sMI tasks was 89.4%. The Cohen's kappa coefficient and the throughput of our BCI paradigm are 0.88 ± 0.060 and 23.5 bits per minute (bpm), respectively. Furthermore, all of the subjects could operate an actual three-joint robot arm to grasp an object in around 49.1 s using our approach. These promising results suggest that the Morse code-inspired method could be used in the design of BCIs for multi-DOF control. Copyright © 2015 Elsevier Ltd. All rights reserved.
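One way to read the 2 × (2^N − 1) command count is as the number of non-empty left/right sequences of length at most N, since the self-paced "no motion" state delimits where each sequence ends. The Python sketch below enumerates them under that interpretation (ours, inferred from the abstract rather than stated in it):

```python
from itertools import product

def smi_commands(n):
    """All motor-imagery sequences of length 1..n over {'L', 'R'}: the
    2*(2**n - 1) distinct commands available under self-paced operation."""
    cmds = []
    for length in range(1, n + 1):
        # 2**length sequences of each length; summing over lengths 1..n
        # gives 2 + 4 + ... + 2**n = 2*(2**n - 1).
        cmds.extend("".join(p) for p in product("LR", repeat=length))
    return cmds
```

For N = 3 this yields 14 commands, more than enough for the six-class system the paper builds, with the remaining codes available for additional degrees of freedom.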
Java Code Generation Method for CIM Models
Institute of Scientific and Technical Information of China (English)
余永忠; 王永才
2013-01-01
This paper introduces a method for transforming CIM models into Java code. A CIM model is a model of the objects of a power enterprise and is the basis for CIM applications; transforming the CIM model into Java code serves practical needs. The CIM model and the EMF framework are briefly introduced, and the implementation of the method, which converts the CIM model into an EMF model and then generates Java code from it, is explained, providing a reference for the practical deployment of CIM models.
Chetty, Indrin J; Moran, Jean M; McShan, Daniel L; Fraass, Benedick A; Wilderman, Scott J; Bielajew, Alex F
2002-06-01
A comprehensive set of measurements and calculations has been conducted to investigate the accuracy of the Dose Planning Method (DPM) Monte Carlo code for dose calculations from 10 and 50 MeV scanned electron beams produced from a racetrack microtron. Central axis depth dose measurements and a series of profile scans at various depths were acquired in a water phantom using a Scanditronix type RK ion chamber. Source spatial distributions for the Monte Carlo calculations were reconstructed from in-air ion chamber measurements carried out across the two-dimensional beam profile at 100 cm downstream from the source. The in-air spatial distributions were found to have full width at half maximum of 4.7 and 1.3 cm, at 100 cm from the source, for the 10 and 50 MeV beams, respectively. Energy spectra for the 10 and 50 MeV beams were determined by simulating the components of the microtron treatment head using the code MCNP4B. DPM calculations are on average within +/- 2% agreement with measurement for all depth dose and profile comparisons conducted in this study. The accuracy of the DPM code illustrated in this work suggests that DPM may be used as a valuable tool for electron beam dose calculations.
Darazi, R.; Gouze, A.; Macq, B.
2009-01-01
Reproducing natural, real scenes as we see them in the real world every day is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, since more information must be displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded, the main idea being to optimally exploit the correlation between the two images. This is done by designing an efficient transform that reduces the redundancy in the stereo image pair. The approach is inspired by the Lifting Scheme (LS); the novelty of our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for both lossless and lossy coding. Experimental results show improvements in performance and complexity compared to recently proposed methods.
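The classical Lifting Scheme that inspired this coder splits a signal into even and odd samples and applies predict and update steps; the sketch below uses the simple Haar case for illustration (the paper's hybrid disparity-compensation predictor is not reproduced here):

```python
import numpy as np

def haar_lifting_forward(x):
    """One level of the Haar wavelet via lifting: split, predict, update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict: even samples predict the odd ones
    approx = even + detail / 2.0   # update: approx becomes the pairwise mean
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Undo the lifting steps in reverse order (perfect reconstruction)."""
    even = approx - detail / 2.0
    odd = even + detail
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x
```

Because every lifting step is invertible by construction, the same structure supports the lossless mode the abstract mentions.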
Zhou, Zhenggan; Ma, Baoquan; Jiang, Jingtao; Yu, Guang; Liu, Kui; Zhang, Dongmei; Liu, Weiping
2014-10-01
Air-coupled ultrasonic testing (ACUT) has been viewed as a viable solution for defect detection in the advanced composites used in the aerospace and aviation industries. However, the large mismatch of acoustic impedance at the air-solid interface makes the transmission efficiency of ultrasound low and leads to a poor signal-to-noise ratio (SNR) in the received signal. The use of signal-processing techniques in non-destructive testing is therefore highly valuable. This paper presents a hybrid method combining wavelet filtering and phase-coded pulse compression to improve the SNR and output power of the received signal. The wavelet transform is used to filter insignificant components from the noisy ultrasonic signal, and pulse compression is used to improve the power of the correlated signal based on a cross-correlation algorithm. For reasonable parameter selection, different wavelet families (Daubechies, Symlet and Coiflet) and decomposition levels in the discrete wavelet transform are analysed, and different Barker codes (5-13 bits) are analysed to obtain a higher main-to-side-lobe ratio. The performance of the hybrid method was verified on a honeycomb composite sample. Experimental results demonstrate that the proposed method is very efficient in improving the SNR and signal strength, and it appears to be a very promising tool for evaluating the integrity of high-ultrasound-attenuation composite materials using ACUT.
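The pulse-compression stage can be illustrated with a Barker-coded matched filter; the 13-bit code below is one of the lengths (5-13 bits) the paper analyses, and the matched filtering is a plain cross-correlation:

```python
import numpy as np

# 13-bit Barker code: its autocorrelation peak is 13 while every side lobe
# has magnitude at most 1, giving the 13:1 main-to-side-lobe ratio.
BARKER13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def pulse_compress(received, code):
    """Matched-filter pulse compression via cross-correlation."""
    return np.correlate(received, code, mode='full')

# Compressing the code against itself shows the sharp correlation peak.
acf = pulse_compress(BARKER13, BARKER13)
```

In practice the received waveform would be the wavelet-denoised echo rather than the clean code, but the side-lobe structure is the same.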
Nijhof, André; Cludts, Stephan; Fisscher, Olaf; Laan, Albertus
2003-01-01
More and more organisations formulate a code of conduct in order to stimulate responsible behaviour among their members. Much time and energy is usually spent fixing the content of the code but many organisations get stuck in the challenge of implementing and maintaining the code. The code then turn
Arikan, Erdal
2008-01-01
A method is proposed, called channel polarization, to construct code sequences that achieve the symmetric capacity $I(W)$ of any given binary-input discrete memoryless channel (B-DMC) $W$. The symmetric capacity is the highest rate achievable subject to using the input letters of the channel with equal probability. Channel polarization refers to the fact that it is possible to synthesize, out of $N$ independent copies of a given B-DMC $W$, a second set of $N$ binary-input channels $\{W_N^{(i)...
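The construction underlying channel polarization applies a 2x2 kernel recursively; a sketch of the resulting transform over GF(2), with the bit-reversal permutation omitted for brevity (a simplification, not Arikan's full decoder):

```python
import numpy as np

F = np.array([[1, 0], [1, 1]])  # the 2x2 polarization kernel

def polar_transform_matrix(n):
    """n-fold Kronecker power of the kernel: the transform for N = 2**n."""
    G = np.array([[1]])
    for _ in range(n):
        G = np.kron(G, F)
    return G % 2

def polar_encode(u, G):
    """Map input bits u to the codeword x = u G over GF(2)."""
    return u.dot(G) % 2
```

Over GF(2) the kernel is self-inverse, so the full transform is an involution: applying it twice recovers the input bits.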
Motion estimation using low-band-shift method for wavelet-based moving-picture coding.
Park, H W; Kim, H S
2000-01-01
The discrete wavelet transform (DWT) has the advantages of multiresolution analysis and subband decomposition and has been successfully used in image processing. However, the shift-variant property is intrinsic to the decimation process of the wavelet transform, and it makes wavelet-domain motion estimation and compensation inefficient. To overcome the shift-variant property, a low-band-shift method is proposed and a motion estimation and compensation method in the wavelet domain is presented. The proposed method outperforms conventional motion estimation methods in terms of the mean absolute difference (MAD) as well as subjective quality, and can serve as a reference method for motion estimation in the wavelet domain, much as full-search block matching does in the spatial domain.
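The shift variance that motivates the low-band-shift method is easy to demonstrate: decimated wavelet coefficients of a shifted signal differ from the originals unless the subsampling phase is shifted too. A one-level Haar sketch (names and the phase mechanism are ours, used here only to illustrate the effect):

```python
import numpy as np

def haar_analysis(x, phase=0):
    """One-level decimated Haar transform; `phase` picks the subsampling grid,
    which is the essential idea behind keeping the shifted low band."""
    x = np.roll(np.asarray(x, dtype=float), -phase)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)
    return lo, hi
```

Analyzing a one-sample-shifted signal on the matching (shifted) grid reproduces the original coefficients exactly, whereas the fixed grid does not.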
A Simple Method for Static Load Balancing of Parallel FDTD Codes
DEFF Research Database (Denmark)
Franek, Ondrej
2016-01-01
A static method for balancing computational loads in parallel implementations of the finite-difference time-domain method is presented. The procedure is fairly straightforward and computationally inexpensive, thus providing an attractive alternative to optimization techniques. The method is described for partitioning in a single mesh dimension, but it is shown that it can be adapted for 2D and 3D partitioning in an approximate way, with good results. It is applicable to both homogeneous and heterogeneous parallel architectures, and can also be used for balancing memory on distributed memory...
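The static balancing idea, cells assigned along one mesh dimension in proportion to node speed, can be sketched as follows (a simplification under our own assumptions, not the paper's exact procedure):

```python
def balance_partitions(n_cells, speeds):
    """Split n_cells along one mesh dimension proportionally to node speeds.

    Cut positions are the ideal fractional boundaries rounded to whole
    cells, so faster (heterogeneous) nodes receive larger slabs.
    """
    total = sum(speeds)
    cuts = [round(n_cells * sum(speeds[:i + 1]) / total)
            for i in range(len(speeds))]
    sizes, prev = [], 0
    for c in cuts:
        sizes.append(c - prev)
        prev = c
    return sizes
```

On a homogeneous machine this degenerates to an even split; with unequal speeds the slab widths scale accordingly.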
Wang, Qiang; Bi, Sheng
2017-01-01
To predict the peak signal-to-noise ratio (PSNR) quality of decoded images in fractal image coding more efficiently and accurately, an improved method is proposed. After some derivations and analyses, we find that the linear correlation coefficients between coded range blocks and their respective best-matched domain blocks can determine the dynamic range of their collage errors, which can also provide the minimum and the maximum of the accumulated collage error (ACE) of uncoded range blocks. Moreover, the dynamic range of the actual percentage of accumulated collage error (APACE), APACEmin to APACEmax, can be determined as well. When APACEmin reaches a large value, such as 90%, APACEmin to APACEmax will be limited in a small range and APACE can be computed approximately. Furthermore, with ACE and the approximate APACE, the ACE of all range blocks and the average collage error (ACER) can be obtained. Finally, with the logarithmic relationship between ACER and the PSNR quality of decoded images, the PSNR quality of decoded images can be predicted directly. Experiments show that compared with the previous similar method, the proposed method can predict the PSNR quality of decoded images more accurately and needs less computation time simultaneously.
Energy Technology Data Exchange (ETDEWEB)
Ravishankar, C., Hughes Network Systems, Germantown, MD
1998-05-08
Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal being corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. Digital transmission, on the other hand, is relatively immune to noise, cross-talk and distortion, primarily because the digital signal can be faithfully regenerated at each repeater purely on the basis of a binary decision. Hence the end-to-end performance of a digital link is essentially independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service-provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
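As an illustration of the waveform coding mentioned above, a classic example is mu-law companding, the non-uniform quantization scheme standardized in ITU-T G.711. The abstract does not single out a particular coder, so this sketch is illustrative only:

```python
import numpy as np

def mu_law_compress(x, mu=255):
    """mu-law companding of samples in [-1, 1]: small amplitudes get
    proportionally finer resolution before uniform quantization."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255):
    """Inverse companding at the receiving end."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu
```

The compressor and expander are exact inverses, so without the quantizer in between the round trip is lossless; the coding gain comes from quantizing the companded values.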
Directory of Open Access Journals (Sweden)
Zhiqin Zhu
2016-11-01
The multi-focus image fusion method is used in image processing to generate all-focus images with a large depth of field (DOF) from original multi-focus images. Different approaches have been used in the spatial and transform domains to fuse multi-focus images. As one of the most popular image processing methods, dictionary-learning-based sparse representation achieves great performance in multi-focus image fusion. Most existing dictionary-learning-based multi-focus image fusion methods directly use the whole source images for dictionary learning. However, using the whole source images incurs a high error rate and a high computational cost in the dictionary learning process. This paper proposes a novel stochastic coordinate coding-based image fusion framework integrated with local density peaks. The proposed multi-focus image fusion method consists of three steps. First, the source images are split into small image patches, which are classified into a few groups by local density peaks clustering. Next, the grouped image patches are used for sub-dictionary learning by stochastic coordinate coding, and the trained sub-dictionaries are combined into a dictionary for sparse representation. Finally, the simultaneous orthogonal matching pursuit (SOMP) algorithm is used to carry out sparse representation. After these three steps, the obtained sparse coefficients are fused following the max L1-norm rule, and the fused coefficients are inversely transformed into an image using the learned dictionary. The results and analyses of comparison experiments demonstrate that fused images from the proposed method have higher quality than those of existing state-of-the-art methods.
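The max L1-norm fusion rule in the final step can be sketched as follows; the array layout (each column holding the sparse coefficients of one patch) is our assumption:

```python
import numpy as np

def fuse_max_l1(coeffs_a, coeffs_b):
    """Max-L1 fusion rule: per patch (column), keep the sparse-coefficient
    vector whose L1 norm is larger, i.e. the more active (in-focus) source."""
    l1_a = np.abs(coeffs_a).sum(axis=0)
    l1_b = np.abs(coeffs_b).sum(axis=0)
    return np.where(l1_a >= l1_b, coeffs_a, coeffs_b)
```

The fused columns would then be multiplied by the learned dictionary to reconstruct the all-focus patches.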
Energy Technology Data Exchange (ETDEWEB)
Delbecq, J.M
1999-07-01
The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)
Directory of Open Access Journals (Sweden)
Kohei Arai
2012-12-01
A method for face identification based on eigenvalue decomposition, together with tracing trajectories in the eigenspace after the decomposition, is proposed. The proposed method allows for person-to-person differences due to faces showing different emotions; by using the well-known action unit approach, it admits faces in different emotional states. Experimental results show that recognition performance depends on the number of targeted people: the face identification rate is 80% when four people are targeted, while 100% is achieved when the number of targeted people is two.
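The eigenvalue-decomposition stage can be sketched with a standard PCA ("eigenface") construction; this is a generic illustration under our own naming, not the authors' exact pipeline:

```python
import numpy as np

def eigenspace(faces, k=2):
    """PCA basis: top-k eigenvectors of the sample covariance of the
    (rows = flattened face vectors) training matrix."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    cov = centered.T @ centered / len(faces)
    vals, vecs = np.linalg.eigh(cov)     # eigh returns ascending eigenvalues
    basis = vecs[:, ::-1][:, :k]         # keep the top-k principal directions
    return mean, basis

def project(face, mean, basis):
    """Coordinates of one face in the eigenspace (one trajectory point);
    a sequence of such points over changing expressions forms a trajectory."""
    return (face - mean) @ basis
```

Tracing the projections of a face across emotional states gives the trajectory that the proposed method compares against enrolled people.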
Parallel implementation of a dynamic unstructured chimera method in the DLR finite volume TAU-code
Energy Technology Data Exchange (ETDEWEB)
Madrane, A.; Raichle, A.; Stuermer, A. [German Aerospace Center, DLR, Numerical Methods, Inst. of Aerodynamics and Flow Technology, Braunschweig (Germany)]. E-mail: aziz.madrane@dlr.de
2004-07-01
Aerodynamic problems involving moving geometries have many applications, including store separation, a high-speed train entering a tunnel, simulation of full helicopter configurations, and rapid maneuvers. The overset grid method offers an option for calculating such problems. The solution process uses a grid system that discretizes the problem domain with separately generated but overlapping unstructured grids that update and exchange boundary information through interpolation. However, such computations are complicated and time consuming. Parallel computing offers a very effective way to improve productivity in computational fluid dynamics (CFD). The purpose of this study is therefore to develop an efficient parallel computation algorithm for analyzing the flowfield of complex geometries using the overset grids method. The strategy adopted in the parallelization of the overset grids method, including the use of data structures and communication, is described. Numerical results are presented to demonstrate the efficiency of the resulting parallel overset grids method. (author)
Energy Technology Data Exchange (ETDEWEB)
Pacilio, M.; Lanconelli, N.; Lo Meo, S.; Betti, M.; Montani, L.; Torres Aroche, L. A.; Coca Perez, M. A. [Department of Medical Physics, Azienda Ospedaliera S. Camillo Forlanini, Piazza Forlanini 1, Rome 00151 (Italy); Department of Physics, Alma Mater Studiorum University of Bologna, Viale Berti-Pichat 6/2, Bologna 40127 (Italy); Department of Medical Physics, Azienda Ospedaliera S. Camillo Forlanini, Piazza Forlanini 1, Rome 00151 (Italy); Department of Medical Physics, Azienda Ospedaliera Sant' Andrea, Via di Grotarossa 1035, Rome 00189 (Italy); Department of Medical Physics, Center for Clinical Researches, Calle 34 North 4501, Havana 11300 (Cuba)
2009-05-15
Several updated Monte Carlo (MC) codes are available to perform calculations of voxel S values for radionuclide targeted therapy. The aim of this work is to analyze the differences in the calculations obtained by different MC codes and their impact on absorbed dose evaluations performed by voxel dosimetry. Voxel S values for monoenergetic sources (electrons and photons) and different radionuclides ({sup 90}Y, {sup 131}I, and {sup 188}Re) were calculated. Simulations were performed in soft tissue. Three general-purpose MC codes were employed for simulating radiation transport: MCNP4C, EGSnrc, and GEANT4. The data published by the MIRD Committee in Pamphlet No. 17, obtained with the EGS4 MC code, were also included in the comparisons. The impact of the differences (in terms of voxel S values) among the MC codes was also studied by convolution calculations of the absorbed dose in a volume of interest. For uniform activity distribution of a given radionuclide, dose calculations were performed on spherical and elliptical volumes, varying the mass from 1 to 500 g. For simulations with monochromatic sources, differences for self-irradiation voxel S values were mostly confined within 10% for both photons and electrons, but with electron energy less than 500 keV, the voxel S values referred to the first neighbor voxels showed large differences (up to 130%, with respect to EGSnrc) among the updated MC codes. For radionuclide simulations, noticeable differences arose in voxel S values, especially in the bremsstrahlung tails, or when a high contribution from electrons with energy of less than 500 keV is involved. In particular, for {sup 90}Y the updated codes showed a remarkable divergence in the bremsstrahlung region (up to about 90% in terms of voxel S values) with respect to the EGS4 code. Further, variations were observed up to about 30%, for small source-target voxel distances, when low-energy electrons cover an important part of the emission spectrum of the radionuclide
Energy Technology Data Exchange (ETDEWEB)
Both, J.P.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B
2003-07-01
This manual relates to Version 4.3 TRIPOLI-4 code. TRIPOLI-4 is a computer code simulating the transport of neutrons, photons, electrons and positrons. It can be used for radiation shielding calculations (long-distance propagation with flux attenuation in non-multiplying media) and neutronic calculations (fissile medium, criticality or sub-criticality basis). This makes it possible to calculate k{sub eff} (for criticality), flux, currents, reaction rates and multi-group cross-sections. TRIPOLI-4 is a three-dimensional code that uses the Monte-Carlo method. It allows for point-wise description in terms of energy of cross-sections and multi-group homogenized cross-sections and features two modes of geometrical representation: surface and combinatorial. The code uses cross-section libraries in ENDF/B format (such as JEF2-2, ENDF/B-VI and JENDL) for point-wise description cross-sections in APOTRIM format (from the APOLLO2 code) or a format specific to TRIPOLI-4 for multi-group description. (authors)
Energy Technology Data Exchange (ETDEWEB)
Zhi-Gang Feng
2012-05-31
The simulation of particulate flows for industrial applications often requires the use of two-fluid models, where the solid particles are considered as a separate continuous phase. One of the underlying uncertainties in the use of two-fluid models in multiphase computations comes from the boundary condition of the solid phase. Typically, the gas or liquid fluid boundary condition at a solid wall is the so-called no-slip condition, which has been widely accepted to be valid for single-phase fluid dynamics provided that the Knudsen number is low. However, the boundary condition for the solid phase is not well understood. The no-slip condition at a solid boundary is not a valid assumption for the solid phase. Instead, several researchers advocate a slip condition as a more appropriate boundary condition. However, the question of selecting an exact slip length or slip velocity coefficient remains unanswered. Experimental or numerical simulation data are needed in order to determine the slip boundary condition that is applicable to a two-fluid model. The goal of this project is to improve the performance and accuracy of the boundary conditions used in two-fluid models such as the MFIX code, which is frequently used in multiphase flow simulations. The specific objectives of the project are to use first principles embedded in a validated Direct Numerical Simulation particulate flow numerical program, which uses the Immersed Boundary method (DNS-IB) and the Direct Forcing scheme, in order to establish, modify and validate needed energy and momentum boundary conditions for the MFIX code. To achieve these objectives, we have developed a highly efficient DNS code and conducted numerical simulations to investigate the particle-wall and particle-particle interactions in particulate flows. Most of our research findings have been reported in major conferences and archived journals, which are listed in Section 7 of this report. In this report, we will present a
Clipping and Coding Audio Files: A Research Method to Enable Participant Voice
Directory of Open Access Journals (Sweden)
Susan Crichton
2005-09-01
Qualitative researchers have long used ethnographic methods to make sense of complex human activities and experiences. Their blessing is that through them researchers can collect a wealth of raw data. Their challenge is that they require the researcher to find patterns and organize the various themes and concepts that emerge during the analysis stage into a coherent narrative that a reader can follow. In this article, the authors introduce a technology-enhanced data collection and analysis method based on clipped audio files. They suggest not only that the use of appropriate software and hardware can help in this process but, in fact, that their use can honor the participants' voices, retaining the original three-dimensional recording well past the data collection stage.
Energy Technology Data Exchange (ETDEWEB)
Reginatto, M.; Goldhagen, P.
1998-06-01
The problem of analyzing data from a multisphere neutron spectrometer to infer the energy spectrum of the incident neutrons is discussed. The main features of the code MAXED, a computer program developed to apply the maximum entropy principle to the deconvolution (unfolding) of multisphere neutron spectrometer data, are described, and the use of the code is illustrated with an example. A user's guide for the code MAXED is included in an appendix. The code is available from the authors upon request.
Systems and methods to control multiple peripherals with a single-peripheral application code
Ransom, Ray M.
2013-06-11
Methods and apparatus are provided for enhancing the BIOS of a hardware peripheral device to manage multiple peripheral devices simultaneously without modifying the application software of the peripheral device. The apparatus comprises a logic control unit and a memory in communication with the logic control unit. The memory is partitioned into a plurality of ranges, each range comprising one or more blocks of memory, one range being associated with each instance of the peripheral application and one range being reserved for storage of a data pointer related to each peripheral application of the plurality. The logic control unit is configured to operate multiple instances of the control application by duplicating one instance of the peripheral application for each peripheral device of the plurality and partitioning a memory device into partitions comprising one or more blocks of memory, one partition being associated with each instance of the peripheral application. The method then reserves a range of memory addresses for storage of a data pointer related to each peripheral device of the plurality, and initializes each of the plurality of peripheral devices.
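The range-per-instance partitioning described above can be sketched as follows; the block sizes, the contiguous layout, and the single reserved pointer range are illustrative assumptions, not the patent's exact memory map:

```python
def partition_memory(total_blocks, n_devices, pointer_blocks=1):
    """Partition a memory of total_blocks blocks into one equal range per
    peripheral instance, plus one reserved range holding the per-device
    data pointers (any remainder blocks are simply left unassigned)."""
    usable = total_blocks - pointer_blocks
    per_dev = usable // n_devices
    ranges = [(i * per_dev, (i + 1) * per_dev - 1) for i in range(n_devices)]
    pointer_range = (total_blocks - pointer_blocks, total_blocks - 1)
    return ranges, pointer_range
```

Each duplicated instance of the peripheral application would then address only its own range, with the pointer range telling the logic control unit which instance owns which device.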
Deschamps, Kevin; Staes, Filip; Desmet, Dirk; Roosen, Philip; Matricali, Giovanni Arnoldo; Keijsers, Noel; Nobels, Frank; Tits, Jos; Bruyninckx, Herman
2015-03-01
Comparing plantar pressure measurements (PPM) of a patient following an intervention, or between a reference group and a patient group, is common practice in clinical gait analysis. However, this process is often time consuming and complex, and commercially available software often lacks powerful visualization and interpretation tools. In this paper, we propose a simple method for displaying pixel-level PPM deviations relative to a so-called reference PPM pattern. The novel method consists of 3 distinct stages: (1) normalization of the pedobarographic fields (for foot length and width), (2) a pixel-level z-score-based calculation, and (3) color coding of the normalized pedobarographic fields. The methodological steps associated with this novel method are precisely described and illustrated with clinical output. We believe the advantages of the novel method cover several domains. Its strongest advantage is that it provides a straightforward visual interpretation of PPM without reducing resolution. A second advantage is that it may guide the selection of a local mapping technique (data reduction technique). Finally, it may easily be used as an education tool during therapist-patient interaction.
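Stages 2 and 3 (pixel-level z-scores and color coding) can be sketched as follows; the band thresholds are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

def zscore_map(patient, ref_mean, ref_std, eps=1e-9):
    """Stage 2: pixel-level z-scores of a normalized patient pressure field
    relative to reference mean/std fields (eps guards zero-variance pixels)."""
    return (patient - ref_mean) / (ref_std + eps)

def color_code(z, thresholds=(-2.0, -1.0, 1.0, 2.0)):
    """Stage 3: map z-scores to discrete color bands (thresholds illustrative)."""
    return np.digitize(z, thresholds)
```

Displaying the banded map at the original pixel grid preserves the full resolution that the paper highlights as the method's main advantage.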
Modeling Methods for the Main Switch of High Pulsed-Power Facilities Based on Transmission Line Code
Hu, Yixiang; Zeng, Jiangtao; Sun, Fengju; Wei, Hao; Yin, Jiahui; Cong, Peitian; Qiu, Aici
2014-09-01
Based on the transmission line code (TLCODE), a circuit model is developed here for analysis of the main switches in high pulsed-power facilities. Taking the structure of the ZR main switch as an example, a circuit model topology of the switch is proposed, and in particular, calculation methods for the dynamic inductance and resistance of the switching arc are described. Moreover, a set of closed equations used to calculate the various node voltages is theoretically derived and numerically discretized. Based on these discrete equations and a Matlab program, a simulation procedure is established for analysis of the ZR main switch. Voltages and currents at different key points are obtained, and comparisons are made with those of a PSpice L-C model. The comparison shows that the two models agree with each other to within 0.1%, which verifies the effectiveness of the TLCODE model to a certain extent.
QR Code Correction Method Based on Real-Time Location
Institute of Scientific and Technical Information of China (English)
王雄华; 张昕; 朱同林
2015-01-01
Traditional QR code correction algorithms suffer from low correction rates and heavy computation when images are captured under poor illumination or at oblique shooting angles. To address this, a QR code correction algorithm based on image features is proposed. The barcode is pre-processed by image binarization; the upper-left, upper-right and lower-left vertices of the quadrilateral are obtained accurately by line scanning with a redundant-point elimination procedure, and the fourth vertex is found quickly from interval sampling of the black boundary pixels together with a slope-deviation tolerance test. Inverse perspective transformation is then used to correct the image geometrically. The algorithm is robust to illumination interference and achieves a high recognition success rate under a variety of illumination conditions and shooting-angle directions. Experimental results show that the method can greatly improve the QR code recognition success rate while meeting real-time requirements.
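The inverse perspective correction from the four located vertices amounts to solving the standard 8x8 homography system; the sketch below (function names and test points are ours) maps the detected quadrilateral back to an axis-aligned square:

```python
import numpy as np

def homography(src, dst):
    """Projective transform mapping 4 src points to 4 dst points, found by
    solving the standard 8x8 linear system for the homography entries."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, p):
    """Apply the homography to one point (homogeneous divide included)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

In the QR pipeline, src would be the four detected vertices and dst the corners of the ideal upright code; warping every pixel with the inverse map yields the corrected image.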
Garcia-Rubio, Rocio; Gil, Horacio; Monteiro, Maria Candida; Pelaez, Teresa; Mellado, Emilia
2016-01-01
Aspergillus fumigatus is a saprotrophic mold fungus ubiquitously found in the environment and is the most common species causing invasive aspergillosis in immunocompromised individuals. For A. fumigatus genotyping, the short tandem repeat method (STRAf) is widely accepted as the first choice. However, difficulties associated with PCR product size and required technology have encouraged the development of novel typing techniques. In this study, a new genotyping method based on hypervariable tandem repeats within exons of surface protein coding genes (TRESP) was designed. A. fumigatus isolates were characterized by PCR amplification and sequencing with a panel of three TRESP encoding genes: cell surface protein A; MP-2 antigenic galactomannan protein; and hypothetical protein with a CFEM domain. The allele sequence repeats of each of the three targets were combined to assign a specific genotype. For the evaluation of this method, 126 unrelated A. fumigatus strains were analyzed and 96 different genotypes were identified, showing a high level of discrimination [Simpson's index of diversity (D) 0.994]. In addition, 49 azole resistant strains were analyzed identifying 26 genotypes and showing a lower D value (0.890) among them. This value could indicate that these resistant strains are closely related and share a common origin, although more studies are needed to confirm this hypothesis. In summary, a novel genotyping method for A. fumigatus has been developed which is reproducible, easy to perform, highly discriminatory and could be especially useful for studying outbreaks.
Directory of Open Access Journals (Sweden)
Young Ah Goo
2008-01-01
Recently, several research groups have published methods for the determination of proteomic expression profiling by mass spectrometry without the use of exogenously added stable isotopes or stable isotope dilution theory. These so-called label-free methods have the advantage of allowing data on each sample to be acquired independently from all other samples, to which they can later be compared in silico for the purpose of measuring changes in protein expression between various biological states. We developed label-free software based on direct measurement of the peptide ion current area (PICA) and compared it to two other methods, a simpler label-free method known as spectral counting and the isotope-coded affinity tag (ICAT) method. Data analysis by these methods of a standard mixture containing proteins of known, but varying, concentrations showed that they performed similarly, with a mean squared error of 0.09. Additionally, complex bacterial protein mixtures spiked with known concentrations of standard proteins were analyzed using the PICA label-free method. These results indicated that the PICA method detected all levels of standard spiked proteins at the 90% confidence level in this complex biological sample. This finding confirms that label-free methods based on direct measurement of the area under a single ion current trace perform as well as the standard ICAT method. Given that label-free methods provide ease of experimental design well beyond pair-wise comparison, label-free methods such as our PICA method are well suited for proteomic expression profiling of large numbers of samples, as is needed in clinical analysis.
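The core quantity in a PICA-style analysis, the area under a peptide's extracted ion current trace, can be sketched as below. This is a minimal illustration of the principle only, not the authors' software; the retention times and intensity values are hypothetical.

```python
# Illustrative sketch of label-free quantification by peptide ion current
# area (PICA): integrate the area under an extracted ion current trace with
# the trapezoidal rule, then compare areas between biological states.

def peak_area(times, intensities):
    """Trapezoidal integral of an ion current trace (area under the curve)."""
    area = 0.0
    for i in range(1, len(times)):
        area += 0.5 * (intensities[i] + intensities[i - 1]) * (times[i] - times[i - 1])
    return area

# Hypothetical traces for the same peptide in two samples; the relative
# abundance is the ratio of the two areas.
trace_a = ([10.0, 10.1, 10.2, 10.3, 10.4], [0.0, 4.0, 9.0, 4.0, 0.0])
trace_b = ([10.0, 10.1, 10.2, 10.3, 10.4], [0.0, 2.0, 4.5, 2.0, 0.0])
ratio = peak_area(*trace_a) / peak_area(*trace_b)
```

Here sample A shows twice the ion current area of sample B, i.e. a two-fold change in that peptide's abundance.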
Hoh, Siew Sin; Rapie, Nurul Nadiah; Lim, Edwin Suh Wen; Tan, Chun Yuan; Yavar, Alireza; Sarmani, Sukiman; Majid, Amran Ab.; Khoo, Kok Siong
2013-05-01
Instrumental Neutron Activation Analysis (INAA) is often used to determine and calculate the elemental concentrations of a sample at The National University of Malaysia (UKM), typically in the Nuclear Science Programme, Faculty of Science and Technology. The objective of this study was to develop a database code-system based on Microsoft Access 2010 which could help INAA users to choose either the comparator method, the k0-method or the absolute method for calculating the elemental concentrations of a sample. This study also integrated k0data, Com-INAA, k0Concent, k0-Westcott and Abs-INAA to execute and complete the ECC-UKM database code-system. After the integration, a study was conducted to test the effectiveness of the ECC-UKM database code-system by comparing the concentrations between the experiments and the code-systems. 'Triple Bare Monitor' Zr-Au and Cr-Mo-Au were used in the k0Concent, k0-Westcott and Abs-INAA code-systems as monitors to determine the thermal-to-epithermal neutron flux ratio (f). The concentration calculations involved the net peak area (Np), measurement time (tm), irradiation time (tirr), k-factor (k), thermal-to-epithermal neutron flux ratio (f), epithermal neutron flux distribution parameter (α) and detection efficiency (ɛp). For the Com-INAA code-system, the certified reference material IAEA-375 Soil was used to calculate the concentrations of elements in a sample. Other CRMs and SRMs were also used in this database code-system. Later, a verification process to examine the effectiveness of the Abs-INAA code-system was carried out by comparing the sample concentrations between the code-system and the experiment. The concentration values obtained with the ECC-UKM database code-system showed good accuracy.
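The essence of the comparator method mentioned above can be sketched as follows: the sample's net peak area per unit mass is compared with that of a co-irradiated standard of known concentration. This is a minimal sketch of the principle only; the decay, timing and flux corrections applied by a full code-system such as ECC-UKM are deliberately omitted, and all numbers are hypothetical.

```python
# Minimal sketch of the INAA comparator principle: concentration in the
# sample follows from the ratio of mass-normalised net peak areas between
# sample and standard (decay/timing/flux corrections omitted).

def comparator_concentration(np_sample, m_sample, np_standard, m_standard, c_standard):
    """Element concentration in the sample, in the units of c_standard."""
    specific_sample = np_sample / m_sample        # counts per gram, sample
    specific_standard = np_standard / m_standard  # counts per gram, standard
    return c_standard * specific_sample / specific_standard

# Hypothetical numbers: the standard contains 50 mg/kg of the element.
c = comparator_concentration(np_sample=12000, m_sample=0.25,
                             np_standard=8000, m_standard=0.20, c_standard=50.0)
```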
Institute of Scientific and Technical Information of China (English)
李灵华; 刘勇奎
2013-01-01
To improve the compression rate of the 4-direction Freeman chain code, methods based on existing Freeman direction chain codes were studied through extensive experiments. Approaches examined from different angles include redefining the meaning of the code values, applying Huffman coding to the code values, and applying arithmetic coding to the most frequently occurring code values. Based on this analysis, a new method is proposed: the arithmetic-encoded variable-length relative 4-direction Freeman chain code (AVRF4). Experimental results show that its compression rate is 26% higher than that of the 8-direction Freeman chain code and 15% higher than that of the original 4-direction Freeman chain code.
Mason, Marc A; Fanelli Kuczmarski, Marie; Allegro, Deanne; Zonderman, Alan B; Evans, Michele K
2015-08-01
Analysing dietary data to capture how individuals typically consume foods is dependent on the coding variables used. Individual foods consumed simultaneously, like coffee with milk, are given codes to identify these combinations. Our literature review revealed a lack of discussion about using combination codes in analysis. The present study identified foods consumed at mealtimes and by race when combination codes were or were not utilized. Duplicate analysis methods were performed on separate data sets. The original data set consisted of all foods reported; each food was coded as if it was consumed individually. The revised data set was derived from the original data set by first isolating coded foods consumed as individual items from those foods consumed simultaneously and assigning a code to designate a combination. Foods assigned a combination code, like pancakes with syrup, were aggregated and associated with a food group, defined by the major food component (i.e. pancakes), and then appended to the isolated coded foods. Healthy Aging in Neighborhoods of Diversity across the Life Span study. African-American and White adults with two dietary recalls (n = 2177). Differences existed in lists of foods most frequently consumed by mealtime and race when comparing results based on the original and revised data sets. African Americans reported consumption of sausage/luncheon meat and poultry, while ready-to-eat cereals and cakes/doughnuts/pastries were reported by Whites on recalls. Use of combination codes provided a more accurate representation of how foods were consumed by populations. This information is beneficial when creating interventions and exploring diet-health relationships.
Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.
1990-01-01
Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various thicknesses of shields and various types of proton spectra. The results are found to be in reasonable agreement but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. Future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual components of doses, such as those from the secondaries and heavy particle recoils, is obtained between BRYNTRN and Monte Carlo results.
Barman, Ranjan Kumar; Mukhopadhyay, Anirban; Das, Santasabuj
2017-04-01
Bacterial small non-coding RNAs (sRNAs) are not translated into proteins, but act as functional RNAs. They are involved in diverse biological processes like virulence, stress response and quorum sensing. Several high-throughput techniques have enabled identification of sRNAs in bacteria, but experimental detection remains a challenge and grossly incomplete for most species. Thus, there is a need to develop computational tools to predict bacterial sRNAs. Here, we propose a computational method to identify sRNAs in bacteria using support vector machine (SVM) classifier. The primary sequence and secondary structure features of experimentally-validated sRNAs of Salmonella Typhimurium LT2 (SLT2) was used to build the optimal SVM model. We found that a tri-nucleotide composition feature of sRNAs achieved an accuracy of 88.35% for SLT2. We validated the SVM model also on the experimentally-detected sRNAs of E. coli and Salmonella Typhi. The proposed model had robustly attained an accuracy of 81.25% and 88.82% for E. coli K-12 and S. Typhi Ty2, respectively. We confirmed that this method significantly improved the identification of sRNAs in bacteria. Furthermore, we used a sliding window-based method and identified sRNAs from complete genomes of SLT2, S. Typhi Ty2 and E. coli K-12 with sensitivities of 89.09%, 83.33% and 67.39%, respectively.
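The tri-nucleotide composition feature highlighted above can be sketched as below: each sequence becomes a 64-dimensional vector of overlapping 3-mer frequencies, which would then be fed to an SVM classifier (e.g. scikit-learn's SVC). This is an illustration of the feature encoding only, not the authors' pipeline; the example sequence is hypothetical and the classifier step is omitted.

```python
from itertools import product

# Tri-nucleotide (3-mer) composition: map a DNA sequence to the frequency
# of each of the 64 possible overlapping trinucleotides.
TRIMERS = ["".join(p) for p in product("ACGT", repeat=3)]

def trinucleotide_composition(seq):
    counts = {k: 0 for k in TRIMERS}
    for i in range(len(seq) - 2):
        kmer = seq[i:i + 3]
        if kmer in counts:        # skip windows with ambiguous bases
            counts[kmer] += 1
    total = max(1, len(seq) - 2)  # number of overlapping windows
    return [counts[k] / total for k in TRIMERS]

features = trinucleotide_composition("ACGTACGTGGCA")
```

Each candidate sRNA would be encoded this way and the resulting vectors passed to an SVM for training and prediction.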
Shoriki, Takuya; Ichikawa-Seki, Madoka; Suganuma, Keisuke; Naito, Ikunori; Hayashi, Kei; Nakao, Minoru; Aita, Junya; Mohanta, Uday Kumar; Inoue, Noboru; Murakami, Kenji; Itagaki, Tadashi
2016-06-01
Fasciolosis is an economically important disease of livestock caused by Fasciola hepatica, Fasciola gigantica, and aspermic Fasciola flukes. The aspermic Fasciola flukes have been discriminated morphologically from the two other species by the absence of sperm in their seminal vesicles. To date, the molecular discrimination of F. hepatica and F. gigantica has relied on the nucleotide sequences of the internal transcribed spacer 1 (ITS1) region. However, ITS1 genotypes of aspermic Fasciola flukes cannot be clearly differentiated from those of F. hepatica and F. gigantica. Therefore, more precise and robust methods are required to discriminate Fasciola spp. In this study, we developed PCR restriction fragment length polymorphism and multiplex PCR methods to discriminate F. hepatica, F. gigantica, and aspermic Fasciola flukes on the basis of the nuclear protein-coding genes, phosphoenolpyruvate carboxykinase and DNA polymerase delta, which are single locus genes in most eukaryotes. All aspermic Fasciola flukes used in this study had mixed fragment pattern of F. hepatica and F. gigantica for both of these genes, suggesting that the flukes are descended through hybridization between the two species. These molecular methods will facilitate the identification of F. hepatica, F. gigantica, and aspermic Fasciola flukes, and will also prove useful in etiological studies of fasciolosis. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Mudassar Raza
2012-08-01
Space research organizations, hospitals and military air surveillance activities, among others, produce a huge amount of data in the form of images, so a large storage space is required to record this information. In hospitals, the data produced during medical examination take the form of sequences of images that are highly correlated; because these images have great importance, some kind of lossless image compression technique is needed. Moreover, these images often have to be transmitted over the network. Since the availability of storage and bandwidth is limited, a compression technique is required to reduce the number of bits needed to store these images and the time needed to transmit them over the network. For this purpose, there are many state-of-the-art lossless image compression algorithms such as CALIC, LOCO-I, JPEG-LS and JPEG2000; nevertheless, these algorithms compress only a single file at a time and cannot exploit the correlation among the sequence frames of MRI or CE images. To exploit this correlation, a new algorithm is proposed in this paper. The primary goals of the proposed compression method are to minimize the memory required to store the compressed data and the bandwidth required to transmit it. To achieve these goals, the proposed method combines a single-image compression technique called super spatial structure prediction with inter-frame coding to obtain a greater compression ratio. An efficient compression method requires the elimination of data redundancy; therefore, the super spatial structure prediction algorithm is first applied with a fast block-matching approach, and Huffman coding is then applied to reduce the number of bits required to transmit and store each pixel value. Also, to speed up the block-matching process during motion estimation, the proposed method compares those blocks
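The Huffman entropy-coding stage mentioned above can be sketched as follows. This is a minimal, self-contained illustration of plain Huffman coding on a toy string; the super spatial structure prediction and block-matching stages of the paper's method are omitted.

```python
import heapq
from collections import Counter

# Huffman coding: more frequent symbols get shorter prefix-free bit strings.

def huffman_codes(data):
    heap = [(freq, i, [sym]) for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate one-symbol input
        return {heap[0][2][0]: "0"}
    codes = {entry[2][0]: "" for entry in heap}
    next_id = len(heap)                      # tie-breaker for the heap
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap)
        f2, _, syms2 = heapq.heappop(heap)
        for s in syms1:                      # extend codes of the merged subtrees
            codes[s] = "0" + codes[s]
        for s in syms2:
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (f1 + f2, next_id, syms1 + syms2))
        next_id += 1
    return codes

def encode(data, codes):
    return "".join(codes[s] for s in data)

def decode(bits, codes):
    inv = {v: k for k, v in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:                       # prefix-free: first match is the symbol
            out.append(inv[cur])
            cur = ""
    return "".join(out)

message = "abracadabra"
codes = huffman_codes(message)
restored = decode(encode(message, codes), codes)
```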
Babor, Thomas F; Xuan, Ziming; Damon, Donna
2013-10-01
This study evaluated the use of a modified Delphi technique in combination with a previously developed alcohol advertising rating procedure to detect content violations of the U.S. Beer Institute Code. A related aim was to estimate the minimum number of raters needed to obtain reliable evaluations of code violations in television commercials. Six alcohol ads selected for their likelihood of having code violations were rated by community and expert participants (N = 286). Quantitative rating scales were used to measure the content of alcohol advertisements based on alcohol industry self-regulatory guidelines. The community group participants represented vulnerability characteristics that industry codes were designed to protect (e.g., age). The Delphi technique facilitates consensus development around code violations in alcohol ad content and may enhance the ability of regulatory agencies to monitor the content of alcoholic beverage advertising when combined with psychometric-based rating procedures. Copyright © 2013 by the Research Society on Alcoholism.
A Bipartite Network-based Method for Prediction of Long Non-coding RNA–protein Interactions
Directory of Open Access Journals (Sweden)
Mengqu Ge
2016-02-01
As one large class of non-coding RNAs (ncRNAs), long ncRNAs (lncRNAs) have gained considerable attention in recent years. Mutations and dysfunction of lncRNAs have been implicated in human disorders. Many lncRNAs exert their effects through interactions with the corresponding RNA-binding proteins. Several computational approaches have been developed, but only a few are able to perform the prediction of these interactions from a network-based point of view. Here, we introduce a computational method named lncRNA–protein bipartite network inference (LPBNI). LPBNI aims to identify potential lncRNA–interacting proteins by making full use of the known lncRNA–protein interactions. Leave-one-out cross validation (LOOCV) test shows that LPBNI significantly outperforms other network-based methods, including random walk (RWR) and protein-based collaborative filtering (ProCF). Furthermore, a case study was performed to demonstrate the performance of LPBNI using real data in predicting potential lncRNA–interacting proteins.
A Bipartite Network-based Method for Prediction of Long Non-coding RNA-protein Interactions
Institute of Scientific and Technical Information of China (English)
Mengqu Ge; Ao Li; Minghui Wang
2016-01-01
As one large class of non-coding RNAs (ncRNAs), long ncRNAs (lncRNAs) have gained considerable attention in recent years. Mutations and dysfunction of lncRNAs have been implicated in human disorders. Many lncRNAs exert their effects through interactions with the corresponding RNA-binding proteins. Several computational approaches have been developed, but only a few are able to perform the prediction of these interactions from a network-based point of view. Here, we introduce a computational method named lncRNA–protein bipartite network inference (LPBNI). LPBNI aims to identify potential lncRNA–interacting proteins, by making full use of the known lncRNA–protein interactions. Leave-one-out cross validation (LOOCV) test shows that LPBNI significantly outperforms other network-based methods, including random walk (RWR) and protein-based collaborative filtering (ProCF). Furthermore, a case study was performed to demonstrate the performance of LPBNI using real data in predicting potential lncRNA–interacting proteins.
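A two-step resource-allocation propagation on a bipartite lncRNA–protein graph, in the spirit of LPBNI, can be sketched as below. This is not the authors' exact formulation: the propagation rule is a generic bipartite network inference step, and the toy network is hypothetical.

```python
# Bipartite network propagation sketch: proteins linked to a query lncRNA
# spread unit resource to their lncRNA neighbours, which redistribute it
# back to proteins. High-scoring proteins not already linked to the query
# become candidate interaction partners.

def rank_proteins(lnc2prot, query):
    prot2lnc = {}
    for lnc, prots in lnc2prot.items():
        for p in prots:
            prot2lnc.setdefault(p, set()).add(lnc)
    # step 1: each protein of the query sends equal shares to its lncRNAs
    lnc_resource = {}
    for p in lnc2prot[query]:
        share = 1.0 / len(prot2lnc[p])
        for lnc in prot2lnc[p]:
            lnc_resource[lnc] = lnc_resource.get(lnc, 0.0) + share
    # step 2: each lncRNA redistributes its resource to its proteins
    scores = {}
    for lnc, res in lnc_resource.items():
        share = res / len(lnc2prot[lnc])
        for p in lnc2prot[lnc]:
            scores[p] = scores.get(p, 0.0) + share
    return scores

network = {"L1": {"P1", "P2"}, "L2": {"P2", "P3"}}
scores = rank_proteins(network, "L1")   # P3 emerges as a new candidate for L1
```

Proteins already known to interact with the query would be filtered out before ranking the remaining candidates.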
QR code recognition method based on correlation match
Institute of Scientific and Technical Information of China (English)
熊用; 汪鲁才; 艾琼龙
2011-01-01
QR code recognition is a key technology in QR code applications. Hough transformation, surface-fitting background removal and control-point transformation are the basic image preprocessing methods in QR code recognition. To address the low recognition rate obtained after image preprocessing, a QR code recognition method based on correlation matching is proposed in this paper. An improved adaptive threshold method based on surface fitting is used to segment the QR code image, and the Hough transform and control-point transformation are used to correct the geometric distortion of the image. A template is then correlated with the QR code image, and the sampling grid is obtained by thresholding the resulting correlation coefficients. Experiments show that the proposed method effectively improves QR code recognition efficiency and accuracy.
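The correlation-matching step can be sketched as below: slide a template over the (already preprocessed) image and keep the position with the highest normalised cross-correlation; thresholding such coefficients yields the sampling grid. This is an illustration of the matching principle only; the tiny image and template are hypothetical stand-ins for a real QR module pattern.

```python
# Normalised cross-correlation (NCC) template matching on small 2-D arrays.

def ncc(patch, template):
    """Normalised cross-correlation of two equal-size 2-D arrays."""
    a = [v for row in patch for v in row]
    b = [v for row in template for v in row]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da > 0 and db > 0 else 0.0

def best_match(image, template):
    """Slide the template over the image; return (position, peak NCC)."""
    th, tw = len(template), len(template[0])
    best, pos = -2.0, None
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            patch = [row[x:x + tw] for row in image[y:y + th]]
            c = ncc(patch, template)
            if c > best:
                best, pos = c, (y, x)
    return pos, best

image = [[0] * 5 for _ in range(5)]
image[2][1], image[2][2], image[3][1], image[3][2] = 9, 1, 1, 9
position, score = best_match(image, [[9, 1], [1, 9]])
```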
Nested Quantum Error Correction Codes
Wang, Zhuo; Fan, Hen; Vedral, Vlatko
2009-01-01
The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, few methods are available for constructing new quantum error correction codes from old codes. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching for several short quantum codes with certain properties. Our method works for codes of any length and distance, and is efficient for constructing optimal or near-optimal codes. The two main known methods for constructing new codes from old ones in quantum error correction theory, concatenation and pasting, can be understood within the framework of nested quantum error correction codes.
Dynamic Server-Based KML Code Generator Method for Level-of-Detail Traversal of Geospatial Data
Baxes, Gregory; Mixon, Brian; Linger, TIm
2013-01-01
Web-based geospatial client applications such as Google Earth and NASA World Wind must listen to data requests, access appropriate stored data, and compile a data response to the requesting client application. This process occurs repeatedly to support multiple client requests and application instances. Newer Web-based geospatial clients also provide user-interactive functionality that is dependent on fast and efficient server responses. With massively large datasets, server-client interaction can become severely impeded because the server must determine the best way to assemble data to meet the client application's request. In client applications such as Google Earth, the user interactively wanders through the data using visually guided panning and zooming actions. With these actions, the client application continually issues data requests to the server without knowledge of the server's data structure or extraction/assembly paradigm. A method for efficiently controlling the networked access of a Web-based geospatial browser to server-based datasets, in particular massively sized datasets, has been developed. The method specifically uses the Keyhole Markup Language (KML), an Open Geospatial Consortium (OGC) standard used by Google Earth and other KML-compliant geospatial client applications. The innovation is based on establishing a dynamic cascading KML strategy that is initiated by a KML launch file provided by a data server host to a Google Earth or similar KML-compliant geospatial client application user. Upon execution, the launch KML code issues a request for image data covering an initial geographic region. The server responds with the requested data along with subsequent dynamically generated KML code that directs the client application to make follow-on requests for higher level of detail (LOD) imagery to replace the initial imagery as the user navigates into the dataset. The approach provides an efficient data traversal path and mechanism that can be
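The cascading-KML idea can be sketched as below: each served document carries a NetworkLink whose Region only fires once it fills enough screen pixels (minLodPixels), prompting the client to fetch the next level of detail. This is a minimal sketch using standard KML 2.2 elements; the server URL, tile bounds and level scheme are hypothetical.

```python
# Generate a minimal cascading-KML fragment: a NetworkLink guarded by a
# Region/Lod so that the client requests finer-detail KML only when the
# region occupies at least 256 screen pixels.

KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <NetworkLink>
      <Region>
        <LatLonAltBox>
          <north>{north}</north><south>{south}</south>
          <east>{east}</east><west>{west}</west>
        </LatLonAltBox>
        <Lod><minLodPixels>256</minLodPixels><maxLodPixels>-1</maxLodPixels></Lod>
      </Region>
      <Link>
        <href>http://example.com/tiles?level={level}&amp;n={north}&amp;w={west}</href>
        <viewRefreshMode>onRegion</viewRefreshMode>
      </Link>
    </NetworkLink>
  </Document>
</kml>
"""

def lod_kml(north, south, east, west, level):
    """Render the KML document a server would return for one tile/level."""
    return KML_TEMPLATE.format(north=north, south=south, east=east,
                               west=west, level=level)

doc = lod_kml(10.0, 9.0, 20.0, 19.0, 3)
```

A server would emit one such document per tile, with each child document pointing at four finer-level children, so the client walks down the LOD hierarchy only where the user actually zooms in.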
DEFF Research Database (Denmark)
Sessarego, Matias; Ramos García, Néstor; Sørensen, Jens Nørkær
2017-01-01
Aerodynamic and structural dynamic performance analysis of modern wind turbines are routinely estimated in the wind energy field using computational tools known as aeroelastic codes. Most aeroelastic codes use the blade element momentum (BEM) technique to model the rotor aerodynamics and a modal...
Hewson, Claire
2007-08-01
E-learning approaches have received increasing attention in recent years. Accordingly, a number of tools have become available to assist the nonexpert computer user in constructing and managing virtual learning environments, and implementing computer-based and/or online procedures to support pedagogy. Both commercial and free packages are now available, with new developments emerging periodically. Commercial products have the advantage of being comprehensive and reliable, but tend to require substantial financial investment and are not always transparent to use. They may also restrict pedagogical choices due to their predetermined ranges of functionality. With these issues in mind, several authors have argued for the pedagogical benefits of developing freely available, open source e-learning resources, which can be shared and further developed within a community of educational practitioners. The present paper supports this objective by presenting a set of methods, along with supporting freely available, downloadable, open source programming code, to allow administration of online multiple choice question assessments to students.
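A minimal sketch of the kind of open, freely modifiable assessment script the paper advocates is shown below: grading a multiple-choice submission against an answer key. The question IDs and the scoring rule (one point per correct answer, no negative marking) are illustrative choices, not the paper's actual code.

```python
# Grade a multiple-choice submission against an answer key and produce
# per-question feedback.

def grade(answer_key, submission):
    """Return (score, total, per-question feedback)."""
    feedback = {}
    score = 0
    for qid, correct in answer_key.items():
        given = submission.get(qid)           # None if unanswered
        ok = given == correct
        score += ok
        feedback[qid] = "correct" if ok else f"incorrect (answered {given!r})"
    return score, len(answer_key), feedback

key = {"q1": "b", "q2": "d", "q3": "a"}
result = grade(key, {"q1": "b", "q2": "c"})   # q3 left unanswered
```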
Directory of Open Access Journals (Sweden)
Fabio Burderi
2007-05-01
Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover the "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to introducing the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover we conjecture that the canonical partition satisfies such a hypothesis. Finally we consider also some relationships between coding partitions and varieties of codes.
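The unique-decipherability property that coding partitions generalise can be tested, for finite codes, with the classical Sardinas–Patterson algorithm, sketched below. This is standard coding theory, not an algorithm from the paper itself.

```python
# Sardinas-Patterson test: a finite code is uniquely decipherable iff no
# iterated set of "dangling suffixes" ever contains a codeword.

def dangling(A, B):
    """Suffixes left when a word of A is a proper prefix of a word of B."""
    return {b[len(a):] for a in A for b in B if len(b) > len(a) and b.startswith(a)}

def is_uniquely_decipherable(code):
    C = set(code)
    S = dangling(C, C)          # initial dangling suffixes of the code
    seen = set()
    while S:
        if S & C:
            return False        # a dangling suffix is itself a codeword
        seen |= S
        S = (dangling(C, S) | dangling(S, C)) - seen
    return True
```

For example, the prefix code {0, 10, 110} is UD, while {a, ab, ba} is not (the word "aba" factorises two ways).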
Energy Technology Data Exchange (ETDEWEB)
Vergnaud, Th.; Nimal, J.C.; Chiron, M
2001-07-01
The TRIPOLI-3 code applies the Monte Carlo method to neutron, gamma-ray and coupled neutron and gamma-ray transport calculations in three-dimensional geometries, either in steady-state conditions or with a time dependence. It can be used to study problems with a high flux attenuation between the source zone and the result zone (studies of shielding configurations or source-driven sub-critical systems, with fission being taken into account), as well as problems with a low flux attenuation (neutronic calculations -- in a fuel lattice cell, for example -- where fission is taken into account, usually with the calculation of the effective multiplication factor, fine-structure studies, numerical experiments to investigate method approximations, etc.). TRIPOLI-3 has been operational since 1995 and is the version of the TRIPOLI code that follows on from TRIPOLI-2; it can be used on SUN, RISC600 and HP workstations and on PCs running the Linux or Windows/NT operating systems. The code uses nuclear data libraries generated using the THEMIS/NJOY system. The current libraries were derived from ENDF/B6 and JEF2. There is also a response function library based on a number of evaluations, notably the dosimetry libraries IRDF/85 and IRDF/90, as well as evaluations from JEF2. The treatment of particle transport is the same in version 3.5 as in version 3.4 of the TRIPOLI code, but version 3.5 is more convenient for preparing the input data and for reading the output. A French version of the user's manual exists. (authors)
Defeating the coding monsters.
Colt, Ross
2007-02-01
Accuracy in coding is rapidly becoming a required skill for military health care providers. Clinic staffing, equipment purchase decisions, and even reimbursement will soon be based on the coding data that we provide. Learning the complicated myriad of rules to code accurately can seem overwhelming. However, the majority of clinic visits in a typical outpatient clinic generally fall into two major evaluation and management codes, 99213 and 99214. If health care providers can learn the rules required to code a 99214 visit, then this will provide a 90% solution that can enable them to accurately code the majority of their clinic visits. This article demonstrates a step-by-step method to code a 99214 visit, by viewing each of the three requirements as a monster to be defeated.
A Single-Loop Vectorization Method Based on Assembly Code
Institute of Scientific and Technical Information of China (English)
陆洪毅; 戴葵; 王志英
2003-01-01
Through loop vectorization in the instruction sequence, the vector capability provided by the hardware can be fully utilized. This paper analyzes the RISC instruction set and presents a single-loop vectorization method based on assembly code; it can efficiently detect single loops in the instruction sequence and vectorize them.
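A toy illustration of the idea is sketched below: the body between a loop label and its backward branch is rewritten with vector instructions, and the scalar induction update is dropped. The instruction mnemonics, the fixed `loop:` label, and the op mapping are all hypothetical; a real implementation on machine code must also prove the loop iterations independent before vectorizing.

```python
# Toy single-loop vectorizer over a hypothetical RISC-like listing.

SCALAR_TO_VECTOR = {"lw": "vlw", "add": "vadd", "sw": "vsw"}

def vectorize_single_loop(instrs):
    try:
        start = instrs.index("loop:")
        end = next(i for i in range(start + 1, len(instrs))
                   if instrs[i].split()[0] == "bne")
    except (ValueError, StopIteration):
        return instrs                      # no recognisable single loop
    body = []
    for ins in instrs[start + 1:end]:
        op, _, rest = ins.partition(" ")
        if op in SCALAR_TO_VECTOR:
            body.append(SCALAR_TO_VECTOR[op] + " " + rest)
        elif op == "addi":                 # induction update subsumed by vector ops
            continue
        else:
            return instrs                  # body not vectorizable: leave unchanged
    return instrs[:start] + body + instrs[end + 1:]

program = ["li t0, 0", "loop:", "lw a0, (t1)", "lw a1, (t2)",
           "add a2, a0, a1", "sw a2, (t3)", "addi t0, t0, 1",
           "bne t0, t4, loop"]
vectorized = vectorize_single_loop(program)
```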
Witt, J.; Elwyn, G.; Wood, F.; Rogers, M.T.; Menon, U.; Brain, K.
2014-01-01
OBJECTIVE: To test whether the coping in deliberation (CODE) framework can be adapted to a specific preference-sensitive medical decision: risk-reducing bilateral salpingo-oophorectomy (RRSO) in women at increased risk of ovarian cancer. METHODS: We performed a systematic literature search to
Reusing legacy code based on the LC-WS method
Institute of Scientific and Technical Information of China (English)
赵媛; 周立军; 宦婧
2016-01-01
A large amount of legacy code exists in old systems. This paper presents LC-WS, a method for wrapping, deploying and reusing legacy code by exposing it as Web Services that callers can invoke. With LC-WS, large amounts of legacy code can be reused in an information integration platform at low cost, which both shortens the development cycle and reduces development risk. Practical application in an existing information integration platform proves that the method is feasible.
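The wrap-and-expose pattern behind this approach can be sketched as below: a legacy routine sits behind a thin service layer that speaks a neutral request/response format, so integration-platform clients never call the legacy code directly. The legacy function and the JSON protocol here are hypothetical stand-ins for a real Web Services (e.g. SOAP/WSDL) wrapper.

```python
import json

def legacy_total(prices):          # stand-in for an old library routine
    total = 0.0
    for p in prices:
        total += p
    return total

class LegacyCodeService:
    """Maps service operation names onto wrapped legacy functions."""
    def __init__(self):
        self.operations = {"total": legacy_total}

    def handle(self, request_json):
        req = json.loads(request_json)
        op = self.operations.get(req.get("operation"))
        if op is None:
            return json.dumps({"error": "unknown operation"})
        return json.dumps({"result": op(req.get("params", []))})

service = LegacyCodeService()
reply = service.handle('{"operation": "total", "params": [1.5, 2.5]}')
```

New operations are added by registering further legacy functions in the dispatch table, leaving the legacy code itself untouched.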
Directory of Open Access Journals (Sweden)
Daogang Lu
2016-01-01
A three-dimensional, multigroup diffusion code based on a high-order nodal expansion method for hexagonal-z geometry (HNHEX) was developed to perform the neutronic analysis of hexagonal-z geometry. In this method, the one-dimensional radial and axial spatial flux of each node and energy group are expanded in a quadratic polynomial and a fourth-order polynomial, respectively; both approximations have second-order accuracy. Moment weighting is used to obtain the high-order expansion coefficients of the polynomials for the one-dimensional radial and axial spatial flux. The partially integrated radial and axial leakages are both approximated by quadratic polynomials. The coarse-mesh rebalance method with asymptotic source extrapolation is applied to accelerate the calculation. The code is used to calculate the effective multiplication factor, neutron flux distribution, and power distribution. The numerical calculations in this paper for the three-dimensional SNR and VVER-440 benchmark problems demonstrate the accuracy of the code. In addition, the results show that the accuracy of the code is improved by applying a quadratic approximation for the partially integrated axial leakage and a fourth-order approximation for the one-dimensional axial spatial flux, in comparison with a flat approximation for the partially integrated axial leakage and a quadratic approximation for the one-dimensional axial spatial flux.
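A one-dimensional illustration of the moment-weighting step is sketched below: the expansion coefficients of a nodal flux shape f(x) on [-1, 1] in Legendre polynomials follow from weighting f with the polynomials themselves, a_n = (2n + 1)/2 × ∫ f P_n dx. The flux shape is hypothetical, the expansion is truncated at the quadratic term, and the integral is evaluated with a simple trapezoidal rule; this illustrates the projection idea only, not the HNHEX formulation.

```python
# Moment weighting in 1-D: project a nodal flux shape onto the first three
# Legendre polynomials to obtain its quadratic expansion coefficients.

LEGENDRE = [lambda x: 1.0, lambda x: x, lambda x: 0.5 * (3 * x * x - 1)]

def integrate(g, n=20000):
    """Trapezoidal rule for the integral of g over [-1, 1]."""
    h = 2.0 / n
    xs = [-1.0 + i * h for i in range(n + 1)]
    return h * (sum(g(x) for x in xs) - 0.5 * (g(xs[0]) + g(xs[-1])))

def moment_coefficients(f):
    return [(2 * n + 1) / 2.0 * integrate(lambda x: f(x) * LEGENDRE[n](x))
            for n in range(3)]

flux = lambda x: 2 + 3 * x + 4 * x * x      # hypothetical nodal flux shape
a = moment_coefficients(flux)
```

Since the example flux is itself quadratic, the truncated expansion reproduces it exactly (a0 = 10/3, a1 = 3, a2 = 8/3).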
Gläser, Jochen; Laudel, Grit
2013-01-01
Qualitative research aimed at "mechanismic" explanations poses specific challenges to qualitative data analysis because it must integrate existing theory with patterns identified in the data. We explore the utilization of two methods—coding and qualitative content analysis—for the first steps in the
Latorre, Jose I
2015-01-01
There exists a remarkable four-qutrit state that carries absolute maximal entanglement in all its partitions. Employing this state, we construct a tensor network that delivers a holographic many-body state, the H-code, where the physical properties of the boundary determine those of the bulk. This H-code is made of an even superposition of states whose relative Hamming distances are exponentially large with the size of the boundary. This property makes H-codes natural states for a quantum memory. H-codes exist on tori of definite sizes and are classified into three different sectors characterized by the sum of their qutrits on cycles wrapped through the boundaries of the system. We construct a parent Hamiltonian for the H-code, which is highly non-local, and finally we compute the topological entanglement entropy of the H-code.
A Channel Coding Simulation Method Based on the UAV Data Link
Institute of Scientific and Technical Information of China (English)
郭淑霞; 刘冰; 高颖; 黄国栋
2011-01-01
Channel coding is an important way to improve the reliability of communication. To satisfy the reliability requirements of the unmanned aerial vehicle (UAV) data link when transmitting telemetry and telecommand data, this paper studies a channel coding simulation method based on the UAV data link. Through key technologies including coded bit-stream generation, loading of the transmission channel model, real-time driving of microwave instruments, and multithreaded programming with thread synchronization, the method simulates the encoding and decoding of convolutional, Turbo and LDPC codes with variable code rates in a microwave anechoic chamber. The simulation results validate the channel encoding and decoding scheme of the UAV data link and bring the data link's bit error rate below 10^-5, meeting the high-reliability transmission requirements of the UAV data link.
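The bit-stream generation step for one of the code families mentioned can be sketched as below: a rate-1/2 convolutional encoder with constraint length 3 and the common (7, 5) octal generator polynomials. This illustrates the encoding principle only; it is not the paper's configuration, and the Turbo/LDPC cases are not reproduced.

```python
# Rate-1/2 convolutional encoder, constraint length 3, generators (7, 5)
# octal: for each input bit, emit the parities of the register taps.

def conv_encode(bits, g1=0b111, g2=0b101):
    state = 0                      # two most recent input bits
    out = []
    for b in bits:
        reg = (b << 2) | state     # register contents: [newest, s1, s2]
        out.append(bin(reg & g1).count("1") % 2)   # parity of g1 taps
        out.append(bin(reg & g2).count("1") % 2)   # parity of g2 taps
        state = reg >> 1           # shift the register forward
    return out

coded = conv_encode([1, 0, 1, 1])   # classic textbook input for this encoder
```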
Kubilius, Jonas
2014-01-01
Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.
Kaneko, Nobu-hisa; Maruyama, Michitaka; Urano, Chiharu; Kiryu, Shogo
2012-01-01
A method of AC waveform synthesis with quantum-mechanical accuracy has been developed on the basis of the Josephson effect in national metrology institutes, not only for its scientific interest but its potential benefit to industries. In this paper, we review the development of Josephson arbitrary waveform synthesizers based on the two types of Josephson junction array and their distinctive driving methods. We also discuss a new operation technique with multibit delta-sigma modulation and a thermometer code, which possibly enables the generation of glitch-free waveforms with high voltage levels. A Josephson junction array for this method has equally weighted branches that are operated by thermometer-coded bias current sources with multibit delta-sigma conversion.
Directory of Open Access Journals (Sweden)
Seyed Abolfazl Hosseini
2016-02-01
In the present paper, the development of a three-dimensional (3D) computational code based on the Galerkin finite element method (GFEM) for solving the multigroup forward/adjoint diffusion equation in both rectangular and hexagonal geometries is reported. Linear approximation of shape functions in the GFEM with unstructured tetrahedral elements is used in the calculation. Both criticality and fixed-source calculations may be performed using the developed GFEM-3D computational code. An acceptable level of accuracy at a low computational cost is the main advantage of the unstructured tetrahedral elements. The unstructured tetrahedral elements generated with the Gambit software are used in the GFEM-3D computational code through a developed interface. The forward/adjoint multiplication factor, forward/adjoint flux distribution, and power distribution in the reactor core are calculated using the power iteration method. Criticality calculations are benchmarked against the valid solution of the neutron diffusion equation for the International Atomic Energy Agency (IAEA) 3D and Water-Water Energetic Reactor (VVER-1000) reactor cores. In addition, the calculations are validated against the P1 approximation of transport theory for the liquid metal fast breeder reactor benchmark problem. The neutron fixed-source calculations are benchmarked through a comparison with the results obtained from similar computational codes. Finally, an analysis of the sensitivity of the calculations to the number of elements is performed.
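The power iteration method mentioned above for the multiplication factor can be sketched in a few lines. This is a generic dominant-eigenvalue iteration on a toy 2×2 matrix standing in for the discretized operator, not the GFEM-3D code itself:

```python
import numpy as np

def power_iteration(A, tol=1e-12, max_iter=10_000):
    """Return the dominant eigenvalue and eigenvector of A by power iteration."""
    x = np.ones(A.shape[0])        # flat initial flux guess
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = np.linalg.norm(y)
        x = y / lam_new            # renormalize each sweep
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, x

# Toy 2-group "fission matrix"; its dominant eigenvalue plays the role of k_eff.
A = np.array([[1.1, 0.4],
              [0.3, 0.9]])
k_eff, flux = power_iteration(A)
print(k_eff)
```

In a criticality calculation the matrix-vector product is replaced by one fission-source sweep of the discretized diffusion operator, but the fixed-point structure is the same.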
DEFF Research Database (Denmark)
Soon, Winnie
2014-01-01
, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms’ interfaces. These are important...
DEFF Research Database (Denmark)
Cox, Geoff
Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...
2014-12-01
[Report front matter and table-of-contents fragments: figures on the capacity of the BSC, AWGN, and QPSK Gaussian channels; an introduction to forward error correction (FEC) with polar codes, introduced by E. Arikan in [1]; executive summary of the project "More reliable wireless…", ISR Division, under authority of C. A. Wilgenbusch.]
Transplantation Method of QR Code Decoding Program Based on Embedded Platforms
Institute of Scientific and Technical Information of China (English)
杨柏松; 高美凤
2016-01-01
A transplantation (porting) method for a QR code decoding program on an embedded platform is presented. The UP-NETARM2410-S was selected as the hardware development platform. First, the hardware composition of the system is given. Then, the whole development process of the QR code decoding program using Qt Creator is introduced in detail. In the test phase, the qvfb virtual screen was used to simulate the running state of the QR code decoder. Finally, the program was transplanted to the real embedded platform. Test results show that the decoding program runs normally on the embedded platform and correctly decodes QR code information. The proposed transplantation method may serve as a reference for porting QR code decoders to other platforms.
Authorship Attribution of Source Code
Tennyson, Matthew F.
2013-01-01
Authorship attribution of source code is the task of deciding who wrote a program, given its source code. Applications include software forensics, plagiarism detection, and determining software ownership. A number of methods for the authorship attribution of source code have been presented in the past. A review of those existing methods is…
Research on Storage Redundancy Reduction Method of Fountain Code in WSN
Institute of Scientific and Technical Information of China (English)
袁博; 赵旦峰; 钱晋希
2014-01-01
Because the redundant encoded packets of digital fountain codes require large memory, the real-time performance of a Wireless Sensor Network (WSN) suffers. To address this, an encoding/decoding system for Luby Transform (LT) codes with an average frame length is designed. A typical topology model is built, data are transmitted using a concatenation of network coding and digital fountain codes, and a compression coding scheme is applied to the generator matrix of the average-frame-length LT code. The weighted-average method and the multi-bit packing method reduce the storage redundancy of the whole WSN without destroying the fountain-code properties. Experimental results show that the system reduces the storage redundancy of the digital fountain code by about three orders of magnitude (10³), and improves the encoding/decoding efficiency of the WSN and the data recovery rate of the data center.
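The peeling decoder at the heart of LT fountain codes can be sketched compactly. This is a generic illustration with a made-up uniform degree distribution, not the average-frame-length scheme of the paper:

```python
import random

def lt_encode(blocks, n_packets, rng):
    """Each packet XORs a random subset of source blocks; the subset
    (the packet's neighbours) is assumed known to the decoder."""
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        degree = rng.choice([1, 2, 3, 4])      # toy degree distribution
        idx = rng.sample(range(k), degree)
        val = 0
        for i in idx:
            val ^= blocks[i]
        packets.append((tuple(idx), val))
    return packets

def lt_decode(packets, k):
    """Peeling decoder: substitute already-known blocks into each packet;
    every packet reduced to degree 1 reveals a new source block."""
    decoded = {}
    pending = [[set(idx), val] for idx, val in packets]
    changed = True
    while changed and len(decoded) < k:
        changed = False
        for p in pending:
            idx, val = p
            for i in [i for i in idx if i in decoded]:
                idx.discard(i)
                val ^= decoded[i]
            p[1] = val
            if len(idx) == 1:
                (i,) = idx
                if i not in decoded:
                    decoded[i] = val
                    changed = True
    return [decoded.get(i) for i in range(k)]

blocks = [0x3A, 0x17, 0xF0, 0x55, 0x9C]       # source blocks as small ints
packets = lt_encode(blocks, 30, random.Random(7))
print(lt_decode(packets, len(blocks)))
```

With enough received packets the ripple of degree-1 packets recovers all source blocks; the rateless property means the encoder can keep emitting packets until the decoder succeeds.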
GML data index method based on element interval coding
Institute of Scientific and Technical Information of China (English)
於时才; 郭润牛; 吴衍智
2013-01-01
According to the demands of GML data query, a GML indexing method based on extended element interval coding is proposed, building on an analysis of XML document coding techniques and spatial indexing methods. First, the extended interval coding method encodes the elements, attributes, text, and geometric objects in a GML document. Then the non-spatial nodes, spatial nodes, and element nodes are separated from the GML document tree to generate the element coding sequence with the element coding algorithm. On this basis, and according to node type, a B+-tree index is built for attribute and text nodes to support value queries, and an R-tree index is built for geometric object nodes to support spatial analysis; a query optimization algorithm avoids unnecessary traversal of nodes, further improving query efficiency. Experimental results show that the indexing method based on element interval coding is feasible and efficient.
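The core idea of interval coding for document trees, namely that an ancestor's interval strictly contains every descendant's, can be sketched as follows. The GML-like tag names are hypothetical, and this shows only the containment test, not the paper's full extended scheme:

```python
def assign_intervals(tree, root):
    """DFS assigning (start, end) so that descendant intervals nest
    strictly inside their ancestors' intervals."""
    counter = [0]
    intervals = {}
    def dfs(node):
        start = counter[0]; counter[0] += 1
        for child in tree.get(node, []):
            dfs(child)
        intervals[node] = (start, counter[0]); counter[0] += 1
    dfs(root)
    return intervals

def is_ancestor(intervals, a, b):
    """a is an ancestor of b iff a's interval strictly contains b's."""
    (s1, e1), (s2, e2) = intervals[a], intervals[b]
    return s1 < s2 and e2 < e1

# Hypothetical GML-like document tree.
tree = {"gml": ["feature", "geometry"], "feature": ["name"]}
intervals = assign_intervals(tree, "gml")
print(intervals)
```

The payoff is that structural joins reduce to numeric range comparisons, which is exactly what lets the encoded nodes be stored in B+-tree or R-tree indexes.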
Energy Technology Data Exchange (ETDEWEB)
Parks, C.V.; Broadhead, B.L.; Hermann, O.W.; Tang, J.S.; Cramer, S.N.; Gauthey, J.C.; Kirk, B.L.; Roussin, R.W.
1988-07-01
This report provides a preliminary assessment of the computational tools and existing methods used to obtain radiation dose rates from shielded spent nuclear fuel and high-level radioactive waste (HLW). Particular emphasis is placed on analysis tools and techniques applicable to facilities/equipment designed for the transport or storage of spent nuclear fuel or HLW. Applications to cask transport, storage, and facility handling are considered. The report reviews the analytic techniques for generating appropriate radiation sources, evaluating the radiation transport through the shield, and calculating the dose at a desired point or surface exterior to the shield. Discrete ordinates, Monte Carlo, and point kernel methods for evaluating radiation transport are reviewed, along with existing codes and data that utilize these methods. A literature survey was employed to select a cadre of codes and data libraries to be reviewed. The selection process was based on specific criteria presented in the report. Separate summaries were written for several codes (or family of codes) that provided information on the method of solution, limitations and advantages, availability, data access, ease of use, and known accuracy. For each data library, the summary covers the source of the data, applicability of these data, and known verification efforts. Finally, the report discusses the overall status of spent fuel shielding analysis techniques and attempts to illustrate areas where inaccuracy and/or uncertainty exist. The report notes the advantages and limitations of several analysis procedures and illustrates the importance of using adequate cross-section data sets. Additional work is recommended to enable final selection/validation of analysis tools that will best meet the US Department of Energy's requirements for use in developing a viable HLW management system. 188 refs., 16 figs., 27 tabs.
Truchet, G.; Leconte, P.; Peneliau, Y.; Santamarina, A.; Malvagi, F.
2014-06-01
Pile-oscillation experiments are performed in the MINERVE reactor at CEA Cadarache to improve nuclear data accuracy. In order to precisely calculate the small reactivity variations measured in these experiments, a reference calculation needs to be achieved. This calculation may be accomplished using the continuous-energy Monte Carlo code TRIPOLI-4® with the eigenvalue difference method. This "direct" method has shown limitations in the evaluation of very small reactivity effects because it needs to reach a very small variance associated with the reactivity in both states. To overcome this problem, it was decided to implement the exact perturbation theory in TRIPOLI-4® and, consequently, to calculate a continuous-energy adjoint flux. The Iterated Fission Probability (IFP) method was chosen because it has shown great results in other Monte Carlo codes. The IFP method uses a forward calculation to compute the adjoint flux; consequently, it does not rely on complex code modifications but on the physical definition of the adjoint flux as a phase-space neutron importance. In the first part of this paper, the IFP method implemented in TRIPOLI-4® is described. To illustrate the efficiency of the method, several adjoint fluxes are calculated and compared with their equivalents obtained by the deterministic code APOLLO-2. The new implementation can calculate the angular adjoint flux. In the second part, a procedure to carry out an exact perturbation calculation is described. A single-cell benchmark has been used to test the accuracy of the method, compared with the "direct" estimation of the perturbation. Once again the method based on the IFP shows good agreement, for a calculation time far shorter than the "direct" method. The main advantage of the method is that the relative accuracy of the reactivity variation does not depend on the magnitude of the variation itself, which allows very small reactivity perturbations to be calculated with high precision. Other applications of
DEFF Research Database (Denmark)
Nielsen, Rasmus Refslund
2002-01-01
This paper describes an efficient decoding method for a recent construction of good linear codes as well as an extension to the construction. Furthermore, asymptotic properties and list decoding of the codes are discussed.
Anderson, John B
2017-01-01
Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.
Directory of Open Access Journals (Sweden)
M. Abbasian Motlagh
2014-04-01
For their appropriate temporal resolution, scintillator detectors are used in the Alborz observatory. In this work, the behavior of the scintillation detectors for the passage of electrons with different energies and directions was studied using the simulation code GEANT4. Pulse shapes of the scintillation light, and characteristics such as the total number of photons and the rise and fall times of the optical pulses, were computed for the passage of electrons with energies of 10, 100 and 1000 MeV. Variations of the optical pulse characteristics with the incident angle and location of the electrons were also investigated.
QR code sampling method based on adaptive match
Institute of Scientific and Technical Information of China (English)
宋贤媛; 张多英
2015-01-01
A QR code image acquired by a camera always comes with some distortion, so it must be recognized and normalized to a standard QR code before decoding. This paper analyzes distortion and correction in QR code recognition. Some unavoidable distortion remains even after tilt correction and geometric correction, so the traditional method cannot sample the QR code accurately. To address this problem, an adaptive match method is proposed that obtains the effective sampling region of the QR code from the matching rate of adjacent pixel rows (columns). Experiments show that the method is stable and real-time, and can sample the QR code quickly and accurately.
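The adjacent-row matching idea can be illustrated on a synthetic image. Below, a checkerboard of 4×4-pixel modules stands in for a QR symbol; the match rate between adjacent pixel rows drops exactly at module boundaries, which reveals the sampling grid. This is an illustrative sketch, not the paper's full algorithm:

```python
import numpy as np

def row_match_rate(img):
    """Fraction of equal pixels between each pair of adjacent rows."""
    return (img[:-1] == img[1:]).mean(axis=1)

# Toy "QR-like" image: a 5x5 checkerboard of modules, each 4x4 pixels.
modules = (np.add.outer(np.arange(5), np.arange(5)) % 2).astype(np.uint8)
img = np.kron(modules, np.ones((4, 4), dtype=np.uint8))

rates = row_match_rate(img)
boundaries = np.where(rates < 1.0)[0]   # rows where the module row changes
print(boundaries)
```

In a real distorted image the match rate never reaches exactly 1.0 inside a module, so a threshold on the rate (rather than strict equality) would be used to locate the module grid.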
Long-code Signal Waveform Monitoring Method for Navigation Satellites
Institute of Scientific and Technical Information of China (English)
刘建成; 王宇; 宫磊; 徐晓燕
2016-01-01
Because the signals are weak, obtaining clear signal waveforms for navigation satellites in orbit is one of the difficulties in satellite navigation signal quality monitoring research, so a signal waveform monitoring method for navigation satellites in orbit is proposed. Based on the Vernier sampling principle, a large-diameter parabolic antenna is used for in-orbit satellite signal collection. After eliminating the initial phase and residual frequency, followed by accumulation and combination, a clear chip waveform is obtained. For civilian and long-code signals with the same code rate, the PN code phase bias can be determined. By tracking COMPASS satellites with a large-diameter parabolic antenna, the civilian and long-code chip waveforms of several COMPASS satellites in the B1 band were obtained, along with the PN code phase biases of the satellite signals. The results show that there is little difference between the civilian and long-code chip waveform profiles, but there is a code phase bias between them.
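The accumulation-and-averaging step that pulls a clean chip waveform out of weak signals can be sketched as follows. For reproducibility the disturbance here is a deterministic alternating term that cancels exactly over an even number of sweeps; real receiver noise is random, and averaging N sweeps then shrinks it by roughly √N:

```python
import numpy as np

# Ideal chip waveform template (a smoothed chip edge).
t = np.linspace(0, 1, 200)
template = np.tanh(20 * (t - 0.5))

# Deterministic "noise" that alternates sign between repetitions,
# so averaging an even number of sweeps cancels it exactly.
disturbance = 0.5 * np.sin(40 * np.pi * t)
sweeps = np.array([template + ((-1) ** k) * disturbance for k in range(100)])

averaged = sweeps.mean(axis=0)
print(np.abs(averaged - template).max())
```

The same principle underlies the accumulation step above: the repeated chip waveform is coherent across sweeps while the noise is not, so averaging raises the effective SNR.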
Coding and Decoding Method for Periodic Permutation Color Structured Light
Institute of Scientific and Technical Information of China (English)
秦绪佳; 马吉跃; 张勤锋; 郑红波; 徐晓刚
2014-01-01
A periodic permutation color structured light coding and decoding method is presented. The method uses the red, green, and blue primary colors for the encoding stripe pattern and makes any three adjacent color stripes a group, so that the stripe order within a group is unique. White stripes then mark the periods of the color stripe pattern to distinguish different coding groups. This method achieves a larger coding space with fewer colors, increases noise immunity, and makes decoding easier. For accurate decoding, an adaptive color stripe segmentation method based on an improved Canny edge-detection operator is presented. Decoding comprises two parts: (1) sequential decoding of the color stripes based on the white stripes; (2) decoding of omitted color stripes. Experimental results show that the method has a large coding period, extracts stripes easily, ensures the accuracy of stripe decoding, and achieves good coding and decoding results.
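The uniqueness constraint on adjacent stripe triples can be sketched with a greedy construction. This is a hypothetical illustration of the constraint only, not the authors' pattern generator, and it adds the natural extra requirement that adjacent stripes differ in color:

```python
def build_stripe_code(colors=("R", "G", "B")):
    """Greedily extend a stripe sequence so that no two adjacent stripes
    share a color and every window of three consecutive stripes is unique."""
    seq = list(colors)              # seed with one valid window
    seen = {tuple(seq)}
    extended = True
    while extended:
        extended = False
        for c in colors:
            if c == seq[-1]:
                continue            # adjacent stripes must differ
            window = (seq[-2], seq[-1], c)
            if window not in seen:
                seen.add(window)
                seq.append(c)
                extended = True
                break
    return seq

code = build_stripe_code()
print("".join(code))
```

Because every window of three stripes occurs at most once, observing any three consecutive stripes identifies their position within a period; the white marker stripes of the abstract then disambiguate between periods.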
Quantum codes from linear codes over finite chain rings
Liu, Xiusheng; Liu, Hualu
2017-10-01
In this paper, we provide two methods of constructing quantum codes from linear codes over finite chain rings. The first is derived from the Calderbank-Shor-Steane (CSS) construction applied to self-dual codes over finite chain rings. The second is derived from the CSS construction applied to Gray images of linear codes over the finite chain ring {\mathbb {F}}_{p^{2m}}+u{\mathbb {F}}_{p^{2m}}. Quantum codes with good parameters are obtained from cyclic codes over finite chain rings.
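The dual-containment condition behind the CSS construction can be checked concretely in the simplest classical setting, binary codes over GF(2) rather than the finite chain rings of the paper. Taking C1 = C2 = the [7,4] Hamming code, whose dual is generated by its parity-check matrix H, the condition C2⊥ ⊆ C1 reduces to H·Hᵀ = 0 (mod 2), and the resulting quantum code is the [[7, 4+4-7, 3]] = [[7,1,3]] Steane code:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code (column j is j+1 in binary).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

# CSS condition: the dual code (spanned by the rows of H) must lie inside
# the code itself, i.e. every row of H is a Hamming codeword.
dual_in_code = not np.any(H @ H.T % 2)

n, k = 7, 4
logical_qubits = k + k - n     # CSS with C1 = C2 gives [[n, k1 + k2 - n]]
print(dual_in_code, logical_qubits)
```

The same containment check, with ring arithmetic in place of mod-2 arithmetic, is what the constructions over finite chain rings must verify.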
Rate-adaptive BCH codes for distributed source coding
DEFF Research Database (Denmark)
Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren
2013-01-01
This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...
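The distributed-source-coding role of a linear code can be sketched with a tiny stand-in for BCH: the Hamming(7,4) code. The encoder sends only the 3-bit syndrome of X; the decoder combines it with the side information Y and corrects up to one X-Y disagreement. This is an illustrative sketch without the rate adaptation or feedback channel of the paper:

```python
import numpy as np

# Hamming(7,4) parity-check matrix; column j is j+1 written in binary.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def syndrome(v):
    return H @ v % 2

def dsc_decode(y, s):
    """Recover x from side information y and the 3-bit syndrome s = Hx,
    assuming x and y differ in at most one position."""
    e_syn = (syndrome(y) + s) % 2          # syndrome of the error x XOR y
    if not e_syn.any():
        return y.copy()
    pos = int(e_syn[0] + 2 * e_syn[1] + 4 * e_syn[2]) - 1  # error position
    x_hat = y.copy()
    x_hat[pos] ^= 1
    return x_hat

x = np.array([1, 0, 1, 1, 0, 0, 1], dtype=np.uint8)   # source word
y = x.copy(); y[4] ^= 1                                # correlated side info
x_hat = dsc_decode(y, syndrome(x))                     # only 3 bits sent
print(x_hat.tolist())
```

The compression comes from transmitting 3 syndrome bits instead of the 7 source bits; a rate-adaptive scheme as in the paper would request additional syndrome bits over the feedback channel when the correlation turns out to be weaker.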
Energy Technology Data Exchange (ETDEWEB)
St. John, C.M.; Sanjeevan, K. [Agapito (J.F.T.) and Associates, Inc., Grand Junction, CO (United States)
1991-12-01
The HEFF Code combines a simple boundary-element method of stress analysis with the closed-form solutions for constant or exponentially decaying heat sources in an infinite elastic body to obtain an approximate method for the analysis of underground excavations in a rock mass with heat generation. This manual describes the theoretical basis for the code, the code structure, model preparation, and steps taken to assure that the code correctly performs its intended functions. The material contained within the report addresses the Software Quality Assurance Requirements for the Yucca Mountain Site Characterization Project. 13 refs., 26 figs., 14 tabs.
Measurement Method of Source Code Similarity Based on Words
Institute of Scientific and Technical Information of China (English)
朱红梅; 孙未; 王鲁; 张亮
2014-01-01
To help teachers quickly and accurately identify plagiarism in programming assignments, this paper develops a method of measuring the similarity of source programs. Based on the word-level edit distance between source codes and the length of their longest common subsequence, the similarity of each pair of programs submitted by students is calculated, and by setting a reasonable dynamic threshold, it is determined whether plagiarism exists between a pair of source programs. Experimental results show that the method can identify similar source programs submitted by students promptly, effectively and accurately.
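The two ingredients named above, word-level edit distance and longest common subsequence, combine into a similarity score as sketched below. The equal weighting and whitespace tokenization are illustrative assumptions, not the paper's exact formula:

```python
def edit_distance(a, b):
    """Levenshtein distance between token sequences a and b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[m][n]

def lcs_length(a, b):
    """Length of the longest common subsequence of token sequences a and b."""
    m, n = len(a), len(b)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            L[i][j] = L[i - 1][j - 1] + 1 if a[i - 1] == b[j - 1] else max(L[i - 1][j], L[i][j - 1])
    return L[m][n]

def similarity(src1, src2):
    """Combine normalized edit distance and LCS length into a score in [0, 1]."""
    a, b = src1.split(), src2.split()
    n = max(len(a), len(b)) or 1
    return 0.5 * (1 - edit_distance(a, b) / n) + 0.5 * (lcs_length(a, b) / n)

print(similarity("int sum = 0 ;", "int total = 0 ;"))
```

A plagiarism detector would tokenize each submission (stripping comments and normalizing identifiers is common), score every pair, and flag pairs above a threshold.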
Several common code optimization methods for MATLAB
Institute of Scientific and Technical Information of China (English)
程宏辉; 刘红飞; 王佳; 孙玉晨; 黄新; 秦康生
2011-01-01
Although MATLAB provides a large number of professional toolboxes, users still frequently need to write their own programs to solve practical engineering problems. Therefore, how to optimize program code according to the inherent characteristics of the software deserves attention. This paper describes several common code optimization methods for MATLAB. Long-term practice has shown that these methods are simple and operable, and effectively improve code execution speed.
Combustion chamber analysis code
Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.
1993-05-01
A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.
Code Flows : Visualizing Structural Evolution of Source Code
Telea, Alexandru; Auber, David
2008-01-01
Understanding detailed changes done to source code is of great importance in software maintenance. We present Code Flows, a method to visualize the evolution of source code geared to the understanding of fine and mid-level scale changes across several file versions. We enhance an existing visual met
Valdivia, Valeska
2014-01-01
Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled and their interplay influences the physical and chemical properties of gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims. Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of the UV screening on the dynamics and on the statistical properties of molecular clouds. Methods. We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on the clump formation. Results. We find that the accuracy for the extinction of the tree-based method is better than 10%, while the ...
Energy Technology Data Exchange (ETDEWEB)
Junior, Reginaldo G., E-mail: reginaldo.junior@ifmg.edu.br [Instituto Federal de Minas Gerais (IFMG), Formiga, MG (Brazil). Departamento de Engenharia Eletrica; Oliveira, Arno H. de; Sousa, Romulo V., E-mail: arnoheeren@gmail.com, E-mail: romuloverdolin@yahoo.com.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear; Mourao, Arnaldo P., E-mail: apratabhz@gmail.com [Centro Federal de Educacao Tecnologica de Minas Gerais, Belo Horizonte, MG (Brazil)
2015-07-01
This paper reports the modeling of a Clinac 600 CD linear accelerator with the BEAMnrc application, derived from the EGSnrc radiation transport code, indicating relevant details of a modeling process that traditionally involves difficulties. The accelerator model was commissioned by confronting experimental dosimetric data with computational data obtained with the DOSXYZnrc application. The information compared in the dosimetry process comprised field profiles and percentage depth dose curves obtained in a water phantom with a cubic edge of 30 cm. In all comparisons, the computational data showed satisfactory precision, and discrepancies with the experimental data did not exceed 3%, proving the effectiveness of the model. Both the accelerator model and the computational dosimetry methodology revealed the need for adjustments that will probably allow obtaining more accurate data than those presented here. These adjustments are mainly associated with improving the resolution of the field profiles, the voxelization of the phantom, and optimization of computing time. (author)
Fulachier, J; The ATLAS collaboration; Albrand, S; Lambert, F
2014-01-01
The “ATLAS Metadata Interface” framework (AMI) has been developed in the context of ATLAS, one of the largest scientific collaborations. AMI can be considered to be a mature application, since its basic architecture has been maintained for over 10 years. In this paper we will briefly describe the architecture and the main uses of the framework within the experiment (TagCollector for release management and Dataset Discovery). These two applications, which share almost 2000 registered users, are superficially quite different, however much of the code is shared and they have been developed and maintained over a decade almost completely by the same team of 3 people. We will discuss how the architectural principles established at the beginning of the project have allowed us to continue both to integrate the new technologies and to respond to the new metadata use cases which inevitably appear over such a time period.
Hutchison, Michael G; Comper, Paul; Meeuwisse, Willem H; Echemendia, Ruben J
2014-01-01
Development of effective strategies for preventing concussions is a priority in all sports, including ice hockey. Digital video records of sports events contain a rich source of valuable information, and are therefore a promising resource for analysing situational factors and injury mechanisms related to concussion. To determine whether independent raters reliably agreed on the antecedent events and mechanisms of injury when using a standardised observational tool known as the heads-up checklist (HUC) to code digital video records of concussions in the National Hockey League (NHL). The study occurred in two phases. In phase 1, four raters (2 naïve and 2 expert) independently viewed and completed HUCs for 25 video records of NHL concussions randomly chosen from the pool of concussion events from the 2006-2007 regular season. Following initial analysis, three additional factors were added to the HUC, resulting in a total of 17 factors of interest. Two expert raters then viewed the remaining concussion events from the 2006-2007 season, as well as all digital video records of concussion events up to 31 December 2009 (n=174). For phase 1, the majority of the factors had a κ value of 0.6 or higher (8 of 15 factors for naïve raters; 11 of 15 factors for expert raters). For phase 2, all the factors had a total percent agreement value greater than 0.8 and κ values of >0.65 for the expert raters. HUC is an objective, reliable tool for coding the antecedent events and mechanisms of concussions in the NHL.
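Inter-rater agreement of the kind reported above is typically summarized with Cohen's κ, which discounts agreement expected by chance. A minimal two-rater implementation, generic rather than tied to the 17 HUC factors:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n        # observed
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[cat] * c2[cat] for cat in c1) / n ** 2          # by chance
    return (p_o - p_e) / (1 - p_e)

# Hypothetical yes/no codings of ten video records by two raters.
r1 = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
r2 = [1, 1, 1, 1, 0, 0, 0, 1, 1, 0]
print(cohens_kappa(r1, r2))
```

Values of κ above roughly 0.6 are conventionally read as substantial agreement, which is the benchmark the study applies to its factors.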
DCT Transform Domain Filtering Code Acquisition Method
Institute of Scientific and Technical Information of China (English)
李小捷; 许录平
2012-01-01
Focusing on satellite signal acquisition with low time and frequency uncertainty, we propose a novel code acquisition algorithm based on the discrete cosine transform (DCT). First, a set of time-domain correlation vectors is obtained by a partial matched filter (PMF). Then, DCT-domain filtering and signal reconstruction are performed for every candidate code phase, followed by energy-based detection. Because the signal and noise produced by the PMF have different time-varying properties, the noise is greatly reduced while the signal suffers almost no loss, thereby increasing the probability of detection at the same probability of false alarm. Theoretical analysis and simulation results show that the detection algorithm can effectively improve the detection probability and has low complexity.
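The transform-domain filtering step can be illustrated with a self-contained orthonormal DCT-II. The "signal" and "noise" here are synthetic single DCT components chosen so that low-pass filtering in the DCT domain removes the disturbance exactly; the paper's actual PMF front end and energy detector are omitted:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: rows are the DCT basis vectors."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    C[0] *= np.sqrt(1 / n)
    C[1:] *= np.sqrt(2 / n)
    return C

n = 64
C = dct_matrix(n)

# "Signal": a slowly varying DCT component; "noise": the fastest component,
# mimicking rapidly varying noise on a PMF output vector.
signal = C[1]
noisy = signal + 0.8 * C[n - 1]

coeffs = C @ noisy                 # transform to the DCT domain
coeffs[n // 2:] = 0                # keep only low-frequency coefficients
recovered = C.T @ coeffs           # inverse transform (C is orthogonal)

print(np.abs(recovered - signal).max())
```

In the real algorithm the slowly varying correlation peak concentrates in low-order DCT coefficients while wideband noise spreads over all of them, so discarding high-order coefficients raises the post-filtering SNR.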
Energy Technology Data Exchange (ETDEWEB)
Williams, P. T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dickson, T. L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Yin, S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2007-12-01
The current regulations to ensure that nuclear reactor pressure vessels (RPVs) maintain their structural integrity when subjected to transients such as pressurized thermal shock (PTS) events were derived from computational models developed in the early-to-mid 1980s. Since that time, advancements and refinements in the relevant technologies that impact RPV integrity assessment have led to an effort by the NRC to re-evaluate its PTS regulations. Updated computational methodologies have been developed through interactions between experts in the relevant disciplines of thermal hydraulics, probabilistic risk assessment, materials embrittlement, fracture mechanics, and inspection (flaw characterization). Contributors to the development of these methodologies include the NRC staff, their contractors, and representatives from the nuclear industry. These updated methodologies have been integrated into the Fracture Analysis of Vessels -- Oak Ridge (FAVOR, v06.1) computer code developed for the NRC by the Heavy Section Steel Technology (HSST) program at Oak Ridge National Laboratory (ORNL). The FAVOR, v04.1, code represents the baseline NRC-selected applications tool for re-assessing the current PTS regulations. This report is intended to document the technical bases for the assumptions, algorithms, methods, and correlations employed in the development of the FAVOR, v06.1, code.
Moon, Yong Ho; Yoon, Kun Su; Ha, Seok Wun
2009-12-01
A fast coeff_token decoding method based on a new memory architecture is proposed to implement an efficient context-based adaptive variable-length coding (CAVLC) decoder. The heavy memory access needed in CAVLC decoding is a significant issue in designing a real system, such as digital multimedia broadcasting players, portable media players, and mobile phones with video, because it results in high power consumption and delay in operations. Recently, a new coeff_token variable-length decoding method has been suggested to achieve memory access reduction. However, it still requires a large portion of the total memory access in CAVLC decoding. In this work, an effective memory architecture is designed through careful examination of the codewords in variable-length code tables. In addition, a novel fast decoding method is proposed to further reduce the memory accesses required for reconstructing the coeff_token element. Only one memory access is used for reconstructing each coeff_token element in the proposed method.
Rohée, E.; Coulon, R.; Carrel, F.; Dautremer, T.; Barat, E.; Montagu, T.; Normand, S.; Jammes, C.
2016-11-01
Radionuclide identification and quantification are a serious concern for many applications, such as in situ monitoring at nuclear facilities, laboratory analysis, special nuclear materials detection, environmental monitoring, and waste measurements. High resolution gamma-ray spectrometry based on high purity germanium diode detectors is the best solution available for isotopic identification. Over the last decades, methods have been developed to improve gamma spectra analysis. However, some difficulties remain when full-energy peaks are folded together with a high ratio between their amplitudes, and when the Compton background is much larger than the signal of a single peak. In this context, this study compares a conventional analysis based on the "iterative peak fitting deconvolution" method with a "nonparametric Bayesian deconvolution" approach developed by the CEA LIST and implemented in the SINBAD code. The iterative peak fit deconvolution is used in this study as a reference method, largely validated by industrial standards, to unfold complex spectra from HPGe detectors. Complex cases of spectra are studied from IAEA benchmark protocol tests and from measured spectra. The SINBAD code shows promising deconvolution capabilities compared to the conventional method, without any expert parameter fine tuning.
Gassmöller, Rene; Bangerth, Wolfgang
2016-04-01
Particle-in-cell methods have a long history and many applications in geodynamic modelling of mantle convection, lithospheric deformation and crustal dynamics. They are primarily used to track material information, the strain a material has undergone, the pressure-temperature history a certain material region has experienced, or the amount of volatiles or partial melt present in a region. However, their efficient parallel implementation - in particular combined with adaptive finite-element meshes - is complicated due to the complex communication patterns and frequent reassignment of particles to cells. Consequently, many current scientific software packages accomplish this efficient implementation by specifically designing particle methods for a single purpose, like the advection of scalar material properties that do not evolve over time (e.g., for chemical heterogeneities). Design choices for particle integration, data storage, and parallel communication are then optimized for this single purpose, making the code relatively rigid to changing requirements. Here, we present the implementation of a flexible, scalable and efficient particle-in-cell method for massively parallel finite-element codes with adaptively changing meshes. Using a modular plugin structure, we allow maximum flexibility of the generation of particles, the carried tracer properties, the advection and output algorithms, and the projection of properties to the finite-element mesh. We present scaling tests ranging up to tens of thousands of cores and tens of billions of particles. Additionally, we discuss efficient load-balancing strategies for particles in adaptive meshes with their strengths and weaknesses, local particle-transfer between parallel subdomains utilizing existing communication patterns from the finite element mesh, and the use of established parallel output algorithms like the HDF5 library. Finally, we show some relevant particle application cases, compare our implementation to a
DEFF Research Database (Denmark)
Cox, Geoff
Speaking Code considers, among other things, alternatives to mainstream development, from performances of the live-coding scene to the organizational forms of commons-based peer production; the democratic promise of social media and their paradoxical role in suppressing political expression; and the market's emptying out of possibilities for free expression in the public realm. Speaking Code unfolds an argument to undermine the distinctions between criticism and practice, and to emphasize the aesthetic and political aspects of software studies. Not reducible to its functional aspects, program code mirrors the instability inherent in the relationship of speech to language. The book's line of argument defends language against its invasion by economics, arguing that speech continues to underscore the human condition, however paradoxical this may seem in an era of pervasive computing.
A Fast and Effective Localization Method of Quick Response Code
Institute of Scientific and Technical Information of China (English)
王景中; 贺磊
2015-01-01
To solve the low recognition rate of QR codes under complex backgrounds caused by failed localization, a new approach for QR code localization is proposed. Taking the structure of the QR code into account, the first step is contour localization, which determines the candidate regions of the QR code, and the second step is accurate localization. Contour localization applies the Hough transform to detect approximately square regions, then merges nested squares and finally adjusts the regions. The idea of the KMP algorithm is used in the accurate localization step to speed up the search for line segments with the specific ratio, thereby improving the speed of localization. Experimental results show that, compared with conventional QR code localization methods, this method locates QR codes quickly and precisely, greatly improves both overall recognition speed and recognition rate, and has high practical value.
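The ratio search underlying accurate localization can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the helper names, the dark-pixel value of 1, and the exact-ratio match (no tolerance, no KMP acceleration) are all assumptions made for brevity.

```python
def run_lengths(row):
    """Collapse a binary pixel row into runs of [value, length]."""
    runs = []
    for px in row:
        if runs and runs[-1][0] == px:
            runs[-1][1] += 1
        else:
            runs.append([px, 1])
    return runs

def find_finder_pattern(row):
    """Scan the run-length sequence for the 1:1:3:1:1 dark/light/dark/
    light/dark ratio of a QR finder pattern; returns the index of the
    first run of a match, or -1 if none is found."""
    runs = run_lengths(row)
    for i in range(len(runs) - 4):
        five = runs[i:i + 5]
        if five[0][0] != 1:          # pattern must start on a dark run
            continue
        unit = five[0][1]            # width of one module, from the first run
        if [r[1] for r in five] == [unit, unit, 3 * unit, unit, unit]:
            return i
    return -1
```

A real detector would allow a tolerance around the ideal ratios and scan columns and diagonals as well as rows.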
Energy Technology Data Exchange (ETDEWEB)
Villamizar, M.; Martorell, S.; Villanueva, J. F.; Carlos, S.; Sanchez, A.; Pelayo, F.; Mendizabal, R.; Sol, I.
2012-11-01
Statistical methods for the analysis of safety margins through BE+U codes: this paper presents statistical analysis tools (PLS, PCS, variance decomposition) to understand the relationships between the input variables (defined by the distribution functions of the thermal-hydraulic model parameters) and the output variable, e.g. the peak cladding temperature (PCT). The objective is to identify the input variables with the largest effect on the output variable. In addition, it is possible to quantify the contribution of the uncertainty of each input variable to the uncertainty of the results. The application case develops a large-break LOCA in a PWR. (Author) 16 refs.
Energy Technology Data Exchange (ETDEWEB)
Morris, R; Albanese, K; Lakshmanan, M; Greenberg, J; Kapadia, A [Duke University Medical Center, Durham, NC, Carl E Ravin Advanced Imaging Laboratories, Durham, NC (United States)
2015-06-15
Purpose: This study intends to characterize the spectral and spatial resolution limits of various fan beam geometries for differentiation of normal and neoplastic breast structures via coded aperture coherent scatter spectral imaging techniques. In previous studies, pencil beam raster scanning methods using coherent scatter computed tomography and selected volume tomography have yielded excellent results for tumor discrimination. However, these methods do not readily conform to clinical constraints, primarily because of prolonged scan times and excessive dose to the patient. Here, we refine a fan beam coded aperture coherent scatter imaging system to characterize the tradeoffs between dose, scan time and image quality for breast tumor discrimination. Methods: An X-ray tube (125kVp, 400mAs) illuminated the sample with collimated fan beams of varying widths (3mm to 25mm). Scatter data were collected via two linear-array energy-sensitive detectors oriented parallel and perpendicular to the beam plane. An iterative reconstruction algorithm yields images of the sample's spatial distribution and respective spectral data for each location. To model in-vivo tumor analysis, surgically resected breast tumor samples were used in conjunction with lard, which has a form factor comparable to adipose (fat). Results: Quantitative analysis with the current setup geometry indicated optimal performance for beams up to 10mm wide, with wider beams producing poorer spatial resolution. Scan time for a fixed volume was reduced by a factor of 6 when scanned with a 10mm fan beam compared to a 1.5mm pencil beam. Conclusion: The study demonstrates that fan beam coherent scatter spectral imaging for differentiation of normal and neoplastic breast tissues successfully reduces dose and scan times while sufficiently preserving spectral and spatial resolution. Future work to alter the coded aperture and detector geometries could potentially allow the use of even wider fans, thereby making coded
Brémaud, Pierre
2017-01-01
The emphasis in this book is placed on general models (Markov chains, random fields, random graphs), universal methods (the probabilistic method, the coupling method, the Stein-Chen method, martingale methods, the method of types) and versatile tools (Chernoff's bound, Hoeffding's inequality, Holley's inequality) whose domain of application extends far beyond the present text. Although the examples treated in the book relate to the possible applications, in the communication and computing sciences, in operations research and in physics, this book is in the first instance concerned with theory. The level of the book is that of a beginning graduate course. It is self-contained, the prerequisites consisting merely of basic calculus (series) and basic linear algebra (matrices). The reader is not assumed to be trained in probability since the first chapters give in considerable detail the background necessary to understand the rest of the book.
Urban, Peter; Philipp, Carsten M.; Weinberg, Lutz; Berlien, Hans-Peter
1997-12-01
The aim of the study was the comparative investigation of cutaneous and subcutaneous vascular lesions. By means of color coded duplex sonography (CCDS), laser Doppler perfusion imaging (LDPI) and infrared thermography (IT) we examined hemangiomas, vascular malformations and port-wine stains to obtain evidence about their depth, perfusion and vascularity. LDPI is a helpful method for assessing the capillary part of vascular lesions and the course of superficial vessels. CCDS has disadvantages in detecting superficial perfusion, but connections to deeper vascularization can be examined precisely; in some cases it is the only method for visualizing vascular malformations. IT gives additional hints on low blood flow areas or indicates arterial-venous shunts. Only the combination of all imaging methods allows a complete assessment, not only for planning but also for controlling the laser treatment of vascular lesions.
Optimal codes as Tanner codes with cyclic component codes
DEFF Research Database (Denmark)
Høholdt, Tom; Pinero, Fernando; Zeng, Peng
2014-01-01
In this article we study a class of graph codes whose component codes are cyclic codes, viewed as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe the codes succinctly using Gröbner bases.
Energy Technology Data Exchange (ETDEWEB)
Nelson, R.N. (ed.)
1985-05-01
This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute standard "Standard Technical Report Number (STRN), Format and Creation", Z39.23-1983. The STRN provides one of the primary methods of identifying a specific technical report and consists of two parts: the report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report-issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report code followed by issuing installation. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.
Calculations of radioactivity and afterheat in the components of the CSNS target station
Institute of Scientific and Technical Information of China (English)
Yu Quan-Zhi; Liang Tian-Jiao; Yin Wen; Yan Qi-Wei; Jia Xue-Jun; Wang Fang-Wei
2009-01-01
This paper shows the calculations of radioactivity and afterheat in the components of the China Spallation Neutron Source (CSNS) target station, with the Monte Carlo codes LAHET and MCNP4C and the multigroup code CINDER'90. These calculations provide essential data for the detailed design and maintenance of the CSNS target station.
Schimeczek, C.; Engel, D.; Wunner, G.
2014-05-01
account the shielding of the core potential for outer electrons by inner electrons, and an optimal finite-element decomposition of each individual longitudinal wave function. These measures largely enhance the convergence properties compared to the previous code and lead to speed-ups by factors up to two orders of magnitude compared with the implementation of the Hartree-Fock-Roothaan method used by Engel and Wunner in [D. Engel, G. Wunner, Phys. Rev. A 78, 032515 (2008)].
A method of improving the security of QR code
Institute of Scientific and Technical Information of China (English)
张雅奇; 张定会; 江平
2012-01-01
The QR code has many advantages, and with its wide application, decoding tools have developed rapidly, so its security has drawn attention. In this paper, a method is put forward in which the sensitive information of a QR code is encrypted with SHA-1 and then replaced with its message digest. The new QR code consists of the message digest of the sensitive information and the remaining non-sensitive information of the original QR code. Attackers can hardly obtain the sensitive information through decoding tools. Even if the digest of the sensitive information is intercepted, recovering the original sensitive information from it is computationally infeasible thanks to the good one-way property of SHA-1.
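The digest-substitution scheme described above can be sketched as follows, assuming a simple `digest|non-sensitive` payload layout (the abstract does not specify the actual field layout of the QR payload):

```python
import hashlib

def protect_payload(sensitive: str, public: str) -> str:
    """Replace the sensitive field with its SHA-1 hex digest; the result
    is the new payload to encode as a QR code."""
    digest = hashlib.sha1(sensitive.encode("utf-8")).hexdigest()
    return digest + "|" + public

def verify_claim(claimed: str, protected: str) -> bool:
    """One-way check: a holder of the original secret can confirm it
    matches the digest, but the digest alone does not reveal the secret."""
    stored_digest = protected.split("|", 1)[0]
    return hashlib.sha1(claimed.encode("utf-8")).hexdigest() == stored_digest
```

Note that plain SHA-1 of a low-entropy field is vulnerable to guessing; a production scheme would use a salted or keyed hash.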
Energy Technology Data Exchange (ETDEWEB)
George A. Zyvoloski; Bruce A. Robinson; Zora V. Dash; Lynn L. Trease
1997-07-01
The mathematical models and numerical methods employed by the FEHM application, a finite-element heat- and mass-transfer computer code that can simulate nonisothermal multiphase multi-component flow in porous media, are described. The use of this code is applicable to natural-state studies of geothermal systems and groundwater flow. A primary use of the FEHM application will be to assist in the understanding of flow fields and mass transport in the saturated and unsaturated zones below the proposed Yucca Mountain nuclear waste repository in Nevada. The component models of FEHM are discussed. The first major component, Flow- and Energy-Transport Equations, deals with heat conduction; heat and mass transfer with pressure- and temperature-dependent properties, relative permeabilities and capillary pressures; isothermal air-water transport; and heat and mass transfer with noncondensible gas. The second component, Dual-Porosity and Double-Porosity/Double-Permeability Formulation, is designed for problems dominated by fracture flow. Another component, The Solute-Transport Models, includes both a reactive-transport model that simulates transport of multiple solutes with chemical reaction and a particle-tracking model. Finally, the component, Constitutive Relationships, deals with pressure- and temperature-dependent fluid/air/gas properties, relative permeabilities and capillary pressures, stress dependencies, and reactive and sorbing solutes. Each of these components is discussed in detail, including purpose, assumptions and limitations, derivation, applications, numerical method type, derivation of numerical model, location in the FEHM code flow, numerical stability and accuracy, and alternative approaches to modeling the component.
Method of Turbo Code Based on 3GPP Wireless Standards
Institute of Scientific and Technical Information of China (English)
邓恰; 王云飞
2011-01-01
Turbo codes are widely applied in broadband wireless communication for their excellent error-correction performance. For the Turbo coding and decoding scheme specified in the 3GPP standard, this paper proposes an implementation using the TMS320C6416T chip, focusing on the detailed configuration steps of the Turbo-decoder coprocessor (TCP), and analyzes the differences between the TCP and a MATLAB algorithm simulation in terms of both time and performance.
On constructing disjoint linear codes
Institute of Scientific and Technical Information of China (English)
ZHANG Weiguo; CAI Mian; XIAO Guozhen
2007-01-01
To produce a highly nonlinear resilient function, disjoint linear codes were originally proposed by Johansson and Pasalic in IEEE Trans. Inform. Theory, 2003, 49(2):494-501. In this paper, an effective method for finding a set of such disjoint linear codes is presented. When n≥2k, we can find a set of [n,k] disjoint linear codes with cardinality at least 2. We also describe a result on constructing a set of [n,k] disjoint linear codes with minimum distance at least some fixed positive integer.
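The disjointness property itself is easy to state and check for small parameters: two binary linear codes are disjoint if they share only the all-zero codeword. The following brute-force check is an illustration of the notion used in resilient-function constructions, not the paper's search method; codewords are represented as integer bitmasks, an assumption for compactness.

```python
from itertools import product

def codewords(gen_rows):
    """All codewords of the binary linear code spanned by the generator
    rows (each row an int bitmask), via XOR over all row subsets."""
    words = set()
    for coeffs in product((0, 1), repeat=len(gen_rows)):
        w = 0
        for c, row in zip(coeffs, gen_rows):
            if c:
                w ^= row
        words.add(w)
    return words

def are_disjoint(gen_a, gen_b):
    """True if the two codes intersect only in the zero codeword."""
    return codewords(gen_a) & codewords(gen_b) == {0}
```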
Energy Technology Data Exchange (ETDEWEB)
Bernal Garcia, A.
2014-07-01
The objective of this work is the development of a modal neutron diffusion code in 2D and 3D steady state, using the finite volume method, built from free codes and applicable to reactors of any geometry. Currently, the numerical methods most commonly used in diffusion codes provide good results on structured meshes, but their application to unstructured meshes is not easy and may present problems of convergence and stability of the solution. The use of unstructured meshes is justified by their easy adaptation to complex geometries and by the development of coupled thermal-hydraulic-neutronic codes, as well as of computational fluid dynamics (CFD) codes, which encourages the development of a neutronic code that shares the same mesh as the fluid dynamics codes, which in general tends to be unstructured. On the other hand, refining the mesh and adapting it to complex geometries is another stimulus for learning more about what is happening in the core of the reactor. Finally, the code has been validated with simulations of a homogeneous reactor and of a heterogeneous one, in both 2D and 3D. (Author)
Institute of Scientific and Technical Information of China (English)
石陆魁; 刘倩倩; 王靖鑫; 张军
2014-01-01
In protein secondary structure prediction, the codes produced by existing amino acid coding methods have high dimension, and these methods do not exploit the statistical information in the amino acid sequence. To address this, a new coding method based on word frequency statistics is presented, which counts the frequency with which each amino acid appears in the sequence fragment. Coding an amino acid fragment with this method yields a 20-dimensional vector. In contrast to other coding methods, the resulting codes have lower dimension and make full use of the influence of all amino acids in the fragment on the target amino acid. In experiments, methods combining different coding schemes with an SVM were compared with a BP neural network. The results show that combining the word frequency statistics coding with an SVM greatly improves the prediction accuracy of protein secondary structure and is superior to the other methods.
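The frequency encoding described above can be sketched in a few lines; the ordering of the 20 residues in the output vector is an assumption, since the abstract does not specify one.

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def frequency_encode(fragment: str):
    """Encode an amino-acid fragment as a 20-dimensional vector of
    per-residue frequencies (count of each residue divided by the
    fragment length)."""
    counts = Counter(fragment)
    n = len(fragment)
    return [counts.get(aa, 0) / n for aa in AMINO_ACIDS]
```

The resulting vectors would then be fed to a classifier such as an SVM, one vector per sliding-window fragment.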
Energy Technology Data Exchange (ETDEWEB)
Vu, Cung Khac; Nihei, Kurt; Johnson, Paul A; Guyer, Robert; Ten Cate, James A; Le Bas, Pierre-Yves; Larmat, Carene S
2014-12-30
A system and a method for investigating rock formations includes generating, by a first acoustic source, a first acoustic signal comprising a first plurality of pulses, each pulse including a first modulated signal at a central frequency; and generating, by a second acoustic source, a second acoustic signal comprising a second plurality of pulses. A receiver arranged within the borehole receives a detected signal including a signal generated by a non-linear mixing process from the first and second acoustic signals in a non-linear mixing zone within the intersection volume. The method also includes processing the received signal to extract the signal generated by the non-linear mixing process over noise or over signals generated by a linear interaction process, or both.
Lewis, Michael
1994-01-01
Statistical encoding techniques reduce the number of bits required to encode a set of symbols by deriving the codes from the symbols' probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
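The predict-then-Huffman pipeline can be sketched as follows. The one-point previous-neighbour predictor here is a deliberately simple stand-in for the paper's Lagrange-multiplier-optimized local geometric predictors; the Huffman construction is the standard textbook one, returning only code lengths for brevity.

```python
import heapq
from collections import Counter

def predict_corrections(elevations):
    """Replace elevations by previous-neighbour prediction errors; the
    corrections cluster near zero, skewing the distribution in Huffman's
    favour."""
    return [elevations[0]] + [b - a for a, b in zip(elevations, elevations[1:])]

def huffman_lengths(symbols):
    """Return a {symbol: code length} map for a Huffman code built from
    the symbols' empirical frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # heap of (weight, tiebreak, {symbol: depth-so-far})
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in d1.items()}
        merged.update({s: d + 1 for s, d in d2.items()})
        heapq.heappush(heap, (w1 + w2, nxt, merged))
        nxt += 1
    return heap[0][2]
```

Frequent corrections (typically 0 and ±1) receive the shortest codes, which is where the compression gain over encoding raw elevations comes from.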
Tate, A Rosemary; Dungey, Sheena; Glew, Simon; Beloff, Natalia; Williams, Rachael; Williams, Tim
2017-01-01
Objective To assess the effect of coding quality on estimates of the incidence of diabetes in the UK between 1995 and 2014. Design A cross-sectional analysis examining diabetes coding from 1995 to 2014 and how the choice of codes (diagnosis codes vs codes which suggest diagnosis) and quality of coding affect estimated incidence. Setting Routine primary care data from 684 practices contributing to the UK Clinical Practice Research Datalink (data contributed from Vision (INPS) practices). Main outcome measure Incidence rates of diabetes and how they are affected by (1) GP coding and (2) excluding ‘poor’ quality practices with at least 10% incident patients inaccurately coded between 2004 and 2014. Results Incidence rates and accuracy of coding varied widely between practices and the trends differed according to selected category of code. If diagnosis codes were used, the incidence of type 2 increased sharply until 2004 (when the UK Quality Outcomes Framework was introduced), and then flattened off, until 2009, after which they decreased. If non-diagnosis codes were included, the numbers continued to increase until 2012. Although coding quality improved over time, 15% of the 666 practices that contributed data between 2004 and 2014 were labelled ‘poor’ quality. When these practices were dropped from the analyses, the downward trend in the incidence of type 2 after 2009 became less marked and incidence rates were higher. Conclusions In contrast to some previous reports, diabetes incidence (based on diagnostic codes) appears not to have increased since 2004 in the UK. Choice of codes can make a significant difference to incidence estimates, as can quality of recording. Codes and data quality should be checked when assessing incidence rates using GP data. PMID:28122831
Chou, Shin-Shang; Yan, Hsiu-Fang; Huang, Hsiu-Ya; Tseng, Kuan-Jui; Kuo, Shu-Chen
2012-01-01
This study used a human-centered design method to develop bar-code technology for the blood sampling process. Information was gathered through multilevel analysis, and the bar-code technology was constructed to verify patient identification, simplify the work process, and prevent medical errors. A Technology Acceptance Model questionnaire was developed to assess the effectiveness of the system, and data on patient identification and sample errors were collected daily. The average score of the 8-item perceived ease of use scale was 25.21 (3.72), of the 9-item perceived usefulness scale 28.53 (5.00), and of the 14-item task-technology fit scale 52.24 (7.09). The rates of patient identification errors and of samples with cancelled orders dropped to zero; however, new errors were generated after the new system was deployed, concerning the position of the barcode stickers on the sample tubes. Overall, more than half of the nurses (62.5%) were willing to use the new system.
A VB Source Code Plagiarism Detection Method Based on N-gram
Institute of Scientific and Technical Information of China (English)
吴斐; 唐雁; 补嘉
2012-01-01
With the rapid development of information networks and the widespread use of electronic text, text plagiarism has become more serious. In order to effectively curb plagiarism of VB program code, a VB source code plagiarism detection method based on N-grams is proposed, which uses N-grams to represent the VB source code files in order to improve detection accuracy, and adopts parallel computing based on the Fork-Join framework to improve the efficiency of the algorithm. Comparative experiments with the MOSS system show that the N-gram-based VB source code plagiarism detection method achieves higher accuracy than MOSS and has the ability to handle large-scale data.
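The core of an N-gram plagiarism score can be sketched as a Jaccard similarity over shingle sets. This is a minimal single-threaded illustration with whitespace tokenisation assumed; the paper targets VB code specifically and adds Fork-Join parallelism on top of this kind of comparison.

```python
def ngrams(tokens, n=3):
    """The set of contiguous n-token shingles of a token stream."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def similarity(code_a: str, code_b: str, n: int = 3) -> float:
    """Jaccard similarity of the n-gram sets of two source files; values
    near 1.0 flag plagiarism candidates."""
    a, b = ngrams(code_a.split(), n), ngrams(code_b.split(), n)
    if not a and not b:
        return 1.0
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```

A real system would normalise identifiers and comments before tokenising, so that superficial renamings do not hide copied structure.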
Energy Technology Data Exchange (ETDEWEB)
Lautard, J.J.
1994-05-01
This paper presents a new extension of the mixed dual finite element approximation of the diffusion equation in rectangular geometry. The mixed dual formulation has been extended in order to take discontinuity conditions into account. The iterative method is based on an alternating direction method which uses the current as unknown. This method is fully parallelizable and has very quick convergence properties. Some results for a 3D calculation on the CRAY computer are presented. (author). 6 refs., 8 figs., 4 tabs.
Parallelization methods in 3D fully electromagnetic code NEPTUNE
Institute of Scientific and Technical Information of China (English)
陈军; 莫则尧; 董烨; 杨温渊; 董志伟
2011-01-01
NEPTUNE is a three-dimensional, fully parallel electromagnetic code for solving electromagnetic problems in high-power microwave (HPM) devices with complex geometry. This paper introduces the following three parallelization methods used in the code. For massive computations, a "block-patch" two-level parallel domain decomposition strategy scales the computation to thousands of processor cores. Based on the geometry information, the mesh is reconfigured using adaptive technology to eliminate invalid grid cells, which sharply decreases the storage requirements and the parallel execution time. On the basis of the classical Boris successive over-relaxation (SOR) iteration method, a parallel Poisson solver on irregular domains is provided using red-black ordering and geometric constraints. With these methods, NEPTUNE achieves 51.8% parallel efficiency on 1024 processor cores when simulating MILO devices.
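Red-black ordering makes each SOR half-sweep embarrassingly parallel, because every red cell depends only on black neighbours and vice versa. The following serial sketch shows the ordering on a 2-D Poisson problem with fixed boundary values; the relaxation factor, sweep count, and regular square grid are illustrative assumptions, not NEPTUNE's actual solver.

```python
def sor_red_black(grid, f, h, omega=1.5, sweeps=200):
    """Red-black SOR for the 2-D Poisson equation -(u_xx + u_yy) = f on
    an n-by-n grid with spacing h and Dirichlet boundary values held in
    the outer ring of `grid`. Cells of one colour are updated using only
    cells of the other colour, so each half-sweep can run in parallel."""
    n = len(grid)
    for _ in range(sweeps):
        for colour in (0, 1):              # red cells, then black cells
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    if (i + j) % 2 != colour:
                        continue
                    gs = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                 + grid[i][j - 1] + grid[i][j + 1]
                                 + h * h * f[i][j])
                    grid[i][j] += omega * (gs - grid[i][j])
    return grid
```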
Steganography Method for Advanced Audio Coding
Institute of Scientific and Technical Information of China (English)
王昱洁; 郭立; 王翠平
2011-01-01
Based on a study of the AAC coding standard, an information hiding method operating on the "little data" region of quantized MDCT coefficients is proposed, which can embed a large amount of secret information into AAC files. The algorithm first partially decodes the cover AAC file to locate the little-data region using the code books, then obtains a group of quantized coefficients from each code word, modifies the last quantized coefficient of each group according to fixed rules, and finally partially re-encodes to obtain the embedded AAC file. The secret information can be extracted blindly, and the computational complexity is low. Experimental results show that the algorithm achieves high embedding capacity and good imperceptibility, and that it resists common LSB steganalysis methods as well as the additive-noise-based steganalysis method proposed by Harmsen.
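The abstract only says the last coefficient of each group is modified "according to rules". As a hedged illustration of that kind of rule, the sketch below embeds one bit per group by forcing the parity of the group's last quantized coefficient, which also permits the blind extraction the abstract mentions; the parity rule and the +1 adjustment are assumptions, not the paper's actual rule.

```python
def embed_bits(coeff_groups, bits):
    """Embed one secret bit per coefficient group by matching the parity
    of the group's last quantized coefficient to the bit (illustrative
    rule; a real embedder must also respect the code-book ranges)."""
    out = []
    for group, bit in zip(coeff_groups, bits):
        group = list(group)
        if group[-1] % 2 != bit:
            group[-1] += 1
        out.append(group)
    return out

def extract_bits(coeff_groups):
    """Blind extraction: read back the parity of each group's last coefficient."""
    return [g[-1] % 2 for g in coeff_groups]
```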
Requirements of a Better Secure Program Coding
Directory of Open Access Journals (Sweden)
Marius POPA
2012-01-01
Secure program coding refers to how to manage the risks determined by security breaches arising from the program source code. The paper reviews the best practices that must be followed during the software development life cycle for secure software assurance, the methods and techniques used for secure coding assurance, the most known and common vulnerabilities determined by a bad coding process, and how the security risks are managed and mitigated. As a tool for better secure program coding, the code review process is presented, together with objective measures for code review assurance and estimation of the effort for the code improvement.
NOVEL BIPHASE CODE -INTEGRATED SIDELOBE SUPPRESSION CODE
Institute of Scientific and Technical Information of China (English)
Wang Feixue; Ou Gang; Zhuang Zhaowen
2004-01-01
A novel binary phase code, named the sidelobe suppression code, is proposed in this paper. It is defined as the code whose corresponding optimal sidelobe suppression filter outputs the minimum sidelobes. It is shown that there do exist sidelobe suppression codes better than the conventional optimal codes, the Barker codes. For example, the sidelobe suppression code of length 11, with a filter of length 39, achieves a sidelobe level up to 17 dB better than that of the Barker code with the same code length and filter length.
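The Barker-code baseline quoted above is easy to verify numerically. A minimal sketch of the aperiodic autocorrelation (the matched-filter output) of the length-11 Barker code, whose sidelobes all have magnitude at most 1:

```python
import math

def aperiodic_acf(code):
    """One-sided aperiodic autocorrelation of a +1/-1 code; lag 0 is the
    matched-filter mainlobe, the remaining lags are the sidelobes."""
    n = len(code)
    return [sum(code[i] * code[i + k] for i in range(n - k)) for k in range(n)]

barker11 = [1, 1, 1, -1, -1, -1, 1, -1, -1, 1, -1]
acf = aperiodic_acf(barker11)
# Peak sidelobe level of the matched filter: 20*log10(1/11), about -20.8 dB
psl_db = 20 * math.log10(max(abs(v) for v in acf[1:]) / acf[0])
```

A mismatched (sidelobe suppression) filter, as in the paper, lowers this level further at the cost of a longer filter.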
A class of Sudan-decodable codes
DEFF Research Database (Denmark)
Nielsen, Rasmus Refslund
2000-01-01
In this article, Sudan's algorithm is modified into an efficient method to list-decode a class of codes which can be seen as a generalization of Reed-Solomon codes. The algorithm is specialized into a very efficient method for unique decoding. The code construction can be generalized based on algebraic-geometry codes, and the decoding algorithms are generalized accordingly. Comparisons with Reed-Solomon and Hermitian codes are made.
From concatenated codes to graph codes
DEFF Research Database (Denmark)
Justesen, Jørn; Høholdt, Tom
2004-01-01
We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing.
Energy Technology Data Exchange (ETDEWEB)
Petersson, A
2009-01-29
The LDRD project 'A New Method for Wave Propagation in Elastic Media' developed several improvements to the traditional finite difference technique for seismic wave propagation, including a summation-by-parts discretization which is provably stable for arbitrary heterogeneous materials, an accurate treatment of non-planar topography, local mesh refinement, and stable outflow boundary conditions. This project also implemented these techniques in a parallel open source computer code called WPP, and participated in several seismic modeling efforts to simulate ground motion due to earthquakes in Northern California. This research has been documented in six individual publications which are summarized in this report. Of these publications, four are published refereed journal articles, one is an accepted refereed journal article which has not yet been published, and one is a non-refereed software manual. The report concludes with a discussion of future research directions and exit plan.
Superimposed Code Theoretic Analysis of DNA Codes and DNA Computing
2010-03-01
A. Macula, et al., "Random Coding Bounds for DNA Codes Based on Fibonacci Ensembles of DNA Sequences", Proceedings of the 2008 IEEE International Symposium on Information Theory, June 2008. [...] a combinatorial method of bio-memory design and detection that encodes item or process information as numerical sequences represented in DNA. ComDMem is a
Başağaoğlu, Hakan; Blount, Justin; Blount, Jarred; Nelson, Bryant; Succi, Sauro; Westhart, Phil M.; Harwell, John R.
2017-04-01
This paper reports, for the first time, the computational performance of SequenceL for mesoscale simulations of large numbers of particles in a microfluidic device via the lattice-Boltzmann method. The performance of SequenceL simulations was assessed against the optimized serial and parallelized (via OpenMP directives) FORTRAN90 simulations. At present, OpenMP directives were not included in inter-particle and particle-wall repulsive (steric) interaction calculations due to difficulties that arose from inter-iteration dependencies between consecutive iterations of the do-loops. SequenceL simulations, on the other hand, relied on built-in automatic parallelism. Under these conditions, numerical simulations revealed that the parallelized FORTRAN90 outran the performance of SequenceL by a factor of 2.5 or more when the number of particles was 100 or less. SequenceL, however, outran the performance of the parallelized FORTRAN90 by a factor of 1.3 when the number of particles was 300. Our results show that when the number of particles increased by 30-fold, the computational time of SequenceL simulations increased linearly by a factor of 1.5, as compared to a 3.2-fold increase in serial and a 7.7-fold increase in parallelized FORTRAN90 simulations. Considering SequenceL's efficient built-in parallelism that led to a relatively small increase in computational time with increased number of particles, it could be a promising programming language for computationally-efficient mesoscale simulations of large numbers of particles in microfluidic experiments.
Focusing Automatic Code Inspections
Boogerd, C.J.
2010-01-01
Automatic Code Inspection tools help developers in early detection of defects in software. A well-known drawback of many automatic inspection approaches is that they yield too many warnings and require a clearer focus. In this thesis, we provide such focus by proposing two methods to prioritize
Five Important Operation Methods for Frames and Their SAS Source Code
Institute of Scientific and Technical Information of China (English)
王萌; 罗纯; 纪忠杰; 张应山
2012-01-01
The definition of the multilateral matrix is mainly based on frames, so designing frames is the most basic part of applying multilateral matrix theory. To make it convenient to design various complex frames, the relevant operation methods of matrix theory are extended and five important operation methods for frames are summarized. Finally, so that other scholars can apply the methods conveniently, the SAS source code for these methods is provided.
Energy Technology Data Exchange (ETDEWEB)
Link, Hamilton E.; Schroeppel, Richard Crabtree; Neumann, William Douglas; Campbell, Philip LaRoche; Beaver, Cheryl Lynn; Pierson, Lyndon George; Anderson, William Erik
2004-10-01
If software is designed so that the software can issue functions that will move that software from one computing platform to another, then the software is said to be 'mobile'. There are two general areas of security problems associated with mobile code. The 'secure host' problem involves protecting the host from malicious mobile code. The 'secure mobile code' problem, on the other hand, involves protecting the code from malicious hosts. This report focuses on the latter problem. We have found three distinct camps of opinions regarding how to secure mobile code. There are those who believe special distributed hardware is necessary, those who believe special distributed software is necessary, and those who believe neither is necessary. We examine all three camps, with a focus on the third. In the distributed software camp we examine some commonly proposed techniques including Java, D'Agents and Flask. For the specialized hardware camp, we propose a cryptographic technique for 'tamper-proofing' code over a large portion of the software/hardware life cycle by careful modification of current architectures. This method culminates by decrypting/authenticating each instruction within a physically protected CPU, thereby protecting against subversion by malicious code. Our main focus is on the camp that believes that neither specialized software nor hardware is necessary. We concentrate on methods of code obfuscation to render an entire program, or a data segment on which a program depends, incomprehensible. The hope is to prevent or at least slow down reverse engineering efforts and to prevent goal-oriented attacks on the software and execution. The field of obfuscation is still in a state of development, with the central problem being the lack of a basis for evaluating the protection schemes. We give a brief introduction to some of the main ideas in the field, followed by an in-depth analysis of one particular obfuscation technique.
Allele coding in genomic evaluation
Directory of Open Access Journals (Sweden)
Christensen Ole F
2011-06-01
Full Text Available Abstract Background Genomic data are used in animal breeding to assist genetic evaluation. Several models to estimate genomic breeding values have been studied. In general, two approaches have been used. One approach estimates the marker effects first and then, genomic breeding values are obtained by summing marker effects. In the second approach, genomic breeding values are estimated directly using an equivalent model with a genomic relationship matrix. Allele coding is the method chosen to assign values to the regression coefficients in the statistical model. A common allele coding is zero for the homozygous genotype of the first allele, one for the heterozygote, and two for the homozygous genotype for the other allele. Another common allele coding changes these regression coefficients by subtracting a value from each marker such that the mean of regression coefficients is zero within each marker. We call this centered allele coding. This study considered effects of different allele coding methods on inference. Both marker-based and equivalent models were considered, and restricted maximum likelihood and Bayesian methods were used in inference. Results Theoretical derivations showed that parameter estimates and estimated marker effects in marker-based models are the same irrespective of the allele coding, provided that the model has a fixed general mean. For the equivalent models, the same results hold, even though different allele coding methods lead to different genomic relationship matrices. Calculated genomic breeding values are independent of allele coding when the estimate of the general mean is included into the values. Reliabilities of estimated genomic breeding values calculated using elements of the inverse of the coefficient matrix depend on the allele coding because different allele coding methods imply different models. Finally, allele coding affects the mixing of Markov chain Monte Carlo algorithms, with the centered coding being
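The two allele codings discussed can be sketched directly. The function below converts the common 0/1/2 allele counts to the centered coding by subtracting the per-marker mean, so the regressors average zero within each marker:

```python
def centered_coding(genotypes):
    """Centered allele coding: subtract the per-marker mean of the 0/1/2
    allele counts so that the coded regressors sum to zero per marker.
    `genotypes`: one row per animal, one column per marker."""
    n = len(genotypes)
    means = [sum(col) / n for col in zip(*genotypes)]
    return [[g - m for g, m in zip(row, means)] for row in genotypes]
```

For three animals and two markers, `centered_coding([[0, 2], [1, 1], [2, 0]])` shifts each column by its mean of 1, and every column of the result sums to zero.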
Energy Technology Data Exchange (ETDEWEB)
Behringer, K
2001-08-01
A novel auto-correlation function (ACF) method has been investigated for determining the oscillation frequency and the decay ratio in BWR stability analyses. The report describes not only the method but also comprehensively documents the FORTRAN codes used and developed. The neutron signals are band-pass filtered to separate the oscillation peak in the power spectral density (PSD) from the background. Two linear second-order oscillation models are considered. The ACF of each model, corrected for signal filtering and with the inclusion of a background term under the peak in the PSD, is then least-squares fitted to the ACF estimated on the previously filtered neutron signals, in order to determine the oscillation frequency and the decay ratio. The procedures of filtering and ACF estimation use fast Fourier transform techniques with signal segmentation. Gliding 'short-time' ACF estimates along a signal record allow the evaluation of uncertainties. Some numerical results are given which were obtained from neutron signal data offered by the recent Forsmark I and Forsmark II NEA benchmark project. They are compared with results from other benchmark participants using different analysis methods. (author)
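As a sketch of the quantity being fitted: for a second-order oscillation model the ACF is a damped cosine, and the decay ratio is the factor by which the envelope decays over one oscillation period. The model values below are hypothetical, chosen only to exercise the definition:

```python
import math

def decay_ratio(acf, dt, freq):
    """Decay ratio estimated as the ACF value one oscillation period
    after lag zero, normalized by the lag-zero value: DR = ACF(T)/ACF(0)."""
    period_lag = round(1.0 / (freq * dt))
    return acf[period_lag] / acf[0]

# Synthetic ACF of a second-order oscillation: exp(-a*t) * cos(2*pi*f*t)
dt, f, a = 0.01, 0.5, 0.3          # hypothetical sampling step and model values
acf = [math.exp(-a * k * dt) * math.cos(2 * math.pi * f * k * dt)
       for k in range(400)]
dr = decay_ratio(acf, dt, f)       # equals exp(-a/f) for this model
```

In the report's method the ACF is estimated from filtered neutron signals and the model ACF is least-squares fitted to it, rather than read off at a single lag as in this sketch.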
Schimeczek, C.; Engel, D.; Wunner, G.
2012-07-01
account the shielding of the core potential for outer electrons by inner electrons, and an optimal finite-element decomposition of each individual longitudinal wave function. These measures largely enhance the convergence properties compared to the previous code, and lead to speed-ups by factors up to two orders of magnitude compared with the implementation of the Hartree-Fock-Roothaan method used by Engel and Wunner in [D. Engel, G. Wunner, Phys. Rev. A 78 (2008) 032515].
New version program summary
Program title: HFFER II
Catalogue identifier: AECC_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECC_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 55 130
No. of bytes in distributed program, including test data, etc.: 293 700
Distribution format: tar.gz
Programming language: Fortran 95
Computer: Cluster of 1-13 HP Compaq dc5750
Operating system: Linux
Has the code been vectorized or parallelized?: Yes, parallelized using MPI directives.
RAM: 1 GByte per node
Classification: 2.1
External routines: MPI/GFortran, LAPACK, BLAS, FMlib (included in the package)
Catalogue identifier of previous version: AECC_v1_0
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 302
Does the new version supersede the previous version?: Yes
Nature of problem: Quantitative modellings of features observed in the X-ray spectra of isolated magnetic neutron stars are hampered by the lack of sufficiently large and accurate databases for atoms and ions up to the last fusion product, iron, at strong magnetic field strengths. Our code is intended to provide a powerful tool for calculating energies and oscillator strengths of medium-Z atoms and ions at neutron star magnetic field strengths with sufficient accuracy in a routine way to create such databases.
Solution method: The
Making your code citable with the Astrophysics Source Code Library
Allen, Alice; DuPrie, Kimberly; Schmidt, Judy; Berriman, G. Bruce; Hanisch, Robert J.; Mink, Jessica D.; Nemiroff, Robert J.; Shamir, Lior; Shortridge, Keith; Taylor, Mark B.; Teuben, Peter J.; Wallin, John F.
2016-01-01
The Astrophysics Source Code Library (ASCL, ascl.net) is a free online registry of codes used in astronomy research. With nearly 1,200 codes, it is the largest indexed resource for astronomy codes in existence. Established in 1999, it offers software authors a path to citation of their research codes even without publication of a paper describing the software, and offers scientists a way to find codes used in refereed publications, thus improving the transparency of the research. It also provides a method to quantify the impact of source codes in a fashion similar to the science metrics of journal articles. Citations using ASCL IDs are accepted by major astronomy journals and if formatted properly are tracked by ADS and other indexing services. The number of citations to ASCL entries increased sharply from 110 citations in January 2014 to 456 citations in September 2015. The percentage of code entries in ASCL that were cited at least once rose from 7.5% in January 2014 to 17.4% in September 2015. The ASCL's mid-2014 infrastructure upgrade added an easy entry submission form, more flexible browsing, search capabilities, and an RSS feeder for updates. A Changes/Additions form added this past fall lets authors submit links for papers that use their codes for addition to the ASCL entry even if those papers don't formally cite the codes, thus increasing the transparency of that research and capturing the value of their software to the community.
Institute of Scientific and Technical Information of China (English)
CHENG Qiang; CAO JianWen; WANG Bin; ZHANG HaiBin
2009-01-01
The adjoint code generator (ADG) is developed to produce the adjoint codes, which are used to analytically calculate gradients and the Hessian-vector products with the costs independent of the number of the independent variables. Different from other automatic differentiation tools, the implementation of ADG has advantages of using the least program behavior decomposition method and several static dependence analysis techniques. In this paper we first address the concerned concepts and fundamentals, and then introduce the functionality and the features of ADG. In particular, we also discuss the design architecture of ADG and implementation details including the recomputation and storing strategy and several techniques for code optimization. Some experimental results in several applications are presented at the end.
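The payoff of an adjoint code, a full gradient at a cost independent of the number of independent variables, can be illustrated with a hand-written adjoint of a toy primal function (this is an illustration of the concept, not ADG output):

```python
def f(x):
    """Primal code: f(x) = sum_i x_i**2."""
    return sum(v * v for v in x)

def f_bar(x, fbar=1.0):
    """Hand-written adjoint of f: propagates the output adjoint `fbar`
    back to every input in a single sweep, giving df/dx_i = 2*x_i*fbar.
    One adjoint evaluation yields the whole gradient, regardless of len(x)."""
    return [2.0 * v * fbar for v in x]
```

By contrast, forward finite differences would need one extra primal evaluation per input variable to build the same gradient.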
Perceptual video coding method based on JND and AR model
Institute of Scientific and Technical Information of China (English)
王翀; 赵力; 邹采荣
2010-01-01
In order to achieve better perceptual coding quality while using fewer bits, a novel perceptual video coding method based on the just-noticeable-distortion (JND) model and the auto-regressive (AR) model is explored. First, a new texture segmentation method exploiting the JND profile is devised to detect and classify texture regions in video scenes: a spatial-temporal JND model is proposed, and the JND energy of every macroblock is computed and compared with a threshold. Second, in order to effectively remove temporal redundancies while preserving high visual quality, an AR model is applied to synthesize the texture regions. The parameters of the AR model are obtained by the least-squares method, and each pixel in a texture region is generated as a linear combination of pixels taken from the closest forward and backward reference frames. Finally, the proposed method is compared with the H.264/AVC video coding system to demonstrate its performance. Various sequences with different types of texture regions are used in the experiments, and the results show that the proposed method can reduce the bit-rate by 15% to 58% while maintaining good perceptual quality.
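The AR synthesis step, each texture pixel regressed on co-located pixels from the nearest forward and backward reference frames, reduces to a small least-squares problem. A two-tap sketch via the 2x2 normal equations (the paper's actual pixel neighborhood is larger than this):

```python
def fit_ar2(fwd, bwd, target):
    """Least-squares fit of target ~ a*fwd + b*bwd, a two-tap AR synthesis
    from forward/backward reference pixels, solved via the 2x2 normal
    equations. Inputs are flat lists of co-located training pixels."""
    sff = sum(v * v for v in fwd)
    sbb = sum(v * v for v in bwd)
    sfb = sum(x * y for x, y in zip(fwd, bwd))
    sft = sum(x * t for x, t in zip(fwd, target))
    sbt = sum(y * t for y, t in zip(bwd, target))
    det = sff * sbb - sfb * sfb          # assumes fwd and bwd not collinear
    a = (sft * sbb - sbt * sfb) / det
    b = (sbt * sff - sft * sfb) / det
    return a, b
```

When the target really is a fixed blend of the two references, the fit recovers the blend weights exactly.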
Criticality safety and sensitivity analyses of PWR spent nuclear fuel repository facilities
Maucec, M; Glumac, B
2005-01-01
Monte Carlo criticality safety and sensitivity calculations of pressurized water reactor (PWR) spent nuclear fuel repository facilities for the Slovenian nuclear power plant Krsko are presented. The MCNP4C code was deployed to model and assess the neutron multiplication parameters of pool-based storage.
Energy Technology Data Exchange (ETDEWEB)
Bekar, Kursat B. [Department of Mechanical and Nuclear Engineering, Penn State University, University Park, PA 16802 (United States)], E-mail: bekarkb@ornl.gov; Azmy, Yousry Y. [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States)], E-mail: yyazmy@ncsu.edu
2009-04-15
We present the TORT solutions to the 3D transport codes' suite of benchmarks exercise. An overview of benchmark configurations is provided, followed by a description of the TORT computational model we developed to solve the cases comprising the benchmark suite. In the numerical experiments reported in this paper, we chose to refine the spatial and angular discretizations simultaneously, from the coarsest model (40 x 40 x 40, 200 angles) to the finest model (160 x 160 x 160, 800 angles). The MCNP reference solution is used for evaluating the effect of model-refinement on the accuracy of the TORT solutions. The presented results show that the majority of benchmark quantities are computed with good accuracy by TORT, and that the accuracy improves with model refinement. However, this deliberately severe test has exposed some deficiencies in both deterministic and stochastic solution approaches. Specifically, TORT fails to converge the inner iterations in some benchmark configurations while MCNP produces zero tallies, or drastically poor statistics for some benchmark quantities. We conjecture that TORT's failure to converge is driven by ray effects in configurations with low scattering ratio and/or highly skewed computational cells, i.e. aspect ratio far from unity. The failure of MCNP occurs in quantities tallied over a very small area or volume in physical space, or quantities tallied many ({approx}25) mean free paths away from the source. Hence automated, robust, and reliable variance reduction techniques are essential for obtaining high quality reference values of the benchmark quantities. Preliminary results of the benchmark exercise indicate that the occasionally poor performance of TORT is shared with other deterministic codes. Armed with this information, method developers can now direct their attention to regions in parameter space where such failures occur and design alternative solution approaches for such instances.
Directory of Open Access Journals (Sweden)
A Emami
2015-05-01
Background & aim: BK polyomavirus is a widespread infection, with a prevalence of more than 80% worldwide. BK virus nephropathy is one of the most important causes of graft dysfunction and rejection in renal transplant recipients. The non-coding control region of this virus plays a regulatory role in viral replication and amplification. The aim of this study was to evaluate the genetic patterns of this region in renal graft recipients at the Namazi Transplantation Center, Shiraz, Iran. Methods: In the present experimental study, 380 renal allograft serum samples were collected. DNA was extracted from the 129 samples eligible under the study conditions and tested for the presence of the viral genome using quantitative and qualitative genomic amplification and sequencing. Results: Of the patients showing symptoms of nephropathy, 76 (58.9%) were male and 46 (35.7%) were female, with a mean age of 38.0±.089 years. In total, 46 patients (35.7%) were positive for BK polyomavirus. After comparison of the genomic sequences using molecular software, they were categorized into three groups and recorded in GenBank. Conclusion: About 35% of renal transplant recipients with high creatinine levels were positive for BK virus. Analysis of the non-coding control region in the positive samples revealed that rearranged genotypes were the most common among the transplant patients observed at this transplant center. Examination of these sequences indicated that the rearrangements had a specific pattern, different from the standard archetype strain.
Institute of Scientific and Technical Information of China (English)
杨凤霞
2012-01-01
To improve the visual quality of reconstructed fractal-coded images, a new image coding algorithm is developed that exploits local image characteristics, combining an adaptive block-partitioning method with several block classification techniques that shorten the coding time. The algorithm clearly improves the visual quality of the coded images while reducing the coding time by a factor of thousands, enabling fast fractal image coding.
NETWORK CODING BY BEAM FORMING
DEFF Research Database (Denmark)
2013-01-01
Network coding by beam forming in networks, for example in single frequency networks, can help increase spectral efficiency. When network coding by beam forming and user cooperation are combined, spectral efficiency gains may be achieved. According to certain embodiments, a method [...] cooperating with the plurality of user equipment to decode the received data.
Space Time Codes from Permutation Codes
Henkel, Oliver
2006-01-01
A new class of space time codes with high performance is presented. The code design utilizes tailor-made permutation codes, which are known to have large minimal distances as spherical codes. A geometric connection between spherical and space time codes has been used to translate them into the final space time codes. Simulations demonstrate that the performance increases with the block length, a result that had already been conjectured in previous work. Further, the connection to permutation codes allows for moderately complex encoding/decoding algorithms.
Wang, Jim Jing-Yan
2014-07-06
Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.
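The basic sparse coding step referred to, approximating a sample as a sparse combination of codewords, has a closed form when the codebook is orthonormal: elementwise soft-thresholding of the analysis coefficients. A minimal sketch of that special case (the paper's semi-supervised objective adds label and manifold terms on top of this):

```python
def soft(v, lam):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return (v - lam) if v > lam else (v + lam) if v < -lam else 0.0

def sparse_code_orthonormal(coeffs, lam):
    """For an orthonormal codebook D, the lasso sparse code
    argmin_z 0.5*||x - D z||^2 + lam*||z||_1 is soft(D^T x, lam)
    applied elementwise; `coeffs` holds the analysis coefficients D^T x."""
    return [soft(c, lam) for c in coeffs]
```

Small coefficients are driven exactly to zero, which is what makes the resulting code sparse.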
Fundamentals of convolutional coding
Johannesson, Rolf
2015-01-01
Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding brings you a clear and comprehensive discussion of the basic principles of this field * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual
Ficaro, Edward Patrick
The 252Cf-source-driven noise analysis (CSDNA) method requires the measurement of the cross power spectral density (CPSD) G23(omega) between a pair of neutron detectors (subscripts 2 and 3) located in or near the fissile assembly, and the CPSDs G12(omega) and G13(omega) between the neutron detectors and an ionization chamber 1 containing 252Cf, also located in or near the fissile assembly. The key advantage of this method is that the subcriticality of the assembly can be obtained from the ratio of spectral densities, G12*(omega) G13(omega) / [G11(omega) G23(omega)], using a point kinetic model formulation which is independent of the detectors' properties and of a reference measurement. The multigroup Monte Carlo code KENO-NR was developed to eliminate the dependence of the measurement on the point kinetic formulation. This code utilizes time-dependent, analog neutron tracking to simulate the experimental method, in addition to the underlying nuclear physics, as closely as possible. From a direct comparison of simulated and measured data, the calculational model and cross sections are validated, and KENO-NR can then be rerun to provide a distributed-source k_eff calculation. Depending on the fissile assembly, a few hours to a couple of days of computation time are needed for a typical simulation executed on a desktop workstation. In this work, KENO-NR demonstrated the ability to accurately estimate the measured ratio of spectral densities from experiments using capture detectors performed on uranium metal cylinders, a cylindrical tank filled with aqueous uranyl nitrate, and arrays of safe storage bottles filled with uranyl nitrate. Good agreement was also seen between simulated and measured values of the prompt neutron decay constant from the fitted CPSDs. Poor agreement was seen between simulated and measured results using composite 6Li-glass-plastic scintillators at large subcriticalities for the tank of
Method for constructing QC-LDPC codes using the Dayan sequence
Institute of Scientific and Technical Information of China (English)
朱磊基; 汪涵; 施玉松; 邢涛; 王营冠
2012-01-01
For the purpose of constructing a channel coding scheme with excellent performance, we present a method to construct quasi-cyclic low-density parity-check (QC-LDPC) codes based on the Dayan sequence, derived from an analysis of its characteristics. By exploiting the property that the differences between fixed terms of the Dayan sequence increase steadily, the method constructs a parity-check matrix that contains no cycles of length four, has a quasi-cyclic structure, and requires little storage space. Simulation results show that, at a bit error rate of 10^-5, QC-LDPC codes based on the Dayan sequence achieve a gain of nearly 1 dB over QC-LDPC codes based on the Fibonacci sequence in both the additive white Gaussian noise (AWGN) channel and the Rayleigh fading channel; in the AWGN channel, they also achieve a gain of almost 3 dB over array LDPC codes.
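The 4-cycle-free property claimed for the parity-check matrix can be checked directly on the base matrix of circulant shift values. A sketch with the Dayan sequence (0, 2, 4, 8, 12, 18, ...) and an illustrative shift assignment; the assignment below is an assumption for demonstration, not the paper's construction:

```python
from itertools import combinations

def dayan(n):
    """First n terms of the Dayan sequence: k*k//2 for even k,
    (k*k - 1)//2 for odd k (k = 1, 2, ...), giving 0, 2, 4, 8, 12, 18, 24, ..."""
    return [k * k // 2 if k % 2 == 0 else (k * k - 1) // 2
            for k in range(1, n + 1)]

def four_cycle_free(shifts, L):
    """A QC-LDPC matrix built from LxL circulants with shift values `shifts`
    contains a length-4 cycle iff, for some row pair (i1, i2) and column
    pair (j1, j2), s[i1][j1] - s[i1][j2] + s[i2][j2] - s[i2][j1] == 0 (mod L)."""
    for i1, i2 in combinations(range(len(shifts)), 2):
        for j1, j2 in combinations(range(len(shifts[0])), 2):
            if (shifts[i1][j1] - shifts[i1][j2]
                    + shifts[i2][j2] - shifts[i2][j1]) % L == 0:
                return False
    return True

# Illustrative 3x3 base matrix of shifts drawn from the Dayan sequence,
# with circulant size L = 13 (hypothetical parameters)
d = dayan(10)
shifts = [[d[i * j] for j in range(1, 4)] for i in range(1, 4)]
```

The same checker rejects degenerate assignments, e.g. an all-zero shift matrix, for any circulant size.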
Valdivia, Valeska; Hennebelle, Patrick
2014-11-01
Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled and their interplay influences the physical and chemical properties of gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims: Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of the UV screening on the dynamics and on the statistical properties of molecular clouds. Methods: We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on clump formation. Results: We find that the accuracy of the tree-based method for the extinction is better than 10%, while the relative error for the column density can be much larger. We describe the implementation of a method based on precalculating the geometrical terms that noticeably reduces the calculation time. To study the influence of the screening on the statistical properties of molecular clouds we present the probability distribution function of gas and the associated temperature per density bin, and the mass spectra for different density thresholds. Conclusions: The tree-based method is fast and accurate enough to be used during numerical simulations, since no communication is needed between CPUs when using a fully threaded tree. It is therefore well suited to parallel computing. We show that the screening of far UV radiation mainly affects the dense gas, thereby favoring low temperatures and affecting the fragmentation. We show that when we include the screening, more structures are formed with higher densities in comparison to the case that does not include this effect. We
Practices in Code Discoverability: Astrophysics Source Code Library
Allen, Alice; Nemiroff, Robert J; Shamir, Lior
2012-01-01
Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysical source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now has over 340 codes in it and continues to grow. In 2011, the ASCL (http://ascl.net) has on average added 19 new codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available ei...
Garcia, F.; Mesa, J.; Arruda-Neto, J. D. T.; Helene, O.; Vanin, V.; Milian, F.; Deppman, A.; Rodrigues, T. E.; Rodriguez, O.
2007-03-01
The code STATFLUX, implementing a new and simple statistical procedure for the calculation of transfer coefficients in radionuclide transport to animals and plants, is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. Flow parameters were estimated by employing two different least-squares procedures: Derivative and Gauss-Marquardt methods, with the available experimental data of radionuclide concentrations as the input functions of time. The solution of the inverse problem, which relates a given set of flow parameters with the time evolution of the concentration functions, is achieved via a Monte Carlo simulation procedure.
Program summary
Title of program: STATFLUX
Catalogue identifier: ADYS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYS_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer for which the program is designed and others on which it has been tested: Micro-computer with Intel Pentium III, 3.0 GHz
Installation: Laboratory of Linear Accelerator, Department of Experimental Physics, University of São Paulo, Brazil
Operating system: Windows 2000 and Windows XP
Programming language used: Fortran-77 as implemented in Microsoft Fortran 4.0. NOTE: Microsoft Fortran includes non-standard features which are used in this program. Standard Fortran compilers such as g77, f77, ifort and NAG95 are not able to compile the code and therefore it has not been possible for the CPC Program Library to test the program.
Memory required to execute with typical data: 8 Mbytes of RAM and 100 MB of hard disk
No. of bits in a word: 16
No. of lines in distributed program, including test data, etc.: 6912
No. of bytes in distributed program, including test data, etc.: 229 541
Distribution format: tar.gz
Nature of the physical problem: The investigation of transport mechanisms for
Source code retrieval using conceptual similarity
Mishne, G.A.; de Rijke, M.
2004-01-01
We propose a method for retrieving segments of source code from a large repository. The method is based on conceptual modeling of the code, combining information extracted from the structure of the code with standard information-distance measures. Our results show an improvement over traditional retri
Dynamic Reverse Code Generation for Backward Execution
DEFF Research Database (Denmark)
Lee, Jooyong
2007-01-01
. In this paper, we present a method to generate reverse code, so that backtracking can be performed by executing reverse code. The novelty of our work is that we generate reverse code on-the-fly, while running a debugger, which makes it possible to apply the method even to debugging multi-threaded programs....
Strong Trinucleotide Circular Codes
Directory of Open Access Journals (Sweden)
Christian J. Michel
2011-01-01
Full Text Available Recently, we identified a hierarchy relation between trinucleotide comma-free codes and trinucleotide circular codes (see our previous works). Here, we extend our hierarchy with two new classes of codes, called DLD and LDL codes, which are stronger than the comma-free codes. We also prove that no circular code with 20 trinucleotides is a DLD code and that a circular code with 20 trinucleotides is comma-free if and only if it is an LDL code. Finally, we point out the possible role of the symmetric group Σ4 in the mathematical study of trinucleotide circular codes.
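The comma-free property at the base of this hierarchy can be checked mechanically. Below is a minimal sketch (not from the paper, and the stronger DLD/LDL classes are not modeled): a trinucleotide code is comma-free if reading any concatenation of two codewords in the two shifted frames never yields a codeword of the code.

```python
from itertools import product

def is_comma_free(codons):
    """A trinucleotide code X is comma-free if, for every pair w1, w2 in X,
    neither of the two out-of-frame trinucleotides straddling the junction
    of w1 + w2 belongs to X."""
    codons = set(codons)
    for w1, w2 in product(codons, repeat=2):
        junction = w1 + w2          # six letters; shifted frames at offsets 1 and 2
        if junction[1:4] in codons or junction[2:5] in codons:
            return False
    return True
```

For example, {ACG, TAC} is comma-free, while {AAA} is not (AAA read out of frame in AAAAAA is again AAA), and {ATG, TGA} fails because TGA appears at offset 1 in ATGATG.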
Directory of Open Access Journals (Sweden)
Miloš Milenković
2016-03-01
Full Text Available In the context of the forthcoming reactivation of the Serbian Ethnology and Anthropology Association and the normalization of the work of professional associations, as well as their occasional adoption of ethical codes in Serbia, we analyze the Code of Ethics of the world's largest professional anthropological association, the American Anthropological Association (AAA). Starting from the assumption that ethics codes are not "pure" moral algorithms of what is desirable/correct, but that they are laden with "hidden" theoretical and methodological assumptions, whether by their direct authors or on the level of tacit disciplinary knowledge, we examine what the Serbian ethnological/anthropological community can learn from the genesis, structure, function and critique directed at the AAA Code of Ethics. We also consider the thesis that such codes could be approached as legitimizing narrative practices, not unlike those of magic, through which the discipline attempts to transform itself from the status of a fluid and generally socially unrecognized (and even obscure) occupation into a generally recognized, formally licensed and respectable profession. We suggest that anthropology in Serbia construct its professional status by forming an alliance with applied ethics, offering other professions the service of customizing ethics codes through cultural analysis of moral decision-making, instead of legitimizing itself by a contradictory code of ethics burdened with hollow magic principles and theoretical and methodological issues.
Dixit, Anant; Alouani, M.
2016-10-01
X-ray absorption and X-ray magnetic circular dichroism (XMCD) are very powerful tools for probing the orbital and spin moments of each atomic species in magnetic materials. In this work, we present the implementation of a module for computing the X-ray absorption and XMCD spectra in the VASP code. We provide a derivation of the absorption cross-section in the electric dipole approximation. The matrix elements, which make up the X-ray absorption cross-section for a given polarization of light, are then computed using either the momentum operator p or the position operator r, within the projector augmented wave method. The core electrons are described using the relativistic basis set, whereas for the valence electrons the spin-orbit coupling is added perturbatively to the semi-relativistic Hamiltonian. We show that both the p and the r implementations lead to the same results. The results for the K-edge and L23-edges of bcc-iron are then computed and compared to experiment.
Channel coding techniques for wireless communications
Deergha Rao, K
2015-01-01
The book discusses modern channel coding techniques for wireless communications such as turbo codes, low-density parity check (LDPC) codes, space–time (ST) coding, RS (or Reed–Solomon) codes and convolutional codes. Many illustrative examples are included in each chapter for easy understanding of the coding techniques. The text is integrated with MATLAB-based programs to enhance the understanding of the subject’s underlying theories. It includes current topics of increasing importance such as turbo codes, LDPC codes, Luby transform (LT) codes, Raptor codes, and ST coding in detail, in addition to the traditional codes such as cyclic codes, BCH (or Bose–Chaudhuri–Hocquenghem) and RS codes and convolutional codes. Multiple-input and multiple-output (MIMO) communications is a multiple antenna technology, which is an effective method for high-speed or high-reliability wireless communications. PC-based MATLAB m-files for the illustrative examples are provided on the book page on Springer.com for free dow...
实用的并行程序性能分析方法%REALISTIC PERFORMANCE ANALYSIS METHODS FOR PARALLEL CODES
Institute of Scientific and Technical Information of China (English)
莫则尧
2000-01-01
Firstly, after discussing in detail the main ingredients needed to attain the peak floating-point performance of current high-performance micro-processors, this paper analyzes the principal factors governing the speedup of parallel application codes on parallel computers built from these micro-processors. Secondly, this paper presents a suite of performance evaluation rules for parallel codes, which can reveal the overall numerical and parallel performance with respect to the serial codes, suggest performance-improvement strategies, and explain exactly the reasons for super-linear speedup. Numerical experimental results of two realistic application codes on two parallel computers are also given in this paper.
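The paper's specific evaluation rules are not reproduced here, but the standard quantities they build on can be sketched: speedup, parallel efficiency, and Amdahl's bound. Efficiency above 1 is the usual signature of super-linear speedup (often a cache effect).

```python
def speedup(t_serial, t_parallel):
    """Observed speedup of a parallel run over the serial baseline."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Speedup per processor; > 1 indicates super-linear speedup."""
    return speedup(t_serial, t_parallel) / p

def amdahl_speedup(f, p):
    """Amdahl's law: ideal speedup on p processors when a fraction f
    of the serial runtime is parallelizable."""
    return 1.0 / ((1.0 - f) + f / p)
```

For instance, a code taking 100 s serially and 20 s on 4 processors has speedup 5 and efficiency 1.25, i.e. it runs super-linearly; Amdahl's law with f = 0.9 and p = 8 caps the ideal speedup at about 4.7.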
Confocal coded aperture imaging
Energy Technology Data Exchange (ETDEWEB)
Tobin, Jr., Kenneth William (Harriman, TN); Thomas, Jr., Clarence E. (Knoxville, TN)
2001-01-01
A method for imaging a target volume comprises the steps of: radiating a small bandwidth of energy toward the target volume; focusing the small bandwidth of energy into a beam; moving the target volume through a plurality of positions within the focused beam; collecting a beam of energy scattered from the target volume with a non-diffractive confocal coded aperture; generating a shadow image of said aperture from every point source of radiation in the target volume; and, reconstructing the shadow image into a 3-dimensional image of the every point source by mathematically correlating the shadow image with a digital or analog version of the coded aperture. The method can comprise the step of collecting the beam of energy scattered from the target volume with a Fresnel zone plate.
Kuipers, J; Vermaseren, J A M
2013-01-01
We describe the implementation of output code optimization in the open source computer algebra system FORM. This implementation is based on recently discovered techniques of Monte Carlo tree search to find efficient multivariate Horner schemes, in combination with other optimization algorithms, such as common subexpression elimination. For systems for which no specific knowledge is provided it performs significantly better than other methods we could compare with. Because the method has a number of free parameters, we also show some methods by which to tune them to different types of problems.
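FORM searches for efficient multivariate Horner schemes; the single-variable case already illustrates why the rewriting pays off. A toy operation-count sketch (not FORM's Monte Carlo tree search, and without common subexpression elimination):

```python
def eval_naive(coeffs, x):
    """Term-by-term evaluation; coeffs[i] multiplies x**i.
    Returns (value, multiplication_count)."""
    mults, total = 0, coeffs[0]
    for i, c in enumerate(coeffs[1:], start=1):
        p = x
        for _ in range(i - 1):   # build x**i by repeated multiplication
            p *= x
            mults += 1
        total += c * p           # one more multiplication for c * x**i
        mults += 1
    return total, mults

def eval_horner(coeffs, x):
    """Horner form (((c_n x + c_{n-1}) x + ...) x + c_0).
    Returns (value, multiplication_count)."""
    mults, acc = 0, coeffs[-1]
    for c in reversed(coeffs[:-1]):
        acc = acc * x + c
        mults += 1
    return acc, mults
```

For degree n the naive form costs n(n+1)/2 multiplications while the Horner form costs n; e.g. for 1 + 2x + 3x² + 4x³ at x = 2, both return 49, with 6 versus 3 multiplications.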
On the Performance of a Multi-Edge Type LDPC Code for Coded Modulation
Cronie, Harm S.
2005-01-01
We present a method to combine error-correction coding and spectral-efficient modulation for transmission over the Additive White Gaussian Noise (AWGN) channel. The code employs signal shaping which can provide a so-called shaping gain. The code belongs to the family of sparse graph codes for which
Joint source channel coding using arithmetic codes
Bi, Dongsheng
2009-01-01
Based on the encoding process, arithmetic codes can be viewed as tree codes and current proposals for decoding arithmetic codes with forbidden symbols belong to sequential decoding algorithms and their variants. In this monograph, we propose a new way of looking at arithmetic codes with forbidden symbols. If a limit is imposed on the maximum value of a key parameter in the encoder, this modified arithmetic encoder can also be modeled as a finite state machine and the code generated can be treated as a variable-length trellis code. The number of states used can be reduced and techniques used fo
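The forbidden-symbol idea can be illustrated with a toy floating-point arithmetic coder (a sketch with an assumed two-symbol alphabet and an arbitrarily chosen 10% forbidden region, not the monograph's finite-state construction): part of the unit interval is never used by the encoder, so a decoder that lands in it has detected a channel error.

```python
# Toy alphabet: 'a' -> [0, 0.45), 'b' -> [0.45, 0.9), forbidden -> [0.9, 1).
LOW = {"a": 0.0, "b": 0.45}
HIGH = {"a": 0.45, "b": 0.9}
FORBIDDEN = 0.9

def encode(msg):
    """Shrink [low, high) once per symbol; any value inside identifies msg."""
    low, high = 0.0, 1.0
    for s in msg:
        r = high - low
        low, high = low + r * LOW[s], low + r * HIGH[s]
    return (low + high) / 2

def decode(x, n):
    """Decode n symbols; returns (symbols, error_detected)."""
    out, low, high = [], 0.0, 1.0
    for _ in range(n):
        v = (x - low) / (high - low)
        if v >= FORBIDDEN:          # landed in the reserved gap: corrupted stream
            return out, True
        s = "a" if v < 0.45 else "b"
        out.append(s)
        r = high - low
        low, high = low + r * LOW[s], low + r * HIGH[s]
    return out, False
```

Decoding a valid codeword such as encode("abba") reproduces the message with no error flag, while a corrupted value like 0.95 falls into the forbidden region immediately.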
Scuflaire, R; Théado, S; Bourge, P -O; Miglio, A; Godart, M; Thoul, A; Noels, A
2007-01-01
The Liege Oscillation code can be used as a stand-alone program or as a library of subroutines that the user calls from a Fortran main program of his own to compute radial and non-radial adiabatic oscillations of stellar models. We describe the variables and the equations used by the program and the methods used to solve them. A brief account is given of the use and the output of the program.
Institute of Scientific and Technical Information of China (English)
邓家梅; 王喆; 李明; 曹家麟
2000-01-01
The parallel decoding method of a parallel concatenation of multiple codes is well known. In this paper, we present a new serial decoding method. The iterative gain in this method is always one. Therefore, this method does not need optimization of the iterative gain by using simulated annealing like the parallel decoding method. Though it is simpler than the parallel decoding method in calculation, it gives the same performance. We also use Pearl's propagation algorithm to show the appropriateness of the serial decoding method.
Directory of Open Access Journals (Sweden)
Yu Myeong-Hee
2010-03-01
Full Text Available Abstract Background Breast cancer is one of the leading causes of women's deaths worldwide. It is important to discover a reliable biomarker for the detection of breast cancer. Plasma is the most ideal source for cancer biomarker discovery since many cells cross-communicate through the secretion of soluble proteins into blood. Methods Plasma proteomes obtained from 6 breast cancer patients and 6 normal healthy women were analyzed by using the isotope-coded affinity tag (ICAT) labeling approach and tandem mass spectrometry. All the plasma samples used were depleted of 6 highly abundant plasma proteins by immuno-affinity column chromatography before ICAT labeling. Several proteins showing differential abundance levels were selected based on literature searches and their specificity to the commercially available antibodies, and then verified by immunoblot assays. Results A total of 155 proteins were identified and quantified by the ICAT method. Among them, 33 proteins showed abundance changes of more than 1.5-fold between the plasmas of breast cancer patients and healthy women. We chose 5 proteins for follow-up confirmation in the individual plasma samples using immunoblot assays. Four proteins, α1-acid glycoprotein 2, monocyte differentiation antigen CD14, biotinidase (BTD), and glutathione peroxidase 3, showed abundance ratios similar to the ICAT result. Using a blind set of plasmas obtained from 21 breast cancer patients and 21 normal healthy controls, we confirmed that BTD was significantly down-regulated in breast cancer plasma (Wilcoxon rank-sum test, p = 0.002). BTD levels were lowered in all cancer grades (I-IV) except cancer grade zero. The area under the receiver operating characteristic curve of BTD was 0.78. Estrogen receptor status (p = 0.940) and progesterone receptor status (p = 0.440) were not associated with the plasma BTD levels. Conclusions Our study suggests that BTD is a potential serological biomarker for the detection of breast cancer.
Nonlocally Centralized Simultaneous Sparse Coding
Institute of Scientific and Technical Information of China (English)
雷阳; 宋占杰
2016-01-01
The concept of structured sparse coding noise is introduced to exploit the spatial correlations and nonlocal constraint of the local structure. The model of nonlocally centralized simultaneous sparse coding (NC-SSC) is then proposed for reconstructing the original image, and an algorithm is proposed to transform the simultaneous sparse coding into reweighted low-rank approximation. Experimental results on image denoising, deblurring and super-resolution demonstrate the advantage of the proposed NC-SSC method over state-of-the-art image restoration methods.
Content layer progressive coding of digital maps
DEFF Research Database (Denmark)
Forchhammer, Søren; Jensen, Ole Riis
2000-01-01
A new lossless context based method is presented for content progressive coding of limited bits/pixel images, such as maps, company logos, etc., common on the WWW. Progressive encoding is achieved by separating the image into content layers based on other predefined information. Information from...... already coded layers are used when coding subsequent layers. This approach is combined with efficient template based context bi-level coding, context collapsing methods for multi-level images and arithmetic coding. Relative pixel patterns are used to collapse contexts. The number of contexts are analyzed....... The new methods outperform existing coding schemes coding digital maps and in addition provide progressive coding. Compared to the state-of-the-art PWC coder, the compressed size is reduced to 60-70% on our layered test images....
New Mexico Univ., Albuquerque. American Indian Law Center.
The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…
AST-based code plagiarism detection method%一种基于AST的代码抄袭检测方法
Institute of Scientific and Technical Information of China (English)
张丽萍; 刘东升; 李彦臣; 钟美
2011-01-01
Because current research on code plagiarism detection is mostly based on source-code similarity at the textual level, lacking grammatical analysis of the code and ignoring the syntax and semantics of the program, it cannot effectively detect plagiarism involving slight structural modifications. This paper presents a code plagiarism detection method based on the abstract syntax tree (AST). The code is first pre-formatted, then lexical and syntax analysis are performed to obtain the corresponding AST. The AST is traversed to generate code sequences, the similarity of the code sequences is calculated, and a code plagiarism detection report is produced. Experimental results show that the approach can effectively detect plagiarism in C code, and that it has some generality and scalability for plagiarism detection in C++, Java and other program code.
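The pipeline of parsing to an AST, linearizing it, and comparing sequences can be sketched in a few lines. The paper targets C/C++/Java sources; the sketch below uses Python's own `ast` module purely as an illustration, with `difflib` standing in for the paper's similarity measure:

```python
import ast
import difflib

def ast_sequence(source):
    """Linearize the AST into a sequence of node-type names; identifiers
    and literal values are dropped, so renaming variables or reformatting
    whitespace does not change the sequence."""
    return [type(node).__name__ for node in ast.walk(ast.parse(source))]

def similarity(src_a, src_b):
    """Similarity score in [0, 1] over the two node-type sequences."""
    seq_a, seq_b = ast_sequence(src_a), ast_sequence(src_b)
    return difflib.SequenceMatcher(None, seq_a, seq_b).ratio()

original = "def f(x):\n    return x * 2\n"
renamed = "def g(value):\n    return value * 2\n"
different = "def h(x):\n    if x:\n        return 1\n    return 0\n"
```

A copy with renamed identifiers maps to the identical node-type sequence and scores 1.0, which is exactly the kind of lightly modified plagiarism that plain text comparison misses; a structurally different function scores lower.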
The Flutter Shutter Code Calculator
Directory of Open Access Journals (Sweden)
Yohann Tendero
2015-08-01
Full Text Available The goal of the flutter shutter is to make uniform motion blur invertible, by a "fluttering" shutter that opens and closes on a sequence of well-chosen sub-intervals of the exposure time interval. In other words, the photon flux is modulated according to a well-chosen sequence called the flutter shutter code. This article provides a numerical method that computes optimal flutter shutter codes in terms of mean square error (MSE). We assume that the observed objects follow a known (or learned) random velocity distribution. In this paper, Gaussian and uniform velocity distributions are considered. Snapshots are also optimized taking the velocity distribution into account. For each velocity distribution, the gain of the optimal flutter shutter code with respect to the optimal snapshot in terms of MSE is computed. This symmetric optimization of the flutter shutter and of the snapshot allows one to compare both solutions, i.e. camera designs, on an equal footing. Optimal flutter shutter codes are demonstrated to improve the MSE substantially compared to classic (patented or not) codes. A numerical method that permits a reverse engineering of any existing (patented or not) flutter shutter code is also described and an implementation is given. In this case we give the underlying velocity distribution from which a given optimal flutter shutter code comes. The combination of these two numerical methods furnishes a comprehensive study of the optimization of a flutter shutter that includes a forward and a backward numerical solution.
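Why a fluttered code makes the blur invertible while a plain exposure does not can be seen in the frequency domain. A minimal sketch with a hypothetical length-4 code (not one of the paper's optimal codes): the box exposure has exact zeros in its DFT, so deconvolution is ill-posed, whereas the fluttered code keeps every frequency alive.

```python
import numpy as np

box = np.array([1.0, 1.0, 1.0, 1.0])      # shutter open for the whole exposure
flutter = np.array([1.0, 1.0, 0.0, 1.0])  # hypothetical length-4 flutter code

H_box = np.abs(np.fft.fft(box))           # [4, 0, 0, 0]: zeros at all nonzero frequencies
H_flutter = np.abs(np.fft.fft(flutter))   # [3, 1, 1, 1]: bounded away from zero
```

Dividing by H_box would require dividing by zero at three of the four frequencies; dividing by H_flutter is well conditioned, which is the invertibility property the optimal codes in the paper maximize (in the MSE sense).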
Multimedia Data Coding and its Development
Institute of Scientific and Technical Information of China (English)
无
2000-01-01
The requirements of data coding in multimedia applications are presented, and the current coding techniques and related standards are introduced. The work that has been done is then presented, i.e. the wavelet-based coding method and the VE (Visual Entropy)-based coding method. The experimental results show that these methods achieve a better perceptual quality of the reconstructed image at a lower bit rate. Their performance evaluations are better than JPEG (Joint Photographic Experts Group) coding. Finally, future topics of study are put forward.
Using thermalizers in measuring 'Ukryttia' object's FCM neutron fluxes
Krasnyanskaya, O G; Odinokin, G I; Pavlovich, V N
2003-01-01
The results of research on the influence of thermalizer (heater) width on neutron thermalization efficiency during FCM neutron flux measurements in the 'Ukryttia' object are described. Neutron flux densities were calculated by the Monte Carlo method using the computer code MCNP-4C for different FCM models. Three possible installations of detectors were considered: on the FCM surface, inside the FCM, and inside the concrete under the FCM layer. It was shown that, in order to increase the sensitivity of neutron detectors in intermediate and fast neutron fields, and consequently to decrease the dependence of the readings on the spectral distribution of the neutron flux, it is necessary to position the detector inside the so-called thermalizer or heater. The most reasonable application of thick 'heaters' is the situation when the detector is placed on the FCM surface.
Distortion of neutron field during mice irradiation at Kinki University Reactor UTR-KINKI
Energy Technology Data Exchange (ETDEWEB)
Endo, Satoru [Research Institute for Radiation Biology and Medicine, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima 734-8553 (Japan)], E-mail: endos@hiroshima-u.ac.jp; Tanaka, Kenichi [Research Institute for Radiation Biology and Medicine, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima 734-8553 (Japan); Fujikawa, Kazuo; Horiguchi, Tetsuo; Itoh, Tetsuo [Atomic Energy Research Institute, Kinki University, 3-4-1 Kowakae, Higashi-Osaka 577-8502 (Japan); Bengua, Gerard [Research Reactor Institute, Kyoto University, Kumatori-cho, Sennan-gun, Osaka 590-0494 (Japan); Nomura, Taisei [Graduate Schools of Medicine and Engineering, Osaka University, B4 2-2 Yamadaoka, Suita, Osaka 565-0871 (Japan); Hoshi, Masaharu [Research Institute for Radiation Biology and Medicine, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima 734-8553 (Japan)
2007-09-15
A dosimetry study of mice irradiation at the Kinki University nuclear reactor (UTR-KINKI) has been carried out. Neutron and gamma-ray doses at the irradiation port in the presence of 0, 1, 2, 4 and 6 mice were measured using the paired chamber method. The results show that neutron dose is reduced with increasing numbers of mice. In the six-mice irradiation condition, neutron dose is about 15% smaller compared to a case where no mice were placed in the irradiation port. To investigate the distortion of the neutron spectrum during mice irradiation at UTR-KINKI, a Monte Carlo calculation using the MCNP4C code has been carried out. The measured variation in dose with respect to the total mouse mass was closely reproduced by the calculation results for neutron and gamma-ray dose. Distortion of the neutron spectrum was observed to occur between 1 keV and 1 MeV.
Energy Technology Data Exchange (ETDEWEB)
Vargas V, M.X
2003-07-01
In this work the unit of absorbed dose at the Secondary Standard Dosimetry Laboratory (SSDL) of Mexico, is characterized by means of the development of a primary standard of absorbed dose to water, D{sub agua}. The main purpose is to diminish the uncertainty in the service of dosimetric calibration of ionization chambers (employed in radiotherapy of external beams) that offers this laboratory. This thesis is composed of seven chapters: In Chapter 1 the position and justification of the problem is described, as well as the general and specific objectives. In Chapter 2, a presentation of the main quantities and units used in dosimetry is made, in accordance with the recommendations of the International Commission on Radiation Units and Measurements (ICRU) that establish the necessity to have a coherent system with the international system of units and dosimetric quantities. The concepts of equilibrium and transient equilibrium of charged particles (TCPE) are also presented, which are used later in the quantitative determination of D{sub agua}. Finally, since the proposed standard of D{sub agua} is of ionometric type, an explanation of the Bragg-Gray and Spencer-Attix cavity theories is made. These theories are the foundation of this type of standards. On the other hand, to guarantee the complete validity of the conditions demanded by these theories it is necessary to introduce correction factors. These factors are determined in Chapters 5 and 6. Since for the calculation of the correction factors Monte Carlo (MC) method is used in an important way, in Chapter 3 the fundamental concepts of this method are presented; in particular the principles of the code MCNP4C [Briesmeister 2000] are detailed, making emphasis on the basis of electron transport and variance reduction techniques used in this thesis. Because a phenomenological approach is carried out in the development of the standard of D{sub agua}, in Chapter 4 the characteristics of the Picker C/9 unit, the
Toric Codes, Multiplicative Structure and Decoding
DEFF Research Database (Denmark)
Hansen, Johan Peder
2017-01-01
Long linear codes constructed from toric varieties over finite fields, their multiplicative structure and decoding. The main theme is the inherent multiplicative structure on toric codes. The multiplicative structure allows for \emph{decoding}, resembling the decoding of Reed-Solomon codes...... and aligns with decoding by error correcting pairs. We have used the multiplicative structure on toric codes to construct linear secret sharing schemes with \emph{strong multiplication} via Massey's construction, generalizing the Shamir linear secret sharing schemes constructed from Reed-Solomon codes. We have...... constructed quantum error correcting codes from toric surfaces by the Calderbank-Shor-Steane method....
Efficient convolutional sparse coding
Energy Technology Data Exchange (ETDEWEB)
Wohlberg, Brendt
2017-06-20
Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
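The heart of the frequency-domain approach is that a circular convolution operator is diagonalized by the FFT, so the large linear system in the ADMM x-update collapses to independent per-frequency divisions. A single-filter sketch (the method covers general multi-filter dictionaries; the filter d, signal s, and penalty rho below are made-up test data):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
d = rng.standard_normal(N)   # one dictionary filter (already zero-padded to length N)
s = rng.standard_normal(N)   # signal to represent
rho = 0.5                    # quadratic penalty from the ADMM splitting

# Frequency-domain solve of (D^T D + rho I) x = D^T s: per-frequency division.
D, S = np.fft.fft(d), np.fft.fft(s)
x_fft = np.fft.ifft(np.conj(D) * S / (np.abs(D) ** 2 + rho)).real

# Direct solve with the explicit circulant convolution matrix, for comparison.
C = np.array([np.roll(d, k) for k in range(N)]).T   # C @ x == circular conv of d and x
x_dir = np.linalg.solve(C.T @ C + rho * np.eye(N), C.T @ s)
```

Both solutions agree to machine precision; the FFT route costs O(N log N) per filter instead of the O(N^3) dense solve, which is the source of the O(MN log N) figure quoted above.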
基于程序流敏感的自修改代码混淆方法%A Program Flow-Sensitive Self-Modifying Code Obfuscation Method
Institute of Scientific and Technical Information of China (English)
何炎祥; 陈勇; 吴伟; 陈念; 徐超; 刘健博; 苏雯
2012-01-01
Self-modifying code obfuscation is an effective technique for hiding important information in programs. To reduce the extra cost introduced by code obfuscation without degrading its quality, a program flow-sensitive analysis is used to select the relatively important instructions (such as control instructions) for obfuscation. To improve the quality of the obfuscation and effectively defend against disassembly, a two-step comparing obfuscation model is proposed. The model contains two sub-obfuscators: obfuscator 1 uses the flow-sensitive analysis to obtain the instructions to obfuscate and generates two obfuscated code files and one obfuscated-code mapping file; obfuscator 2 compares the two obfuscated code files to locate the obfuscated instructions precisely in the binary code, and then obfuscates the binary code using the mapping file to further improve the obfuscation quality. Experimental analysis shows that the obfuscated instructions account for only about 3% of the whole code, and the resulting disassembly differs markedly from the original, even containing unrecognizable illegal instructions.
1988-01-01
Article 162 of this Mexican Code provides, among other things, that "Every person has the right freely, responsibly, and in an informed fashion to determine the number and spacing of his or her children." When a marriage is involved, this right is to be observed by the spouses "in agreement with each other." The civil codes of the following states contain the same provisions: 1) Baja California (Art. 159 of the Civil Code of 28 April 1972 as revised in Decree No. 167 of 31 January 1974); 2) Morelos (Art. 255 of the Civil Code of 26 September 1949 as revised in Decree No. 135 of 29 December 1981); 3) Queretaro (Art. 162 of the Civil Code of 29 December 1950 as revised in the Act of 9 January 1981); 4) San Luis Potosi (Art. 147 of the Civil Code of 24 March 1946 as revised in 13 June 1978); Sinaloa (Art. 162 of the Civil Code of 18 June 1940 as revised in Decree No. 28 of 14 October 1975); 5) Tamaulipas (Art. 146 of the Civil Code of 21 November 1960 as revised in Decree No. 20 of 30 April 1975); 6) Veracruz-Llave (Art. 98 of the Civil Code of 1 September 1932 as revised in the Act of 30 December 1975); and 7) Zacatecas (Art. 253 of the Civil Code of 9 February 1965 as revised in Decree No. 104 of 13 August 1975). The Civil Codes of Puebla and Tlaxcala provide for this right only in the context of marriage with the spouses in agreement. See Art. 317 of the Civil Code of Puebla of 15 April 1985 and Article 52 of the Civil Code of Tlaxcala of 31 August 1976 as revised in Decree No. 23 of 2 April 1984. The Family Code of Hidalgo requires as a formality of marriage a certification that the spouses are aware of methods of controlling fertility, responsible parenthood, and family planning. In addition, Article 22 the Civil Code of the Federal District provides that the legal capacity of natural persons is acquired at birth and lost at death; however, from the moment of conception the individual comes under the protection of the law, which is valid with respect to the
DEFF Research Database (Denmark)
Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip
2012-01-01
This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback into LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.
Surface code implementation of block code state distillation
Fowler, Austin G.; Devitt, Simon J.; Jones, Cody
2013-01-01
State distillation is the process of taking a number of imperfect copies of a particular quantum state and producing fewer, better copies. Until recently, the lowest overhead method of distilling states produced a single improved |A〉 state given 15 input copies. New block code state distillation methods can produce k improved |A〉 states given 3k + 8 input copies, potentially significantly reducing the overhead associated with state distillation. We construct an explicit surface code implementation of block code state distillation and quantitatively compare the overhead of this approach to the old one. We find that, using the best available techniques, for parameters of practical interest, block code state distillation does not always lead to lower overhead, and, when it does, the overhead reduction is typically less than a factor of three.
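The raw input-state counts quoted in the abstract can be compared with a few lines of arithmetic. This is only a naive count of |A〉 inputs per improved output (the helper names are invented for illustration); the paper's actual comparison is in surface-code resources, which is why the measured gain stays below a factor of three.

```python
def inputs_per_state_old() -> float:
    """15-to-1 distillation: 15 input |A> states per improved output state."""
    return 15.0

def inputs_per_state_block(k: int) -> float:
    """Block code distillation: 3k + 8 input states yield k improved states."""
    return (3 * k + 8) / k

# The per-state input count of the block scheme approaches 3 as k grows,
# a naive ~5x improvement over 15-to-1 -- though measured in surface-code
# resources the paper finds the real reduction is typically under 3x.
ratio = inputs_per_state_old() / inputs_per_state_block(8)
```
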
Generalized H-codes and type II codes over GF(4)
Institute of Scientific and Technical Information of China (English)
LIN Xin-qi; WEN Xiang-ming; ZHENG Wei
2008-01-01
Type II codes have been studied widely in applications since their appearance. Through analysis of the algebraic structure of the finite field of order 4 (i.e., GF(4)), some necessary and sufficient conditions for a generalized H-code (i.e., GH-code) to be a type II code over GF(4) are given in this article, and an efficient and simple method to generate type II codes from GH-codes over GF(4) is shown. The conclusions further extend the coding theory of type II codes.
Feature-based Image Sequence Compression Coding
Institute of Scientific and Technical Information of China (English)
(author not listed)
2001-01-01
A novel compression method for video teleconference applications is presented. Semantic coding based on human image features is realized, with human features adopted as parameters. Model-based coding and the concept of vector coding are combined with work on image feature extraction to obtain the result.
Abraham, Nikhil
2015-01-01
Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skills.
Gao, Wen
2015-01-01
This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV
Phase-coded pulse aperiodic transmitter coding
Directory of Open Access Journals (Sweden)
I. I. Virtanen
2009-07-01
Both ionospheric and weather radar communities have already adopted the method of transmitting radar pulses in an aperiodic manner when measuring moderately overspread targets. Among the users of the ionospheric radars, this method is called Aperiodic Transmitter Coding (ATC), whereas the weather radar users have adopted the term Simultaneous Multiple Pulse-Repetition Frequency (SMPRF). When probing the ionosphere at the carrier frequencies of the EISCAT Incoherent Scatter Radar facilities, the range extent of the detectable target is typically of the order of one thousand kilometers – about seven milliseconds – whereas the characteristic correlation time of the scattered signal varies from a few milliseconds in the D-region to only tens of microseconds in the F-region. If one is interested in estimating the scattering autocorrelation function (ACF) at time lags shorter than the F-region correlation time, the D-region must be considered a moderately overspread target, whereas the F-region is a severely overspread one. Given the technical restrictions of the radar hardware, a combination of ATC and phase-coded long pulses is advantageous for this kind of target. We evaluate such an experiment under infinitely low signal-to-noise ratio (SNR) conditions using lag profile inversion. In addition, a qualitative evaluation under high-SNR conditions is performed by analysing simulated data. The results show that an acceptable estimation accuracy and a very good lag resolution in the D-region can be achieved with a pulse length long enough for simultaneous E- and F-region measurements with a reasonable lag extent. The new experiment design is tested with the EISCAT Tromsø VHF (224 MHz) radar. An example of a full D/E/F-region ACF from the test run is shown at the end of the paper.
Network Coding Fundamentals and Applications
Medard, Muriel
2011-01-01
Network coding is a field of information and coding theory and is a method of attaining maximum information flow in a network. This book is an ideal introduction for the communications and network engineer, working in research and development, who needs an intuitive introduction to network coding and to the increased performance and reliability it offers in many applications.
An Analysis of Syndrome Coding
Amiruzzaman, Md; Abdullah-Al-Wadud, M.; Chung, Yoojin
In this paper a detailed analysis of BCH syndrome coding for covert-channel data-hiding methods is presented. The examined technique is a syndrome coding algorithm with a coset-based approach. The results show that the method offers more flexibility in choosing cosets and introduces less modification distortion from data hiding. The method is presented in a clear mathematical form; because it depends only on mathematical equations, the analysis shows that it computes quickly and finds exact roots for modification.
Enhanced motion coding in MC-EZBC
Chen, Junhua; Zhang, Wenjun; Wang, Yingkun
2005-07-01
Since hierarchical variable-size block matching and bidirectional motion compensation are used in motion-compensated embedded zero block coding (MC-EZBC), the motion information consists of the motion vector quadtree map and the motion vectors. In the conventional motion coding scheme, the quadtree structure is coded directly, the motion vector modes are coded with Huffman codes, and the motion vector differences are coded by an m-ary arithmetic coder with 0-order models. In this paper we propose a new motion coding scheme which uses an extension of the CABAC algorithm and new context modeling for quadtree structure coding and mode coding. In addition, we use a new scalable motion coding method which scales the motion vector quadtrees according to the rate-distortion slope of the tree nodes. Experimental results show that the new coding scheme increases the efficiency of the motion coding by more than 25%. The performance of the system is improved accordingly, especially at low bit rates. Moreover, with scalable motion coding, the subjective and objective coding performance is further enhanced in low bit rate scenarios.
Android event code automatic generation method based on object relevance
Institute of Scientific and Technical Information of China (English)
李杨; 胡文
2012-01-01
To solve the problem of automatically generating Android event code, this paper draws on object-relevance theory (OAR) to describe the association relations among control objects, defines these control-object association relations (COAR) and implements their construction, and ultimately builds a control-object association relation tree (COARTree). Applying the COARTree to the Android event-code generation process solves the automatic-generation problem and has shown good practical value. A simple phone book is used as an example to verify the Android event-code generation method.
Coding Long Contour Shapes of Binary Objects
Sánchez-Cruz, Hermilo; Rodríguez-Díaz, Mario A.
This is an extension of the paper that appeared in [15]. This time we compare four methods: arithmetic coding applied to the 3OT chain code (Arith-3OT), arithmetic coding applied to DFCCE (Arith-DFCCE), Huffman coding applied to the DFCCE chain code (Huff-DFCCE), and, to measure the efficiency of the chain codes, we compare the methods with JBIG, which constitutes an international standard. In the search for a suitable and better representation of contour shapes, our experiments suggest that 3OT is a sound method to represent contour shapes, because arithmetic coding applied to it gives the best results relative to JBIG, independently of the perimeter of the contour shapes.
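The 3OT and DFCCE chain-code alphabets are not defined in this record, so as a generic illustration of the entropy-coding step that Arith-3OT and Huff-DFCCE apply, here is a minimal Huffman coder run over a skewed three-symbol chain-code-like stream (the stream itself is invented for the example):

```python
import heapq
from collections import Counter
from itertools import count

def huffman_code(seq):
    """Build a Huffman code table {symbol: bitstring} for a symbol sequence."""
    freq = Counter(seq)
    tick = count()  # tie-breaker so heap tuples never compare tree nodes
    heap = [(f, next(tick), sym) for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tick), (a, b)))
    table = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            table[node] = prefix or "0"  # degenerate single-symbol case
    walk(heap[0][2], "")
    return table

# A skewed stream over a 3-symbol alphabet: fixed-length coding needs
# 2 bits/symbol, while Huffman gives the frequent symbol a 1-bit code.
chain = "0" * 60 + "1" * 25 + "2" * 15
table = huffman_code(chain)
bits = sum(len(table[s]) for s in chain)  # fewer than the 200 fixed-length bits
```

Arithmetic coding, which the paper finds superior, can get even closer to the entropy because it is not restricted to whole-bit codeword lengths.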
Institute of Scientific and Technical Information of China (English)
刘原华; 牛新亮; 张美玲
2014-01-01
An improved combining method for designing quasi-cyclic low-density parity-check (QC-LDPC) codes based on the Chinese Remainder Theorem (CRT) is proposed to improve error-correcting performance by increasing the girth and decreasing the number of short cycles. With the CRT combining method, the difficult problem of designing QC-LDPC codes of long code length with girth g is reduced to designing one short component code with girth g. By properly permuting the column blocks of the parity-check matrices of the other component codes, QC-LDPC codes with fewer short cycles and better performance can be designed, making them more suitable for communication systems with high reliability requirements. Simulations show that, compared with existing CRT-based QC-LDPC codes, the proposed codes have fewer short cycles and obtain a 1.2 dB coding gain at a bit error rate (BER) of 10^-6.
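The combining step rests on the Chinese Remainder Theorem. The sketch below is only a minimal CRT solver, not the paper's QC-LDPC construction (which additionally permutes column blocks of the component parity-check matrices):

```python
def crt(residues, moduli):
    """Solve x = r_i (mod m_i) for pairwise-coprime moduli via Garner's
    incremental method; returns the unique x modulo prod(m_i)."""
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        # Choose t so that (x + M*t) = r (mod m): t = (r - x) * M^{-1} (mod m)
        t = ((r - x) * pow(M, -1, m)) % m
        x += M * t
        M *= m
    return x % M
```

In the CRT-combining construction, the circulant sizes of the component codes play the role of the pairwise-coprime moduli, and shift exponents of the combined code are recovered from those of the components in exactly this way.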
Locally Orderless Registration Code
DEFF Research Database (Denmark)
2012-01-01
This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks installed and is provided for 64 bit on Mac, Linux and Windows.
Detecting non-coding selective pressure in coding regions
Directory of Open Access Journals (Sweden)
Blanchette Mathieu
2007-02-01
Background: Comparative genomics approaches, where orthologous DNA regions are compared and inter-species conserved regions are identified, have proven extremely powerful for identifying non-coding regulatory regions located in intergenic or intronic regions. However, non-coding functional elements can also be located within coding regions, as is common for exonic splicing enhancers, some transcription factor binding sites, and RNA secondary structure elements affecting mRNA stability, localization, or translation. Since these functional elements are located in regions that are themselves highly conserved because they are coding for a protein, they generally escape detection by comparative genomics approaches. Results: We introduce a comparative genomics approach for detecting non-coding functional elements located within coding regions. Codon evolution is modeled as a mixture of codon substitution models, where each component of the mixture describes the evolution of codons under a specific type of coding selective pressure. We show how to compute the posterior distribution of the entropy and parsimony scores under this null model of codon evolution. The method is applied to a set of growth hormone 1 orthologous mRNA sequences and a known exonic splicing element is detected. The analysis of a set of CORTBP2 orthologous genes reveals a region of several hundred base pairs under strong non-coding selective pressure whose function remains unknown. Conclusion: Non-coding functional elements, in particular those involved in post-transcriptional regulation, are likely to be much more prevalent than is currently known. With the numerous genome sequencing projects underway, comparative genomics approaches like that proposed here are likely to become increasingly powerful at detecting such elements.
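The paper's posterior machinery over codon substitution models is not reproduced here, but the entropy score it builds a null distribution for is just Shannon entropy over an alignment column. A minimal sketch (the example codons are invented):

```python
import math
from collections import Counter

def column_entropy(codons):
    """Shannon entropy (bits) of the codons observed at one alignment column.
    Entropy far below what the null model of coding selection predicts can
    signal additional, non-coding selective pressure."""
    n = len(codons)
    return -sum((c / n) * math.log2(c / n) for c in Counter(codons).values())

# A perfectly conserved column vs. one sampling four synonymous Leu codons:
# both encode the same protein, but only the first hints at extra constraint.
conserved = ["CTG"] * 8
variable = ["CTG", "CTA", "CTT", "CTC"] * 2
```
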
Signal Constellations for Multilevel Coded Modulation with Sparse Graph Codes
Cronie, Harm S.
2005-01-01
A method to combine error-correction coding and spectrally efficient modulation for transmission over channels with Gaussian noise is presented. The method of modulation leads to a signal constellation in which the constellation symbols have a nonuniform distribution. This gives a so-called shape gain.
Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark
2012-01-01
A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…
Institute of Scientific and Technical Information of China (English)
2008-01-01
Quantum error-correcting codes are indispensable for quantum information processing and quantum computation. In 1995 and 1996, Shor and Steane gave the first several examples of quantum codes constructed from classical error-correcting codes. The construction of efficient quantum codes is now an active multi-disciplinary research field. In this paper we review the known constructions of quantum codes and present some examples.
Buffer Overflow Detection on Binary Code
Institute of Scientific and Technical Information of China (English)
ZHENG Yan-fei; LI Hui; CHEN Ke-fei
2006-01-01
Most solutions for detecting buffer overflows are based on source code. But the requirement for source code is not always practical, especially for commercial software. A new approach is presented to statically detect potential buffer-overflow vulnerabilities in the binary code of software. The binary code is translated into assembly code without loss of the information about string-operation functions. A feature-code abstract graph is constructed to generate more accurate constraint statements, and the assembly code is analyzed using the method of integer-range constraints. After obtaining an initial report on suspicious code where buffer overflows may happen, a control-flow-sensitive analysis using the program dependence graph is performed to decrease the rate of false positives. A prototype was implemented which demonstrates the feasibility and efficiency of the new approach.
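The integer-range-constraint idea can be illustrated in miniature: track a conservative [lo, hi] range for each index value and flag any access whose range can escape the buffer. This toy (with invented names, and in Python rather than over assembly as in the paper) shows only the core check:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    """Conservative integer range [lo, hi] for a value recovered from analysis."""
    lo: int
    hi: int

    def __add__(self, other: "Interval") -> "Interval":
        # Ranges add component-wise under addition of the underlying values.
        return Interval(self.lo + other.lo, self.hi + other.hi)

def may_overflow(index: Interval, buf_len: int) -> bool:
    """Report a potential buffer overflow if any value in the index range
    can land outside the valid range [0, buf_len)."""
    return index.lo < 0 or index.hi >= buf_len

# A write at buf[base + offset] with a 16-byte buffer: each operand alone
# is safe, but the combined range [0, 16] can touch one byte past the end.
idx = Interval(0, 8) + Interval(0, 8)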
An implicit Smooth Particle Hydrodynamic code
Energy Technology Data Exchange (ETDEWEB)
Knapp, Charles E. [Univ. of New Mexico, Albuquerque, NM (United States)
2000-05-01
An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code, the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian, meshless and use particles to model the fluids and solids. The implicit code makes use of Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix, and it uses sparse techniques to save on memory storage and to reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code are presented. Then, the results of a number of test cases are discussed, which include a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. For the single jet of gas it has been demonstrated that the implicit code can do a problem in a much shorter time than the explicit code. The problem was, however, very unphysical, but it does demonstrate the potential of the implicit code. It is a first step toward a useful implicit SPH code.
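The Newton-Raphson loop with a finite-difference Jacobian that the abstract describes can be sketched on a toy nonlinear system. This is not the SPHINX code: the real solver pairs the Jacobian with Krylov iterations and sparse storage, while this sketch solves the small dense system directly.

```python
import numpy as np

def newton_fd(F, x0, tol=1e-10, h=1e-7, max_iter=50):
    """Newton-Raphson iteration for F(x) = 0 using a finite-difference
    (numerical-derivative) Jacobian, as in the implicit SPH code's
    non-linear correction step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        J = np.empty((x.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (F(xp) - f) / h      # one Jacobian column per perturbation
        x = x - np.linalg.solve(J, f)      # Newton correction (dense solve here)
    return x

# Toy 2x2 nonlinear system: x^2 = 2 coupled with y = x.
root = newton_fd(lambda v: np.array([v[0] ** 2 - 2.0, v[1] - v[0]]), [1.0, 1.0])
```

For the large sparse systems an implicit SPH step produces, the dense `np.linalg.solve` would be replaced by a Krylov method such as GMRES.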
1981-01-01
ground plane was developed by Arnold Sommerfeld (ref. 20). While this solution has been used directly in integral-equation computer codes, excessive...
Energy Technology Data Exchange (ETDEWEB)
Kim, Moo Hwan; Seo, Kyoung Woo [POSTECH, Pohang (Korea, Republic of)
2001-03-15
In the probability approach, the calculated CCFPs of all the scenarios were zero, which meant that for all the accident scenarios the maximum pressure load induced by DCH was expected to be lower than the containment failure pressure obtained from the fragility curve. Thus, it can be stated that the KSNP containment is robust to the DCH threat. The uncertainty of the computer codes used in the two (deterministic and probabilistic) approaches was reduced by the sensitivity tests and by verification and comparison of the DCH models in each code. This research therefore evaluated the DCH issue comprehensively and set out an accurate methodology for assessing containment integrity for operating PWRs in Korea.
Forms and Linear Network Codes
DEFF Research Database (Denmark)
Hansen, Johan P.
We present a general theory to obtain linear network codes utilizing forms, and obtain explicit families of equidimensional vector spaces in which any pair of distinct vector spaces intersect in the same small dimension. The theory is inspired by the methods of the author utilizing the osculating spaces of Veronese varieties. Linear network coding transmits information in terms of a basis of a vector space, and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang introduced a metric on the set of vector spaces and showed that a minimal distance makes them suitable for linear network coding. The parameters of the resulting linear network codes are determined.
Ultrasound imaging using coded signals
DEFF Research Database (Denmark)
Misaridis, Athanasios
Modulated (or coded) excitation signals can potentially improve the quality and increase the frame rate in medical ultrasound scanners. The aim of this dissertation is to investigate systematically the applicability of modulated signals in medical ultrasound imaging and to suggest appropriate methods for coded imaging, with the goal of making better anatomic and flow images and three-dimensional images. In the first stage, it investigates techniques for doing high-resolution coded imaging with improved signal-to-noise ratio compared to conventional imaging. Subsequently it investigates how coded excitation can be used for increasing the frame rate. The work includes both simulated results using Field II, and experimental results based on measurements on phantoms as well as clinical images. Initially a mathematical foundation of signal modulation is given. Pulse compression based
Study on QR-Coded Image Binarization Method under Non-uniform Illumination
Institute of Scientific and Technical Information of China (English)
路阳; 高慧敏
2012-01-01
The Otsu algorithm has become a commonly used image binarization algorithm because of its simplicity of implementation, but it handles QR-code images poorly under non-uniform illumination. This paper puts forward a solution to overcome this disadvantage. First, an improved homomorphic filtering algorithm enhances the QR-code image to eliminate the negative effect of non-uniform illumination; then the QR-code image is binarized by the Otsu algorithm. Experimental results show that this algorithm effectively overcomes the influence of non-uniform illumination, the binarization quality is good, and the recognition rate of the QR code is improved.
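The homomorphic pre-filter is not reproduced here, but the Otsu step itself, picking the grey level that maximizes between-class variance of the histogram, is compact enough to sketch (the toy "QR patch" data is invented):

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Return the grey level maximizing between-class variance (Otsu's method)
    for an 8-bit image. In the paper this runs only after homomorphic
    filtering has evened out the illumination."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 (background) probability
    mu = np.cumsum(p * np.arange(256))        # cumulative first moment
    mu_t = mu[-1]                             # global mean grey level
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b2)))

# A crude two-mode patch: dark modules at grey 10, light background at 200.
patch = np.concatenate([np.full(100, 10), np.full(100, 200)]).astype(np.uint8)
binary = patch > otsu_threshold(patch)
```

Under uneven lighting the two histogram modes smear into each other and this single global threshold fails, which is exactly the failure mode the homomorphic pre-filtering is meant to remove.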
Energy Technology Data Exchange (ETDEWEB)
Bekar, Kursat B [ORNL; Azmy, Yousry [North Carolina State University
2009-01-01
Improved TORT solutions to the 3D transport codes' suite of benchmark exercises are presented in this study. Preliminary TORT solutions to this benchmark indicate that the majority of benchmark quantities for most benchmark cases are computed with good accuracy, and that accuracy improves with model refinement. However, TORT fails to compute accurate results for some benchmark cases with aspect ratios drastically different from 1, possibly due to ray effects. In this work, we employ the standard approach of splitting the solution to the transport equation into an uncollided flux and a fully collided flux via the code sequence GRTUNCL3D and TORT to mitigate ray effects. The results of this code sequence presented in this paper show that the accuracy of most benchmark cases improved substantially. Furthermore, the iterative convergence problems reported for the preliminary TORT solutions have been resolved by bringing the computational cells' aspect ratio closer to unity and, more importantly, by using 64-bit arithmetic precision in the calculation sequence.
Energy Technology Data Exchange (ETDEWEB)
Bartosiewicz, Yann [Universite Catholique de Louvain (UCL), Faculty of Applied Sciences, Mechanical Engineering Department, TERM Division, Place du Levant 2, 1348 Louvain-la-Neuve (Belgium)], E-mail: yann.bartosiewicz@uclouvain.be; Lavieville, Jerome [Universite Catholique de Louvain (UCL), Faculty of Applied Sciences, Mechanical Engineering Department, TERM Division, Place du Levant 2, 1348 Louvain-la-Neuve (Belgium); Seynhaeve, Jean-Marie [Universite Catholique de Louvain (UCL), Faculty of Applied Sciences, Mechanical Engineering Department, TERM Division, Place du Levant 2, 1348 Louvain-la-Neuve (Belgium)], E-mail: jm.seynhaeve@uclouvain.be
2008-04-15
This paper presents some results concerning a first benchmark for the new European research code for thermal hydraulics computations: NEPTUNE{sub C}FD. This benchmark relies on the Thorpe experiment to model the occurrence of instabilities in a stratified two-phase flow. The first part of this work is to create a numerical trial case with the VOF approach. The results, in terms of time of onset of the instability, critical wave-number or wave phase speed, are rather good compared to linear inviscid theory and experimental data. Additional numerical tests showed the effect of the surface tension and density ratio on the growing dynamics of the instability and the structure of the waves. In the second part, a code to code (VOF/multi-field) comparison is performed for a case with zero surface tension. The results showed some discrepancies in terms of wave amplitudes, growing rates and a time shifting in the global dynamics. Afterward, two surface tension formulations are proposed in the multi-field approach. Both formulations provided similar results. The time for onset of the instability, the most amplified wave-number and its amplitude were in rather good agreement with the linear analysis and VOF results. However, the time-shifted dynamics was still observed.
Turbo Codes Extended with Outer BCH Code
DEFF Research Database (Denmark)
Andersen, Jakob Dahl
1996-01-01
The "error floor" observed in several simulations with the turbo codes is verified by calculation of an upper bound to the bit error rate for the ensemble of all interleavers. Also an easy way to calculate the weight enumerator used in this bound is presented. An extended coding scheme is proposed...
MCNP APPLICATIONS FOR THE 21ST CENTURY
Energy Technology Data Exchange (ETDEWEB)
G. MCKINNEY; T. BOOTH; ET AL
2000-10-01
The Los Alamos National Laboratory (LANL) Monte Carlo N-Particle radiation transport code, MCNP, has become an international standard for a wide spectrum of neutron, photon, and electron radiation transport applications. The latest version of the code, MCNP 4C, was released to the Radiation Safety Information Computational Center (RSICC) in February 2000. This paper describes the code development philosophy, new features and capabilities, applicability to various problems, and future directions.
MCNP applications for the 21st century
Energy Technology Data Exchange (ETDEWEB)
McKinney, M.C. [and others]
2000-08-01
The Los Alamos National Laboratory (LANL) Monte Carlo N-Particle radiation transport code, MCNP, has become an international standard for a wide spectrum of neutron, photon, and electron radiation transport applications. The latest version of the code, MCNP 4C, was released to the Radiation Safety Information Computational Center (RSICC) in February 2000. This paper describes the code development philosophy, new features and capabilities, applicability to various problems, and future directions.
Hybrid Noncoherent Network Coding
Skachek, Vitaly; Nedic, Angelia
2011-01-01
We describe a novel extension of subspace codes for noncoherent networks, suitable for use when the network is viewed as a communication system that introduces both dimension and symbol errors. We show that when symbol erasures occur in a significantly large number of different basis vectors transmitted through the network, and when the min-cut of the network is much smaller than the length of the transmitted codewords, the new family of codes outperforms their subspace code counterparts. For the proposed coding scheme, termed hybrid network coding, we derive two upper bounds on the size of the codes. These bounds represent a variation of the Singleton and of the sphere-packing bound. We show that a simple concatenated scheme that represents a combination of subspace codes and Reed-Solomon codes is asymptotically optimal with respect to the Singleton bound. Finally, we describe two efficient decoding algorithms for concatenated subspace codes that in certain cases have smaller complexity than subspace decoder...
The Proteomic Code: a molecular recognition code for proteins
Directory of Open Access Journals (Sweden)
Biro Jan C
2007-11-01
Background: The Proteomic Code is a set of rules by which information in genetic material is transferred into the physico-chemical properties of amino acids. It determines how individual amino acids interact with each other during folding and in specific protein-protein interactions. The Proteomic Code is part of the redundant Genetic Code. Review: The 25-year-old history of this concept is reviewed from the first independent suggestions by Biro and Mekler, through the works of Blalock, Root-Bernstein, Siemion, Miller and others, followed by the discovery of a Common Periodic Table of Codons and Nucleic Acids in 2003 and culminating in the recent conceptualization of partial complementary coding of interacting amino acids as well as the theory of the nucleic acid-assisted protein folding. Methods and conclusions: A novel cloning method for the design and production of specific, high-affinity-reacting proteins (SHARP) is presented. This method is based on the concept of proteomic codes and is suitable for large-scale, industrial production of specifically interacting peptides.
Supervised Transfer Sparse Coding
Al-Shedivat, Maruan
2014-07-27
A combination of the sparse coding and transfer learning techniques was shown to be accurate and robust in classification tasks where training and testing objects have a shared feature space but are sampled from different underlying distributions, i.e., belong to different domains. The key assumption in such case is that in spite of the domain disparity, samples from different domains share some common hidden factors. Previous methods often assumed that all the objects in the target domain are unlabeled, and thus the training set solely comprised objects from the source domain. However, in real world applications, the target domain often has some labeled objects, or one can always manually label a small number of them. In this paper, we explore such possibility and show how a small number of labeled data in the target domain can significantly leverage classification accuracy of the state-of-the-art transfer sparse coding methods. We further propose a unified framework named supervised transfer sparse coding (STSC) which simultaneously optimizes sparse representation, domain transfer and classification. Experimental results on three applications demonstrate that a little manual labeling and then learning the model in a supervised fashion can significantly improve classification accuracy.
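STSC itself is not specified in this record; as a sketch of only the sparse-representation subproblem such frameworks optimize, min_a 0.5·||x − Da||² + λ·||a||₁, here is plain ISTA (the dictionary and signal below are invented toy data):

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Iterative shrinkage-thresholding (ISTA) for the lasso-type sparse
    coding step: minimize 0.5*||x - D a||^2 + lam*||a||_1 over codes a."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the smooth data term
        a = a - grad / L                   # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

# With an orthonormal dictionary, ISTA reduces to one soft-threshold of D^T x,
# so the code for a 1-sparse signal is the signal shrunk by lam.
D = np.eye(5)
code = ista_sparse_code(D, np.array([1.0, 0.0, 0.0, 0.0, 0.0]))
```

A supervised variant in the spirit of the paper would add a classification loss on `code` and alternate updates over the dictionary, the codes, and the classifier.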
Feature coding for image representation and recognition
Huang, Yongzhen
2015-01-01
This brief presents a comprehensive introduction to feature coding, which serves as a key module in the typical object recognition pipeline. The text offers a rich blend of theory and practice while reflecting the recent developments in feature coding, covering the following five aspects: (1) review the state of the art, analyzing the motivations and mathematical representations of various feature coding methods; (2) explore how various feature coding algorithms have evolved over the years; (3) summarize the main characteristics of typical feature coding algorithms and categorize them accordingly; (4) D
Best Effort and Practice Activation Codes
Gans, G. de Koning; Verheul, E.
2011-01-01
Activation Codes are used in many different digital services and known by many different names including voucher, e-coupon and discount code. In this paper we focus on a specific class of ACs that are short, human-readable, fixed-length and represent value. Even though this class of codes is extensively used there are no general guidelines for the design of Activation Code schemes. We discuss different methods that are used in practice and propose BEPAC, a new Activation Code scheme that prov...
Network coding for computing: Linear codes
Appuswamy, Rathinakumar; Karamchandani, Nikhil; Zeger, Kenneth
2011-01-01
In network coding it is known that linear codes are sufficient to achieve the coding capacity in multicast networks and that they are not sufficient in general to achieve the coding capacity in non-multicast networks. In network computing, Rai, Dey, and Shenvi have recently shown that linear codes are not sufficient in general for solvability of multi-receiver networks with scalar linear target functions. We study single-receiver networks where the receiver node demands a target function of the source messages. We show that linear codes may provide a computing capacity advantage over routing only when the receiver demands a `linearly-reducible' target function. Many known target functions, including the arithmetic sum, minimum, and maximum, are not linearly-reducible. Thus, the use of non-linear codes is essential in order to obtain a computing capacity advantage over routing if the receiver demands a target function that is not linearly-reducible. We also show that if a target function is linearly-reducible,...
Sijoy, C. D.; Chaturvedi, S.
2016-06-01
Higher-order cell-centered multi-material hydrodynamics (HD) and parallel node-centered radiation transport (RT) schemes are combined self-consistently in the three-temperature (3T) radiation hydrodynamics (RHD) code TRHD (Sijoy and Chaturvedi, 2015) developed for the simulation of intense thermal radiation or high-power laser driven RHD. For RT, a node-centered gray model implemented in a popular RHD code MULTI2D (Ramis et al., 2009) is used. This scheme, in principle, can handle RT in both optically thick and thin materials. The RT module has been parallelized using the message passing interface (MPI) for parallel computation. Presently, for multi-material HD, we have used a simple and robust closure model in which a common strain rate for all materials in a mixed cell is assumed. The closure model has been further generalized to allow different temperatures for the electrons and ions. In addition to this, electron and radiation temperatures are assumed to be in non-equilibrium. Therefore, the thermal relaxation between the electrons and ions and the coupling between the radiation and matter energies are required to be computed self-consistently. This has been achieved by using a node-centered symmetric-semi-implicit (SSI) integration scheme. The electron thermal conduction is calculated using a cell-centered, monotonic, non-linear finite volume scheme (NLFV) suitable for unstructured meshes. In this paper, we have described the details of the 2D, 3T, non-equilibrium, multi-material RHD code developed with special attention to the coupling of various cell-centered and node-centered formulations, along with a suite of validation test problems to demonstrate the accuracy and performance of the algorithms. We also report the parallel performance of the RT module. Finally, in order to demonstrate the full capability of the code implementation, we have presented the simulation of laser driven shock propagation in a layered thin foil. The simulation results are found to be in good
Research on QR Code Methods of Vehicle Lookup and Positioning for Indoor Parking
Institute of Scientific and Technical Information of China (English)
赵炎; 蓝箭; 李印; 李新雨
2015-01-01
With urbanization, the growing scale and complexity of indoor parking facilities have made it difficult for drivers to find their cars. Because of the particular indoor environment and the high positioning accuracy required, traditional satellite- and base-station-based positioning methods cannot locate a parking space precisely indoors. This paper therefore proposes a scan-based positioning method for Android smartphones that combines a QR Code (Quick Response Code) encoding/decoding algorithm [1-3] with the lightweight SQLite database. The QR codes act as smart labels storing parking information; after scanning a code, the phone queries the database for the pixel coordinates of the parking space and marks them on a 2D map on the phone, enabling precise positioning and guidance to parking spaces. This provides a fast and effective positioning solution for indoor parking lots.
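The lookup step described above can be sketched as follows. The schema and space IDs are hypothetical (the paper does not publish its database layout); the idea is simply that the decoded QR payload keys a SQLite query returning map pixel coordinates:

```python
import sqlite3

# Hypothetical schema: each parking-space ID (the QR payload) maps to
# pixel coordinates on the 2D floor map shown on the phone.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE spaces (space_id TEXT PRIMARY KEY, px INTEGER, py INTEGER)"
)
conn.executemany(
    "INSERT INTO spaces VALUES (?, ?, ?)",
    [("B2-017", 412, 238), ("B2-018", 444, 238)],
)

def locate(qr_payload: str):
    """Return the (px, py) map pixel for a scanned space ID, or None if unknown."""
    return conn.execute(
        "SELECT px, py FROM spaces WHERE space_id = ?", (qr_payload,)
    ).fetchone()

print(locate("B2-017"))  # -> (412, 238)
```

On the device itself the decoding would come from a QR scanning library and the marker would be drawn at the returned coordinates; this sketch covers only the database side.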
Practices in Code Discoverability
Teuben, Peter; Nemiroff, Robert J; Shamir, Lior
2012-01-01
Much of scientific progress now hinges on the reliability, falsifiability and reproducibility of computer source codes. Astrophysics in particular is a discipline that today leads other sciences in making useful scientific components freely available online, including data, abstracts, preprints, and fully published papers, yet even today many astrophysics source codes remain hidden from public view. We review the importance and history of source codes in astrophysics and previous efforts to develop ways in which information about astrophysics codes can be shared. We also discuss why some scientist coders resist sharing or publishing their codes, the reasons for and importance of overcoming this resistance, and alert the community to a reworking of one of the first attempts for sharing codes, the Astrophysics Source Code Library (ASCL). We discuss the implementation of the ASCL in an accompanying poster paper. We suggest that code could be given a similar level of referencing as data gets in repositories such ...
Djordjevic, Ivan; Vasic, Bane
2010-01-01
This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.
Zhang, Linfan; Zheng, Shuang
2015-01-01
Quick Response code opens possibility to convey data in a unique way yet insufficient prevention and protection might lead into QR code being exploited on behalf of attackers. This thesis starts by presenting a general introduction of background and stating two problems regarding QR code security, which followed by a comprehensive research on both QR code itself and related issues. From the research a solution taking advantages of cloud and cryptography together with an implementation come af...
Ma, Xiao; Bai, Baoming; Zhang, Xiaoyi
2011-01-01
In this paper, we propose a new ensemble of rateless forward error correction (FEC) codes. The proposed codes are serially concatenated codes with Reed-Solomon (RS) codes as outer codes and Kite codes as inner codes. The inner Kite codes are a special class of prefix rateless low-density parity-check (PRLDPC) codes, which can generate potentially infinite (or as many as required) random-like parity-check bits. The employment of RS codes as outer codes not only lowers error floors but also ensures (with high probability) the correctness of successfully decoded codewords. In addition to the conventional two-stage decoding, iterative decoding between the inner and outer codes is also implemented to improve the performance further. The performance of the Kite codes under maximum likelihood (ML) decoding is analyzed by applying a refined Divsalar bound to the ensemble weight enumerating functions (WEF). We propose a simulation-based optimization method as well as density evolution (DE) using Gaussian...
DEFF Research Database (Denmark)
Martins, Bo; Forchhammer, Søren
2000-01-01
The emerging international standard for compression of bi-level images and bi-level documents, JBIG2, provides a mode dedicated to lossy coding of halftones. The encoding procedure involves descreening of the bi-level image into gray-scale, encoding of the gray-scale image, and construction of a halftone pattern dictionary. The decoder first decodes the gray-scale image; then, for each gray-scale pixel, it looks up the corresponding halftone pattern in the dictionary and places it in the reconstruction bitmap at the position corresponding to the gray-scale pixel. The coding method is inherently lossy, and care must be taken to avoid introducing artifacts in the reconstructed image. We describe how to apply this coding method to halftones created by periodic ordered dithering, by clustered-dot screening (offset printing), and by techniques which in effect dither with blue noise, e.g., error diffusion.
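The decoder's pattern-placement step can be sketched as follows. This is a toy illustration of the lookup-and-tile idea only, not the JBIG2 bitstream format; the 2x2 patterns and five gray levels are invented:

```python
def reconstruct(gray, dictionary, p):
    """Tile the halftone pattern of each gray-scale pixel into the output bitmap.
    gray: H x W list of gray levels; dictionary: level -> p x p binary pattern."""
    h, w = len(gray), len(gray[0])
    bitmap = [[0] * (w * p) for _ in range(h * p)]
    for i in range(h):
        for j in range(w):
            pat = dictionary[gray[i][j]]  # look up the halftone pattern
            for di in range(p):
                for dj in range(p):
                    # place it at the position corresponding to the gray pixel
                    bitmap[i * p + di][j * p + dj] = pat[di][dj]
    return bitmap

# Toy dictionary: 2x2 clustered-dot-style patterns for gray levels 0..4
# (level = number of black dots).
patterns = {0: [[0, 0], [0, 0]],
            1: [[1, 0], [0, 0]],
            2: [[1, 0], [0, 1]],
            3: [[1, 1], [0, 1]],
            4: [[1, 1], [1, 1]]}
bitmap = reconstruct([[0, 4], [2, 1]], patterns, 2)
```

The lossiness comes from the earlier descreening step: only the gray level survives, so any two input blocks with the same dot count reconstruct identically.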
A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok
2001-01-01
Two key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from refactoring production code.
Bergstra, Jan A
2010-01-01
General definitions as well as rules of reasoning regarding control code production, distribution, deployment, and usage are described. The role of testing, trust, confidence and risk analysis is considered. A rationale for control code testing is sought and found for the case of safety critical embedded control code.
DEFF Research Database (Denmark)
Bombin Palomo, Hector
2015-01-01
Color codes are topological stabilizer codes with unusual transversality properties. Here I show that their group of transversal gates is optimal and only depends on the spatial dimension, not the local geometry. I also introduce a generalized, subsystem version of color codes. In 3D they allow...
ARC Code TI: CODE Software Framework
National Aeronautics and Space Administration — CODE is a software framework for control and observation in distributed environments. The basic functionality of the framework allows a user to observe a distributed...
ARC Code TI: ROC Curve Code Augmentation
National Aeronautics and Space Administration — ROC (Receiver Operating Characteristic) curve Code Augmentation was written by Rodney Martin and John Stutz at NASA Ames Research Center and is a modification of ROC...
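The NASA code itself is not shown in this catalog entry; as a reminder of what an ROC tool computes, here is a minimal, generic sketch of building an ROC curve by sweeping the decision threshold over classifier scores (unrelated to the actual ARC implementation):

```python
def roc_points(scores, labels):
    """Compute (FPR, TPR) points by sweeping the decision threshold.

    scores: classifier confidence per item; labels: 1 = positive, 0 = negative.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    # Sort by descending score; each prefix is one "predicted positive" set.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    points = [(0.0, 0.0)]
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

# Perfectly ranked toy data: the curve should pass through (0, 1).
pts = roc_points([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])
```

The area under this polyline (AUC) is the usual scalar summary; ties in score would need grouping, which this sketch omits.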
Fountain Codes: LT And Raptor Codes Implementation
Directory of Open Access Journals (Sweden)
Ali Bazzi, Hiba Harb
2017-01-01
Digital fountain codes are a new class of random error-correcting codes designed for efficient and reliable data delivery over erasure channels such as the Internet. These codes were developed to provide robustness against erasures in a way that resembles a fountain of water: a digital fountain is rateless, in that the sender can send a limitless number of encoded packets, and the receiver does not care which packets are received or lost as long as it gets enough packets to recover the original data. In this paper, the design of fountain codes is explored together with an implementation of the encoding and decoding algorithms, so that performance in terms of encoding/decoding symbols, reception overhead, data length, and failure probability is studied.
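The encode/decode cycle described above can be sketched as follows. The uniform degree distribution below is a simplistic stand-in for the robust soliton distribution that real LT codes use, so the reception overhead here is worse than in practice:

```python
import random

def lt_encode(blocks, n_packets, rng):
    """Each packet XORs a random subset of source blocks; a packet is
    (set of block indices, XOR value). Degree is uniform here (toy choice)."""
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        d = rng.randint(1, k)               # stand-in for the soliton degree law
        idx = rng.sample(range(k), d)
        val = 0
        for i in idx:
            val ^= blocks[i]
        packets.append((frozenset(idx), val))
    return packets

def lt_decode(packets, k):
    """Peeling decoder: resolve degree-1 packets, substitute back, repeat."""
    work = [[set(s), v] for s, v in packets]
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for pkt in work:
            s = pkt[0]
            # strip already-recovered blocks out of this packet
            for i in [i for i in s if i in known]:
                s.discard(i)
                pkt[1] ^= known[i]
            if len(s) == 1:
                (i,) = s
                if i not in known:
                    known[i] = pkt[1]
                    progress = True
    return [known.get(i) for i in range(k)]

rng = random.Random(7)
blocks = [0x12, 0x34, 0x56, 0x78]
packets = lt_encode(blocks, 12, rng)   # extra packets offset decoding failures
decoded = lt_decode(packets, len(blocks))
```

Because the code is rateless, the sender just keeps emitting packets; decoding succeeds once the peeling process can reach every block, and returns None for any block it could not recover.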
Coherence protection by random coding
Energy Technology Data Exchange (ETDEWEB)
Brion, E [Laboratoire Aime Cotton, CNRS II, Batiment 505, 91405 Orsay Cedex (France); Akulin, V M [Laboratoire Aime Cotton, CNRS II, Batiment 505, 91405 Orsay Cedex (France); Dumer, I [College of Engineering, University of California, Riverside, CA 92521 (United States); Harel, G [Spinoza Institute, Utrecht University, Leuvenlaan 4, 3508 TD Utrecht (Netherlands); Kurizki, G [Department of Chemical Physics, Weizmann Institute of Science, Rehovot 76100 (Israel)
2005-10-01
We show that the multidimensional Zeno effect combined with non-holonomic control allows one to efficiently protect quantum systems from decoherence by a method similar to classical random coding. The method is applicable to arbitrary error-inducing Hamiltonians and general quantum systems. The quantum encoding approaches the Hamming upper bound as the dimension increases. Applicability of the method is demonstrated with a seven-qubit toy computer.
Raptor Codes and Cryptographic Issues
Malinen, Mikko
2008-01-01
In this paper two cryptographic methods are introduced. In the first method the presence of a certain size subgroup of persons can be checked for an action to take place. For this we use fragments of Raptor codes delivered to the group members. In the other method a selection of a subset of objects can be made secret. Also, it can be proven afterwards, what the original selection was.
Energy Technology Data Exchange (ETDEWEB)
Albuquerque, M.A.G.; David, M.G.; Almeida, C.E. de; Magalhaes, L.A.G., E-mail: malbuqueque@hotmail.com [Universidade do Estado do Rio de Janeiro (UERJ), Rio de Janeiro, RJ (Brazil). Lab. de Ciencias Radiologicas; Bernal, M. [Universidade Estadual de Campinas (UNICAMP), SP (Brazil); Braz, D. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil)
2015-07-01
Breast cancer is the most common type of cancer among women. The main strategy for increasing the long-term survival of patients with this disease is early detection of the tumor, and mammography is the most appropriate method for this purpose. Despite the reduction in cancer deaths, there is great concern about the damage caused by ionizing radiation to breast tissue. To evaluate this damage, a mammography unit was modeled and depth spectra were obtained using the Monte Carlo method (PENELOPE code). The average energies of the spectra in depth and the half-value layer of the mammography output spectrum were determined. (author)
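The half-value layer mentioned above follows from the exponential attenuation law I = I0·exp(−μx) for a narrow monoenergetic beam; a small sketch (the numerical μ value is illustrative only, not taken from the paper, and real mammography spectra are polyenergetic so the first HVL is measured rather than derived from a single μ):

```python
import math

def hvl(mu):
    """Half-value layer from the linear attenuation coefficient:
    I = I0 * exp(-mu * x) drops to I0/2 at x = ln(2)/mu."""
    return math.log(2) / mu

def mu_from_transmission(i0, i, x):
    """Estimate mu from incident/transmitted intensities through thickness x."""
    return math.log(i0 / i) / x

mu_al = 0.45  # illustrative coefficient in mm^-1, not a measured value
print(f"HVL = {hvl(mu_al):.3f} mm")
```

The same two relations let one back out an effective μ (and hence an effective HVL) from a pair of transmission measurements, which is how output-spectrum quality is characterized in practice.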
Universal Rateless Codes From Coupled LT Codes
Aref, Vahid
2011-01-01
It was recently shown that spatial coupling of individual low-density parity-check codes improves the belief-propagation threshold of the coupled ensemble essentially to the maximum a posteriori threshold of the underlying ensemble. We study the performance of spatially coupled low-density generator-matrix ensembles when used for transmission over binary-input memoryless output-symmetric channels. We show by means of density evolution that the threshold saturation phenomenon also takes place in this setting. Our motivation for studying low-density generator-matrix codes is that they can easily be converted into rateless codes. Although there are already several classes of excellent rateless codes known to date, rateless codes constructed via spatial coupling might offer some additional advantages. In particular, by the very nature of the threshold phenomenon one expects that codes constructed on this principle can be made to be universal, i.e., a single construction can uniformly approach capacity over the cl...
Software Certification - Coding, Code, and Coders
Havelund, Klaus; Holzmann, Gerard J.
2011-01-01
We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.
Progressive Coding and Presentation of Maps for Internet Applications
DEFF Research Database (Denmark)
Jensen, Ole Riis; Forchhammer, Søren Otto
1999-01-01
A new lossless context-based method for content-progressive coding of images such as maps is proposed.
P2P Network Reciprocal Resource Sharing Method Based on Network Coding
Institute of Scientific and Technical Information of China (English)
韦丽霜; 宋伟
2011-01-01
Conventional network message transmission is based on a store-and-forward mechanism: network nodes do not process the messages they relay. Network coding theory allows nodes to encode messages before forwarding them. Exploiting the fact that a network-coded packet carries more information, this paper designs a reciprocal P2P resource sharing method that achieves reliable and robust P2P resource sharing. The performance of the resource sharing system is evaluated through simulation experiments, whose results show that the network-coding-based reciprocal sharing method makes P2P resource sharing applications scalable and efficient and improves overall download efficiency.
Rice, R. F.; Lee, J. J.
1986-01-01
Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.
Energy Technology Data Exchange (ETDEWEB)
Bekar, Kursat B.; Azmy, Yousry Y. [Department of Mechanical and Nuclear Engineering, Penn State University, University Park, PA 16802 (United States)
2008-07-01
We present the TORT solutions to the 3-D transport codes' suite of benchmarks exercise. An overview of benchmark configurations is provided, followed by a description of the TORT computational model we developed to solve the cases comprising the benchmark suite. In the numerical experiments reported in this paper, we chose to refine the spatial and angular discretizations simultaneously, from the coarsest model (40x40x40, 200 angles) to the finest model (160x160x160, 800 angles), and employed the results of the finest computational model as reference values for evaluating the mesh-refinement effects. The presented results show that the solutions for most cases in the suite of benchmarks as computed by TORT are in the asymptotic regime. (authors)
Institute of Scientific and Technical Information of China (English)
Gilbert; Rishton; Matthew; (Mizhou); HUI
2010-01-01
High mammalian gene expression was obtained for more than twenty different proteins in different cell types with just a few laboratory-scale stable gene transfections for each protein. The stable expression vectors were constructed by inserting a naturally occurring 1.006 kb or a synthetic 0.733 kb extremely GC-rich DNA fragment (including an intron) at the 5' and/or 3' flanking regions of these protein genes or their gene promoters. This experiment is the first experimental evidence showing that a non-coding, extremely GC-rich DNA fragment is a super "chromatin opening element" and plays an important role in mammalian gene expression. It further indicates that chromatin-based regulation of mammalian gene expression is at least partially embedded in DNA primary structure, namely DNA GC-content.
Fulachier, J; The ATLAS collaboration; Albrand, S; Lambert, F
2013-01-01
The "ATLAS Metadata Interface" framework (AMI) has been developed in the context of ATLAS, one of the largest scientific collaborations. AMI can be considered a mature application, since its basic architecture has been maintained for over 10 years. In this paper we briefly describe the architecture and the main uses of the framework within the experiment (TagCollector for release management and Dataset Discovery). These two applications, which share almost 2000 registered users, are superficially quite different; however, much of the code is shared, and they have been developed and maintained over a decade almost completely by the same team of 3 people. We discuss how the architectural principles established at the beginning of the project have allowed us both to integrate new technologies and to respond to the new metadata use cases which inevitably appear over such a time period.
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
The molecular structures of hydrocarbons in straight-run gasoline were numerically coded. A nonlinear quantitative structure-retention relationship (QSRR) between the gas chromatography (GC) retention indices of the hydrocarbons and their molecular structures was established using an error back-propagation (BP) algorithm. The GC retention indices of 150 hydrocarbons were then predicted by removing 15 compounds (as a test set) and using the 135 remaining molecules as a calibration set. Through this procedure, all the compounds in the whole data set were predicted in groups of 15 compounds. The results obtained by BP, with a correlation coefficient of 0.9934 and a standard deviation of 16.54, are satisfactory.
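A self-contained sketch of the error back-propagation regression described above, using toy data rather than the paper's 150-hydrocarbon descriptor set; the network size, learning rate, and epoch count are arbitrary choices for illustration:

```python
import math
import random

def train_bp(xs, ys, hidden=4, epochs=500, lr=0.05, seed=0):
    """Tiny one-hidden-layer backprop regressor (the 'BP' of the abstract).
    xs: list of feature vectors (e.g. numerically coded structures); ys: targets."""
    rng = random.Random(seed)
    n_in = len(xs[0])
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
             for row, b in zip(w1, b1)]
        return h, sum(wj * hj for wj, hj in zip(w2, h)) + b2

    for _ in range(epochs):
        for x, y in zip(xs, ys):
            h, out = forward(x)
            err = out - y                            # gradient of MSE/2 w.r.t. out
            for j in range(hidden):
                gh = err * w2[j] * (1.0 - h[j] ** 2)  # chain rule through tanh
                for i in range(n_in):
                    w1[j][i] -= lr * gh * x[i]
                b1[j] -= lr * gh
                w2[j] -= lr * err * h[j]
            b2 -= lr * err
    return lambda x: forward(x)[1]

# Toy calibration set standing in for coded structures -> retention indices.
xs = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
ys = [0.0, 0.5, 0.5, 1.0]
model = train_bp(xs, ys)
```

The paper's leave-15-out grouping corresponds to repeatedly calling such a trainer on 135 compounds and predicting the held-out 15; the sketch omits that outer loop.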
Lim, Sung Hoon; Gamal, Abbas El; Chung, Sae-Young
2010-01-01
A noisy network coding scheme for sending multiple sources over a general noisy network is presented. For multi-source multicast networks, the scheme naturally extends both network coding over noiseless networks by Ahlswede, Cai, Li, and Yeung, and compress-forward coding for the relay channel by Cover and El Gamal to general discrete memoryless and Gaussian networks. The scheme also recovers as special cases the results on coding for wireless relay networks and deterministic networks by Avestimehr, Diggavi, and Tse, and coding for wireless erasure networks by Dana, Gowaikar, Palanki, Hassibi, and Effros. The scheme involves message repetition coding, relay signal compression, and simultaneous decoding. Unlike previous compress-forward schemes, where independent messages are sent over multiple blocks, the same message is sent multiple times using independent codebooks as in the network coding scheme for cyclic networks. Furthermore, the relays do not use Wyner-Ziv binning as in previous compress-forward sch...
Testing algebraic geometric codes
Institute of Scientific and Technical Information of China (English)
CHEN Hao
2009-01-01
Property testing was initially studied from various motivations in the 1990s. A code C ⊆ GF(r)^n is locally testable if there is a randomized algorithm which can distinguish, with high probability, the codewords from a vector essentially far from the code by accessing only a very small (typically constant) number of the vector's coordinates. The problem of testing codes was first studied by Blum, Luby and Rubinfeld and is closely related to probabilistically checkable proofs (PCPs). How to characterize locally testable codes is a complex and challenging problem. Local tests have been studied for Reed-Solomon (RS), Reed-Muller (RM), cyclic, dual-of-BCH and trace subcodes of algebraic geometric codes. In this paper we give testers for algebraic geometric codes with linear parameters (as functions of dimensions). We also give a moderate condition under which a family of algebraic geometric codes cannot be locally testable.
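The flavor of a local test is captured by the classic Blum-Luby-Rubinfeld linearity test, which the abstract's history mentions: codewords of the Hadamard code are exactly the GF(2)-linear functions, and the test reads the function at only three correlated points per trial. A sketch (this is the textbook test, not the paper's testers for algebraic geometric codes):

```python
import random

def blr_test(f, n_bits, trials, rng):
    """BLR linearity test: accept iff f(x) ^ f(y) == f(x ^ y) on random pairs.
    Each trial makes only 3 queries, regardless of the domain size 2**n_bits."""
    for _ in range(trials):
        x = rng.getrandbits(n_bits)
        y = rng.getrandbits(n_bits)
        if f(x) ^ f(y) != f(x ^ y):
            return False   # caught a violation: f is not linear
    return True

def parity_of(mask):
    """A GF(2)-linear function: parity of the input bits selected by mask."""
    return lambda x: bin(x & mask).count("1") % 2

rng = random.Random(0)
# Any linear function passes every trial, deterministically:
assert blr_test(parity_of(0b1011), 8, 100, rng)
# A function far from linear is rejected with high probability:
far_from_linear = lambda x: 1 if x % 3 == 0 else 0
```

Soundness of the test (rejection probability growing with distance from the code) is exactly the local-testability property the abstract formalizes for general codes.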
Institute of Scientific and Technical Information of China (English)
ZHANG Aili; LIU Xiufeng
2006-01-01
Chinese remainder codes are constructed by applying weak block designs and the Chinese remainder theorem of ring theory. This new type of linear code takes the congruence classes in the congruence class ring R/I1 ∩ I2 ∩ … ∩ In as information bits, embeds R/Ji into R/I1 ∩ I2 ∩ … ∩ In, and assigns the cosets of R/Ji, as a subring of R/I1 ∩ I2 ∩ … ∩ In, and the cosets of R/Ji in R/I1 ∩ I2 ∩ … ∩ In as check lines. Many code classes with high code rates exist among the Chinese remainder codes. Chinese remainder codes are an essential generalization of Sun Zi codes.
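The reconstruction underlying such codes is the Chinese remainder theorem itself. The toy sketch below (integers with Garner's algorithm, not the paper's ring-theoretic construction) shows the coding idea: redundant moduli let the decoder recover the message even when a residue is erased:

```python
def crt(residues, moduli):
    """Reconstruct x mod (m1*...*mk) from its residues, via Garner's algorithm.
    Moduli must be pairwise coprime."""
    x, m = 0, 1
    for r, mi in zip(residues, moduli):
        # extend the solution: find x' ≡ x (mod m) with x' ≡ r (mod mi)
        t = ((r - x) * pow(m, -1, mi)) % mi
        x += m * t
        m *= mi
    return x % m

# Encode the "information" 103 as residues modulo pairwise coprime moduli.
moduli = [3, 5, 7, 11]
word = [103 % m for m in moduli]      # [1, 3, 5, 4]
assert crt(word, moduli) == 103

# Redundancy: since 103 < 3*5*7 = 105, dropping the mod-11 residue (an
# "erasure") still allows exact recovery from the remaining three.
assert crt(word[:3], moduli[:3]) == 103
```

This is the classical residue-number-system view; the paper's construction generalizes it from the integers to quotient rings R/I1 ∩ … ∩ In with block-design-structured check positions.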