WorldWideScience

Sample records for based asme code

  1. Code cases for implementing risk-based inservice testing in the ASME OM code

    Energy Technology Data Exchange (ETDEWEB)

    Rowley, C.W.

    1996-12-01

    Historically, inservice testing has been reasonably effective but quite costly. Recent applications of plant PRAs to the scope of the IST program have demonstrated that, of the 30 pumps and 500 valves in a typical plant IST program, fewer than half of the pumps and ten percent of the valves are risk significant. ASME plans to tackle this overly conservative scope by using the PRA and plant expert panels to create a two-tier IST component categorization scheme. The PRA provides the quantitative risk information, and the plant expert panel blends the quantitative and deterministic information to place each IST component into one of two categories: More Safety Significant Component (MSSC) or Less Safety Significant Component (LSSC). With all the pumps and valves in the IST program placed in the MSSC or LSSC category, two different testing strategies will be applied, each unique to the type of component, such as centrifugal pump, positive displacement pump, MOV, AOV, SOV, SRV, PORV, HOV, CV, and MV. A series of OM Code Cases is being developed to capture this process for plant use: one Code Case will cover component importance ranking, and the remaining Code Cases will develop the MSSC and LSSC testing strategies by type of component. These Code Cases are planned for publication in early 1997. Later, after some industry application of the Code Cases, the alternative Code Case requirements will migrate to the ASME OM Code as appendices.
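
    The two-tier categorization described above can be sketched in code. This is a minimal illustration only: the importance-measure thresholds (Fussell-Vesely and RAW values) and the panel logic are assumptions for the sake of the example, not values taken from the OM Code Cases, and the real expert panel weighs far more deterministic information.

```python
# Hedged sketch of two-tier IST categorization: a quantitative PRA screen
# followed by an expert-panel blending step. Thresholds are illustrative.

def pra_candidate_category(fussell_vesely: float, raw: float) -> str:
    """Quantitative screen: flag a component as a candidate More Safety
    Significant Component (MSSC) if either importance measure is high.
    Threshold values here (0.005, 2.0) are assumed, not from the Code Cases."""
    if fussell_vesely >= 0.005 or raw >= 2.0:
        return "MSSC"
    return "LSSC"

def expert_panel_category(pra_category: str, deterministic_concern: bool) -> str:
    """The plant expert panel blends deterministic insights with the PRA
    screen; in this sketch it may promote an LSSC candidate but never
    demotes an MSSC candidate (a conservative assumption)."""
    if deterministic_concern:
        return "MSSC"
    return pra_category

# Example: a valve with low importance measures but a deterministic concern.
print(expert_panel_category(pra_candidate_category(0.001, 1.2), True))   # MSSC
print(expert_panel_category(pra_candidate_category(0.001, 1.2), False))  # LSSC
```

    With all components so categorized, the MSSC group receives the more rigorous test strategy and the LSSC group a reduced one, per component type.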

  2. ASME Code Efforts Supporting HTGRs

    Energy Technology Data Exchange (ETDEWEB)

    D.K. Morton

    2011-09-01

    In 1999, an international collaborative initiative for the development of advanced (Generation IV) reactors was started. The idea behind this effort was to bring nuclear energy closer to the needs of sustainability, to increase proliferation resistance, and to support concepts able to produce energy (both electricity and process heat) at competitive costs. The U.S. Department of Energy has supported this effort by pursuing the development of the Next Generation Nuclear Plant, a high temperature gas-cooled reactor. This support has included research and development of pertinent data, initial regulatory discussions, and engineering support of various codes and standards development. This report discusses the various applicable American Society of Mechanical Engineers (ASME) codes and standards that are being developed to support these high temperature gas-cooled reactors during construction and operation. ASME is aggressively pursuing these codes and standards to support an international effort to build the next generation of advanced reactors so that all can benefit.

  3. ASME Code Efforts Supporting HTGRs

    Energy Technology Data Exchange (ETDEWEB)

    D.K. Morton

    2010-09-01

    In 1999, an international collaborative initiative for the development of advanced (Generation IV) reactors was started. The idea behind this effort was to bring nuclear energy closer to the needs of sustainability, to increase proliferation resistance, and to support concepts able to produce energy (both electricity and process heat) at competitive costs. The U.S. Department of Energy has supported this effort by pursuing the development of the Next Generation Nuclear Plant, a high temperature gas-cooled reactor. This support has included research and development of pertinent data, initial regulatory discussions, and engineering support of various codes and standards development. This report discusses the various applicable American Society of Mechanical Engineers (ASME) codes and standards that are being developed to support these high temperature gas-cooled reactors during construction and operation. ASME is aggressively pursuing these codes and standards to support an international effort to build the next generation of advanced reactors so that all can benefit.

  4. ASME Code Efforts Supporting HTGRs

    Energy Technology Data Exchange (ETDEWEB)

    D.K. Morton

    2012-09-01

    In 1999, an international collaborative initiative for the development of advanced (Generation IV) reactors was started. The idea behind this effort was to bring nuclear energy closer to the needs of sustainability, to increase proliferation resistance, and to support concepts able to produce energy (both electricity and process heat) at competitive costs. The U.S. Department of Energy has supported this effort by pursuing the development of the Next Generation Nuclear Plant, a high temperature gas-cooled reactor. This support has included research and development of pertinent data, initial regulatory discussions, and engineering support of various codes and standards development. This report discusses the various applicable American Society of Mechanical Engineers (ASME) codes and standards that are being developed to support these high temperature gas-cooled reactors during construction and operation. ASME is aggressively pursuing these codes and standards to support an international effort to build the next generation of advanced reactors so that all can benefit.

  5. 76 FR 36231 - American Society of Mechanical Engineers (ASME) Codes and New and Revised ASME Code Cases

    Science.gov (United States)

    2011-06-21

    ... Engineers (ASME) Codes and New and Revised ASME Code Cases; Final Rule. Federal Register / Vol. 76, No... 50 RIN 3150-AI35 American Society of Mechanical Engineers (ASME) Codes and New and Revised ASME Code... 2004 ASME Boiler and Pressure Vessel Code, Section III, Division 1; 2007 ASME Boiler and...

  6. 78 FR 37848 - ASME Code Cases Not Approved for Use

    Science.gov (United States)

    2013-06-24

    ... COMMISSION ASME Code Cases Not Approved for Use AGENCY: Nuclear Regulatory Commission. ACTION: Draft... public comment draft regulatory guide (DG), DG-1233, "ASME Code Cases Not Approved for Use." This regulatory guide lists the American Society of Mechanical Engineers (ASME) Code Cases that the NRC...

  7. ASME code ductile failure criteria for impulsively loaded pressure vessels

    Energy Technology Data Exchange (ETDEWEB)

    Nickell, Robert E.; Duffey, T. A. (Thomas A.); Rodriguez, E. A. (Edward A.)

    2003-01-01

    Ductile failure criteria suitable for application to impulsively loaded high-pressure vessels that are designed to the rules of the ASME Code, Section VIII, Division 3, are described and justified. The criteria are based upon prevention of load instability and the associated global failure mechanisms, and on protection against progressive distortion for multiple-use vessels. The criteria are demonstrated by the design and analysis of vessels that contain high explosive charges.

  8. Optimized periodic verification testing blended risk and performance-based MOV inservice test program an application of ASME code case OMN-1

    Energy Technology Data Exchange (ETDEWEB)

    Sellers, C.; Fleming, K.; Bidwell, D.; Forbes, P. [and others]

    1996-12-01

    This paper presents an application of ASME Code Case OMN-1 to the GL 89-10 Program at the South Texas Project Electric Generating Station (STPEGS). Code Case OMN-1 provides guidance for a performance-based MOV inservice test program that can be used for periodic verification testing and allows consideration of risk insights. Blended probabilistic and deterministic evaluation techniques were used to establish inservice test strategies including both test methods and test frequency. Described in the paper are the methods and criteria for establishing MOV safety significance based on the STPEGS probabilistic safety assessment, deterministic considerations of MOV performance characteristics and performance margins, the expert panel evaluation process, and the development of inservice test strategies. Test strategies include a mix of dynamic and static testing as well as MOV exercising.

  9. Consideration of the Construction Code for TBM-body in ASME BPVC

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dongjun; Kim, Yunjae [Korea Univ., Seoul (Korea, Republic of); Kim, Suk Kwon; Park, Sung Dae; Lee, Dong Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    In this paper, the ASME Code is briefly introduced and the TBM-body is classified in order to select the appropriate ASME section. The Helium Cooled Ceramic Reflector (HCCR) Test Blanket System (TBS) has been designed by the KO TBM team to research the functions of a breeding blanket. These functions comprise three subjects: 1) tritium breeding, 2) heat conversion and extraction, and 3) neutron and gamma-ray shielding. For the design process, the appropriate construction code must be selected as the design criterion. The ITER Organization (IO) has proposed that RCC-MR, Edition 2007, shall be used for the TBM-shield, because the TBM-shield is connected to the vacuum boundary. For the other part of the TBM-set, the TBM-body, there is no constraint on the selected code, and the manufacturer can select an appropriate construction code to apply to design and fabrication. The KO TBM team has considered which code is appropriate for the TBM-body; one candidate is the ASME Code. The advantage of choosing ASME is its suitability to the domestic situation: in domestic nuclear plants, the ASME or KEPIC code is used for regulatory requirements, and on this basis a domestic fusion plant regulatory framework could be prepared. In this paper, the construction code for the TBM-body was determined within the ASME BPVC. To determine the code, the structure of the ASME BPVC was introduced, the TBM-body was classified according to the ITER criteria, and the operating conditions of the TBM-body, including creep and irradiation effects, were considered.

  10. Welding of NPT-Stamped Vessel in Accordance with ASME Code

    Institute of Scientific and Technical Information of China (English)

    吴佳

    2014-01-01

    Based on the ASME Boiler and Pressure Vessel Code, this paper explains how to conduct welding operations on NPT-stamped vessels that are certified in accordance with the ASME Code.

  11. 46 CFR 57.02-2 - Adoption of section IX of the ASME Code.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 2 2010-10-01 2010-10-01 false Adoption of section IX of the ASME Code. 57.02-2 Section... AND BRAZING General Requirements § 57.02-2 Adoption of section IX of the ASME Code. (a) The... accordance with section IX of the ASME (American Society of Mechanical Engineers) Code, as limited,...

  12. PHASE I MATERIALS PROPERTY DATABASE DEVELOPMENT FOR ASME CODES AND STANDARDS

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Weiju [ORNL; Lin, Lianshan [ORNL

    2013-01-01

    To support the ASME Boiler and Pressure Vessel Code and Standards (BPVC) in the modern information era, development of a web-based materials property database has been initiated under the supervision of the ASME Committee on Materials. To achieve efficiency, the project draws heavily upon experience from development of the Gen IV Materials Handbook and the Nuclear System Materials Handbook. The effort is divided into two phases: Phase I is planned to deliver a materials data file warehouse that offers a depository for various files containing raw data and background information, and Phase II will provide a relational digital database with advanced features facilitating digital data processing and management. Population of the database will start with materials property data for nuclear applications and expand to data covering the entire ASME Codes and Standards, including the piping codes, as the database structure is continuously optimized. The ultimate goal of the effort is to establish a sound cyber infrastructure that supports ASME Codes and Standards development and maintenance.

  13. The First ASME Code Stamped Cryomodule at SNS

    Energy Technology Data Exchange (ETDEWEB)

    Howell, M P; Crofford, M T; Douglas, D L; Kim, S -H; Steward, S T; Strong, W H; Afanador, R; Hannah, B S; Saunders, J

    2012-07-01

    The first spare cryomodule for the Spallation Neutron Source (SNS) has been designed, fabricated, and tested by SNS personnel. The design approach for this cryomodule was to hold critical design features identical to the original design, such as bayonet positions, coupler positions, cold mass assembly, and overall footprint. However, this is the first SNS cryomodule that meets the pressure requirements put forth in 10 CFR 851, Worker Safety and Health Program. The most significant difference is that Section VIII of the ASME Boiler and Pressure Vessel Code was applied to the vacuum vessel of this cryomodule. Applying the pressure code to the helium vessels within the cryomodule was considered; however, it was determined to be schedule prohibitive because it required a code case for materials that are not currently covered by the code. Good engineering practice was applied to the internal components to verify the quality and integrity of the entire cryomodule. The design of the cryomodule, the fabrication effort, and cryogenic test results are reported in this paper.

  14. ASME Code requirements for multi-canister overpack design and fabrication

    Energy Technology Data Exchange (ETDEWEB)

    SMITH, K.E.

    1998-11-03

    The baseline requirements for the design and fabrication of the MCO include the application of the technical requirements of the ASME Code, Section III, Subsection NB for containment and Section III, Subsection NG for criticality control. ASME Code administrative requirements, which have not historically been applied at the Hanford site and which have not been required by the U.S. Nuclear Regulatory Commission (NRC) for licensed spent fuel casks/canisters, were not invoked for the MCO. As a result of recommendations made by an ASME Code consultant in response to DNFSB staff concerns regarding ASME Code application, the SNF Project will make the following modifications: issue an ASME Code Design Specification and Design Report, certified by a Registered Professional Engineer; require the MCO fabricator to hold ASME Section III or Section VIII, Division 2 accreditation; and use ASME Authorized Inspectors for MCO fabrication. Incorporation of these modifications will ensure that the MCO is designed and fabricated in accordance with the ASME Code. Code stamping has not been a requirement at the Hanford site, nor for NRC-licensed spent fuel casks/canisters, but will be considered if determined to be economically justified.

  15. Significant issues and changes for ANSI/ASME OM-1 1981, part 1, ASME OMc code-1994, and ASME OM Code-1995, Appendix I, inservice testing of pressure relief devices in light water reactor power plants

    Energy Technology Data Exchange (ETDEWEB)

    Seniuk, P.J.

    1996-12-01

    This paper identifies significant changes to ANSI/ASME OM-1 1981, Part 1, ASME OMc Code-1994, and ASME OM Code-1995, Appendix I, "Inservice Testing of Pressure Relief Devices in Light-Water Reactor Power Plants". The paper describes changes to different Code editions and presents insights into the direction of the code committee and selected topics to be considered by the ASME O&M Working Group on pressure relief devices. These topics include scope issues, thermal relief valve issues, as-found and as-left set-pressure determinations, exclusions from testing, and cold setpoint bench testing. The purpose of this paper is to describe some significant issues being addressed by the O&M Working Group on Pressure Relief Devices (OM-1). The writer is currently the chair of OM-1, and the statements expressed herein represent his personal opinion.

  16. Assessment of ASME code examinations on regenerative, letdown and residual heat removal heat exchangers

    Energy Technology Data Exchange (ETDEWEB)

    Gosselin, Stephen R.; Cumblidge, Stephen E.; Anderson, Michael T.; Simonen, Fredric A.; Tinsley, G. A.; Lydell, B.; Doctor, Steven R.

    2005-07-01

    Inservice inspection requirements for pressure-retaining welds in the regenerative, letdown, and residual heat removal heat exchangers are prescribed in Section XI, Articles IWB and IWC, of the ASME Boiler and Pressure Vessel Code. Accordingly, volumetric and/or surface examinations are performed on heat exchanger shell, head, nozzle-to-head, and nozzle-to-shell welds. Inspection difficulties associated with the implementation of these Code-required examinations have forced operating nuclear power plants to seek relief from the U.S. Nuclear Regulatory Commission. These relief requests are generally concerned with metallurgy, geometry, accessibility, and radiation burden. Over 60% of licensee requests to the NRC identify significant radiation exposure burden as the principal reason for relief from the ASME Code examinations on regenerative heat exchangers. For the residual heat removal heat exchangers, 90% of the relief requests are associated with geometry and accessibility concerns. Pacific Northwest National Laboratory was funded by the NRC Office of Nuclear Regulatory Research to review current practice with regard to volumetric and/or surface examinations of shell welds of letdown heat exchangers, regenerative heat exchangers, and residual (decay) heat removal heat exchangers. Design, operating, and common preventive maintenance practices, and potential degradation mechanisms, are reviewed. A detailed survey of domestic and international PWR-specific operating experience was performed to identify pressure boundary failures (or lack of failures) in each heat exchanger type and NSSS design. The service data survey was based on the PIPExp® database and covers PWR plants worldwide for the period 1970-2004. Finally, a risk assessment of the current ASME Code inspection requirements for residual heat removal, letdown, and regenerative heat exchangers is performed. The results are then reviewed to discuss the examinations relative to plant safety and...

  17. ASME code considerations for the compact heat exchanger

    Energy Technology Data Exchange (ETDEWEB)

    Nestell, James [MPR Associates Inc., Alexandria, VA (United States); Sham, Sam [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-08-31

    robustness. Classic shell and tube designs will be large and costly, and may only be appropriate in steam generator service in the SHX where boiling inside the tubes occurs. For other energy conversion systems, all of these features can be met in a compact heat exchanger design. This report will examine some of the ASME Code issues that will need to be addressed to allow use of a Code-qualified compact heat exchanger in IHX or SHX nuclear service. Most effort will focus on the IHX, since the safety-related (Class A) design rules are more extensive than those for important-to-safety (Class B) or commercial rules that are relevant to the SHX.

  18. 76 FR 11191 - Hazardous Materials: Adoption of ASME Code Section XII and the National Board Inspection Code

    Science.gov (United States)

    2011-03-01

    ... Hazardous Materials: Adoption of ASME Code Section XII and the National Board Inspection Code AGENCY... Pressure Vessel Code, Section XII (2010 Edition) and the National Board of Boiler and Pressure Vessel Inspectors' National Board Inspection Code (2007 Edition). Further, PHMSA is extending the comment period...

  19. Activated sludge models ASM1, ASM2, ASM2d and ASM3

    DEFF Research Database (Denmark)

    Henze, Mogens; Gujer, W.; Mino, T.;

    This book has been produced to give a total overview of the Activated Sludge Model (ASM) family at the start of 2000 and to give the reader easy access to the different models in their original versions. It thus presents ASM1, ASM2, ASM2d and ASM3 together for the first time. Modelling of activated sludge processes has become a common part of the design and operation of wastewater treatment plants. Today models are being used in design, control, teaching and research. Contents: ASM3: Introduction, Comparison of ASM1 and ASM3, ASM3: Definition of compounds in the model, ASM3: Definition of processes in the model, ASM3: Stoichiometry, ASM3: Kinetics, Limitations of ASM3, Aspects of application of ASM3, ASM3C: A carbon-based model, Conclusion. ASM2d: Introduction, Conceptual approach, ASM2d, Typical wastewater characteristics and kinetic and stoichiometric constants, Limitations, Conclusion. ASM2...
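
    The ASM family expresses process rates as Monod-type kinetic expressions multiplied into a stoichiometric matrix. A minimal sketch of one such rate, the aerobic growth of heterotrophs from ASM1, follows; the parameter values are typical textbook defaults used only for illustration, not values from this book.

```python
# One ASM1 process rate: aerobic growth of heterotrophic biomass,
# rho = mu_H * (S_S/(K_S+S_S)) * (S_O/(K_OH+S_O)) * X_BH,
# a product of Monod switching functions and the biomass concentration.

def heterotroph_growth_rate(S_S, S_O, X_BH,
                            mu_H=6.0,    # max specific growth rate, 1/d (assumed)
                            K_S=20.0,    # substrate half-saturation, g COD/m3 (assumed)
                            K_OH=0.2):   # oxygen half-saturation, g O2/m3 (assumed)
    """Monod-type rate expression, g COD biomass formed per m3 per day."""
    return mu_H * (S_S / (K_S + S_S)) * (S_O / (K_OH + S_O)) * X_BH

# At high substrate and oxygen the rate approaches mu_H * X_BH;
# with no substrate it is zero.
print(heterotroph_growth_rate(S_S=2000.0, S_O=8.0, X_BH=1000.0))
print(heterotroph_growth_rate(S_S=0.0, S_O=8.0, X_BH=1000.0))
```

    The full models stack many such rates (nitrification, hydrolysis, decay) against a stoichiometric matrix to form the ODE system solved by wastewater treatment simulators.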

  20. Technical justification for ASME code section xi crack detection by visual examination

    Energy Technology Data Exchange (ETDEWEB)

    Nickell, R.E. [Applied Science and Technology, Poway, CA (United States); Rashid, Y.R. [ANATECH Corp., San Diego (United States)

    2001-07-01

    A critical technical element of nuclear power plant license renewal in the United States is the demonstration that the effects of aging do not compromise the intended safety function(s) of a system, structure, or component during the extended term of operation. The demonstration may take either of two forms. First, it can be shown that the design basis for the system, structure, or component is sufficiently robust that the aging effects have been insignificant through the current license term, and will continue to be insignificant through the extended term. Alternatively, it can be shown that, while the aging effects may be potentially significant, those effects can be managed and functionality maintained by defined programmatic activities during the extended term of operation. The first of the two approaches is generally provided by the construction basis, such as construction in accordance with the ASME Code Section III and other consensus codes and standards. The second of the two approaches is often provided by periodic inservice inspection and testing, in accordance with the ASME Code Section XI. The purpose of the ASME Section XI inspections and tests is to assure that systems, components, and structures are fit for continued service until the next scheduled inspection or test. The purpose of this paper is to document the effectiveness of the current ASME Code Section XI visual examination procedures in detecting the effects of aging for systems, structures, and components that are tolerant of mature cracks. (author)

  1. 46 CFR 53.01-3 - Adoption of section IV of the ASME Boiler and Pressure Vessel Code.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 2 2010-10-01 2010-10-01 false Adoption of section IV of the ASME Boiler and Pressure...) MARINE ENGINEERING HEATING BOILERS General Requirements § 53.01-3 Adoption of section IV of the ASME Boiler and Pressure Vessel Code. (a) Heating boilers shall be designed, constructed, inspected,...

  2. 46 CFR 52.01-2 - Adoption of section I of the ASME Boiler and Pressure Vessel Code.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 2 2010-10-01 2010-10-01 false Adoption of section I of the ASME Boiler and Pressure...) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-2 Adoption of section I of the ASME Boiler and Pressure Vessel Code. (a) Main power boilers and auxiliary boilers shall be designed,...

  3. 77 FR 3073 - American Society of Mechanical Engineers (ASME) Codes and New and Revised ASME Code Cases...

    Science.gov (United States)

    2012-01-23

    ... Maintenance of Nuclear Power Plants (OM Code). The final rule also incorporated by reference (with conditions... described in the GALL report, or propose alternatives (exceptions) for the NRC to review as part of a plant... acceptable approach for aging management--through inservice inspection--of PWR nickel-alloy upper vessel head...

  4. Appropriate nominal stresses for use with ASME Code pressure-loading stress indices for nozzles

    Energy Technology Data Exchange (ETDEWEB)

    Rodabaugh, E.C.

    1976-06-01

    This program is part of a cooperative effort with industry to develop and verify analytical methods for assessing the safety of nuclear pressure-vessel and piping-system design. The study of nominal stresses and stress indices described is part of a continuing study of design rules for nozzles in pressure vessels being coordinated by the PVRC Subcommittee on Reinforced Openings and External Loadings. Results from these studies are used by appropriate ASME Code groups in drafting new and improved design rules.

  5. Draft ASME code case on ductile cast iron for transport packaging

    Energy Technology Data Exchange (ETDEWEB)

    Saegusa, T. [Central Research Inst. of Electric Power Industry, Abiko (Japan); Arai, T. [Central Research Inst. of Electric Power Industry, Yokosuka (Japan); Hirose, M. [Nuclear Fuel Transport Co., Ltd., Tokyo (Japan); Kobayashi, T. [Nippon Chuzo, Kawasaki (Japan); Tezuka, Y. [Mitsubishi Materials Co., Tokyo (Japan); Urabe, N. [Kokan Keisoku K. K., Kawasaki (Japan); Hueggenberg, R. [GNB, Essen (Germany)

    2004-07-01

    The current Rules for Construction of "Containment Systems for Storage and Transport Packagings of Spent Nuclear Fuel and High Level Radioactive Material and Waste" (Division 3 of Section III of the ASME Code, 2001 Edition) do not include ductile cast iron in the list of materials permitted for use. The Rules specify required fracture toughness values of ferritic steel material for nominal wall thicknesses of 5/8 to 12 inches (16 to 305 mm). A new rule is required for ductile cast iron transport packagings whose wall thickness is greater than 12 inches (305 mm).

  6. ASM Based Synthesis of Handwritten Arabic Text Pages.

    Science.gov (United States)

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif; Ghoneim, Ahmed

    2015-01-01

    Document analysis tasks such as text recognition, word spotting, and segmentation are highly dependent on comprehensive and suitable databases for training and validation, but generating such databases is expensive in terms of labor and time. In fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever insufficient natural ground-truthed data is available.
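
    The core ASM idea used for character synthesis, reconstructing a landmark shape as the mean shape plus weighted deformation modes (x = x_mean + P·b), can be sketched in a few lines. The tiny 4-landmark "character" and single deformation mode below are invented for illustration; the paper's models are trained on 28046 real online samples.

```python
# Hedged sketch: reconstruct a landmark shape from ASM parameters.
# A shape is a flat vector (x0, y0, x1, y1, ...); each deformation mode
# is a vector of the same length, scaled by its parameter b_i.

def asm_shape(mean, modes, b):
    """Return mean + sum_i b[i] * modes[i] as a new landmark vector."""
    out = list(mean)
    for weight, mode in zip(b, modes):
        out = [x + weight * m for x, m in zip(out, mode)]
    return out

# Mean shape: 4 landmarks forming a unit square (invented example).
mean = [0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0]
# One deformation mode: a horizontal "slant" of the two upper landmarks.
slant = [0.0, 0.0, 0.0, 0.0, 0.3, 0.0, 0.3, 0.0]

print(asm_shape(mean, [slant], [0.0]))  # b = 0 reproduces the mean shape
print(asm_shape(mean, [slant], [1.0]))  # slanted variant
```

    In the paper's pipeline such per-character shapes are composed into words and pages and then smoothed by B-spline interpolation before rendering.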

  7. ASM Based Synthesis of Handwritten Arabic Text Pages

    Science.gov (United States)

    Al-Hamadi, Ayoub; Elzobi, Moftah; El-etriby, Sherif; Ghoneim, Ahmed

    2015-01-01

    Document analysis tasks such as text recognition, word spotting, and segmentation are highly dependent on comprehensive and suitable databases for training and validation, but generating such databases is expensive in terms of labor and time. In fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever insufficient natural ground-truthed data is available. PMID:26295059

  8. 46 CFR 54.01-2 - Adoption of division 1 of section VIII of the ASME Boiler and Pressure Vessel Code.

    Science.gov (United States)

    2010-10-01

    ... Boiler and Pressure Vessel Code. 54.01-2 Section 54.01-2 Shipping COAST GUARD, DEPARTMENT OF HOMELAND... division 1 of section VIII of the ASME Boiler and Pressure Vessel Code. (a) Pressure vessels shall be designed, constructed, and inspected in accordance with section VIII of the ASME Boiler and Pressure...

  9. ASME power test code ptc 4.1 for steam generators; Codigo de pruebas de potencia ASME ptc 4.1 para generadores de vapor

    Energy Technology Data Exchange (ETDEWEB)

    Plauchu Alcantara, Jorge Alberto [Plauchu Consultores, Morelia, Michoacan (Mexico)

    2001-07-01

    This presentation is oriented towards those with experience in equipment design and specification, plant projects, factory and field testing, operation, or results analysis. An important fraction of the national energy supply, approximately 13%, is applied to steam generation across different branches of industrial activity, in the electric utility industry, and in the commercial and services sector. The development of the national energy efficiency programs confirms this, as important projects, some with USAID support, are dedicated to this use of energy. The measurement of the energy utilization, or efficiency, of steam generators (boilers) is made by applying a procedure agreed upon by the parties, and the most widely accepted and best known in Mexico and internationally is the ASME Power Test Code PTC 4.1 for Steam Generators. The purpose and formality of the determination of efficiency and steam generation capacity (performance behavior, basic thermal regime, or fulfillment of guarantees) radically changes the required strictness of adherence to PTC 4.1. This definition will determine the importance of the selected test method, the agreed deviations and exceptions, the influence of precision and measurement errors, the consideration of auxiliary equipment, etc. An incorrect interpretation or application of the Test Code has led, and will lead, to unreliable results and decisions.
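
    The heat-loss (indirect) method commonly associated with PTC 4.1 reduces to simple arithmetic: efficiency is 100% minus the sum of measured losses, each expressed as a percent of fuel heat input. A minimal sketch follows; the loss categories and values are invented for illustration and are not from the code.

```python
# Hedged sketch of the indirect (heat-loss) boiler efficiency calculation.
# Each loss is already expressed as a percentage of the fuel heat input.

def efficiency_heat_loss_method(losses_percent):
    """Boiler efficiency (%) = 100 - sum of individual losses (%)."""
    return 100.0 - sum(losses_percent)

# Illustrative loss breakdown (assumed numbers, % of heat input):
losses = {
    "dry flue gas": 5.2,
    "moisture in fuel and from H2 combustion": 4.1,
    "unburned combustible": 0.6,
    "radiation and convection": 0.7,
    "unmeasured / margin": 0.4,
}
print(efficiency_heat_loss_method(losses.values()))  # 89.0
```

    The direct method (useful heat output divided by fuel heat input) should agree with this within the measurement uncertainty; the choice between methods, and the treatment of each loss term, is exactly what the test-code negotiations described above determine.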

  10. Considerations on fatigue stress range calculations in nuclear power plants using on-line monitoring systems and the ASME Code

    Energy Technology Data Exchange (ETDEWEB)

    Cicero, R., E-mail: ciceror@unican.e [INESCO INGENIEROS S.L., Santander (Spain); Departamento de Ciencia e Ingenieria del Terreno y los Materiales, Universidad de Cantabria, Santander (Spain); Cicero, S. [Departamento de Ciencia e Ingenieria del Terreno y los Materiales, Universidad de Cantabria, Santander (Spain); Gorrochategui, I. [Centro Tecnologico de Componentes, Santander (Spain); Lacalle, R. [INESCO INGENIEROS S.L., Santander (Spain); Departamento de Ciencia e Ingenieria del Terreno y los Materiales, Universidad de Cantabria, Santander (Spain)

    2010-01-15

    Nuclear power plants are generally designed and inspected according to the ASME Code. This code indicates stress intensity (S_INT) as the parameter to be used in the stress analysis of components. One particularity of S_INT is that it always takes positive values, independently of the nature of the stress (tensile or compressive). This circumstance is relevant to the Fatigue Monitoring Systems used in nuclear power plants because of the manner in which the different variable stresses are combined to obtain the final total stress range. This paper describes some situations derived from the application of the ASME Code, shows different ways of dealing with them, and illustrates their influence on the evaluation of the fatigue usage factor through a case study.
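The sign-loss issue the paper addresses can be reproduced in a few lines. The sketch below assumes principal stresses are already available; the two load states are hypothetical.

```python
import numpy as np

# ASME stress intensity S_INT is twice the maximum shear stress, i.e. the
# largest difference between principal stresses -- always non-negative.

def stress_intensity(sigma):
    """S_INT from a vector of three principal stresses."""
    s1, s2, s3 = sorted(sigma, reverse=True)
    return max(abs(s1 - s2), abs(s2 - s3), abs(s1 - s3))

# Two hypothetical load states: uniaxial +100 MPa, then uniaxial -100 MPa.
state_a = np.array([100.0, 0.0, 0.0])
state_b = np.array([-100.0, 0.0, 0.0])

# Correct range: S_INT of the stress *difference* between the two states.
range_correct = stress_intensity(state_a - state_b)

# Naive range: difference of the (always positive) S_INT values.
# Because S_INT discards the sign, this badly underestimates the range.
range_naive = abs(stress_intensity(state_a) - stress_intensity(state_b))
```

Here the correct range is 200 MPa while the naive subtraction of the two S_INT values gives 0, which is exactly the kind of situation a monitoring system must handle.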

  11. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-03-27

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points into a fan-shaped region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure, and a statistical shape prior model is incorporated into the segmentation. To keep the shape smooth, a smoothness constraint is applied to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 digitized lung radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
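The least-Mahalanobis-distance update criterion mentioned above can be sketched as follows. The profile length, training statistics, and candidates are all hypothetical; in a real ASM they come from gray-level profiles sampled around each landmark in the training set.

```python
import numpy as np

# Pick the candidate gray-level profile closest to the training mean
# under the training covariance (least Mahalanobis distance).

def mahalanobis(g, mean, cov_inv):
    d = g - mean
    return float(d @ cov_inv @ d)

def best_candidate(candidates, mean, cov):
    """Index of the candidate profile with the least Mahalanobis distance."""
    cov_inv = np.linalg.inv(cov)
    dists = [mahalanobis(g, mean, cov_inv) for g in candidates]
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
mean = np.zeros(5)           # hypothetical training mean profile
cov = np.eye(5)              # hypothetical training covariance
candidates = [rng.normal(size=5) for _ in range(10)] + [mean.copy()]
idx = best_candidate(candidates, mean, cov)   # the exact-mean profile wins
```

The paper's contribution replaces the search *region* (fan-shaped instead of along the normal) and drives it with a minimal-path model, but this selection criterion is the conventional ASM baseline it builds on.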

  12. A minimal path searching approach for active shape model (ASM)-based segmentation of the lung

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-02-01

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points into a fan-shaped region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure, and a statistical shape prior model is incorporated into the segmentation. To keep the shape smooth, a smoothness constraint is applied to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 digitized lung radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  13. 75 FR 80765 - Hazardous Materials: Adoption of ASME Code Section XII and the National Board Inspection Code

    Science.gov (United States)

    2010-12-23

    ... submitting the document (or signing the document, if submitted on behalf of an association, business, labor... membership professional organization that enables collaboration, knowledge-sharing, and skill development across all engineering disciplines. ASME is recognized globally for its leadership in providing the...

  14. DEVELOPMENT OF ASME SECTION X CODE RULES FOR HIGH PRESSURE COMPOSITE HYDROGEN PRESSURE VESSELS WITH NON-LOAD SHARING LINERS

    Energy Technology Data Exchange (ETDEWEB)

    Rawls, G.; Newhouse, N.; Rana, M.; Shelley, B.; Gorman, M.

    2010-04-13

    The Boiler and Pressure Vessel Project Team on Hydrogen Tanks was formed in 2004 to develop Code rules to address the various needs that had been identified for the design and construction of hydrogen storage vessels rated up to 15,000 psi. One of these needs was the development of Code rules for high pressure composite vessels with non-load sharing liners for stationary applications. In 2009, ASME approved the new Appendix 8 to the Section X Code, which contains the rules for these vessels. These vessels are designated as Class III vessels, with design pressures ranging from 20.7 MPa (3,000 psi) to 103.4 MPa (15,000 psi) and a maximum allowable outside liner diameter of 2.54 m (100 inches). The maximum design life of these vessels is limited to 20 years. Design, fabrication, and examination requirements have been specified, including Acoustic Emission testing at time of manufacture. The Code rules include design qualification testing of prototype vessels. Qualification includes proof, expansion, burst, cyclic fatigue, creep, flaw, permeability, torque, penetration, and environmental testing.
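A quick arithmetic check confirms that the quoted metric and US-customary pressure limits are the same numbers (assuming the standard psi-to-MPa conversion factor):

```python
# Unit check of the Appendix 8 design-pressure range quoted above.
PSI_TO_MPA = 6.894757e-3   # 1 psi in MPa (standard conversion)

low = 3_000 * PSI_TO_MPA     # lower design-pressure limit, ~20.7 MPa
high = 15_000 * PSI_TO_MPA   # upper design-pressure limit, ~103.4 MPa
```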

  15. Differences in NDT Requirements between the Manufacturing and Operating Periods of Nuclear Equipment in the ASME Code

    Institute of Scientific and Technical Information of China (English)

    葛亮; 蔡家藩; 聂勇; 梁平

    2016-01-01

    Taking the ASME Code as an example, this paper describes the nondestructive testing (NDT) requirements for nuclear equipment during the manufacturing and operating periods, compares and analyzes the corresponding requirements on inspection scope, inspection technique, and acceptance criteria in the two periods, and summarizes the differences found.

  16. Elastic-plastic analysis of the PVRC burst disk tests with comparison to the ASME code -- Primary stress limits

    Energy Technology Data Exchange (ETDEWEB)

    Jones, D.P.; Holliday, J.E.

    1999-02-01

    This paper provides a comparison between finite element analysis results and test data from the Pressure Vessel Research Council (PVRC) burst disk program. Testing sponsored by the PVRC over 20 years ago was done by pressurizing circular flat disks made from three different materials until failure by bursting. The purpose of this re-analysis is to investigate the use of finite element analysis (FEA) to assess the primary stress limits of the ASME Boiler and Pressure Vessel Code (1998) and to qualify the use of elastic-plastic FEA (EP-FEA) for limit load calculations. The three materials tested represent the range of strength and ductility found in modern pressure vessel construction: a low-strength, high-ductility material; a medium-strength, medium-ductility material; and a high-strength, low-ductility low-alloy material. Results of elastic and EP-FEA are compared to test data. Stresses from the elastic analyses are linearized for comparison of Code primary stress limits to test results. Elastic-plastic analyses are done using both best-estimate and elastic-perfectly plastic (EPP) stress-strain curves. Both large strain-large displacement (LSLD) and small strain-small displacement (SSSD) assumptions are used with the EP-FEA. Analysis results are compared to test results to evaluate the various analysis methods, models, and assumptions as applied to the bursting of thin disks.
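The linearization step mentioned above (reducing a through-thickness FEA stress profile to membrane and bending components for comparison against primary stress limits) can be sketched as follows. The wall thickness and stress profile are hypothetical; a real evaluation works on stress classification lines extracted from the FEA model.

```python
import numpy as np

def trapz(y, x):
    """Plain trapezoidal integration (avoids version-specific numpy names)."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

def linearize(x, sigma, t):
    """Membrane and surface-bending components of a through-wall profile.

    x runs from -t/2 to +t/2 across the wall; sigma is the stress there.
    membrane = average stress; bending = equivalent linear surface stress.
    """
    membrane = trapz(sigma, x) / t
    bending = 6.0 / t**2 * trapz(sigma * x, x)
    return membrane, bending

t = 0.02                                   # wall thickness [m], hypothetical
x = np.linspace(-t / 2, t / 2, 201)
sigma = 80.0 + 40.0 * (2 * x / t)          # linear profile: 80 MPa membrane + 40 MPa bending
pm, pb = linearize(x, sigma, t)
```

For this already-linear profile the routine recovers the 80 MPa membrane and 40 MPa bending components exactly (to integration tolerance); for a real elastic FEA profile the same integrals define the Pm and Pm+Pb quantities the Code limits apply to.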

  17. News from the Library: A new key reference work for the engineer: ASME's Boiler and Pressure Vessel Code at the CERN Library

    CERN Multimedia

    CERN Library

    2011-01-01

    The Library aims to offer a range of constantly updated reference books covering all areas of CERN activity. A recent addition to our collections strengthens our offer in the Engineering field.   The CERN Library now holds a copy of the complete ASME Boiler and Pressure Vessel Code, 2010 edition. This code establishes rules of safety governing the design, fabrication, and inspection of boilers, pressure vessels, and nuclear power plant components during construction. This document is considered worldwide as a reference for mechanical design and is therefore important for the CERN community. The Code, published by ASME (American Society of Mechanical Engineers), is kept current by the Boiler and Pressure Vessel Committee, a volunteer group of more than 950 engineers worldwide. The Committee meets regularly to consider requests for interpretations and revisions, and to develop new rules. The CERN Library receives updates and includes them in the volumes until the next edition, which is expected to ...

  18. The ASM-NSF Biology Scholars Program: An Evidence-Based Model for Faculty Development.

    Science.gov (United States)

    Chang, Amy L; Pribbenow, Christine M

    2016-05-01

    The American Society for Microbiology (ASM) established its ASM-NSF (National Science Foundation) Biology Scholars Program (BSP) to promote undergraduate education reform by 1) supporting biologists to implement evidence-based teaching practices, 2) engaging life science professional societies to facilitate biologists' leadership in scholarly teaching within the discipline, and 3) participating in a teaching community that fosters disciplinary-level science, technology, engineering, and mathematics (STEM) reform. Since 2005, the program has utilized year-long residency training to provide a continuum of learning and practice centered on principles from the scholarship of teaching and learning (SoTL) to more than 270 participants ("scholars") from biology and multiple other disciplines. Additionally, the program has recruited 11 life science professional societies to support faculty development in SoTL and discipline-based education research (DBER). To identify the BSP's long-term outcomes and impacts, ASM engaged an external evaluator to conduct a study of the program's 2010-2014 scholars (n = 127) and society partners. The study methods included online surveys, focus groups, participant observation, and analysis of various documents. Study participants indicate that the program achieved its proposed goals relative to scholarship, professional society impact, leadership, community, and faculty professional development. Although participants also identified barriers that hindered elements of their BSP participation, findings suggest that the program was essential to their development as faculty and provides evidence of the BSP as a model for other societies seeking to advance undergraduate science education reform. The BSP is the longest-standing faculty development program sponsored by a collective group of life science societies. This collaboration promotes success across a fragmented system of more than 80 societies representing the life sciences and helps

  19. Characteristics and Scope of Application of the Performance Test Codes ASME PTC 6 and ASME PTC 46

    Institute of Scientific and Technical Information of China (English)

    王兴平

    2003-01-01

    This paper analyzes and discusses the purpose, methods, and boundary conditions of the American Society of Mechanical Engineers' steam turbine and overall plant performance test codes, ASME PTC 6 and ASME PTC 46, with emphasis on the characteristics and methods of ASME PTC 46, and offers suggestions for practical application in light of domestic conditions.

  20. Extension of ASME VIII Division 1 design limits

    Energy Technology Data Exchange (ETDEWEB)

    Marriott, D.L. [Stress Engineering Services, Inc., Cincinnati, OH (United States). Consumer Products Division

    1995-12-01

    ASME Subcommittee 2 on Materials presented a series of questions to PVRC regarding the acceptability of using the criteria of ASME Section II, Part D, Appendix 1 for extending design limits for AISI 304 stainless steel beyond 1,500 F to 2,000 F and for Alloy 800 HT from 1,650 F to 1,800 F. This paper describes a project supported by PVRC to find an answer to this question. The project consisted of three parts. The first was a survey to determine the intent behind the wording of the ASME criteria, in order to extrapolate the methods for setting design limits to higher temperatures. The second was a demonstration of a methodology for developing very high temperature limits, using a set of creep data for Alloy 800 HT. The third was a parametric study to evaluate the feasibility of using the minimum-creep-rate-based deformation criterion of the ASME Code to set strain-related limits on materials showing predominantly tertiary creep. Based on this study, an alternative to the method currently employed by ASME in Appendix 1 has been proposed for setting high temperature design limits, based on a consistent margin on time to failure. This method has been presented to ASME for possible adoption. In addition, this investigation revealed some more detailed issues involving cyclic loading at very high temperature; it was recommended that ASME examine these further. These issues are summarized briefly in this paper.
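The "consistent margin on time to failure" idea can be illustrated numerically. The sketch below assumes, purely for illustration, a log-log linear stress-rupture correlation at a single temperature; the data points and margin are invented and are not from the PVRC study or any Code table.

```python
import numpy as np

# Hypothetical isothermal stress-rupture data (invented for illustration).
stress = np.array([10.0, 20.0, 40.0, 80.0])      # MPa
t_rupture = np.array([3e5, 4e4, 5e3, 6e2])       # hours to rupture

def allowable_stress(design_life_h, margin):
    """Stress whose predicted rupture life equals margin * design life.

    Interpolates log10(stress) against log10(time); the arrays are
    reversed so time increases, as np.interp requires.
    """
    target = np.log10(margin * design_life_h)
    logt = np.log10(t_rupture)[::-1]
    logs = np.log10(stress)[::-1]
    return 10 ** np.interp(target, logt, logs)

# Allowable stress for a 100,000 h design life with a 1.5x time margin.
s_allow = allowable_stress(design_life_h=100_000, margin=1.5)
```

The point of the proposed method is that the margin is applied consistently in the time direction, so shorter design lives automatically receive higher allowable stresses.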

  1. 3D automatic anatomy segmentation based on iterative graph-cut-ASM

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xinjian; Bagci, Ulas [Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Building 10 Room 1C515, Bethesda, Maryland 20892-1182 and Life Sciences Research Center, School of Life Sciences and Technology, Xidian University, Xi' an 710071 (China); Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Building 10 Room 1C515, Bethesda, Maryland 20892-1182 (United States)

    2011-08-15

    Purpose: This paper studies the feasibility of developing an automatic anatomy segmentation (AAS) system in clinical radiology and demonstrates its operation on clinical 3D images. Methods: The AAS system the authors are developing consists of two main parts: object recognition and object delineation. For recognition, a hierarchical 3D scale-based multiobject method is used, which incorporates intensity-weighted ball-scale (b-scale) information into the active shape model (ASM). For object delineation, an iterative graph-cut-ASM (IGCASM) algorithm is proposed, which effectively combines the rich statistical shape information embodied in the ASM with the globally optimal delineation capability of the GC method. The presented IGCASM algorithm is a 3D generalization of the 2D GC-ASM method that the authors proposed previously in Chen et al. [Proc. SPIE, 7259, 72590C1-72590C-8 (2009)]. The proposed methods are tested on two datasets: clinical abdominal CT scans of 20 patients (10 male and 10 female), and 11 foot magnetic resonance imaging (MRI) scans. The tests cover segmentation of four organs (liver, left and right kidneys, and spleen) and five foot bones (calcaneus, tibia, cuboid, talus, and navicular). The recognition and delineation accuracies were evaluated separately. The recognition accuracy was evaluated in terms of translation, rotation, and scale (size) error. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF, FPVF). The efficiency of the delineation method was also evaluated on an Intel Pentium IV PC with a 3.4 GHz CPU. Results: The recognition accuracies in terms of translation, rotation, and scale error are about 8 mm, 10 deg., and 0.03 over all organs, and about 3.5709 mm, 0.35 deg., and 0.025 over all foot bones, respectively. The accuracy of delineation over all organs for all subjects as expressed in TPVF and FPVF is 93.01% and 0.22%, and
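The TPVF/FPVF metrics quoted above can be computed from binary masks as sketched below. Normalizing FPVF by the ground-truth complement is one common convention and may differ in detail from the paper's; the masks are tiny hypothetical examples.

```python
import numpy as np

def tpvf_fpvf(seg, gt):
    """True/false positive volume fractions of a segmentation vs. ground truth.

    TPVF: fraction of the true object volume captured by the segmentation.
    FPVF: falsely labeled volume as a fraction of the ground-truth complement
          (one common convention).
    """
    seg, gt = seg.astype(bool), gt.astype(bool)
    tpvf = (seg & gt).sum() / gt.sum()
    fpvf = (seg & ~gt).sum() / (~gt).sum()
    return float(tpvf), float(fpvf)

# Hypothetical 10x10 masks: the segmentation is the truth shifted by one row.
gt = np.zeros((10, 10), dtype=bool); gt[2:8, 2:8] = True    # 36 voxels
seg = np.zeros((10, 10), dtype=bool); seg[3:9, 2:8] = True
tpvf, fpvf = tpvf_fpvf(seg, gt)
```

A perfect segmentation gives TPVF = 1 and FPVF = 0, which is why the paper reports 93.01% / 0.22% as a strong result.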

  2. Basic Principles of Applying the ASME Ⅷ-2 Code to Vessel Construction

    Institute of Scientific and Technical Information of China (English)

    于志刚; 董方亮

    2009-01-01

    In view of the practical needs of constructing pressure vessels to ASME Ⅷ-2, this paper introduces the basic principles of the ASME Ⅷ-2 Code, analyzes its basic elements, and summarizes key points of design and construction in accordance with ASME Ⅷ-2.

  3. Accelerator System Model (ASM) user manual with physics and engineering model documentation. ASM version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-07-01

    The Accelerator System Model (ASM) is a computer program developed to model proton radiofrequency accelerators and to carry out system-level trade studies. The ASM FORTRAN subroutines are incorporated into an intuitive graphical user interface which provides for the "construction" of the accelerator in a window on the computer screen. The interface is based on the Shell for Particle Accelerator Related Codes (SPARC) software technology written for the Macintosh operating system in the C programming language. This User Manual describes the operation and use of the ASM application within the SPARC interface. The Appendix provides a detailed description of the physics and engineering models used in ASM. ASM Version 1.0 is a joint project of G. H. Gillespie Associates, Inc. and the Accelerator Technology (AT) Division of the Los Alamos National Laboratory. Neither the ASM Version 1.0 software nor this ASM Documentation may be reproduced without the expressed written consent of both the Los Alamos National Laboratory and G. H. Gillespie Associates, Inc.

  4. Reaction invariant-based reduction of the activated sludge model ASM1 for batch applications

    DEFF Research Database (Denmark)

    Santa Cruz, Judith A.; Mussati, Sergio F.; Scenna, Nicolás J.

    2016-01-01

    Batch bioreactor models contain linear combinations of states that are unaffected by the reaction progress, i.e. so-called reaction invariants. The reaction invariant concept can be used to reduce the number of ordinary differential equations (ODEs) involved in batch bioreactor models. In this paper, a systematic methodology of model reduction based on this concept is applied to batch activated sludge processes described by the Activated Sludge Model No. 1 (ASM1) for carbon and nitrogen removal. The objective of the model reduction is to describe the exact dynamics of the states predicted by the original model with a lower number of ODEs. This leads to a reduction of the numerical complexity, as nonlinear ODEs are replaced by linear algebraic relationships predicting the exact dynamics of the original model.
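The reaction-invariant mechanism behind this reduction can be sketched on a toy stoichiometry (hypothetical, far smaller than ASM1): in a batch reactor with dx/dt = S^T rho(x), any vector w in the right null space of S makes w.x constant in time, so that combination can replace an ODE with an algebraic relation.

```python
import numpy as np

# Hypothetical 2-reaction, 3-component stoichiometric matrix (NOT ASM1).
S = np.array([[-1.0,  1.0, 0.5],     # reaction 1
              [ 0.0, -1.0, 1.0]])    # reaction 2

def invariant_basis(S, tol=1e-10):
    """Orthonormal basis of the right null space of S via SVD."""
    _, s, vt = np.linalg.svd(S)
    rank = int((s > tol).sum())
    return vt[rank:].T               # each column w satisfies S @ w ~ 0

W = invariant_basis(S)               # 3 components - 2 reactions = 1 invariant

def rho(x):                          # hypothetical first-order kinetics
    return np.array([0.3 * x[0], 0.2 * x[1]])

# Simulate the batch reactor with forward Euler and check invariance.
x = np.array([10.0, 2.0, 1.0])
inv0 = W.T @ x                       # invariant value at t = 0
for _ in range(1000):
    x = x + 0.01 * (S.T @ rho(x))
inv1 = W.T @ x                       # unchanged along the whole trajectory
```

Because d(W.x)/dt = (S W)^T rho = 0 regardless of the kinetics, each invariant removes one nonlinear ODE in favor of a linear algebraic relationship, which is the essence of the reduction applied here to ASM1.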

  5. Updating of ASME Nuclear Code Case N-201 to Accommodate the Needs of Metallic Core Support Structures for High Temperature Gas Cooled Reactors Currently in Development

    Energy Technology Data Exchange (ETDEWEB)

    Mit Basol; John F. Kielb; John F. MuHooly; Kobus Smit

    2007-05-02

    On September 29, 2005, ASME Standards Technology, LLC (ASME ST-LLC) executed a multi-year, cooperative agreement with the United States DOE for the Generation IV Reactor Materials project. The project's objective is to update and expand appropriate materials, construction, and design codes for application in future Generation IV nuclear reactor systems that operate at elevated temperatures. Task 4 was embarked upon in recognition of the large number of ongoing reactor designs utilizing high temperature technology. Since Code Case N-201 had not seen a significant revision since December 1994 (except for a minor revision in September 2006 changing the SA-336 forging reference for 304SS and 316SS to SA-965 in Tables 1.2(a) and 1.2(b), plus some minor editorial changes), identifying recommended updates to support current high temperature Core Support Structure (CSS) designs and potential new designs was important. As anticipated, the Task 4 effort identified a number of Code Case N-201 issues. Items requiring further consideration range from addressing apparent inconsistencies in definitions and certain material properties between CC N-201 and Subsection NH, to the inclusion of additional materials to give the designer more design flexibility. Task 4 developed a design parameter survey that requested input from the CSS designers of ongoing high temperature gas cooled reactor metallic core support designs. The responses to the survey gave Task 4 valuable input for identifying the design operating parameters and future needs of the CSS designers. Types of materials, metal temperatures, times of exposure, design pressures, design lives, and fluence levels were included in the Task 4 survey responses. The results of the survey are included in this report. This work shows that additional effort must be made to update Code Case N-201. Task 4 activities provide the framework for the Code Case N-201 update and for future work to provide input on materials. Candidate

  6. Replacement of radiography with ultrasonic phased array for feeder tubes in CANDU reactors using ASME code case N-659-2

    Energy Technology Data Exchange (ETDEWEB)

    Simmons, R.; Bower, Q.; Arseneau, S., E-mail: bsimmons@metalogicinspection.com, E-mail: qbower@metalogicinspection.com, E-mail: sarseneau@metalogicinspection.com [Metalogic Inspection Services, Edmonton, Alberta (Canada)

    2013-07-01

    In this paper we discuss phased array technology as a replacement for radiography on new construction projects in the nuclear industry, specifically through the implementation of ASME Code Case N-659-2 and MetaPhase phased array services. Phased array is not considered a new technique on in-service welds in the nuclear industry; however, it was unprecedented on new construction welds and required significant investment in regulatory approval (CNSC), technology research and development, and regulatory, client, and technician training for successful service implementation. This paper illustrates the abilities and limitations associated with replacing radiography with MetaPhase, as well as the substantial benefits relative to increased production, improved weld quality, enhanced safety, and overall project cost savings. (author)

  7. Activated sludge model (ASM) based modelling of membrane bioreactor (MBR) processes: a critical review with special regard to MBR specificities.

    Science.gov (United States)

    Fenu, A; Guglielmi, G; Jimenez, J; Spèrandio, M; Saroj, D; Lesjean, B; Brepols, C; Thoeye, C; Nopens, I

    2010-08-01

    Membrane bioreactors (MBRs) have been increasingly employed for municipal and industrial wastewater treatment in the last decade. The efforts for modelling of such wastewater treatment systems have always targeted either the biological processes (treatment quality target) as well as the various aspects of engineering (cost effective design and operation). The development of Activated Sludge Models (ASM) was an important evolution in the modelling of Conventional Activated Sludge (CAS) processes and their use is now very well established. However, although they were initially developed to describe CAS processes, they have simply been transferred and applied to MBR processes. Recent studies on MBR biological processes have reported several crucial specificities: medium to very high sludge retention times, high mixed liquor concentration, accumulation of soluble microbial products (SMP) rejected by the membrane filtration step, and high aeration rates for scouring purposes. These aspects raise the question as to what extent the ASM framework is applicable to MBR processes. Several studies highlighting some of the aforementioned issues are scattered through the literature. Hence, through a concise and structured overview of the past developments and current state-of-the-art in biological modelling of MBR, this review explores ASM-based modelling applied to MBR processes. The work aims to synthesize previous studies and differentiates between unmodified and modified applications of ASM to MBR. Particular emphasis is placed on influent fractionation, biokinetics, and soluble microbial products (SMPs)/exo-polymeric substances (EPS) modelling, and suggestions are put forward as to good modelling practice with regard to MBR modelling both for end-users and academia. A last section highlights shortcomings and future needs for improved biological modelling of MBR processes.

  8. Creep-Fatigue Damage Evaluation of a Model Reactor Vessel and Reactor Internals of Sodium Test Facility according to ASME-NH and RCC-MRx Codes

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Dong-Won; Lee, Hyeong-Yeon; Eoh, Jae-Hyuk; Son, Seok-Kwon; Kim, Jong-Bum; Jeong, Ji-Young [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    The objective of STELLA-2 is to support the specific design approval for the PGSFR through synthetic reviews of key safety issues and code validation through integral effect tests. Owing to high temperature operation of up to 550 °C in SFRs (and in the test facility), thermally induced creep-fatigue damage is very likely in components including the reactor vessel, reactor internals (interior structures), heat exchangers, and pipelines. In this study, the structural integrity of components such as the reactor vessel and internals of STELLA-2 has been evaluated against creep-fatigue failure at the concept-design step. As 2D analysis yields far too conservative results, a realistic 3D simulation was performed with commercial software. Design integrity against creep-fatigue failure during high temperature operation was evaluated for the reactor vessel and its internal structure. Both high temperature design codes (ASME-NH and RCC-MRx) were used for the evaluation, and the results were compared. All the results showed that the vessel as a whole is safely designed at the given operating conditions, while ASME-NH gives the more conservative evaluation.

  9. Towards a consensus-based biokinetic model for green microalgae - The ASM-A.

    Science.gov (United States)

    Wágner, Dorottya S; Valverde-Pérez, Borja; Sæbø, Mariann; Bregua de la Sotilla, Marta; Van Wagenen, Jonathan; Smets, Barth F; Plósz, Benedek Gy

    2016-10-15

    Cultivation of microalgae in open ponds and closed photobioreactors (PBRs) using wastewater resources offers an opportunity for biochemical nutrient recovery. Effective reactor system design and process control of PBRs requires process models. Several models with different complexities have been developed to predict microalgal growth. However, none of these models can effectively describe all the relevant processes when microalgal growth is coupled with nutrient removal and recovery from wastewaters. Here, we present a mathematical model developed to simulate green microalgal growth (ASM-A) using the systematic approach of the activated sludge modelling (ASM) framework. The process model - identified based on a literature review and using new experimental data - accounts for factors influencing photoautotrophic and heterotrophic microalgal growth, nutrient uptake and storage (i.e. Droop model) and decay of microalgae. Model parameters were estimated using laboratory-scale batch and sequenced batch experiments using the novel Latin Hypercube Sampling based Simplex (LHSS) method. The model was evaluated using independent data obtained in a 24-L PBR operated in sequenced batch mode. Identifiability of the model was assessed. The model can effectively describe microalgal biomass growth, ammonia and phosphate concentrations as well as the phosphorus storage using a set of average parameter values estimated with the experimental data. A statistical analysis of simulation and measured data suggests that culture history and substrate availability can introduce significant variability on parameter values for predicting the reaction rates for bulk nitrate and the intracellularly stored nitrogen state-variables, thereby requiring scenario specific model calibration. ASM-A was identified using standard cultivation medium and it can provide a platform for extensions accounting for factors influencing algal growth and nutrient storage using wastewater resources.
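The Droop (cell-quota) kinetics that ASM-A adopts for nutrient storage ("i.e. Droop model" above) can be sketched in a few lines; the parameter values below are illustrative and are not the calibrated ASM-A parameters.

```python
# Droop cell-quota growth kinetics: the specific growth rate rises toward
# mu_max as the internal nutrient quota q exceeds its subsistence value q0.

def droop_growth(mu_max, q0, q):
    """Specific growth rate [1/d] as a function of internal quota q."""
    return mu_max * (1.0 - q0 / q)

mu_max, q0 = 1.6, 0.05    # 1/d and gN/gBiomass -- hypothetical values
mu_low = droop_growth(mu_max, q0, 0.05)   # at subsistence quota: no growth
mu_high = droop_growth(mu_max, q0, 0.50)  # quota-replete: approaches mu_max
```

Decoupling uptake from growth this way is what lets the model describe luxury nutrient storage, the behavior exploited for nutrient recovery from wastewater.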

  10. The safety relief valve handbook design and use of process safety valves to ASME and International codes and standards

    CERN Document Server

    Hellemans, Marc

    2009-01-01

    The Safety Valve Handbook is a professional reference for design, process, instrumentation, plant and maintenance engineers who work with fluid flow and transportation systems in the process industries, which cover the chemical, oil and gas, water, paper and pulp, food and bio products and energy sectors. It meets the needs of engineers who have responsibilities for specifying, installing, inspecting or maintaining safety valves and flow control systems. It will also be an important reference for process safety and loss prevention engineers, environmental engineers, and plant and process designers who need to understand the operation of safety valves in a wider equipment or plant design context. No other publication is dedicated to safety valves or to the extensive codes and standards that govern their installation and use; a single source means users save time in searching for specific information about safety valves. The Safety Valve Handbook contains all of the vital technical and standards informat...

  11. Code Requirements for Using TOFD instead of RT for Equipment Constructed to ASME Ⅷ-1

    Institute of Scientific and Technical Information of China (English)

    张亚统

    2016-01-01

    Driven by market development and by the economics and quality of inspection, the TOFD (Time of Flight Diffraction) method has gradually been applied to the manufacture of ASME Code products, but past successes have essentially been based on ASME Code Case 2235. Since an ASME Code Case may be annulled at any time, how to use TOFD in place of RT in compliance with ASME Code requirements for products built to ASME Ⅷ-1 has become a question for many ASME certificate holders. Based on a comprehensive analysis and study of the ASME Code, this paper proposes a solution.

  12. Sdz asm 981.

    Science.gov (United States)

    Wellington, K; Spencer, C M

    2000-12-01

    SDZ ASM 981 is an anti-inflammatory macrolactam which binds with high affinity to macrophilin-12. The resulting complex inhibits calcineurin, thus blocking the synthesis of inflammatory cytokines. Twice-daily application of topical SDZ ASM 981 1% cream was effective in the treatment of atopic dermatitis in adults and children in clinical trials. Summarised results from 260 patients with atopic dermatitis indicate that the efficacy of SDZ ASM 981 is dose dependent. The highest concentration evaluated (1% cream) was not as effective as betamethasone valerate 1% cream in this 3-week trial. The efficacy of SDZ ASM 981 and clobetasol ointments, used under occlusion, did not differ significantly in 10 patients with chronic psoriasis. Likewise, SDZ ASM 981 0.6% and betamethasone valerate 1% creams were similarly effective in 66 patients with allergic contact dermatitis. Concentrations of SDZ ASM 981 in the blood during topical treatment were invariably below 2.1 microg/L. Oral SDZ ASM 981 20 mg or 30 mg twice daily was effective, in a dose-dependent manner, in reducing psoriasis in adults, with no evidence of adverse effects. SDZ ASM 981 was well tolerated in the available trials, exhibiting no potential for systemic adverse reactions and none of the atrophogenic potential commonly associated with corticosteroid treatment.

  13. Activated sludge model No. 2d, ASM2d

    DEFF Research Database (Denmark)

    Henze, M.

    1999-01-01

    The Activated Sludge Model No. 2d (ASM2d) presents a model for biological phosphorus removal with simultaneous nitrification-denitrification in activated sludge systems. ASM2d is based on ASM2 and is expanded to include the denitrifying activity of the phosphorus accumulating organisms (PAOs...

  14. Report on task assignment No. 3 for the Waste Package Project; Parts A & B, ASME pressure vessel codes review for waste package application; Part C, Library search for reliability/failure rates data on low temperature low pressure piping, containers, and casks with long design lives

    Energy Technology Data Exchange (ETDEWEB)

    Trabia, M.B.; Kiley, M.; Cardle, J.; Joseph, M.

    1991-07-01

    The Waste Package Project Research Team at UNLV has four general required tasks. Task one is the management, quality assurance, and overview of the research that is performed under the cooperative agreement. Task two is the structural analysis of spent fuel and high level waste. Task three is an American Society of Mechanical Engineers (ASME) Pressure Vessel Code review for waste package application. Finally, task four is waste package labeling. This report includes preliminary information about task three (ASME Pressure Vessel Code review for waste package application). The first objective is to compile a list of the ASME Pressure Vessel Code sections that can be applied to waste package container design and manufacturing processes. The second objective is to explore the application of these codes to the preliminary waste package container designs. The final objective is to perform a library search for reliability and/or failure rate data on low pressure, low temperature containers and casks with long design lives.

  15. Approach, Measures and Exemption Determination Steps of the ASME Code for Preventing Brittle Fracture

    Institute of Scientific and Technical Information of China (English)

    程涛涛; 黄明松

    2015-01-01

    This paper introduces the general approach and measures in the ASME Code for preventing low temperature brittle fracture and analyzes these measures; it contrasts the main differences between ASME Section VIII Division 1 and Division 2 with respect to preventing low temperature brittle fracture, and lists the specific steps for determining exemption from the ASME low temperature impact test requirements.

  16. Quality Control of ASME Code-Stamped Products

    Institute of Scientific and Technical Information of China (English)

    吴军

    2008-01-01

    The ASME Boiler and Pressure Vessel (B&PV) Code is recognized worldwide as the boiler and pressure vessel standard with the most complete technical content and the widest application. This article discusses the application of an ASME quality control system, established in accordance with ASME Section I and Section VIII Division 1, in the design, manufacture and inspection of ASME Code-stamped products (referred to in the text as ASME Code products).

  17. Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Faber, M.H.; Sørensen, John Dalsgaard

    2003-01-01

    The present paper addresses fundamental concepts of reliability based code calibration. First, basic principles of structural reliability theory are introduced and it is shown how the results of FORM based reliability analysis may be related to partial safety factors and characteristic values. Thereafter, the code calibration problem is presented in its principal decision theoretical form and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore, suggested values for acceptable annual failure probabilities are given for ultimate and serviceability limit states. Finally, the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability based code calibration of LRFD based design codes.
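The link between a target reliability index and an acceptable failure probability, central to such calibration, is the standard normal quantile. A minimal sketch follows; the numeric values used here are illustrative, not the JCSS recommendations:

```python
from statistics import NormalDist

def beta_from_pf(pf: float) -> float:
    """Reliability index beta corresponding to a failure probability pf."""
    return -NormalDist().inv_cdf(pf)

def pf_from_beta(beta: float) -> float:
    """Failure probability corresponding to a reliability index beta."""
    return NormalDist().cdf(-beta)

# Example: an annual failure probability of 1e-6 corresponds to
# a reliability index of about 4.75.
print(round(beta_from_pf(1e-6), 2))
```

Code calibration then searches for partial safety factors whose resulting designs achieve at least this beta across a set of representative limit states.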

  18. Cisco ASM Router

    CERN Multimedia

    2001-01-01

    One of the two "ASM/2-32EM" boxes installed in 1988, from "Cisco Systems Inc." - then an unknown 20-employee company in Menlo Park, California (USA). This is one of the first two Cisco boxes to appear in Switzerland, and possibly Europe. The 220v power supply was a special modification made for use at CERN. They supported IP address filtering, which seemed just what CERN needed to help protect the new Cray XMP-48 super computer from network hackers. The two ASM boxes were both routers and terminal servers. They protected a secure private Ethernet segment used by the Cray project, as well as providing secure terminal connections to that segment, including CERN's first dialback terminal service, which allowed Cray and CERN system analysts to work on the machine from home, using another Cisco feature called TACACS. (Kindly offered by B. Segal who discovered this company while at a Usenix Conference in Phoenix, Arizona in June 1987.)

  19. Characteristics and Scope of Application of the ASME PTC 6.2 Test Code for Steam Turbine Performance

    Institute of Scientific and Technical Information of China (English)

    刘利; 张敏

    2011-01-01

    The test characteristics and test methods for steam turbines in combined cycles specified in ASME PTC 6.2, the test code issued by the American Society of Mechanical Engineers, are systematically described. ASME PTC 6.2 features defined test boundary conditions and a two-part additive correction method, and is applicable to steam turbine performance tests in combined cycles of various configurations. ASME PTC 6.2 is compared with the ASME PTC 6 and ASME PTC 46 codes, the scope of application of each code is indicated, and relevant conclusions are drawn.

  20. Research on the Formal Semantics of Verilog Based on ASM

    Institute of Scientific and Technical Information of China (English)

    胡燕翔

    2006-01-01

    The semantics of Verilog is studied using the Abstract State Machine (ASM) model, and formal definitions of the various assignment statements and delay/event control constructs are given. On this basis, a comparison with VHDL is made, showing how the assignment statements and delay/event control constructs are translated into VHDL and the differences between the two languages before and after translation.

  1. On (Partial) Unit Memory Codes Based on Gabidulin Codes

    CERN Document Server

    Wachter, Antonia; Bossert, Martin; Zyablov, Victor

    2011-01-01

    (Partial) Unit Memory ((P)UM) codes provide a powerful possibility to construct convolutional codes based on block codes in order to achieve a high decoding performance. In this contribution, a construction based on Gabidulin codes is considered. This construction requires a modified rank metric, the so-called sum rank metric. For the sum rank metric, the free rank distance, the extended row rank distance and its slope are defined analogous to the extended row distance in Hamming metric. Upper bounds for the free rank distance and the slope of (P)UM codes in the sum rank metric are derived and an explicit construction of (P)UM codes based on Gabidulin codes is given, achieving the upper bound for the free rank distance.

  2. Feature-based Image Sequence Compression Coding

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A novel compression method for video teleconference applications is presented. Semantic coding based on human image features is realized, with the human features adopted as coding parameters. Model-based coding and the concept of vector coding are combined with work on image feature extraction to obtain the result.

  3. Segmentation-based video coding

    Energy Technology Data Exchange (ETDEWEB)

    Lades, M. [Lawrence Livermore National Lab., CA (United States); Wong, Yiu-fai; Li, Qi [Texas Univ., San Antonio, TX (United States). Div. of Engineering

    1995-10-01

    Low bit rate video coding is gaining attention through a current wave of consumer oriented multimedia applications which aim, for example, at video conferencing over telephone lines or wireless communication. In this work we describe a new segmentation-based approach to video coding which belongs to a class of paradigms that appears very promising among the various proposed methods. Our method uses a nonlinear measure of local variance to identify the smooth areas in an image in a more indicative and robust fashion: First, the local minima in the variance image are identified. These minima then serve as seeds for the segmentation of the image with a watershed algorithm. Regions and their contours are extracted. Motion compensation is used to predict the change of regions between previous frames and the current frame. The error signal is then quantized. To reduce the number of regions and contours, we use the motion information to assist the segmentation process and to merge regions, resulting in a further reduction in bit rate. Our scheme has been tested and good results have been obtained.
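The first stage of such a scheme, a local variance measure whose minima mark smooth regions and seed the watershed, can be sketched in plain Python on a toy grayscale grid; the authors' exact nonlinear measure is not reproduced here:

```python
def local_variance(img, r=1):
    """Variance in a (2r+1)x(2r+1) window around each pixel; low values
    mark smooth areas whose local minima can seed a watershed step."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            mean = sum(vals) / len(vals)
            out[y][x] = sum((v - mean) ** 2 for v in vals) / len(vals)
    return out

# Toy image: a flat region next to an edge; variance is zero inside
# the flat areas and large along the intensity step.
img = [[10, 10, 10, 90, 90],
       [10, 10, 10, 90, 90],
       [10, 10, 10, 90, 90]]
var = local_variance(img)
print(var[1][1], var[1][3])
```

A real coder would run this over each frame, then apply watershed flooding from the minima to obtain regions and contours.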

  4. Study on Reliability of FC-AE-ASM Network Based on Monte Carlo Method

    Institute of Scientific and Technical Information of China (English)

    易川; 翟正军; 羊昌燕

    2014-01-01

    To address the reliability of the FC-AE-ASM (Fibre Channel-Avionics Environment-Anonymous Subscriber Messaging) network, several redundancy structures for the FC-AE-ASM network are introduced, starting from the basic FC-AE-ASM network model. A network reliability analysis method based on Monte Carlo simulation is proposed, a method for calculating the all-terminal reliability of the FC-AE-ASM network is given together with an error analysis formula for the simulation results, and, using an example of a complex FC-AE-ASM network model composed of multiple FC switches, the effects of the link redundancy structure, the link reliability and the node reliability on the reliability of the FC-AE-ASM network are analyzed.
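The all-terminal reliability estimate at the core of this approach can be sketched with a small Monte Carlo simulation. The topology below (two end nodes attached to two redundant switches) is hypothetical, not the FC-AE-ASM configuration from the paper:

```python
import random

def all_terminal_reliability(nodes, edges, trials=20000, seed=1):
    """Monte Carlo estimate of all-terminal reliability: the probability
    that all nodes stay connected when each link (u, v, p) survives
    independently with probability p.  Connectivity is checked with a
    union-find over the surviving links."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        parent = {n: n for n in nodes}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        for u, v, p in edges:
            if rng.random() < p:          # link survives this trial
                parent[find(u)] = find(v)
        hits += (len({find(n) for n in nodes}) == 1)
    return hits / trials

# Hypothetical dual-switch redundancy, each link reliable with p = 0.99:
nodes = ["A", "B", "SW1", "SW2"]
edges = [("A", "SW1", 0.99), ("A", "SW2", 0.99),
         ("B", "SW1", 0.99), ("B", "SW2", 0.99),
         ("SW1", "SW2", 0.99)]
print(all_terminal_reliability(nodes, edges))
```

The standard error of such an estimate shrinks as 1/sqrt(trials), which is the basis for the error analysis mentioned in the abstract.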

  5. Current Activities of the ASME Subgroup NUPACK

    Energy Technology Data Exchange (ETDEWEB)

    Gerald M. Foster; D. Keith Morton; Paul McConnell

    2007-10-01

    Current activities of the American Society of Mechanical Engineers (ASME), Section III Subgroup on Containment Systems for Spent Fuel and High-Level Waste Transport Packagings (also known as Subgroup NUPACK) are reviewed with emphasis on the recent revision of Subsection WB. Also, brief insights on new proposals for the development of rules for internal support structures and for strain-based acceptance criteria are provided.

  6. Biological activity of palladium(II) and platinum(II) complexes of the acetone Schiff bases of S-methyl- and S-benzyldithiocarbazate and the X-ray crystal structure of the [Pd(asme)2] (asme=anionic form of the acetone Schiff base of S-methyldithiocarbazate) complex.

    Science.gov (United States)

    Akbar Ali, Mohammad; Mirza, Aminul Huq; Butcher, Raymond J; Tarafder, M T H; Keat, Tan Boon; Ali, A Manaf

    2002-11-25

    Palladium(II) and platinum(II) complexes of general empirical formula, [M(NS)(2)] (NS=uninegatively charged acetone Schiff bases of S-methyl- and S-benzyldithiocarbazate; M=Pt(II) and Pd(II)) have been prepared and characterized by a variety of physicochemical techniques. Based on conductance, IR and electronic spectral evidence, a square-planar structure is assigned to these complexes. The crystal and molecular structure of the [Pd(asme)(2)] complex (asme=anionic form of the acetone Schiff base of S-methyldithiocarbazate) has been determined by X-ray diffraction. The complex has a distorted cis-square planar structure with the ligands coordinated to the palladium(II) ions as uninegatively charged bidentate NS chelating agents via the azomethine nitrogen and the mercaptide sulfur atoms. The distortion from a regular square-planar geometry is attributed to the restricted bite angles of the ligands. Antimicrobial tests indicate that the Schiff bases exhibit strong activities against the pathogenic bacteria, Bacillus subtilis (mutant defective DNA repair), methicillin-resistant Staphylococcus aureus, B. subtilis (wild type) and Pseudomonas aeruginosa and the fungi, Candida albicans (CA), Candida lypotica (2075), Saccharomyces cerevisiae (20341) and Aspergillus ochraceous (398)-the activities exhibited by these compounds being greater than that of the standard antibacterial and antifungal drugs, streptomycin and nystatin, respectively. The palladium(II) and platinum(II) complexes are inactive against most of these organisms but, the microbe, Pseudomonas aeruginosa shows strong sensitivity to the platinum(II) complexes. Screening of the compounds for their cytotoxicities against T-lymphoblastic leukemia cancer cells has shown that the acetone Schiff base of S-methyldithiocarbazate (Hasme) exhibits a very weak activity, whereas the S-benzyl derivative (Hasbz) is inactive. However, the palladium(II) complexes exhibit strong cytotoxicities against this cancer; their

  7. Group representations, error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E

    1996-01-01

    This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.

  8. Towards a consensus-based biokinetic model for green microalgae – The ASM-A

    DEFF Research Database (Denmark)

    Wágner, Dorottya Sarolta; Valverde Pérez, Borja; Sæbø, Mariann

    2016-01-01

    Cultivation of microalgae in open ponds and closed photobioreactors (PBRs) using wastewater resources offers an opportunity for biochemical nutrient recovery. Effective reactor system design and process control of PBRs requires process models. Several models with different complexities have been ... of microalgae. Model parameters were estimated using laboratory-scale batch and sequenced batch experiments using the novel Latin Hypercube Sampling based Simplex (LHSS) method. The model was evaluated using independent data obtained in a 24-L PBR operated in sequenced batch mode. Identifiability of the model ...

  9. Implementation of LT codes based on chaos

    Institute of Scientific and Technical Information of China (English)

    Zhou Qian; Li Liang; Chen Zeng-Qiang; Zhao Jia-Xiang

    2008-01-01

    Fountain codes provide an efficient way to transfer information over erasure channels like the Internet. LT codes are the first codes fully realizing the digital fountain concept. They are asymptotically optimal rateless erasure codes with highly efficient encoding and decoding algorithms. In theory, for each encoding symbol of LT codes, its degree is randomly chosen according to a predetermined degree distribution, and its neighbours used to generate that encoding symbol are chosen uniformly at random. Practical implementations of LT codes usually realize the randomness through a pseudo-random number generator such as the linear congruential method. This paper applies the pseudo-randomness of chaotic sequences in the implementation of LT codes. Two Kent chaotic maps are used to determine the degree and neighbour(s) of each encoding symbol. It is shown that the implemented LT codes based on chaos perform better than LT codes implemented with a traditional pseudo-random number generator.
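A minimal sketch of the idea, assuming a single Kent map drives both the degree choice (here via the ideal soliton distribution rather than the robust variant) and the neighbour selection; the paper's exact parameters are not reproduced:

```python
def kent_map(x, a=0.7):
    """One iteration of the Kent (tent-like) chaotic map on (0, 1)."""
    return x / a if x < a else (1 - x) / (1 - a)

def ideal_soliton_cdf(k):
    """CDF of the ideal soliton degree distribution for k source symbols."""
    probs = [1 / k] + [1 / (d * (d - 1)) for d in range(2, k + 1)]
    cdf, acc = [], 0.0
    for p in probs:
        acc += p
        cdf.append(acc)
    return cdf

def lt_encode_symbol(source, x, cdf):
    """Build one LT encoding symbol; the chaotic state x replaces the
    usual PRNG.  Returns (neighbour_indices, xor_value, next_state)."""
    k = len(source)
    x = kent_map(x)
    degree = next(d + 1 for d, c in enumerate(cdf) if x <= c)
    neighbours, value = set(), 0
    while len(neighbours) < degree:
        x = kent_map(x)
        i = int(x * k) % k
        if i not in neighbours:
            neighbours.add(i)
            value ^= source[i]       # encoding symbol is the XOR of neighbours
    return neighbours, value, x

source = [3, 1, 4, 1, 5, 9, 2, 6]
cdf = ideal_soliton_cdf(len(source))
nbrs, val, x = lt_encode_symbol(source, 0.3141, cdf)
print(nbrs, val)  # → {5, 7} 15
```

Decoding is unchanged from standard LT codes; the receiver only needs the same map seed to reproduce the degree and neighbour choices.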

  10. DNA Coding Based Knowledge Discovery Algorithm

    Institute of Scientific and Technical Information of China (English)

    LI Ji-yun; GENG Zhao-feng; SHAO Shi-huang

    2002-01-01

    A novel DNA coding based knowledge discovery algorithm was proposed, and an example which verified its validity was given. It is proved that this algorithm can discover new simplified rules from the original rule set efficiently.

  11. QC-LDPC code-based cryptography

    CERN Document Server

    Baldi, Marco

    2014-01-01

    This book describes the fundamentals of cryptographic primitives based on quasi-cyclic low-density parity-check (QC-LDPC) codes, with a special focus on the use of these codes in public-key cryptosystems derived from the McEliece and Niederreiter schemes. In the first part of the book, the main characteristics of QC-LDPC codes are reviewed, and several techniques for their design are presented, while tools for assessing the error correction performance of these codes are also described. Some families of QC-LDPC codes that are best suited for use in cryptography are also presented. The second part of the book focuses on the McEliece and Niederreiter cryptosystems, both in their original forms and in some subsequent variants. The applicability of QC-LDPC codes in these frameworks is investigated by means of theoretical analyses and numerical tools, in order to assess their benefits and drawbacks in terms of system efficiency and security. Several examples of QC-LDPC code-based public key cryptosystems are prese...

  12. Scalable still image coding based on wavelet

    Science.gov (United States)

    Yan, Yang; Zhang, Zhengbing

    2005-02-01

    Scalable image coding is an important objective of future image coding technologies. In this paper, we present a scalable image coding scheme based on the wavelet transform. The method uses the well-known EZW (Embedded Zerotree Wavelet) algorithm: we apply high-quality encoding to the ROI (region of interest) of the original image and coarse encoding to the rest. The method works well under limited memory, as the background region is encoded according to the available memory capacity. In this way, the encoded image can be stored in limited memory space without losing its main information. Simulation results show the approach is effective.
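The layered structure that makes wavelet coding scalable, a coarse base layer plus detail coefficients, can be illustrated with a one-level Haar transform. This is a simplification: EZW itself additionally orders the coefficients into zerotrees, which this sketch omits:

```python
def haar_1d(signal):
    """One Haar level: pairwise averages form the coarse (base) layer,
    pairwise differences the detail (enhancement) layer."""
    avg = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    det = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return avg, det

def haar_1d_inverse(avg, det):
    """Perfect reconstruction: x = a + d, y = a - d for each pair."""
    out = []
    for a, d in zip(avg, det):
        out.extend((a + d, a - d))
    return out

signal = [10, 12, 80, 84, 50, 50, 20, 28]
avg, det = haar_1d(signal)
print(avg)  # coarse layer: half the samples, usable on its own
```

A decoder that receives only `avg` can already display a half-resolution signal; transmitting `det` later refines it losslessly, which is the essence of scalability.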

  13. AsmL Specification of a Ptolemy II Scheduler

    DEFF Research Database (Denmark)

    Lázaro Cuadrado, Daniel; Koch, Peter; Ravn, Anders Peter

    2003-01-01

    Ptolemy II is a tool that combines different computational models for simulation and design of embedded systems. AsmL is a software specification language based on the Abstract State Machine formalism. This paper reports on the development of an AsmL model of the Synchronous Dataflow domain scheduler of Ptolemy II. By building this model we can give precise semantics to the implementation. Furthermore, it allows us to isolate the scheduling problem from the tool and make the potential parallelism of the implementation explicit. The model is executable and is tested against the implementation.

  14. Coding Transparency in Object-Based Video

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2006-01-01

    A novel algorithm for coding gray level alpha planes in object-based video is presented. The scheme is based on segmentation in multiple layers. Different coders are specifically designed for each layer. In order to reduce the bit rate, cross-layer redundancies as well as temporal correlation...

  15. Optimal interference code based on machine learning

    Science.gov (United States)

    Qian, Ye; Chen, Qian; Hu, Xiaobo; Cao, Ercong; Qian, Weixian; Gu, Guohua

    2016-10-01

    In this paper, we analyze the characteristics of pseudo-random codes, taking the m-sequence as an example. Drawing on coding theory, we introduce jamming methods and use MATLAB simulations of the interference effect and its probability model for validation. Based on the length of time the adversary needs for decoding, we derive an optimal formula and optimal coefficients by machine learning and so obtain a new optimal interference code. In the recognition phase, the study judges the effect of interference by simulating the decoding time of the laser seeker; in the tracking phase, laser active deception jamming is used to simulate the interference process. To improve interference performance, the model is simulated in MATLAB to find the minimum number of pulse intervals that must be received, from which the precise interval number of the laser pointer for m-sequence encoding is concluded. The shortest interval is found with the greatest common divisor method. Then, combining this with the coding regularity found earlier, the pulse intervals of the received pseudo-random code are restored. Finally, the time period of the laser interference can be controlled, the optimal interference code is obtained, and the probability of successful interference is increased.
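The m-sequence used here as the example pseudo-random code is produced by a maximal-length LFSR. A minimal sketch with a 4-stage register follows; the seeker's actual code parameters are not reproduced:

```python
def lfsr_m_sequence(taps, state, length):
    """Fibonacci LFSR: output the last stage each step, feed back the
    XOR of the tapped stages.  With a primitive feedback polynomial the
    output is an m-sequence of period 2**len(state) - 1."""
    reg = list(state)
    out = []
    for _ in range(length):
        out.append(reg[-1])
        fb = 0
        for t in taps:
            fb ^= reg[t - 1]
        reg = [fb] + reg[:-1]   # shift, new bit enters at the front
    return out

# 4-stage register, taps at stages 1 and 4 (primitive polynomial
# x^4 + x^3 + 1), so the period is 2^4 - 1 = 15:
seq = lfsr_m_sequence((1, 4), [1, 0, 0, 0], 30)
print(seq[:15])  # one full period: exactly 8 ones and 7 zeros
```

The balance and correlation properties visible in one period are exactly what the jamming analysis exploits when estimating pulse intervals.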

  16. Integrin and GPCR Crosstalk in the Regulation of ASM Contraction Signaling in Asthma.

    Science.gov (United States)

    Teoh, Chun Ming; Tam, John Kit Chung; Tran, Thai

    2012-01-01

    Airway hyperresponsiveness (AHR) is one of the cardinal features of asthma. Contraction of airway smooth muscle (ASM) cells that line the airway wall is thought to influence aspects of AHR, resulting in excessive narrowing or occlusion of the airway. ASM contraction is primarily controlled by agonists that bind G protein-coupled receptor (GPCR), which are expressed on ASM. Integrins also play a role in regulating ASM contraction signaling. As therapies for asthma are based on symptom relief, better understanding of the crosstalk between GPCRs and integrins holds good promise for the design of more effective therapies that target the underlying cellular and molecular mechanism that governs AHR. In this paper, we will review current knowledge about integrins and GPCRs in their regulation of ASM contraction signaling and discuss the emerging concept of crosstalk between the two and the implication of this crosstalk on the development of agents that target AHR.

  17. Random Coding Bounds for DNA Codes Based on Fibonacci Ensembles of DNA Sequences

    Science.gov (United States)

    2008-07-01

    A random coding bound on the rate of DNA codes is proved. To obtain the bound, we use some ensembles of DNA sequences which are generalizations of the Fibonacci sequences. (Report dated 6-11 Jul 08; subject terms: DNA codes, Fibonacci ensembles, DNA computing, code optimization.)

  18. Review of ASME-NH Design Materials for Creep-Fatigue

    Energy Technology Data Exchange (ETDEWEB)

    Koo, Gyeong Hoi; Kim, Jong Bum [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-05-15

    To review and recommend candidate design materials for the sodium-cooled fast reactor, material sensitivity evaluations comparing the design data of the ASME-NH materials were performed using the SIE ASME-NH computer program, which implements the material database of ASME-NH. The design material data provided by the ASME-NH code are the elastic modulus and yield strength, the time-independent allowable stress intensity value, the time-dependent allowable stress intensity value, the expected minimum stress-to-rupture value, stress rupture factors for weldments, isochronous stress-strain curves, and design fatigue curves. Among these, the data related to creep-fatigue evaluation are investigated in this study.

  19. Measuring Modularity in Open Source Code Bases

    Directory of Open Access Journals (Sweden)

    Roberto Milev

    2009-03-01

    Modularity of an open source software code base has been associated with growth of the software development community, the incentives for voluntary code contribution, and a reduction in the number of users who take code without contributing back to the community. As a theoretical construct, modularity links OSS to other domains of research, including organization theory, the economics of industry structure, and new product development. However, measuring the modularity of an OSS design has proven difficult, especially for large and complex systems. In this article, we describe some preliminary results of recent research at Carleton University that examines the evolving modularity of large-scale software systems. We describe a measurement method and a new modularity metric for comparing code bases of different size, introduce an open source toolkit that implements this method and metric, and provide an analysis of the evolution of the Apache Tomcat application server as an illustrative example of the insights gained from this approach. Although these results are preliminary, they open the door to further cross-discipline research that quantitatively links the concerns of business managers, entrepreneurs, policy-makers, and open source software developers.

  20. Source Code Generator Based on Dynamic Frames

    Directory of Open Access Journals (Sweden)

    Danijel Radošević

    2011-06-01

    This paper presents the model of a source code generator based on dynamic frames. The model is named the SCT model because of its three basic components: Specification (S), which describes the application characteristics; Configuration (C), which describes the rules for building applications; and Templates (T), which refer to application building blocks. The process of code generation dynamically creates XML frames containing all building elements (S, C and T) until the final code is produced. This approach is compared to the existing XVCL frames based model for source code generation. The SCT model is described by both XML syntax and the appropriate graphical elements. The SCT model is aimed at building complete applications, not just skeletons. The main advantages of the presented model are its textual and graphic description, a fully configurable generator, and the reduced overhead of the generated source code. The presented SCT model is shown on the development of a web application example in order to demonstrate its features and justify our design choices.

  1. FLOOD ROUTING BASED ON NETWORK CODING (NCF)

    OpenAIRE

    HOSSEIN BALOOCHIAN; MOZAFAR BAGMOHAMMADI

    2010-01-01

    Most of the energy in a sensor network is used for transmission of data packets. For this reason, optimization of energy consumption is of utmost importance in these networks. This paper presents NCF, a flood routing protocol based on network coding. Simulations show that in addition to eliminating the drawbacks of traditional flooding methods, like the explosion phenomenon, NCF increases the lifetime of the network by at least 20% and decreases the number of packet transmissions. Another adv...

  2. Image Coding Based on Address Vector Quantization.

    Science.gov (United States)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means, or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the Kohonen neural network to design the codebook. During the encoding process, the correlation of the address is considered and Address Vector Quantization is developed for color image and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as that of the normal VQ scheme but the bit rate is about 1/2 to 1/3 of that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ based on a probability transition matrix to select the best subcodebook to encode the image is developed. In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self-Organizing
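The basic encode/decode cycle described above, nearest-codeword search followed by table lookup, can be sketched with a generalized Lloyd (k-means) codebook. The toy 2-D vectors below stand in for image blocks:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_codebook(vectors, k, iters=20, seed=0):
    """Generalized Lloyd / k-means: learn k representative codewords."""
    rng = random.Random(seed)
    codebook = rng.sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:  # assign each vector to its nearest codeword
            clusters[min(range(k), key=lambda j: dist2(v, codebook[j]))].append(v)
        for j, c in enumerate(clusters):
            if c:          # move each codeword to its cluster centroid
                codebook[j] = tuple(sum(xs) / len(c) for xs in zip(*c))
    return codebook

def encode(vectors, codebook):
    """Compression: replace each vector by its nearest codeword's index."""
    return [min(range(len(codebook)), key=lambda j: dist2(v, codebook[j]))
            for v in vectors]

def decode(indices, codebook):
    """Reconstruction by table lookup: the index is simply an address."""
    return [codebook[i] for i in indices]

data = [(0, 0), (1, 0), (0, 1), (9, 9), (10, 9), (9, 10)]
cb = train_codebook(data, k=2)
idx = encode(data, cb)
print(idx, decode(idx, cb))
```

Address VQ then goes one step further than this sketch by exploiting the statistical correlation between the indices of neighbouring blocks, which is where its extra bit-rate savings come from.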

  3. Report on FY15 alloy 617 code rules development

    Energy Technology Data Exchange (ETDEWEB)

    Sham, Sam [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jetter, Robert I [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hollinger, Greg [Becht Engineering Co., Inc., Liberty Corner, NJ (United States); Pease, Derrick [Becht Engineering Co., Inc., Liberty Corner, NJ (United States); Carter, Peter [Stress Engineering Services, Inc., Houston, TX (United States); Pu, Chao [Univ. of Tennessee, Knoxville, TN (United States); Wang, Yanli [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-09-01

    Due to its strength at very high temperatures, up to 950°C (1742°F), Alloy 617 is the reference construction material for structural components that operate at or near the outlet temperature of the very high temperature gas-cooled reactors. However, the current rules in the ASME Section III, Division 5 Subsection HB, Subpart B for the evaluation of strain limits and creep-fatigue damage using simplified methods based on elastic analysis have been deemed inappropriate for Alloy 617 at temperatures above 650°C (1200°F) (Corum and Brass, Proceedings of ASME 1991 Pressure Vessels and Piping Conference, PVP-Vol. 215, p.147, ASME, NY, 1991). The rationale for this exclusion is that at higher temperatures it is not feasible to decouple plasticity and creep, which is the basis for the current simplified rules. This temperature, 650°C (1200°F), is well below the temperature range of interest for this material for the high temperature gas-cooled reactors and the very high temperature gas-cooled reactors. The only current alternative is, thus, a full inelastic analysis requiring sophisticated material models that have not yet been formulated and verified. To address these issues, proposed code rules have been developed which are based on the use of elastic-perfectly plastic (EPP) analysis methods applicable to very high temperatures. The proposed rules for strain limits and creep-fatigue evaluation were initially documented in the technical literature (Carter, Jetter and Sham, Proceedings of ASME 2012 Pressure Vessels and Piping Conference, papers PVP 2012 28082 and PVP 2012 28083, ASME, NY, 2012), and have been recently revised to incorporate comments and simplify their application. Background documents have been developed for these two code cases to support the ASME Code committee approval process. These background documents for the EPP strain limits and creep-fatigue code cases are documented in this report.

  4. Mesh-based parallel code coupling interface

    Energy Technology Data Exchange (ETDEWEB)

    Wolf, K.; Steckel, B. (eds.) [GMD - Forschungszentrum Informationstechnik GmbH, St. Augustin (DE). Inst. fuer Algorithmen und Wissenschaftliches Rechnen (SCAI)

    2001-04-01

    MpCCI (mesh-based parallel code coupling interface) is an interface for multidisciplinary simulations. It provides industrial end-users as well as commercial code-owners with the facility to combine different simulation tools in one environment, thereby creating new solutions for multidisciplinary problems and opening new application dimensions for existing simulation tools. This Book of Abstracts gives a short overview of ongoing activities in industry and research - all presented at the 2nd MpCCI User Forum in February 2001 at GMD Sankt Augustin. (orig.)

  5. Interviews with ASME BPE Experts

    Institute of Scientific and Technical Information of China (English)

    Wang Xin

    2009-01-01

    What does the string of characters "ASME BPE" mean to drug manufacturers and equipment suppliers? Is it the key that opens the door to success in biopharmaceuticals? For this issue we invited current members of the ASME BPE committee and senior experts to decode it for you in detail.

  6. ASME Evaluation on Grid Mobile E-Commerce Process

    OpenAIRE

    Dan Chang; Wei Liao

    2012-01-01

    With the development of E-commerce, more scholars have paid attention to research on Mobile E-commerce, mostly focusing on the optimization and evaluation of existing processes. This paper researches the evaluation of the Mobile E-commerce process with a method called ASME. Based on combing through and analyzing the current mobile business process and utilizing grid management theory, a mobile business process based on the grid is constructed. Firstly, the existing process, namely Non-grid Mobile E-commerce, an...

  8. Vulnerability of MRD-Code-based Universal Secure Network Coding against Stronger Eavesdroppers

    CERN Document Server

    Shioji, Eitaro; Uyematsu, Tomohiko

    2010-01-01

    Silva et al. proposed a universal secure network coding scheme based on MRD codes, which can be applied to any underlying network code. This paper considers a stronger eavesdropping model where the eavesdroppers possess the ability to re-select the tapping links during the transmission. We give a proof for the impossibility of attaining universal security against such adversaries using Silva et al.'s code for all choices of code parameters, even with restricted number of tapped links. We also consider the cases with restricted tapping duration and derive some conditions for this code to be secure.

  9. Energy information data base: report number codes

    Energy Technology Data Exchange (ETDEWEB)

    None

    1979-09-01

    Each report processed by the US DOE Technical Information Center is identified by a unique report number consisting of a code plus a sequential number. In most cases, the code identifies the originating installation. In some cases, it identifies a specific program or a type of publication. Listed in this publication are all codes that have been used by DOE in cataloging reports. This compilation consists of two parts. Part I is an alphabetical listing of report codes identified with the issuing installations that have used the codes. Part II is an alphabetical listing of installations identified with codes each has used. (RWR)
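The two-part report-number convention described above (an installation code plus a sequential number) can be sketched with a small parser. The regular expression and the sample value below are illustrative assumptions, not drawn from the actual TIC catalog.

```python
import re

# Hypothetical helper: split a report number of the form "<code>-<sequence>"
# into its issuing-installation code and sequential number. The pattern and
# the sample value are illustrative only.
REPORT_RE = re.compile(r"^(?P<code>[A-Z][A-Z0-9/]*?)-+(?P<seq>\d+)$")

def split_report_number(report_no: str):
    m = REPORT_RE.match(report_no.strip())
    if not m:
        raise ValueError(f"unrecognized report number: {report_no!r}")
    return m.group("code"), int(m.group("seq"))

print(split_report_number("SAND-1234"))   # ('SAND', 1234)
```

A real catalog would also need the program- and publication-type codes the record mentions; this sketch only covers the common installation-code case.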

  10. Incorporating Code-Based Software in an Introductory Statistics Course

    Science.gov (United States)

    Doehler, Kirsten; Taylor, Laura

    2015-01-01

    This article is based on the experiences of two statistics professors who have taught students to write and effectively utilize code-based software in a college-level introductory statistics course. Advantages of using software and code-based software in this context are discussed. Suggestions are made on how to ease students into using code with…

  12. 115-year-old society knows how to reach young scientists: ASM Young Ambassador Program.

    Science.gov (United States)

    Karczewska-Golec, Joanna

    2015-12-25

    With around 40,000 members in more than 150 countries, the American Society for Microbiology (ASM) faces the challenge of meeting the very diverse needs of its increasingly international member base. The newly launched ASM Young Ambassador Program seeks to aid the Society in this effort. Equipped with ASM conceptual support and financing, Young Ambassadors (YAs) design and pursue country-tailored approaches to strengthen the Society's ties with local microbiological communities. In a trans-national setting, the active presence of YAs at important scientific events, such as the 16th European Congress on Biotechnology, forges new interactions between ASM and sister societies. The paper presents an overview of the Young Ambassador-driven initiatives at both the global and country levels, and explores how early-career scientists can contribute to science diplomacy and international relations.

  13. ASME Material Challenges for Advanced Reactor Concepts

    Energy Technology Data Exchange (ETDEWEB)

    Piyush Sabharwall; Ali Siahpush

    2013-07-01

    This study presents the material challenges associated with Advanced Reactor Concepts (ARC) such as the Advanced High Temperature Reactor (AHTR). ARC are the next-generation concepts focusing on power production and providing thermal energy for industrial applications. The efficient transfer of energy for industrial applications depends on the ability to incorporate cost-effective heat exchangers between the nuclear heat transport system and the industrial process heat transport system. The heat exchanger required for the AHTR is subjected to a unique set of conditions that bring with them several design challenges not encountered in standard heat exchangers. The corrosive molten salts, especially at higher temperatures, require materials throughout the system that resist corrosion and adverse high-temperature effects such as creep. Given the very high steam generator pressure of the supercritical steam cycle, it is anticipated that water-tube, molten-salt-shell steam generator heat exchangers will be used. In this paper, the American Society of Mechanical Engineers (ASME) Section III and Section VIII requirements (acceptance criteria) are discussed. Also, the ASME material acceptance criteria (ASME Section II, Part D) for high-temperature environments are presented. Finally, the lack of ASME acceptance criteria for thermal design and analysis is discussed.

  14. Password Authentication Based on Fractal Coding Scheme

    Directory of Open Access Journals (Sweden)

    Nadia M. G. Al-Saidi

    2012-01-01

    Password authentication is a mechanism used to authenticate user identity over an insecure communication channel. In this paper, a new method to improve the security of password authentication is proposed. It is based on the compression capability of fractal image coding to provide an authorized user secure access to the registration and login process. In the proposed scheme, a hashed password string is generated and encrypted to be captured together with the user identity using text-to-image mechanisms. The advantage of fractal image coding is that it can securely send the compressed image data through a non-secure communication channel to the server. The verification of client information against the database system is performed at the server to authenticate the legal user. The encrypted hashed password in the decoded fractal image is recognized using optical character recognition. The authentication process is performed after successful verification of the client identity, by comparing the decrypted hashed password with the one stored in the database system. The system is analyzed and discussed from the attacker's viewpoint. A security comparison is performed to show that the proposed scheme provides the essential security requirements, while its efficiency makes it easy to apply alone or in hybrid with other security methods. Computer simulation and statistical analysis are presented.
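Only the salted hash-and-compare step at the heart of the verification described above is sketched here; the fractal image coding, text-to-image, and OCR stages are outside the scope of a short example, and the field layout is an assumption for illustration.

```python
import hashlib
import secrets

# Register a user by storing a salt and the salted SHA-256 hash of the
# password; login recomputes the hash and compares. Illustrative only.
def register(password: str, db: dict) -> None:
    salt = secrets.token_hex(8)
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    db["user"] = (salt, digest)

def login(password: str, db: dict) -> bool:
    salt, stored = db["user"]
    return hashlib.sha256((salt + password).encode()).hexdigest() == stored

db = {}
register("s3cret", db)
print(login("s3cret", db), login("guess", db))   # True False
```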

  15. Three Methods for Occupation Coding Based on Statistical Learning

    Directory of Open Access Journals (Sweden)

    Gweon Hyukjun

    2017-03-01

    Occupation coding, an important task in official statistics, refers to coding a respondent's text answer into one of many hundreds of occupation codes. To date, occupation coding is still at least partially conducted manually, at great expense. We propose three methods for automatic coding: combining separate models for the detailed occupation codes and for aggregate occupation codes, a hybrid method that combines a duplicate-based approach with a statistical learning algorithm, and a modified nearest-neighbor approach. Using data from the German General Social Survey (ALLBUS), we show that the proposed methods improve on both the coding accuracy of the underlying statistical learning algorithm and the coding accuracy of duplicates where duplicates exist. Further, we find that defining duplicates based on n-gram variables (a concept from text mining) is preferable to defining them based on exact string matches.
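A minimal sketch of the n-gram-based duplicate notion that the abstract finds preferable to exact string matching. The trigram size and the Jaccard similarity measure are illustrative choices, not the authors' exact definition.

```python
def char_ngrams(text: str, n: int = 3) -> set:
    # Character n-grams of a lowercased, stripped answer string
    t = text.lower().strip()
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def ngram_similarity(a: str, b: str, n: int = 3) -> float:
    # Jaccard overlap of character n-grams: a soft notion of "duplicate"
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

# A near-duplicate answer that exact string matching would miss:
print(ngram_similarity("software developer", "software-developer"))
```

Two answers above a similarity threshold could then share a manually assigned code, which is the duplicate-reuse idea the hybrid method builds on.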

  16. A Survey of Variable Extragalactic Sources with XTE's All Sky Monitor (ASM)

    Science.gov (United States)

    Jernigan, Garrett

    1998-01-01

    The original goal of the project was the near real-time detection of AGN utilizing SSC 3 of the ASM on XTE, which performs a deep integration on one 100-square-degree region of the sky. While the SSC never performed well enough to achieve this goal, the work on the project has led to the development of a new analysis method for coded-aperture systems, which has now been applied to ASM data for mapping regions near clusters of galaxies such as the Perseus Cluster and the Coma Cluster. Publications are in preparation that describe both the new method and the results from mapping clusters of galaxies.

  17. A Pixel Domain Video Coding based on Turbo code and Arithmetic code

    Directory of Open Access Journals (Sweden)

    Cyrine Lahsini

    2012-05-01

    In recent years, with emerging applications such as multimedia sensor networks, wireless low-power surveillance and mobile camera phones, the traditional video coding architecture is being challenged. In fact, these applications have different requirements than those of broadcast video delivery systems: low power consumption at the encoder side is essential. In this context, we propose a pixel-domain video coding scheme which fits well in these scenarios. In this system, both arithmetic and turbo codes are used to encode the video sequence's frames. Simulation results show significant gains over pixel-domain Wyner-Ziv video coding.

  18. Improvement of ASME NH for Grade 91

    Energy Technology Data Exchange (ETDEWEB)

    Bernard Riou

    2007-10-09

    This report has been prepared in the context of Task 3 of the ASME/DOE Gen IV material project. It has been identified that creep-fatigue evaluation procedures presently available in ASME (1) and RCC-MR (2) have been mainly developed for austenitic stainless steels and may not be suitable for cyclic softening materials such as mod 9 Cr 1 Mo steel (grade 91). The aim of this document is, starting from experimental test results, to perform a review of the procedures and, if necessary, provide recommendations for their improvements.

  19. Protograph-Based Raptor-Like Codes

    Science.gov (United States)

    Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.

    2014-01-01

    Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. The analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible turbo codes (RCPT) did not outperform the convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a low number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, their strength does not scale with the blocklength for a fixed number of states in the trellis.

  20. Multi-User Cooperative Diversity through Network Coding Based on Classical Coding Theory

    CERN Document Server

    Rebelatto, João Luiz; Li, Yonghui; Vucetic, Branka

    2010-01-01

    In this work, we propose and analyze a generalized construction of distributed network codes for a network consisting of M users sending different information to a common base station through independent block fading channels. The aim is to increase the diversity order of the system without reducing its code rate. The proposed scheme, called generalized dynamic-network codes (GDNC), is a generalization of the dynamic-network codes (DNC) recently proposed by Xiao and Skoglund. The design of the network codes that maximize the diversity order is recognized as equivalent to the design of linear block codes over a nonbinary finite field under the Hamming metric. We prove that adopting a systematic generator matrix of a maximum distance separable block code over a sufficiently large finite field as the network transfer matrix is a sufficient condition for full diversity order under the link failure model. The proposed generalization offers a much better tradeoff between rate and diversity order compared to the DNC. An...

  1. Non-coherent space-time code based on full diversity space-time block coding

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A non-unitary non-coherent space-time code which is capable of achieving full algebraic diversity is proposed based on full-diversity space-time block coding. The error performance is optimized by transforming the non-unitary space-time code into a unitary space-time code. By exploiting the desired structure of the proposed code, a grouped generalized likelihood ratio test decoding algorithm is presented to overcome the high complexity of the optimal algorithm. Simulation results show that the proposed code possesses high spectral efficiency in contrast to the unitary space-time code despite a slight loss in SNR; besides, the proposed grouped decoding algorithm provides a good tradeoff between performance and complexity.

  2. Optimal Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Kroon, I. B.; Faber, M. H.

    1994-01-01

    Calibration of partial safety factors is considered in general, including classes of structures where no code exists beforehand. The partial safety factors are determined such that the difference between the reliability for the different structures in the class considered and a target reliability level is minimized. Code calibration on a decision-theoretical basis is also considered and it is shown how target reliability indices can be calibrated. Results from code calibration for rubble mound breakwater designs are shown.
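The calibration idea in the abstract (choose a partial safety factor that minimizes the deviation of each structure's reliability index from the target level) can be sketched as a toy grid search. The linear beta model, the target value, and the single-factor setup are illustrative assumptions, not the paper's formulation.

```python
# Toy sketch: pick the safety factor g minimizing the total squared
# deviation between each structure's reliability index beta_i(g) and a
# target beta_t. The linear beta model below is illustrative only.
def calibrate(betas_at_g, target, gammas):
    return min(gammas,
               key=lambda g: sum((b - target) ** 2 for b in betas_at_g(g)))

# Hypothetical class of three structures whose beta grows linearly with g:
model = lambda g: [1.5 * g, 1.8 * g, 2.1 * g]
g_opt = calibrate(model, target=3.8, gammas=[x / 100 for x in range(100, 301)])
print(round(g_opt, 2))   # 2.07
```

In practice the beta values would come from a structural reliability analysis (e.g. FORM), and several partial safety factors would be calibrated jointly rather than one scalar.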

  3. Quantum BCH Codes Based on Spectral Techniques

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    When the time variable in quantum signal processing is discrete, the Fourier transform exists on the vector space of n-tuples over the Galois field F2, which plays an important role in the investigation of quantum signals. By using Fourier transforms, the ideas of quantum coding theory can be described in a setting much different from that seen so far. Quantum BCH codes can be defined as codes whose quantum states have certain specified consecutive spectral components equal to zero, and the error-correcting ability is likewise described by the number of consecutive zeros. Moreover, the decoding of quantum codes can be described spectrally with more efficiency.

  4. Turbo Codes Based on Time-Variant Memory-1 Convolutional Codes over Fq

    CERN Document Server

    Liva, Gianluigi; Scalise, Sandro; Chiani, Marco

    2011-01-01

    Two classes of turbo codes over high-order finite fields are introduced. The codes are derived from a particular protograph sub-ensemble of the (dv=2, dc=3) low-density parity-check code ensemble. A first construction is derived as a parallel concatenation of two non-binary, time-variant accumulators. The second construction is based on the serial concatenation of a non-binary, time-variant differentiator and a non-binary, time-variant accumulator, and provides a highly structured, flexible encoding scheme for (dv=2, dc=4) ensemble codes. A cycle graph representation is provided. The proposed codes can be decoded efficiently either as low-density parity-check codes (via belief propagation decoding over the code's bipartite graph) or as turbo codes (via the forward-backward algorithm applied to the component codes' trellises). The forward-backward algorithm for symbol maximum a posteriori decoding of the component codes is illustrated and simplified by means of the fast Fourier transform. The proposed codes provid...
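The non-binary, time-variant memory-1 accumulator that both constructions build on can be sketched over a small prime field. The field size q = 7 and the coefficient sequence are illustrative assumptions.

```python
# Minimal sketch of a time-variant memory-1 accumulator over a prime field
# GF(q): y_i = x_i + c_i * y_{i-1}, with a coefficient c_i that changes
# with time. Field size and coefficients are illustrative choices.
def accumulate(symbols, coeffs, q):
    out, prev = [], 0
    for x, c in zip(symbols, coeffs):
        prev = (x + c * prev) % q
        out.append(prev)
    return out

print(accumulate([3, 1, 4, 1, 5], coeffs=[1, 2, 3, 2, 1], q=7))  # [3, 0, 4, 2, 0]
```

For a general (non-prime) field the additions and multiplications would use GF(2^m) arithmetic instead of integer arithmetic mod q.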

  5. Non-binary unitary error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.

    1996-06-01

    Error operator bases for systems of any dimension are defined and natural generalizations of the bit-flip/sign-change error basis for qubits are given. These bases allow generalizing the construction of quantum codes based on eigenspaces of Abelian groups. As a consequence, quantum codes can be constructed from linear codes over Z_n for any n. The generalization of the punctured code construction leads to many codes which permit transversal (i.e., fault-tolerant) implementations of certain operations compatible with the error basis.
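For intuition, the dimension-n generalization of the bit-flip and sign-change errors can be written down directly as the standard shift and clock matrices, whose products X^a Z^b form a unitary error basis. This is a textbook sketch, with n = 3 chosen for illustration.

```python
import numpy as np

# Generalized bit-flip (shift X) and sign-change (clock Z) operators on a
# dimension-n system: X|j> = |j+1 mod n>, Z|j> = w^j |j> with w = exp(2*pi*i/n).
def shift_clock(n):
    X = np.roll(np.eye(n), 1, axis=0)            # cyclic shift of basis states
    w = np.exp(2j * np.pi / n)
    Z = np.diag([w ** j for j in range(n)])      # phase ramp on basis states
    return X, Z

X, Z = shift_clock(3)
w = np.exp(2j * np.pi / 3)
# Weyl commutation relation: Z X = w * X Z
print(np.allclose(Z @ X, w * (X @ Z)))           # True
```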

  6. A Line Based Visualization of Code Evolution

    NARCIS (Netherlands)

    Voinea, S.L.; Telea, A.; Wijk, J.J. van

    2005-01-01

    The source code of software systems changes many times during the system lifecycle. We study how developers can get insight in these changes in order to understand the project context and the product artifacts. For this we propose new techniques for code evolution representation and visualization in

  7. A Framework for Reverse Engineering Large C++ Code Bases

    NARCIS (Netherlands)

    Telea, Alexandru; Byelas, Heorhiy; Voinea, Lucian

    2008-01-01

    When assessing the quality and maintainability of large C++ code bases, tools are needed for extracting several facts from the source code, such as: architecture, structure, code smells, and quality metrics. Moreover, these facts should be presented in such ways so that one can correlate them and fi

  8. A Framework for Reverse Engineering Large C++ Code Bases

    NARCIS (Netherlands)

    Telea, Alexandru; Byelas, Heorhiy; Voinea, Lucian

    2009-01-01

    When assessing the quality and maintainability of large C++ code bases, tools are needed for extracting several facts from the source code, such as: architecture, structure, code smells, and quality metrics. Moreover, these facts should be presented in such ways so that one can correlate them and fi

  9. Generalized Distributed Network Coding Based on Nonbinary Linear Block Codes for Multi-User Cooperative Communications

    CERN Document Server

    Rebelatto, João Luiz; Li, Yonghui; Vucetic, Branka

    2010-01-01

    In this work, we propose and analyze a generalized construction of distributed network codes for a network consisting of M users sending different information to a common base station through independent block fading channels. The aim is to increase the diversity order of the system without reducing its code rate. The proposed scheme, called generalized dynamic network codes (GDNC), is a generalization of the dynamic network codes (DNC) recently proposed by Xiao and Skoglund. The design of the network codes that maximize the diversity order is recognized as equivalent to the design of linear block codes over a nonbinary finite field under the Hamming metric. The proposed scheme offers a much better tradeoff between rate and diversity order. An outage probability analysis showing the improved performance is carried out, and computer simulation results are shown to agree with the analytical results.

  10. Deductive Glue Code Synthesis for Embedded Software Systems Based on Code Patterns

    Science.gov (United States)

    Liu, Jian; Fu, Jicheng; Zhang, Yansheng; Bastani, Farokh; Yen, I-Ling; Tai, Ann; Chau, Savio N.

    2006-01-01

    Automated code synthesis is a constructive process that can be used to generate programs from specifications. It can, thus, greatly reduce software development cost and time. The use of a formal code synthesis approach for software generation further increases the dependability of the system. Though code synthesis has many potential benefits, the synthesis techniques are still limited. Meanwhile, components are widely used in embedded system development. Applying code synthesis to the component-based software development (CBSD) process can greatly enhance the capability of code synthesis while reducing component composition efforts. In this paper, we discuss the issues and techniques for applying deductive code synthesis techniques to CBSD. For deductive synthesis in CBSD, a rule base is the key for inferring appropriate component composition. We use code patterns to guide the development of rules. Code patterns have been proposed to capture the typical usages of the components. Several general composition operations have been identified to facilitate systematic composition. We present the technique for rule development and automated generation of new patterns from existing code patterns. A case study of using this method in building a real-time control system is also presented.

  11. A New Evolutionary Algorithm Based on the Decimal Coding

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Traditional Evolutionary Algorithms (EAs) are based on binary codes, real-number codes, structure codes and so on. But these coding strategies have their own advantages and disadvantages for the optimization of functions. In this paper a new Decimal Coding Strategy (DCS), which is convenient for space division and alterable precision, is proposed, and the theoretical analysis of its implicit parallelism and convergence is also discussed. We also redesign several genetic operators for the decimal code. In order to utilize the historical information of the existing individuals in the process of evolution and avoid repeated exploring, the strategies of space shrinking and alterable precision are adopted. Finally, the evolutionary algorithm based on decimal coding (DCEA) was applied to the optimization of functions, the optimization of parameters, and mixed-integer nonlinear programming. Comparison with traditional GAs was made and the experimental results show that the performance of DCEA is better than that of traditional GAs.
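The decimal coding idea, including the alterable precision the abstract emphasizes, can be made concrete with a small sketch: an individual is a list of base-10 digits, and appending a digit narrows the search cell tenfold. The decode mapping and the refine operator are assumptions for illustration, not the paper's operators.

```python
import random

# Illustrative decimal coding: a digit list encodes a real value in [lo, hi].
def decode(digits, lo, hi):
    # Digits form a base-10 fraction, e.g. [5, 0] -> 0.50 of the range
    frac = sum(d * 10 ** -(i + 1) for i, d in enumerate(digits))
    return lo + frac * (hi - lo)

def refine(digits):
    # Alterable precision: one extra decimal digit shrinks the cell 10x
    return digits + [random.randint(0, 9)]

x = decode([5, 0], lo=-1.0, hi=1.0)
print(x)   # 0.0
```

Space shrinking then amounts to re-centering [lo, hi] around promising individuals before decoding with more digits.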

  12. FNT-based Reed-Solomon Erasure Codes

    CERN Document Server

    Soro, Alexandre

    2009-01-01

    This paper presents a new construction of Maximum-Distance Separable (MDS) Reed-Solomon erasure codes based on the Fermat Number Transform (FNT). Thanks to the FNT, these codes support practical coding and decoding algorithms with complexity O(n log n), where n is the number of symbols of a codeword. An open-source implementation shows that the encoding speed can reach 150 Mbps for codes of length up to several tens of thousands of symbols. These codes can be used as the basic component of the Information Dispersal Algorithm (IDA) system used in several P2P systems.
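The transform behind these codes can be sketched over the Fermat prime 2^16 + 1. For clarity this is the direct O(n^2) transform rather than the O(n log n) butterfly the paper relies on; the generator choice uses the standard fact that 3 is a primitive root modulo 65537.

```python
# Minimal Fermat Number Transform over GF(65537), p = 2^16 + 1 (Fermat
# prime F4). Direct O(n^2) evaluation for clarity, not the fast version.
P = 65537
G = 3   # primitive root modulo 65537

def fnt(vec, invert=False):
    n = len(vec)
    assert (P - 1) % n == 0, "length must divide p - 1"
    root = pow(G, (P - 1) // n, P)               # n-th root of unity in GF(p)
    if invert:
        root = pow(root, P - 2, P)               # inverse root via Fermat
    out = [sum(v * pow(root, i * j, P) for j, v in enumerate(vec)) % P
           for i in range(n)]
    if invert:
        n_inv = pow(n, P - 2, P)
        out = [(x * n_inv) % P for x in out]
    return out

data = [1, 2, 3, 4]
assert fnt(fnt(data), invert=True) == data       # round-trip check
print(fnt(data))
```

An RS erasure code built on this transform evaluates the data polynomial at roots of unity, so encoding and (syndrome-style) decoding reduce to transforms like the one above.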

  13. Sandia National Laboratories analysis code data base

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, C.W.

    1994-11-01

    Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The Laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code "ownership" and release status, and references describing the physical models and numerical implementation.

  15. WAVELET-BASED FINE GRANULARITY SCALABLE VIDEO CODING

    Institute of Scientific and Technical Information of China (English)

    Zhang Jiangshan; Zhu Guangxi

    2003-01-01

    This letter proposes an efficient wavelet-based Fine Granularity Scalable (FGS)coding scheme, where the base layer is encoded with a newly designed wavelet-based coder, and the enhancement layer is encoded with Progressive Fine Granularity Scalable (PFGS) coding.This algorithm involves multi-frame motion compensation, rate-distortion optimizing strategy with Lagrangian cost function and context-based adaptive arithmetic coding. In order to improve efficiency of the enhancement layer coding, an improved motion estimation scheme that uses both information from the base layer and the enhancement layer is also proposed in this letter. The wavelet-based coder significantly improves the coding efficiency of the base layer compared with MPEG-4 ASP (Advanced Simple Profile) and H.26L TML9. The PFGS coding is a significant improvement over MPEG-4 FGS coding at the enhancement layer. Experiments show that single layer coding efficiency gain of the proposed scheme is about 2.0-3.0dB and 0.3-1.0dB higher than that of MPEG-4 ASP and H.26L TML9, respectively. The overall coding efficiency gain of the proposed scheme is about 4.0-5.0dB higher than that of MPEG-4 FGS.

  16. On vocabulary size of grammar-based codes

    CERN Document Server

    Debowski, Lukasz

    2007-01-01

    We discuss inequalities holding between the vocabulary size, i.e., the number of distinct nonterminal symbols in a grammar-based compression for a string, and the excess length of the respective universal code, i.e., the code-based analog of algorithmic mutual information. The aim is to strengthen inequalities which were discussed in a weaker form in linguistics but shed some light on redundancy of efficiently computable codes. The main contribution of the paper is a construction of universal grammar-based codes for which the excess lengths can be bounded easily.
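To make "vocabulary size" concrete, here is a naive Re-Pair-style grammar compressor, offered as an assumption for illustration rather than the paper's code construction: each round replaces the most frequent adjacent pair with a fresh nonterminal, and the vocabulary size is the number of distinct nonterminal rules created.

```python
from collections import Counter

# Naive Re-Pair sketch: repeatedly replace the most frequent adjacent pair
# with a new nonterminal until no pair repeats. The rule dictionary is the
# grammar; its size is the vocabulary size discussed above.
def repair(text):
    rules, next_id = {}, 0
    seq = list(text)
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break
        nt = f"N{next_id}"; next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):                      # non-overlapping replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt); i += 2
            else:
                out.append(seq[i]); i += 1
        seq = out
    return seq, rules

seq, rules = repair("abababab")
print(len(rules))   # vocabulary size: 2
```

On "abababab" the grammar is N0 -> ab, N1 -> N0 N0, with residual string N1 N1, so the vocabulary size is 2.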

  18. Wavelet-based embedded zerotree extension to color coding

    Science.gov (United States)

    Franques, Victoria T.

    1998-03-01

    Recently, a new image compression algorithm was developed which employs the wavelet transform and a simple binary linear quantization scheme with an embedded coding technique to perform data compaction. This new family of coders, Embedded Zerotree Wavelet (EZW), provides better compression performance than the current JPEG coding standard at low bit rates. Since the EZW coding algorithm emerged, all of the published coding results related to this technique have been on monochrome images. In this paper the author has enhanced the original coding algorithm to yield a better compression ratio, and has extended wavelet-based zerotree coding to color images. Color imagery is often represented by several components, such as RGB, in which each component is generally processed separately. With color coding, each component could be compressed individually in the same manner as a monochrome image, therefore requiring a threefold increase in processing time. Most image coding standards employ de-correlated components, such as YIQ or Y, CB, CR, and subsampling of the 'chroma' components; such a coding technique is employed here. Results of the coding, including reconstructed images and coding performance, will be presented.

  19. Multiple descriptions based wavelet image coding

    Institute of Scientific and Technical Information of China (English)

    CHEN Hai-lin(陈海林); YANG Yu-hang(杨宇航)

    2004-01-01

    We present a simple and efficient scheme that combines multiple descriptions coding with wavelet transform under JPEG2000 image coding architecture. To reduce packet losses, controlled amounts of redundancy are added to the wavelet transform coefficients to produce multiple descriptions of wavelet coefficients during the compression process to produce multiple descriptions bit-stream of a compressed image. Even if a receiver gets only parts of descriptions (other descriptions being lost), it can still reconstruct image with acceptable quality. Specifically, the scheme uses not only high-performance wavelet transform to improve compression efficiency, but also multiple descriptions technique to enhance the robustness of the compressed image that is transmitted through unreliable network channels.
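The redundancy idea in the abstract, namely that either description alone still yields an acceptable reconstruction, can be sketched with two descriptions built from even/odd coefficient splits plus a coarsely quantized copy of the complementary half. The quantization step and the layout are illustrative assumptions, not the paper's scheme.

```python
# Two-description sketch: each description carries half the coefficients at
# full precision plus a coarse copy of the other half (the redundancy).
def make_descriptions(coeffs, step=8):
    coarse = [round(c / step) * step for c in coeffs]   # controlled redundancy
    d0 = (coeffs[0::2], coarse[1::2])
    d1 = (coeffs[1::2], coarse[0::2])
    return d0, d1

def reconstruct_from_one(desc, which):
    # Fill fine values at the positions this description owns, coarse elsewhere
    fine, coarse = desc
    out = [0] * (len(fine) + len(coarse))
    out[which::2] = fine
    out[1 - which::2] = coarse
    return out

c = [10, -3, 25, 7]
d0, d1 = make_descriptions(c)
print(reconstruct_from_one(d0, 0))   # [10, 0, 25, 8]
```

With both descriptions the decoder keeps the fine halves from each; losing one costs only the quantization error on half the coefficients.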

  20. Efficient Quantum Private Communication Based on Dynamic Control Code Sequence

    Science.gov (United States)

    Cao, Zheng-Wen; Feng, Xiao-Yi; Peng, Jin-Ye; Zeng, Gui-Hua; Qi, Jin

    2016-12-01

    Based on chaos and quantum properties, we propose a quantum private communication scheme with dynamic control code sequence. The initial sequence is obtained via chaotic systems, and the control code sequence is derived by grouping, XOR and extracting. A shift cycle algorithm is designed to enable the dynamic change of control code sequence. Analysis shows that transmission efficiency could reach 100 % with high dynamics and security.
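The classical part of the pipeline sketched above (chaotic sequence, grouping and XOR extraction, cyclic shifting) can be illustrated as follows. This is a hedged sketch: the logistic-map parameters, group size, and shift amount are illustrative assumptions, and the quantum transmission step is omitted entirely.

```python
# Sketch of the control-code pipeline: a chaotic (logistic-map) sequence is
# thresholded into bits, the bits are grouped and XOR-folded into a shorter
# control code, and a cyclic shift updates the code dynamically each round.
# All parameter choices here are illustrative, not the paper's.

def logistic_bits(x0=0.61, r=3.99, n=64):
    x, bits = x0, []
    for _ in range(n):
        x = r * x * (1 - x)          # logistic map iteration
        bits.append(1 if x > 0.5 else 0)
    return bits

def fold_xor(bits, group=8):
    """Group the bit stream and XOR the groups together -> control code."""
    code = [0] * group
    for i, b in enumerate(bits):
        code[i % group] ^= b
    return code

def shift_cycle(code, k=1):
    """Cyclically shift the control code so it changes between rounds."""
    k %= len(code)
    return code[k:] + code[:k]

code = fold_xor(logistic_bits())
next_code = shift_cycle(code)        # code used in the following round
```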

  1. A line-based visualization of code evolution

    OpenAIRE

    Voinea, SL Lucian; Telea, AC Alexandru; van Wijk, M.N.

    2005-01-01

    The source code of software systems changes many times during the system lifecycle. We study how developers can get insight into these changes in order to understand the project context and the product artifacts. For this we propose new techniques for code evolution representation and visualization interaction from a version-centric perspective. Central to our approach is a line-based display of the changing code, where each file version is shown as a column and the horizontal axis shows time. ...

  2. Context based Coding of Quantized Alpha Planes for Video Objects

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2002-01-01

    In object based video, each frame is a composition of objects that are coded separately. The composition is performed through the alpha plane that represents the transparency of the object. We present an alternative to MPEG-4 for coding of alpha planes that considers their specific properties....... Comparisons in terms of rate and distortion are provided, showing that the proposed coding scheme for still alpha planes is better than the algorithms for I-frames used in MPEG-4....

  3. Efficient Quantum Private Communication Based on Dynamic Control Code Sequence

    Science.gov (United States)

    Cao, Zheng-Wen; Feng, Xiao-Yi; Peng, Jin-Ye; Zeng, Gui-Hua; Qi, Jin

    2017-04-01

    Based on chaos and quantum properties, we propose a quantum private communication scheme with dynamic control code sequence. The initial sequence is obtained via chaotic systems, and the control code sequence is derived by grouping, XOR and extracting. A shift cycle algorithm is designed to enable the dynamic change of control code sequence. Analysis shows that transmission efficiency could reach 100 % with high dynamics and security.

  4. A New Approach to Coding in Content Based MANETs

    OpenAIRE

    Joy, Joshua; Yu, Yu-Ting; Perez, Victor; Lu, Dennis; Gerla, Mario

    2015-01-01

    In content-based mobile ad hoc networks (CB-MANETs), random linear network coding (NC) can be used to reliably disseminate large files under intermittent connectivity. Conventional NC involves random unrestricted coding at intermediate nodes. This however is vulnerable to pollution attacks. To avoid attacks, a brute force approach is to restrict the mixing at the source. However, source restricted NC generally reduces the robustness of the code in the face of errors, losses and mobility induc...

  5. Quantum superdense coding based on hyperentanglement

    Institute of Scientific and Technical Information of China (English)

    Zhao Rui-Tong; Guo Qi; Chen Li; Wang Hong-Fu; Zhang Shou

    2012-01-01

    We present a scheme for quantum superdense coding with hyperentanglement, in which the sender can transfer four bits of classical information by sending only one photon. The important device in the scheme is the hyperentangled Bell-state analyzer in both polarization and frequency degrees of freedom, which is also constructed in the paper by using a quantum nondemolition detector assisted by cross-Kerr nonlinearity. Our scheme can transfer more information with fewer resources than the existing schemes and is nearly deterministic and nondestructive.

  6. Grayscale Image Compression Based on Min Max Block Truncating Coding

    Directory of Open Access Journals (Sweden)

    Hilal Almarabeh

    2011-11-01

    Full Text Available This paper presents an image compression technique based on block truncating coding. In this work, a min max block truncating coding (MM_BTC) scheme is presented for grayscale image compression, relying on dividing the image into non-overlapping blocks. MM_BTC differs from other block truncating coding schemes, such as block truncating coding (BTC), in the way the quantization levels are selected in order to remove redundancy. Objective measures such as Bit Rate (BR), Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), and Redundancy (R) were used to present a detailed evaluation of MM_BTC image quality.
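The block-truncation idea can be sketched in a few lines. The abstract does not give the exact MM_BTC quantizer, so the sketch below takes one plausible reading of the name: use the block minimum and maximum as the two reconstruction levels and threshold at their midpoint, then measure PSNR as the abstract describes.

```python
import math

def mm_btc_block(block):
    """Quantize one block to 1 bit/pixel with min/max reconstruction levels
    (one plausible reading of MM_BTC; threshold at the midpoint)."""
    lo, hi = min(block), max(block)
    t = (lo + hi) / 2
    bitmap = [1 if p >= t else 0 for p in block]
    return bitmap, lo, hi

def mm_btc_decode(bitmap, lo, hi):
    return [hi if b else lo for b in bitmap]

def psnr(orig, rec):
    mse = sum((o - r) ** 2 for o, r in zip(orig, rec)) / len(orig)
    return float('inf') if mse == 0 else 10 * math.log10(255 ** 2 / mse)

block = [12, 15, 200, 210, 14, 205, 13, 198]   # one flattened 4x2 block
bitmap, lo, hi = mm_btc_block(block)
rec = mm_btc_decode(bitmap, lo, hi)
```

Each block then costs one bit per pixel plus the two levels, which is where the rate/distortion trade-off evaluated in the paper comes from.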

  7. ASME AG-1 Section FC Qualified HEPA Filters; a Particle Loading Comparison - 13435

    Energy Technology Data Exchange (ETDEWEB)

    Stillo, Andrew [Camfil Farr, 1 North Corporate Drive, Riverdale, NJ 07457 (United States); Ricketts, Craig I. [New Mexico State University, Department of Engineering Technology and Surveying Engineering, P.O. Box 30001 MSC 3566, Las Cruces, NM 88003-8001 (United States)

    2013-07-01

    High Efficiency Particulate Air (HEPA) Filters used to protect personnel, the public and the environment from airborne radioactive materials are designed, manufactured and qualified in accordance with ASME AG-1 Code section FC (HEPA Filters) [1]. The qualification process requires that filters manufactured in accordance with this ASME AG-1 code section must meet several performance requirements. These requirements include performance specifications for resistance to airflow, aerosol penetration, resistance to rough handling, resistance to pressure (includes high humidity and water droplet exposure), resistance to heated air, spot flame resistance and a visual/dimensional inspection. None of these requirements evaluate the particle loading capacity of a HEPA filter design. Concerns over the particle loading capacity of the different designs included within the ASME AG-1 section FC code [1] have been voiced in the recent past. Additionally, the ability of a filter to maintain its integrity, if subjected to severe operating conditions such as elevated relative humidity, fog conditions or elevated temperature, after loading in use over long service intervals is also a major concern. Although currently qualified HEPA filter media are likely to have similar loading characteristics when evaluated independently, filter pleat geometry can have a significant impact on the in-situ particle loading capacity of filter packs. Aerosol particle characteristics, such as size and composition, may also have a significant impact on filter loading capacity. Test results comparing filter loading capacities for three different aerosol particles and three different filter pack configurations are reviewed. The information presented represents an empirical performance comparison among the filter designs tested. The results may serve as a basis for further discussion toward the possible development of a particle loading test to be included in the qualification requirements of ASME AG-1.

  8. Efficient image compression scheme based on differential coding

    Science.gov (United States)

    Zhu, Li; Wang, Guoyou; Liu, Ying

    2007-11-01

    Embedded zerotree (EZW) and Set Partitioning in Hierarchical Trees (SPIHT) coding, introduced by J.M. Shapiro and Amir Said, are very effective and widely used in many fields. In this study, a brief explanation of the principles of SPIHT is first provided, and then some improvements to the SPIHT algorithm based on experiments are introduced. 1) For redundancy among the coefficients in the wavelet domain, we propose a differential method to reduce it during coding. 2) Meanwhile, based on the characteristics of the coefficient distribution in each subband, we adjust the sorting pass and optimize the differential coding in order to reduce redundant coding in each subband. 3) Coding results, computed at a given threshold, show that differential coding raises the compression ratio and greatly improves the quality of the reconstructed image: at 0.5 bpp (bits per pixel), the PSNR (Peak Signal to Noise Ratio) of the reconstructed image exceeds that of standard SPIHT by 0.2 to 0.4 dB.
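The differential step in point 1) can be sketched as plain delta coding of neighbouring coefficients inside a subband: when neighbours are correlated, the differences have smaller magnitudes and therefore cost fewer bits. A minimal sketch, with illustrative coefficient values (the paper's exact scan order and interaction with the sorting pass are not specified in the abstract):

```python
# Delta-code neighbouring coefficients inside a subband instead of coding
# them directly; small differences are cheaper to entropy-code.

def delta_encode(coeffs):
    prev, out = 0, []
    for c in coeffs:
        out.append(c - prev)   # transmit only the difference
        prev = c
    return out

def delta_decode(deltas):
    prev, out = 0, []
    for d in deltas:
        prev += d
        out.append(prev)
    return out

subband = [34, 36, 35, 33, 30, 31]   # smooth, correlated coefficients
deltas = delta_encode(subband)       # much smaller magnitudes after the first
```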

  9. A MCTF video coding scheme based on distributed source coding principles

    Science.gov (United States)

    Tagliasacchi, Marco; Tubaro, Stefano

    2005-07-01

    Motion Compensated Temporal Filtering (MCTF) has proved to be an efficient coding tool in the design of open-loop scalable video codecs. In this paper we propose an MCTF video coding scheme based on lifting, where the prediction step is implemented using PRISM (Power efficient, Robust, hIgh compression Syndrome-based Multimedia coding), a video coding framework built on distributed source coding principles. We study the effect of integrating the update step at the encoder or at the decoder side. We show that the latter approach improves the quality of the side information exploited during decoding. We present analytical results obtained by modeling the video signal along the motion trajectories as a first-order auto-regressive process. We show that performing the update step at the decoder halves the contribution of the quantization noise. We also include experimental results with real video data that demonstrate the potential of this approach when the video sequences are coded at low bitrates.
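The lifting structure referred to above can be sketched on a 1-D signal as a stand-in for the temporal axis. This minimal sketch uses 5/3 predict/update steps without motion compensation or PRISM (both assumptions dropped for brevity); moving the update step to the decoder, as the paper proposes, would mean the encoder transmits only the prediction residuals below.

```python
# One level of 5/3 lifting: predict (high-pass) then update (low-pass).

def lifting_53(x):
    """Return (low-pass, high-pass) subbands of the even/odd split of x."""
    even, odd = x[::2], x[1::2]
    # Predict: high-pass = odd samples minus the average of their neighbours
    high = [o - (even[i] + even[min(i + 1, len(even) - 1)]) / 2
            for i, o in enumerate(odd)]
    # Update: low-pass = even samples plus a quarter of neighbouring highs
    low = [e + (high[max(i - 1, 0)] + high[min(i, len(high) - 1)]) / 4
           for i, e in enumerate(even)]
    return low, high

# On a linear ramp the predictor is exact, so the high-pass band is zero
# everywhere except at the boundary.
low, high = lifting_53([10, 12, 14, 16, 18, 20, 22, 24])
```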

  10. A Modified Vertex—Based Shape Coding Algorithm

    Institute of Scientific and Technical Information of China (English)

    石旭利; 张兆扬

    2002-01-01

    This paper proposes a modified shape coding algorithm called modified vertex-based shape coding (MVBSC) to encode the boundary of a visual object compactly by using a modified polygonal approximation approach, which uses modified curvature scale space (CSS) theory to extract feature points.

  11. GC-ASM: Synergistic Integration of Graph-Cut and Active Shape Model Strategies for Medical Image Segmentation.

    Science.gov (United States)

    Chen, Xinjian; Udupa, Jayaram K; Alavi, Abass; Torigian, Drew A

    2013-05-01

    Image segmentation methods may be classified into two categories: purely image based and model based. Each of these two classes has its own advantages and disadvantages. In this paper, we propose a novel synergistic combination of the image based graph-cut (GC) method with the model based ASM method to arrive at the GC-ASM method for medical image segmentation. A multi-object GC cost function is proposed which effectively integrates the ASM shape information into the GC framework. The proposed method consists of two phases: model building and segmentation. In the model building phase, the ASM model is built and the parameters of the GC are estimated. The segmentation phase consists of two main steps: initialization (recognition) and delineation. For initialization, an automatic method is proposed which estimates the pose (translation, orientation, and scale) of the model, and obtains a rough segmentation result which also provides the shape information for the GC method. For delineation, an iterative GC-ASM algorithm is proposed which performs finer delineation based on the initialization results. The proposed methods are implemented to operate on 2D images and evaluated on clinical chest CT, abdominal CT, and foot MRI data sets. The results show the following: (a) An overall delineation accuracy of TPVF > 96%, FPVF segmentation step compared to GC which requires seed specification and improves on the accuracy of GC. (e) One disadvantage of GC-ASM is its increased computational expense owing to the iterative nature of the algorithm.

  12. Central Decoding for Multiple Description Codes based on Domain Partitioning

    Directory of Open Access Journals (Sweden)

    M. Spiertz

    2006-01-01

    Full Text Available Multiple Description Codes (MDC) can be used to trade redundancy against packet loss resistance for transmitting data over lossy diversity networks. In this work we focus on MD transform coding based on domain partitioning. Compared to Vaishampayan’s quantizer based MDC, domain based MD coding is a simple approach for generating different descriptions, by using different quantizers for each description. Commonly, only the highest rate quantizer is used for reconstruction. In this paper we investigate the benefit of using the lower rate quantizers to enhance the reconstruction quality at the decoder side. The comparison is done on artificial source data and on image data.
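The central-decoding benefit described above can be illustrated with scalar quantizers: each description tells the decoder which cell of its quantizer the value fell into, and intersecting the two cells gives a tighter interval than either alone. Step sizes and offsets below are illustrative assumptions, not the paper's.

```python
import math

# Each description quantizes the same value with a different uniform
# quantizer; the central decoder intersects the two quantization cells
# instead of using only the finest one.

def cell(x, step, offset=0.0):
    """Quantization cell [lo, hi) containing x for a uniform quantizer."""
    i = math.floor((x - offset) / step)
    lo = offset + i * step
    return lo, lo + step

def central_decode(c1, c2):
    """Reconstruct at the midpoint of the intersection of both cells."""
    lo, hi = max(c1[0], c2[0]), min(c1[1], c2[1])
    return (lo + hi) / 2

x = 3.3
c1 = cell(x, step=1.0)               # description 1: cell [3.0, 4.0)
c2 = cell(x, step=0.8, offset=0.5)   # description 2: shifted grid, [2.9, 3.7)
xhat = central_decode(c1, c2)        # midpoint of [3.0, 3.7) -> 3.35
```

Here the intersection is narrower than either cell, so the central estimate (error 0.05) beats reconstructing from description 1 alone (midpoint 3.5, error 0.2).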

  13. Differential modulation based on space-time block codes

    Institute of Scientific and Technical Information of China (English)

    李正权; 胡光锐

    2004-01-01

    A differential modulation scheme using space-time block codes is put forward. Compared with other schemes, our scheme has lower computational complexity and a simpler decoder. In the case of three or four transmit antennas, our scheme has a higher rate, a higher coding gain, and a lower bit error rate for a given rate. We then ran simulations for space-time block codes as well as group codes in the case of two, three, four and five transmit antennas. The simulations show that with two transmit antennas, one receive antenna, and a code rate of 4 bits/s/Hz, the differential STBC method outperforms the differential group-code method by 4 dB. With three, four and five transmit antennas, one receive antenna, and a code rate of 3 bits/s/Hz, the differential STBC method outperforms the differential group-code method by 5 dB, 6.5 dB and 7 dB, respectively. In other words, the differential modulation scheme based on space-time block codes is better than the corresponding differential modulation scheme based on group codes.

  14. Iterative Decoding of Parallel Concatenated Block Codes and Coset Based MAP Decoding Algorithm for F24 Code

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A multi-dimensional concatenation scheme for block codes is introduced, in which information symbols are interleaved and re-encoded more than once. It provides a convenient platform to design high-performance codes with flexible interleaver size. Coset-based MAP soft-in/soft-out decoding algorithms are presented for the F24 code. Simulation results show that the proposed coding scheme can achieve high coding gain with flexible interleaver length and very low decoding complexity.

  15. 75 FR 24323 - American Society of Mechanical Engineers (ASME) Codes and New and Revised ASME Code Cases

    Science.gov (United States)

    2010-05-04

    Redesignate paragraph (b)(2)(xx) as ``System leakage tests''. ... in the testing performed by Battelle Columbus Laboratories, which concluded that ferritic steels tend... (see ``... Piping at LWR Temperatures'' for the Battelle testing results). Therefore, this additional requirement...

  16. An Asymmetric Fingerprinting Scheme based on Tardos Codes

    CERN Document Server

    Charpentier, Ana; Furon, Teddy; Cox, Ingemar

    2010-01-01

    Tardos codes are currently the state of the art in the design of practical collusion-resistant fingerprinting codes. Tardos codes rely on a secret vector drawn from a publicly known probability distribution in order to generate each Buyer's fingerprint. For security purposes, this secret vector must not be revealed to the Buyers. To prevent an untrustworthy Provider from forging a copy of a Work with an innocent Buyer's fingerprint, previous asymmetric fingerprinting algorithms enforce the idea of the Buyers generating their own fingerprints. Applying this concept to Tardos codes is challenging since the fingerprint must be based on this secret vector. This paper provides the first solution for an asymmetric fingerprinting protocol dedicated to Tardos codes. The motivation comes from a new attack, in which an untrustworthy Provider, by modifying his secret vector, frames an innocent Buyer.
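The symmetric generation step that the asymmetric protocol above must protect can be sketched directly: the Provider draws a secret bias vector p from the arcsine density truncated to [t, 1-t], then each Buyer's fingerprint bit i is 1 with probability p_i. Code length and cutoff below are illustrative, not tuned for any collusion bound.

```python
import math
import random

def tardos_bias_vector(m, t=0.01, rng=random):
    """Secret vector p: inverse-CDF samples of the arcsine density on [t, 1-t]."""
    t_prime = math.asin(math.sqrt(t))
    ps = []
    for _ in range(m):
        r = t_prime + rng.random() * (math.pi / 2 - 2 * t_prime)
        ps.append(math.sin(r) ** 2)   # sin^2 of a uniform angle -> arcsine law
    return ps

def fingerprint(ps, rng=random):
    """One Buyer's code: bit i is 1 with probability p_i."""
    return [1 if rng.random() < p else 0 for p in ps]

rng = random.Random(7)
p = tardos_bias_vector(100, rng=rng)   # Provider's secret
fp = fingerprint(p, rng=rng)           # one Buyer's fingerprint
```

The attack motivating the paper is visible here: whoever controls `p` controls the distribution of every fingerprint, which is why the asymmetric protocol keeps the Buyer's realized bits hidden from the Provider.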

  17. Unfolding code for neutron spectrometry based on neural nets technology

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M.; Vega C, H. R., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica, Apdo. Postal 336, 98000 Zacatecas (Mexico)

    2012-10-15

    The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Neural Networks have been widely investigated. In this work, a neutron spectrum unfolding code based on neural net technology is presented. This unfolding code, called Neutron Spectrometry and Dosimetry by means of Artificial Neural Networks, was designed in a graphical interface under the LabVIEW programming environment. The core of the code is an embedded neural network architecture, previously optimized by the 'Robust Design of Artificial Neural Networks' methodology. The code is easy to use, friendly and intuitive to the user. It was designed for a Bonner Sphere System based on a {sup 6}LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. The main feature of the code is that only seven count rates measured with a Bonner sphere spectrometer are required as input to simultaneously unfold the 60 energy bins of the neutron spectrum and to calculate 15 dosimetric quantities for radiation protection purposes. The code generates a full report in html format with all relevant information. (Author)

  18. Edge-preserving Intra Depth Coding based on Context-coding and H.264/AVC

    DEFF Research Database (Denmark)

    Zamarin, Marco; Salmistraro, Matteo; Forchhammer, Søren;

    2013-01-01

    Depth map coding plays a crucial role in 3D Video communication systems based on the “Multi-view Video plus Depth” representation as view synthesis performance is strongly affected by the accuracy of depth information, especially at edges in the depth map image. In this paper an efficient algorithm...... each approximated by a flat surface. Edge information is encoded by means of contextcoding with an adaptive template. As a novel element, the proposed method allows exploiting the edge structure of previously encoded edge macroblocks during the context-coding step to further increase compression...

  19. Wyner-Ziv Coding Based on Multidimensional Nested Lattices

    CERN Document Server

    Ling, Cong; Belfiore, Jean-Claude

    2011-01-01

    Distributed source coding (DSC) addresses the compression of correlated sources without communication links among them. This paper is concerned with the Wyner-Ziv problem: coding of an information source with side information available only at the decoder in the form of a noisy version of the source. Both the theoretical analysis and the code design are addressed in the framework of multi-dimensional nested lattice coding (NLC). For the theoretical analysis, accurate computation of the rate-distortion function is given under the high-resolution assumption, and a new upper bound using the derivative of the theta series is derived. For practical code design, several techniques with low complexity are proposed. Compared to the existing Slepian-Wolf coded nested quantization (SWC-NQ) for Wyner-Ziv coding based on one- or two-dimensional lattices, the proposed multi-dimensional NLC can offer better performance at arguably lower complexity, since it does not require the second stage of Slepian-Wolf coding.
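The nested-lattice mechanism can be shown in one dimension as a stand-in for the multi-dimensional lattices of the paper. In this sketch the fine lattice has step 1 and the coarse (shaping) lattice step 4; the encoder transmits only the coset index (2 bits), and the decoder resolves the ambiguity using its side information y. Steps, nesting ratio, and values are illustrative assumptions.

```python
QF, QC = 1.0, 4.0   # fine and coarse lattice steps (nesting ratio 4)

def encode(x):
    """Quantize to the fine lattice, send only the coset modulo the coarse one."""
    q = round(x / QF)
    return q % int(QC / QF)

def decode(coset, y):
    """Pick the coset member (coset*QF + k*QC) closest to the side info y."""
    k0 = round((y - coset * QF) / QC)
    cands = [coset * QF + k * QC for k in (k0 - 1, k0, k0 + 1)]
    return min(cands, key=lambda c: abs(c - y))

x, y = 10.3, 9.8          # source and correlated side information
idx = encode(x)           # 2-bit coset index instead of the full value
xhat = decode(idx, y)     # side info disambiguates among coset members
```

Decoding succeeds as long as y stays within half a coarse-lattice step of the fine-lattice point, which is exactly the correlation assumption Wyner-Ziv coding exploits.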

  20. Multiple description video coding based on residuum compensation

    Institute of Scientific and Technical Information of China (English)

    ZHAO AnBang; YU Yun; SUN GuoCang; LI GuanFang; HUI JunYing

    2009-01-01

    As one of the focus areas of robust video coding and transmission, multiple description video coding has attracted comprehensive attention. The low coding efficiency of multiple description video coding has always been the major challenge in this field. In order to improve its performance, this paper proposes a scheme based on residuum compensation. In this scheme, each original video frame is first encoded with a traditional encoder to obtain the first description. Then the error residuum of each coded frame is added to the corresponding original frame, and the resulting compensated frame is encoded by another traditional encoder to generate the second description. This work uses JM10.0 of H.264/AVC as the video codec for the experiments. The experimental results show that the proposed scheme has good coding performance: compared to a state-of-the-art traditional single-description encoder, the efficiency can be as high as 90%, i.e., the redundancy is only 10% to 20%.

  1. ASM LabCap's contributions to disease surveillance and the International Health Regulations (2005).

    Science.gov (United States)

    Specter, Steven; Schuermann, Lily; Hakiruwizera, Celestin; Sow, Mah-Séré Keita

    2010-12-03

    The revised International Health Regulations [IHR(2005)], which require the Member States of the World Health Organization (WHO) to develop core capacities to detect, assess, report, and respond to public health threats, are bringing new challenges for national and international surveillance systems. As more countries move toward implementation and/or strengthening of their infectious disease surveillance programs, the strengthening of clinical microbiology laboratories becomes increasingly important because they serve as the first-line responders to detect new and emerging microbial threats, re-emerging infectious diseases, the spread of antibiotic resistance, and the possibility of bioterrorism. In fact, IHR(2005) Core Capacity #8, "Laboratory", requires that laboratory services be a part of every phase of alert and response. Public health laboratories in many resource-constrained countries require financial and technical assistance to build their capacity. In recognition of this, in 2006, the American Society for Microbiology (ASM) established an International Laboratory Capacity Building Program, LabCap, housed under the ASM International Board. ASM LabCap utilizes ASM's vast resources and its membership's expertise (40,000 microbiologists worldwide) to strengthen clinical and public health laboratory systems in low and low-middle income countries. ASM LabCap's program activities align with IHR(2005) by building the capability of resource-constrained countries to develop quality-assured, laboratory-based information which is critical to disease surveillance and the rapid detection of disease outbreaks, whether they stem from natural, deliberate or accidental causes. ASM LabCap helps build laboratory capacity under a cooperative agreement with the U.S. Centers for Disease Control and Prevention (CDC) and under a sub-contract with the Program for Appropriate Technology in Health (PATH) funded by the United States Agency for International Development (USAID).

  2. Selection of Component Codes for Turbo Coding Based on Convergence properties

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl

    1999-01-01

    The turbo decoding is a sub-optimal decoding, i.e. it is not a maximum likelihood decoding. It is important to be aware of this fact when the parameters for the scheme are chosen. This goes especially for the selection of component codes, where the selection often has been based solely on the performance at high SNR's. We will show that it is important to base the choice on the performance at low SNR's, i.e. the convergence properties, as well. Further, the study of the performance with different component codes may lead to an understanding of the convergence process in the turbo codes.

  3. Vision-based reading system for color-coded bar codes

    Science.gov (United States)

    Schubert, Erhard; Schroeder, Axel

    1996-02-01

    Barcode systems are used to mark commodities, articles and products with price and article numbers. The advantage of barcode systems is the safe and rapid availability of information about the product. The size of the barcode depends on the barcode system used and the resolution of the barcode scanner. Nevertheless, there is a strong correlation between the information content and the length of the barcode. To increase the information content, new 2D-barcode systems like CodaBlock or PDF-417 have been introduced. In this paper we present a different way to increase the information content of a barcode: the color coded barcode. The new color coded barcode is created by offset printing of three colored barcodes, each barcode carrying different information. Therefore, three times more information content can be accommodated in the area of a black printed barcode. This kind of color coding is usable with the standard 1D- and 2D-barcodes. We developed two reading devices for the color coded barcodes. The first is a vision based system, consisting of a standard color camera and a PC-based color frame grabber; omnidirectional barcode decoding is possible with this reading device. The second is a bi-directional handscanner. Both systems use a color separation process to separate the color image of the barcodes into three independent grayscale images. In the case of the handscanner the image consists of one line only. After the color separation, the three grayscale barcodes can be decoded with standard image processing methods. In principle, the color coded barcode can be used everywhere instead of the standard barcode. Typical applications are found in medical technology, stock keeping, and the identification of electronic modules.
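The color-separation step described above can be sketched as splitting each RGB pixel into three channel planes and binarizing each plane, after which each plane is an ordinary black-and-white bar pattern for a standard decoder. The sketch assumes idealized inks; a real reader would need color calibration and crosstalk correction.

```python
# Split one RGB scanline into three independent binarized bar patterns,
# one per channel (1 = bar, i.e. the ink absorbs that channel).

def separate(scanline):
    planes = []
    for ch in range(3):
        planes.append([1 if px[ch] < 128 else 0 for px in scanline])
    return planes

# Illustrative scanline: white paper, then red, green, blue inks, then black
line = [(255, 255, 255), (255, 0, 0), (0, 255, 0), (0, 0, 255), (0, 0, 0)]
r_bars, g_bars, b_bars = separate(line)
```

Note that, e.g., pure red ink is bright in the red channel, so it contributes bars only to the green and blue planes; black (overprint of all three) appears as a bar in every plane.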

  4. A Novel Block-Based Scheme for Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Qi-Bin Hou

    2014-06-01

    Full Text Available It is well-known that for a given sequence, its optimal codeword length is fixed. Many coding schemes have been proposed to make the codeword length as close to the optimal value as possible. In this paper, a new block-based coding scheme operating on the subsequences of a source sequence is proposed. It is proved that the optimal codeword lengths of the subsequences are not larger than that of the given sequence. Experimental results using arithmetic coding will be presented.
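The subsequence claim can be illustrated with zero-order empirical entropy, which lower-bounds the ideal arithmetic-coding length: coding two halves of a sequence with their own statistics never needs more bits than coding the whole sequence with a single model. (This sketch ignores the cost of signalling the per-block models, which any practical block-based scheme must pay.)

```python
import math
from collections import Counter

def bits(seq):
    """n * H(seq): ideal code length of seq under its own symbol frequencies."""
    n, counts = len(seq), Counter(seq)
    return -sum(c * math.log2(c / n) for c in counts.values())

s = "aaaaaaabbbbbbb"          # statistics differ sharply between the halves
whole = bits(s)               # one model for the whole sequence: 14 bits
split = bits(s[:7]) + bits(s[7:])   # per-half models: 0 bits
```

The gap is largest exactly when the source statistics vary along the sequence, which is the case the block-based scheme targets.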

  5. Wavelet based hierarchical coding scheme for radar image compression

    Science.gov (United States)

    Sheng, Wen; Jiao, Xiaoli; He, Jifeng

    2007-12-01

    This paper presents a wavelet based hierarchical coding scheme for radar image compression. The radar signal is first quantized to a digital signal and reorganized as a raster-scanned image according to the radar's pulse repetition frequency. After reorganization, the reformed image is decomposed into blocks of different frequency bands by 2-D wavelet transformation; each block is quantized and coded by the Huffman coding scheme. A demonstration system was developed, showing that under real-time processing requirements the compression ratio can be very high, with no significant loss of target signal in the restored radar image.

  6. Machine function based control code algebras

    NARCIS (Netherlands)

    Bergstra, J.A.

    2008-01-01

    Machine functions have been introduced by Earley and Sturgis in [6] in order to provide a mathematical foundation of the use of the T-diagrams proposed by Bratman in [5]. Machine functions describe the operation of a machine at a very abstract level. A theory of hardware and software based on machin

  7. Engineering students catch top prizes at ASME competition

    OpenAIRE

    Crumbley, Liz

    2006-01-01

    The Virginia Tech chapter of the American Society of Mechanical Engineers (ASME) carried away several top awards, including one for the design of a fishing apparatus for a quadriplegic, during the recent ASME District F student conference hosted by the University of Tennessee.

  8. A New Video Coding Method Based on Improving Detail Regions

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The Moving Pictures Expert Group (MPEG) and H.263 standard coding methods are widely used in video compression. However, the visual quality of detail regions such as the eyes and mouth at the decoder is unsatisfactory to viewers, particularly in conference telephone or videophone applications. A new coding method based on improving detail regions is presented in this paper. Experimental results show that this method can improve the visual quality at the decoder.

  9. 78 FR 37721 - Approval of American Society of Mechanical Engineers' Code Cases

    Science.gov (United States)

    2013-06-24

    ... Engineers' Code Cases AGENCY: Nuclear Regulatory Commission. ACTION: Draft regulatory guides; request for... regulatory guides (DG), DG-1230, ``Design, Fabrication and Materials Code Case Acceptability, ASME Section III''; DG-1231, ``Inservice Inspection Code Case Acceptability, ASME Section XI, Division 1''; and...

  10. HIGH-PERFORMANCE SIMPLE-ENCODING GENERATOR-BASED SYSTEMATIC IRREGULAR LDPC CODES AND RESULTED PRODUCT CODES

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Low-Density Parity-Check (LDPC) codes are one of the most exciting topics in the coding theory community. They are of great importance in both theory and practical communications over noisy channels. The main advantage of LDPC codes is their relatively lower decoding complexity compared with turbo codes, while the disadvantage is their higher encoding complexity. In this paper, a new approach is first proposed to construct high-performance irregular systematic LDPC codes based on a sparse generator matrix, which can significantly reduce the encoding complexity under the same decoding complexity as that of regular or irregular LDPC codes defined by a traditional sparse parity-check matrix. Then, the proposed generator-based systematic irregular LDPC codes are adopted as constituent block codes in rows and columns to design a new family of product codes, which can also be interpreted as irregular LDPC codes characterized by a graph and thus decoded iteratively. Finally, the performance of the generator-based LDPC codes and the resulting product codes is investigated over an Additive White Gaussian Noise (AWGN) channel and compared with conventional LDPC codes under the same conditions of decoding complexity and channel noise.
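The encoding-complexity advantage of a sparse systematic generator G = [I | P] can be sketched directly: the codeword is the message followed by parity bits, each parity bit an XOR of only a few message bits (the sparse columns of P). The tiny P below is a toy assumption, not the paper's construction.

```python
# Each parity column listed as the message positions it XORs (sparse P).
P_COLS = [[0, 1], [1, 2], [0, 3], [2, 3]]

def encode(msg):
    """Systematic GF(2) encoding: codeword = message + sparse parity bits."""
    parity = [0] * len(P_COLS)
    for j, rows in enumerate(P_COLS):
        for i in rows:
            parity[j] ^= msg[i]
    return msg + parity          # message bits appear verbatim (systematic)

cw = encode([1, 0, 1, 1])
```

Encoding cost is proportional to the number of ones in P, which is what keeps the generator-based construction cheap compared with solving for parity bits from a dense generator.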

  11. PERMUTATION-BASED POLYMORPHIC STEGO-WATERMARKS FOR PROGRAM CODES

    Directory of Open Access Journals (Sweden)

    Denys Samoilenko

    2016-06-01

    Full Text Available Purpose: One of the most important current trends in program code protection is code marking. The problem consists in creating digital “watermarks” which allow distinguishing different copies of the same program code. Such marks could be useful for authorship protection, for numbering code copies, for monitoring program propagation, and for information security purposes in client-server communication processes. Methods: We used methods of digital steganography adapted for program codes as text objects. The same-shape-symbols method was transformed into a same-semantic-element method due to features of codes that distinguish them from ordinary texts. We use a dynamic principle of mark formation, making the codes polymorphic. Results: We examined the combinatorial capacity of the permutations possible in program codes. It was shown that a set of 5-7 polymorphic variables is suitable for most modern network applications. Mark creation and restoration algorithms were proposed and discussed. The main algorithm is based on full and partial permutations of variable names and their declaration order. The algorithm for partial permutation enumeration was optimized for computational complexity. PHP code fragments which realize the algorithms were listed. Discussion: The method proposed in this work allows distinguishing each client-server connection. If a clone of some network resource is found, the method can provide information about the included marks and thereby data on the IP address, date and time, and authentication information of the client who copied the resource. Usage of polymorphic stego-watermarks should improve information security in network communications.
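The permutation capacity described above can be sketched with a factorial-number-system (Lehmer-code) mapping: with k polymorphic variables there are k! declaration orders, so a mark in [0, k!-1] is embedded by emitting the declarations in the permutation with that index and recovered by inverting the ordering. The variable names are hypothetical placeholders; the paper itself works with PHP source.

```python
import math

VARS = ["$a", "$b", "$c", "$d", "$e"]   # 5 polymorphic variables -> 120 marks

def embed(mark, names=VARS):
    """Declaration order encoding `mark` (lexicographic permutation index)."""
    perm, order = list(names), []
    while perm:
        f = math.factorial(len(perm) - 1)
        idx, mark = divmod(mark, f)
        order.append(perm.pop(idx))
    return order

def extract(order, names=VARS):
    """Recover the mark from an observed declaration order."""
    perm, mark = list(names), 0
    for name in order:
        idx = perm.index(name)
        mark += idx * math.factorial(len(perm) - 1)
        perm.pop(idx)
    return mark

order = embed(42)   # emit declarations in this order for copy #42
```

With 5 variables the capacity is 120 distinguishable copies, and 7 variables give 5040, matching the 5-7 variable range the abstract reports for modern network applications.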

  12. Depth-based Multi-View 3D Video Coding

    DEFF Research Database (Denmark)

    Zamarin, Marco

    maps is also addressed: an efficient scheme for stereoscopic disparity maps based on bit-plane decomposition and context-based arithmetic coding is proposed. Inter-view redundancy is exploited by means of disparity warping. Major gains in compression efficiency are noticed when comparing with a number...

  13. The algorithm of malicious code detection based on data mining

    Science.gov (United States)

    Yang, Yubo; Zhao, Yang; Liu, Xiabi

    2017-08-01

    Traditional malicious code detection technology has low accuracy and insufficient detection capability for new variants. In malicious code detection based on data mining, the indicators are not accurate enough and the classification detection efficiency is relatively low. This paper proposes an information gain ratio indicator based on N-grams to choose signatures; this indicator can accurately reflect the detection weight of a signature, and a C4.5 decision tree is used to improve the classification detection algorithm.
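
The gain-ratio criterion can be sketched as follows: for each candidate n-gram signature, split the training samples on "contains the n-gram" and divide the information gain by the split's intrinsic value, as in C4.5. The toy corpus below is invented; the paper's actual feature extraction is not shown here.

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def gain_ratio(samples, labels, gram):
    """Information gain ratio of the binary feature 'sample contains gram'."""
    has = [gram in s for s in samples]
    n = len(samples)
    cond = 0.0
    for v in (True, False):
        sub = [l for l, h in zip(labels, has) if h == v]
        if sub:
            cond += len(sub) / n * entropy(sub)
    gain = entropy(labels) - cond       # information gain of the split
    iv = entropy(has)                   # intrinsic value (split information)
    return gain / iv if iv else 0.0

# Toy corpus: hex byte n-grams as signatures, 1 = malicious, 0 = benign.
samples = ["6a40e8", "6a41e8", "909090", "90cc90"]
labels = [1, 1, 0, 0]
score = gain_ratio(samples, labels, "6a4")
```

Signatures are then ranked by this score before being fed to the decision-tree learner.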

  14. RELAY ALGORITHM BASED ON NETWORK CODING IN WIRELESS LOCAL NETWORK

    Institute of Scientific and Technical Information of China (English)

    Wang Qi; Wang Qingshan; Wang Dongxue

    2013-01-01

    Network coding is a new information technology of the 21st century. It can enhance network throughput and save energy consumption, and was originally based on a single transmission rate. However, with the development of wireless networks and equipment, wireless local network MAC protocols already support multi-rate transmission. This paper investigates the optimal relay selection problem based on network coding. Firstly, the problem is formulated as an optimization problem. Moreover, a relay algorithm based on network coding is proposed and the transmission time gain of our algorithm over the traditional relay algorithm is analyzed. Lastly, we compare the total transmission time and energy consumption of our proposed algorithm, Network Coding with Relay Assistance (NCRA), Transmission Request (TR), and Direct Transmission (DT) without relay, using IEEE 802.11b. The simulation results demonstrate that our algorithm, which improves the coding opportunity through the cooperation of the relay nodes, decreases transmission time by up to 17% over traditional relay algorithms.
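
The core saving of coded relaying can be seen in the classic two-way example: instead of forwarding A's and B's packets separately, the relay broadcasts their XOR once, and each endpoint cancels its own packet. A minimal sketch (not the NCRA algorithm itself, just the coding primitive it builds on):

```python
def xor_bytes(a, b):
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a, pkt_b = b"hello", b"world"        # packets from nodes A and B
coded = xor_bytes(pkt_a, pkt_b)          # relay: one broadcast replaces two forwards

recovered_b = xor_bytes(coded, pkt_a)    # node A cancels its own packet
recovered_a = xor_bytes(coded, pkt_b)    # node B does the same
```

Three transmissions replace four, which is the kind of transmission-time gain the NCRA comparison above measures.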

  15. A neutron spectrum unfolding code based on iterative procedures

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M.; Vega C, H. R., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica, Apdo. Postal 336, 98000 Zacatecas (Mexico)

    2012-10-15

    In this work, version 3.0 of the neutron spectrum unfolding code Neutron Spectrometry and Dosimetry from Universidad Autonoma de Zacatecas (NSDUAZ) is presented. The code was designed with a graphical interface in the LabVIEW programming environment and is based on the iterative SPUNIT algorithm, using as input data only the count rates obtained with 7 Bonner spheres based on a {sup 6}LiI(Eu) neutron detector. The main features of the code are: it is intuitive and user friendly, and it has a programming routine which automatically selects the initial guess spectrum from a set of neutron spectra compiled by the International Atomic Energy Agency. Besides the neutron spectrum, the code calculates the total flux, the mean energy, H(10), h(10), 15 dosimetric quantities for radiation protection purposes and 7 survey meter responses, in four energy grids, based on the International Atomic Energy Agency compilation. The code generates a full report in html format with all relevant information. In this work, the neutron spectrum of a {sup 241}AmBe neutron source in air, located 150 cm from the detector, is unfolded. (Author)
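
The iterative unfolding step can be illustrated with a generic multiplicative update (an MLEM-style stand-in; the actual SPUNIT update coefficients differ): each flux bin is rescaled by how well the folded spectrum matches the measured count rates.

```python
def unfold(counts, response, phi0, iters=50):
    """Multiplicative iterative unfolding: counts[i] ~ sum_j response[i][j]*phi[j]."""
    phi = list(phi0)
    for _ in range(iters):
        folded = [sum(r * p for r, p in zip(row, phi)) for row in response]
        for j in range(len(phi)):
            num = sum(response[i][j] * counts[i] / folded[i] for i in range(len(counts)))
            den = sum(response[i][j] for i in range(len(counts)))
            phi[j] *= num / den          # rescale bin j toward agreement with data
    return phi

# Toy 2-sphere, 2-bin problem with a mildly mixing (invented) response matrix.
response = [[0.9, 0.1], [0.2, 0.8]]
counts = [0.9 * 4 + 0.1 * 9, 0.2 * 4 + 0.8 * 9]   # generated from phi = [4, 9]
spectrum = unfold(counts, response, [1.0, 1.0])
```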

  16. Adaptive Cooperative FEC Based on Combination of Network Coding and Channel Coding for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yong Jin

    2014-02-01

    Full Text Available Data delivery over wireless links with QoS guarantees is a big challenge because of the unreliable and dynamic characteristics of wireless sensor networks, as well as the QoS diversity requirements of applications. In this paper, we propose an adaptive cooperative Forward Error Correction algorithm based on network coding, in the hope that quality of experience can be satisfied at the receivers with high quality. The algorithm adjusts the RS coder parameters and selects the optimal relay nodes based on wireless link quality and distance. On the other hand, we combine channel coding and network coding at the data link layer to fulfil the QoS diversity requirements. Both mathematical analysis and NS simulation results demonstrate that the proposed mechanism is superior to traditional FEC and cooperative FEC alone in reliability, real-time performance and energy efficiency. In addition, the proposed mechanism can significantly improve the quality of media streaming in terms of playable frame rate on the receiving side.

  17. Lossless Image Compression Based on Multiple-Tables Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Rung-Ching Chen

    2009-01-01

    Full Text Available This paper is intended to present a lossless image compression method based on multiple-tables arithmetic coding (MTAC method to encode a gray-level image f. First, the MTAC method employs a median edge detector (MED to reduce the entropy rate of f. The gray levels of two adjacent pixels in an image are usually similar. A base-switching transformation approach is then used to reduce the spatial redundancy of the image. The gray levels of some pixels in an image are more common than those of others. Finally, the arithmetic encoding method is applied to reduce the coding redundancy of the image. To promote high performance of the arithmetic encoding method, the MTAC method first classifies the data and then encodes each cluster of data using a distinct code table. The experimental results show that, in most cases, the MTAC method provides a higher efficiency in use of storage space than the lossless JPEG2000 does.
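
The MED step can be sketched directly; it predicts each pixel from its left (a), upper (b) and upper-left (c) neighbours, switching between a horizontal/vertical guess at edges and a planar guess in smooth regions (this is the same predictor used by JPEG-LS):

```python
def med_predict(a, b, c):
    """Median edge detector prediction from left (a), above (b), upper-left (c)."""
    if c >= max(a, b):
        return min(a, b)   # edge detected: take the smaller neighbour
    if c <= min(a, b):
        return max(a, b)   # edge detected: take the larger neighbour
    return a + b - c       # smooth region: planar prediction

# The encoder then stores the small residual pixel - med_predict(a, b, c),
# which lowers the entropy rate of the image before arithmetic coding.
```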

  18. Reference View Selection in DIBR-Based Multiview Coding.

    Science.gov (United States)

    Maugey, Thomas; Petrazzuoli, Giovanni; Frossard, Pascal; Cagnazzo, Marco; Pesquet-Popescu, Beatrice

    2016-04-01

    Augmented reality, interactive navigation in 3D scenes, multiview video, and other emerging multimedia applications require large sets of images, hence larger data volumes and increased resources compared with traditional video services. The significant increase in the number of images in multiview systems leads to new challenging problems in data representation and data transmission to provide high quality of experience on resource-constrained environments. In order to reduce the size of the data, different multiview video compression strategies have been proposed recently. Most of them use the concept of reference or key views that are used to estimate other images when there is high correlation in the data set. In such coding schemes, the two following questions become fundamental: 1) how many reference views have to be chosen for keeping a good reconstruction quality under coding cost constraints? and 2) where to place these key views in the multiview data set? As these questions are largely overlooked in the literature, we study the reference view selection problem and propose an algorithm for the optimal selection of reference views in multiview coding systems. Based on a novel metric that measures the similarity between the views, we formulate an optimization problem for the positioning of the reference views, such that both the distortion of the view reconstruction and the coding rate cost are minimized. We solve this new problem with a shortest path algorithm that determines both the optimal number of reference views and their positions in the image set. We experimentally validate our solution in a practical multiview distributed coding system and in the standardized 3D-HEVC multiview coding scheme. We show that considering the 3D scene geometry in the reference view positioning problem brings significant rate-distortion improvements and outperforms the traditional coding strategy that simply selects key frames based on the distance between cameras.
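
The shortest-path formulation can be sketched as a DAG dynamic program over view positions: an edge (i, j) means views i and j are consecutive reference views, and its cost charges the coding rate of a reference view plus an estimated distortion for the views synthesized in between. The cost model below is invented for illustration; the paper's cost uses its view-similarity metric.

```python
def select_reference_views(n_views, edge_cost):
    """dist[j] = minimum cost of covering views 0..j with j chosen as a reference."""
    INF = float("inf")
    dist = [INF] * n_views
    prev = [-1] * n_views
    dist[0] = edge_cost(-1, 0)          # the first view is forced to be a reference
    for j in range(1, n_views):
        for i in range(j):
            c = dist[i] + edge_cost(i, j)
            if c < dist[j]:
                dist[j], prev[j] = c, i
    path, j = [], n_views - 1           # backtrack from the last (forced) reference
    while j >= 0:
        path.append(j)
        j = prev[j]
    return path[::-1], dist[-1]

# Illustrative cost: fixed rate 10 per reference view plus quadratic synthesis distortion.
cost = lambda i, j: 10.0 + (j - i - 1) ** 2
views, total = select_reference_views(5, cost)
```

With this cost, adding a middle reference is not worth its rate, so only the endpoints are kept; the number of references falls out of the optimization rather than being fixed in advance.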

  19. Trellis-coded CPM for satellite-based mobile communications

    Science.gov (United States)

    Abrishamkar, Farrokh; Biglieri, Ezio

    1988-01-01

    Digital transmission for satellite-based land mobile communications is discussed. To satisfy the power and bandwidth limitations imposed on such systems, a combination of trellis coding and continuous-phase modulated signals are considered. Some schemes based on this idea are presented, and their performance is analyzed by computer simulation. The results obtained show that a scheme based on directional detection and Viterbi decoding appears promising for practical applications.

  20. Late Cenozoic genus Fupingopollenites development and its implications for the Asian summer monsoon (ASM) evolution

    Science.gov (United States)

    Miao, Y.; Song, C.; Fang, X.; Meng, Q.; Zhang, P.; Wu, F.; Yan, X.

    2015-12-01

    An extinct palynomorph, Fupingopollenites, was used as the basis for a discussion of the late Cenozoic Asian summer monsoon (ASM) evolution and its possible driving forces. Based on the spatial and temporal variations in its percentages across Inner and East Asia, we found that Fupingopollenites mainly occurred in East Asia, with boundaries to the NE of ca. 42°N, 135°E and NW of ca. 36°N, 103°E during the Early Miocene (ca. 23-17 Ma). This region enlarged westwards, reaching the eastern Qaidam Basin (ca. 36°N, 97.5°E) during the Middle Miocene (ca. 17-11 Ma), before noticeably retreating to a region bounded to the NW at ca. 33°N, 105°E during ca. 11-5.3 Ma. The region then shrank further in the Pliocene, with the NE boundary shrinking southwards to about 35°N, 120°E; the area then almost disappeared during the Pleistocene (2.6-0 Ma). The flourishing and subsequent extinction of Fupingopollenites indicates a narrow ecological amplitude with a critical dependence on habitat humidity and temperature (most likely mean annual precipitation (MAP) >1000 mm and mean annual temperature (MAT) >10°C). The Fupingopollenites geographic distribution can therefore trace the humid ASM evolution during the late Cenozoic, revealing that the strongest ASM period occurred during the Middle Miocene Climate Optimum (MMCO, ~17-14 Ma), after which the ASM weakened coincident with global cooling. We argue that global cooling played a critical role in the ASM evolution, while the uplift of the Tibetan Plateau made a relatively small contribution. This result is supported by a Miocene pollen record from the Qaidam Basin, Inner Asia, and by contemporaneously compiled pollen records across Eurasia.

  1. An Efficient Blind Signature Scheme based on Error Correcting Codes

    Directory of Open Access Journals (Sweden)

    Junyao Ye

    Full Text Available Cryptography based on the theory of error correcting codes and lattices has received wide attention in recent years. Shor's algorithm showed that in a world where quantum computers are assumed to exist, number-theoretic cryptosystems are insecure. The ...

  2. FARO base case post-test analysis by COMETA code

    Energy Technology Data Exchange (ETDEWEB)

    Annunziato, A.; Addabbo, C. [Joint Research Centre, Ispra (Italy)

    1995-09-01

    The paper analyzes the COMETA (Core Melt Thermal-Hydraulic Analysis) post-test calculations of FARO Test L-11, the so-called Base Case Test. The FARO facility, located at JRC Ispra, is used to simulate the consequences of severe accidents in nuclear power plants under a variety of conditions. The COMETA code has a six-equation two-phase flow field and a three-phase corium field: the jet, the droplets and the fused-debris bed. The analysis showed that the code is able to pick up all the major phenomena occurring during the pre-mixing phase of the fuel-coolant interaction.

  3. Chaos-based encryption for fractal image coding

    Institute of Scientific and Technical Information of China (English)

    Yuen Ching-Hung; Wong Kwok-Wo

    2012-01-01

    A chaos-based cryptosystem for fractal image coding is proposed. The Rényi chaotic map is employed to determine the order of processing the range blocks and to generate the keystream for masking the encoded sequence. Compared with the standard approach of fractal image coding followed by the Advanced Encryption Standard, our scheme offers a higher sensitivity to both plaintext and ciphertext at a comparable operating efficiency. The keystream generated by the Rényi chaotic map passes the randomness tests set by the United States National Institute of Standards and Technology, and so the proposed scheme is sensitive to the key.

  4. Authentication-secrecy code based on conics over finite fields

    Institute of Scientific and Technical Information of China (English)

    裴定一; 王学理

    1996-01-01

    An authentication-secrecy code based on rational normal curves over finite fields is constructed, whose probabilities of successful deception achieve their information-theoretic bounds. The set of encoding rules for this code is a representation system for the cosets of a certain subgroup in the projective transformation group. A special case is studied, namely where the rational normal curves are conics over finite fields. The representation system for the cosets, which determines the set of encoding rules, is given.

  5. OPTIMIZATION BASED ON IMPROVED REAL-CODED GENETIC ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    ShiYu; YuShenglin

    2002-01-01

    An improved real-coded genetic algorithm is proposed for the global optimization of functions. The new algorithm is based on an assessment of the searching performance of the basic real-coded genetic algorithm. The operations of the basic real-coded genetic algorithm are briefly discussed and selected. A kind of chaos sequence is described in detail and added to the new algorithm as a disturbance factor. The strategy of field partition is also used to improve the structure of the new algorithm. Numerical experiments show that the new genetic algorithm can find the global optimum of complex functions with satisfying precision.
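
The chaos-disturbance idea can be sketched as follows: a logistic-map sequence (a common choice of chaos sequence; the paper's exact map and field-partition strategy are not reproduced here) perturbs the offspring of a plain real-coded GA so the search keeps exploring around good solutions.

```python
import random

def logistic(x):
    """Logistic map at r = 4: a standard chaotic sequence generator."""
    return 4.0 * x * (1.0 - x)

def chaos_ga(f, lo, hi, pop=30, gens=200, seed=1):
    """Minimize f on [lo, hi] with arithmetic crossover + chaotic disturbance."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(pop)]
    c = 0.7                                   # chaos state
    for _ in range(gens):
        xs.sort(key=f)
        xs = xs[: pop // 2]                   # selection: keep the better half
        while len(xs) < pop:
            a, b = rng.sample(xs[: pop // 2], 2)
            w = rng.random()
            child = w * a + (1.0 - w) * b     # arithmetic crossover
            c = logistic(c)                   # chaotic disturbance factor
            child += (c - 0.5) * (hi - lo) * 0.01
            xs.append(min(hi, max(lo, child)))
    return min(xs, key=f)

best = chaos_ga(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
```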

  6. A particle-based hybrid code for planet formation

    CERN Document Server

    Morishima, Ryuji

    2015-01-01

    We introduce a new particle-based hybrid code for planetary accretion. The code uses an $N$-body routine for interactions with planetary embryos, while it can handle a large number of planetesimals using a super-particle approximation, in which many small planetesimals are represented by a small number of tracers. Tracer-tracer interactions are handled by a statistical routine which uses the phase-averaged stirring and collision rates. We compare hybrid simulations with analytic predictions and pure $N$-body simulations for various problems in detail and find good agreement in all cases. The computational load of the statistical routine is comparable to or less than that of the $N$-body routine. The present code includes an option for hit-and-run bouncing but not fragmentation, which remains for future work.

  7. GUIDELESS SPATIAL COORDINATE MEASUREMENT TECHNOLOGY BASED ON CODING POLE

    Institute of Scientific and Technical Information of China (English)

    ZHAO Min; QIU Zongming; QU Jiamin; LIU Hongzhao

    2008-01-01

    A new method of guideless spatial coordinate measurement based on a coding pole and vision measurement is proposed. Unequal bar-code spacing is adopted on the pole, so that the code combination of the pole image in the measuring field is unique. The holographic characteristics of the numeric coding pole are used to obtain the pole pose and probe position from any section of the bar code on the pole. Spatial coordinates of measuring points can then be obtained by coordinate transformation. The contradiction between high resolution and a large visual field of the image sensor is resolved, thereby providing a new approach to surface shape measurement of large objects with high precision. The measurement principles of the system are expounded and a mathematical model is established. The measurement equation is evaluated by simulation experiments and the measurement precision is analyzed. Theoretical analysis and simulation experiments prove that the system is characterized by a simple structure and a wide measurement range. It can therefore be used in 3-dimensional coordinate measurement of large objects.

  8. Auriculotherapy in asthmatic patients

    Directory of Open Access Journals (Sweden)

    Adolfo González Salvador

    1997-04-01

    Full Text Available A study was conducted to evaluate the efficacy of auriculopuncture in 30 asthmatic patients from the health area of Aguada de Pasajeros, between November 1992 and April 1993. The treatment was applied for a month, with a follow-up during the next 5 months. A reduction in the frequency, intensity and duration of the asthma crises was observed. Most of the patients had a satisfactory evolution and there were no complications. It is concluded that auriculotherapy is a useful method for patients with bronchial asthma due to its effectiveness and innocuousness.

  9. Safety Analysis Report for the KRI-ASM Transport Package

    Energy Technology Data Exchange (ETDEWEB)

    Bang, K. S.; Lee, J. C.; Kim, D. H.; Seo, K. S

    2006-10-15

    A safety evaluation of the KRI-ASM transport package, used to safely transport I-131 produced at the HANARO research reactor in KAERI, was carried out. In the safety analysis results for the KRI-ASM transport package, all the maximum stresses as well as the maximum surface temperature are lower than their allowable limits. Safety tests to verify the safety analysis results will be performed using the test model of the KRI-BGM transport package.

  10. A Smart Approach for GPT Cryptosystem Based on Rank Codes

    CERN Document Server

    Rashwan, Haitham; Honary, Bahram

    2010-01-01

    The concept of the public-key cryptosystem was pioneered by McEliece's cryptosystem. The public-key cryptosystem based on rank codes was presented in 1991 by Gabidulin-Paramonov-Trejtakov (GPT). The use of rank codes in cryptographic applications is advantageous since it is practically impossible to utilize combinatoric decoding, which has enabled the use of public keys of a smaller size. Structural attacks against this system were proposed by Gibson and recently by Overbeck. Overbeck's attacks break many versions of the GPT cryptosystem and turn out to be either polynomial or exponential depending on the parameters of the cryptosystem. In this paper, we introduce a new approach, called the Smart approach, which is based on a proper choice of the distortion matrix X. The Smart approach allows withstanding all known attacks even if the column scrambler matrix P is defined over the base field Fq.

  11. Forming Teams for Teaching Programming based on Static Code Analysis

    CERN Document Server

    Arosemena-Trejos, Davis; Clunie, Clifton

    2012-01-01

    The use of teams for teaching programming can be effective in the classroom because it helps students to generate and acquire new knowledge in less time, but if these teams are formed without taking certain aspects into account, they may have an adverse effect on the teaching-learning process. This paper proposes a tool for the formation of teams based on the semantics of source code (SOFORG). This semantics is based on metrics extracted from the preferences, styles and good programming practices found in a static analysis of the code that each student develops. In this way, a record of students with the extracted information is available, and the tool evaluates the best formation of teams in a given course. The team formations are based on programming styles, skills, pair programming or teams with a leader.

  12. Forming Teams for Teaching Programming based on Static Code Analysis

    Directory of Open Access Journals (Sweden)

    Davis Arosemena-Trejos

    2012-03-01

    Full Text Available The use of teams for teaching programming can be effective in the classroom because it helps students to generate and acquire new knowledge in less time, but if these teams are formed without taking certain aspects into account, they may have an adverse effect on the teaching-learning process. This paper proposes a tool for the formation of teams based on the semantics of source code (SOFORG). This semantics is based on metrics extracted from the preferences, styles and good programming practices found in a static analysis of the code that each student develops. In this way, a record of students with the extracted information is available, and the tool evaluates the best formation of teams in a given course. The team formations are based on programming styles, skills, pair programming or teams with a leader.
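
A greedy sketch of the team-formation step (the metric names and pairing rule are invented for illustration; SOFORG's real metrics come from its static analysis): represent each student by a normalized style-metric vector and pair students with the closest vectors, e.g. for pair programming.

```python
def pair_by_style(metrics):
    """metrics: dict of student -> tuple of normalized style metrics."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(metrics[a], metrics[b]))
    free = sorted(metrics)                         # deterministic order
    pairs = []
    while len(free) >= 2:
        a = free.pop(0)
        b = min(free, key=lambda s: dist2(a, s))   # closest remaining style
        free.remove(b)
        pairs.append((a, b))
    return pairs

# Invented metrics: (comment density, mean identifier length), both in [0, 1].
students = {"ana": (0.9, 0.8), "bob": (0.85, 0.75), "eva": (0.1, 0.2), "dan": (0.15, 0.1)}
teams = pair_by_style(students)
```

Swapping `min` for `max` would instead pair dissimilar styles, e.g. to mix a leader with a novice.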

  13. Joint Source-Channel Coding for Wavelet-Based Scalable Video Transmission Using an Adaptive Turbo Code

    Directory of Open Access Journals (Sweden)

    Ramzan Naeem

    2007-01-01

    Full Text Available An efficient approach for joint source and channel coding is presented. The proposed approach exploits the joint optimization of a wavelet-based scalable video coding framework and a forward error correction method based on turbo codes. The scheme minimizes the reconstructed video distortion at the decoder subject to a constraint on the overall transmission bitrate budget. The minimization is achieved by exploiting the source rate distortion characteristics and the statistics of the available codes. Here, the critical problem of estimating the bit error rate probability in error-prone applications is discussed. Aiming at improving the overall performance of the underlying joint source-channel coding, the combination of the packet size, interleaver, and channel coding rate is optimized using Lagrangian optimization. Experimental results show that the proposed approach outperforms conventional forward error correction techniques at all bit error rates. It also significantly improves the performance of end-to-end scalable video transmission at all channel bit rates.

  14. Joint Source-Channel Coding for Wavelet-Based Scalable Video Transmission Using an Adaptive Turbo Code

    Directory of Open Access Journals (Sweden)

    Naeem Ramzan

    2007-03-01

    Full Text Available An efficient approach for joint source and channel coding is presented. The proposed approach exploits the joint optimization of a wavelet-based scalable video coding framework and a forward error correction method based on turbo codes. The scheme minimizes the reconstructed video distortion at the decoder subject to a constraint on the overall transmission bitrate budget. The minimization is achieved by exploiting the source rate distortion characteristics and the statistics of the available codes. Here, the critical problem of estimating the bit error rate probability in error-prone applications is discussed. Aiming at improving the overall performance of the underlying joint source-channel coding, the combination of the packet size, interleaver, and channel coding rate is optimized using Lagrangian optimization. Experimental results show that the proposed approach outperforms conventional forward error correction techniques at all bit error rates. It also significantly improves the performance of end-to-end scalable video transmission at all channel bit rates.

  15. Wavelet-based coding of ultraspectral sounder data

    Science.gov (United States)

    Garcia-Vilchez, Fernando; Serra-Sagrista, Joan; Auli-Llinas, Francesc

    2005-08-01

    In this paper we provide a study concerning the suitability of well-known image coding techniques originally devised for lossy compression of still natural images when applied to lossless compression of ultraspectral sounder data. We present here the experimental results of six wavelet-based widespread coding techniques, namely EZW, IC, SPIHT, JPEG2000, SPECK and CCSDS-IDC. Since the considered techniques are 2-dimensional (2D) in nature but the ultraspectral data are 3D, a pre-processing stage is applied to convert the two spatial dimensions into a single spatial dimension. All the wavelet-based techniques are competitive when compared either to the benchmark prediction-based methods for lossless compression, CALIC and JPEG-LS, or to two common compression utilities, GZIP and BZIP2. EZW, SPIHT, SPECK and CCSDS-IDC provide a very similar performance, while IC and JPEG2000 improve the compression factor when compared to the other wavelet-based methods. Nevertheless, they are not competitive when compared to a fast precomputed vector quantizer. The benefits of applying a pre-processing stage, the Bias Adjusted Reordering, prior to the coding process in order to further exploit the spectral and/or spatial correlation when 2D techniques are employed, are also presented.
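
The wavelet stage for lossless use relies on an integer (reversible) transform; a one-level integer Haar (S-transform) sketch shows the idea, with the rounded averages ready for further decomposition levels. This is a generic illustration, not the specific filter banks of the six coders compared above.

```python
def haar_forward(x):
    """One level of the reversible integer Haar (S) transform, len(x) even."""
    s = [(a + b) >> 1 for a, b in zip(x[::2], x[1::2])]   # rounded averages
    d = [a - b for a, b in zip(x[::2], x[1::2])]          # differences
    return s, d

def haar_inverse(s, d):
    """Exact inverse: undo the floored average, then recover both samples."""
    out = []
    for l, h in zip(s, d):
        a = l + (h + 1) // 2
        out += [a, a - h]
    return out

samples = [5, 2, 9, 9, 0, 7]
low, high = haar_forward(samples)
```

Because the rounding is inverted exactly, repeated application gives a multi-level transform that still reconstructs losslessly.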

  16. Huffman-based code compression techniques for embedded processors

    KAUST Repository

    Bonny, Mohamed Talal

    2010-09-01

    The size of embedded software is increasing at a rapid pace. It is often challenging and time consuming to fit an amount of required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article we introduce a novel and efficient hardware-supported compression technique that is based on Huffman Coding. Our technique reduces the size of the generated decoding table, which takes a large portion of the memory. It combines our previous techniques, Instruction Splitting Technique and Instruction Re-encoding Technique, into a new one called Combined Compression Technique to improve the final compression ratio by taking advantage of both previous techniques. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman Coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of re-encoding unused bits (we call them re-encodable bits) in the instruction format for a specific application to improve the compression ratio. Re-encoding those bits can reduce the size of decoding tables by up to 40%. Using this technique, we improve the final compression ratios in comparison to the first technique to 46% and 45% for ARM and MIPS, respectively (including all overhead that incurs). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and we have applied each technique to two major embedded processor architectures.
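
The Huffman stage underlying all three techniques can be sketched in a few lines (the pattern splitting and bit re-encoding steps are not shown): build the code table from symbol frequencies via the usual min-heap merge.

```python
import heapq
from collections import Counter
from itertools import count

def huffman_table(data):
    """Map each symbol of `data` to its Huffman bitstring."""
    tiebreak = count()                      # keeps heap comparisons on ints only
    heap = [(n, next(tiebreak), sym) for sym, n in Counter(data).items()]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate one-symbol input
        return {heap[0][2]: "0"}
    while len(heap) > 1:                    # merge the two rarest subtrees
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (fa + fb, next(tiebreak), (a, b)))
    table = {}
    def walk(node, prefix):
        if isinstance(node, tuple):         # internal node: recurse on children
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            table[node] = prefix            # leaf: record the accumulated bits
    walk(heap[0][2], "")
    return table

table = huffman_table(b"aaaabbc")           # symbols are ints (byte values)
```

In hardware-supported schemes like the one above, the decoding table built from this code is what dominates memory, hence the focus on shrinking it.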

  17. Trellis and turbo coding iterative and graph-based error control coding

    CERN Document Server

    Schlegel, Christian B

    2015-01-01

    This new edition has been extensively revised to reflect the progress in error control coding over the past few years. Over 60% of the material has been completely reworked, and 30% of the material is original. It treats convolutional, turbo, and low-density parity-check (LDPC) coding and polar codes in a unified framework, covers advanced research-related developments such as spatial coupling, and focuses on algorithmic and implementation aspects of error control coding.

  18. A Secure Network Coding Based on Broadcast Encryption in SDN

    Directory of Open Access Journals (Sweden)

    Yue Chen

    2016-01-01

    Full Text Available By allowing intermediate nodes to encode received packets before sending them out, network coding improves the capacity and robustness of multicast applications. But it is vulnerable to pollution attacks. Some signature schemes were proposed to thwart such attacks, but most of them need to be homomorphic, so the keys cannot be generated and managed easily. In this paper, we propose a novel fast and secure switch network coding multicast (SSNC) scheme on software defined networks (SDN). In our scheme, the complicated secure multicast management is separated from the fast data transmission based on the SDN. Multiple multicasts are aggregated into one multicast group according to the requirements of the services and the network status. Then, the controller routes the aggregated multicast group with network coding; only trusted switches are allowed to join the network coding, by using broadcast encryption. The proposed scheme can use traditional cryptography without homomorphy, which greatly reduces the computational complexity and improves the efficiency of transmission.

  19. Chessboard-interpolation-based multiple description video coding

    Institute of Scientific and Technical Information of China (English)

    FAN Chen; CUI Huijuan; TANG Kun

    2004-01-01

    To enhance the robustness of video transmission over noisy channels, this paper presents a multiple description video coding algorithm based on chessboard interpolation. In the algorithm, the input image is decomposed according to the chessboard pattern, and then interpolated to produce two approximate images with the same resolution. Consequently, the state-of-the-art DCT+MC (Discrete Cosine Transform + Motion Compensation) video codec is independently applied to the two approximate images to generate two descriptions of the original image. In this framework, a fairly good reconstructed image quality is obtained when both descriptions are received, while an acceptable reconstructed image quality can still be achieved if only one description is available. Moreover, the mismatch between the encoder and the decoder can be effectively controlled through partial coding of the difference signal between the two descriptions. In bidirectional video communications, a drift control scheme is further proposed, in which the error drift can be eliminated by having the encoder imitate the error concealment actions of the decoder. Since the inherent correlation among adjacent blocks in DCT+MC video coding is efficiently exploited, this algorithm has a better redundancy-rate-distortion (RRD) performance than other multiple description algorithms. Simulation results show that the proposed algorithm is fairly robust while preserving a high compression rate. A more constant reconstructed image quality is achieved over extremely noisy channels compared with traditional single description coding. In addition, it is observed that the mismatch and the error drift are effectively controlled.
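
The chessboard decomposition has a convenient property: every 4-connected neighbour of a dropped pixel belongs to the kept sub-lattice, so each description can be interpolated entirely from its own samples. A minimal sketch of building one full-resolution description:

```python
def chessboard_description(img, parity):
    """Keep pixels with (i + j) % 2 == parity; interpolate the rest from
    their 4-neighbours, which all lie on the kept quincunx sub-lattice."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            if (i + j) % 2 != parity:
                nb = [img[x][y]
                      for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                      if 0 <= x < h and 0 <= y < w]
                out[i][j] = sum(nb) // len(nb)   # average of available neighbours
    return out

img = [[0, 2], [4, 6]]
desc0 = chessboard_description(img, 0)   # the two full-resolution approximations
desc1 = chessboard_description(img, 1)   # are then coded independently
```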

  20. Improved FEC Code Based on Concatenated Code for Optical Transmission Systems

    Institute of Scientific and Technical Information of China (English)

    YUAN Jian-guo; JIANG Ze; MAO You-ju

    2006-01-01

    Three improved schemes for super forward error correction (super-FEC) concatenated codes are proposed after analyzing the development trend of long-haul optical transmission systems and the defects of existing FEC codes. The performance of the Reed-Solomon (RS) + Bose-Chaudhuri-Hocquenghem (BCH) inner-outer serially concatenated code is simulated, and the concepts of encoding/decoding the parallel concatenated code are presented. Furthermore, the simulation results for the RS(255,239)+RS(255,239) code and the RS(255,239)+RS(255,223) code show that these two serially concatenated codes are superior coding schemes, with advantages such as better error correction, moderate redundancy and easy realization compared to the classic RS(255,239) code and other codes; their signal-to-noise ratio gains are respectively 2~3 dB more than that of the RS(255,239) code at a bit error rate of 1×10^-13. Finally, the frame structure of the novel concatenated code is arranged to lay a firm foundation for designing its hardware.
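A quick back-of-the-envelope comparison (our own arithmetic, not taken from the paper) of the code rates and redundancy overheads of the schemes mentioned above:

```python
# Code rate of a serially concatenated scheme is the product of the
# component code rates; redundancy is the extra parity overhead (1/rate - 1).
def code_rate(inner, outer=None):
    n1, k1 = inner
    r = k1 / n1
    if outer:
        n2, k2 = outer
        r *= k2 / n2
    return r

r_single = code_rate((255, 239))            # classic RS(255,239)
r_cc1 = code_rate((255, 239), (255, 239))   # RS(255,239)+RS(255,239)
r_cc2 = code_rate((255, 239), (255, 223))   # RS(255,239)+RS(255,223)

for name, r in [("RS(255,239)", r_single),
                ("RS(255,239)+RS(255,239)", r_cc1),
                ("RS(255,239)+RS(255,223)", r_cc2)]:
    print(f"{name}: rate={r:.3f}, redundancy={1/r - 1:.1%}")
```

This makes the "moderate redundancy" trade-off concrete: the concatenated schemes pay roughly 14-22% overhead versus about 7% for the single RS(255,239) code, in exchange for the reported 2~3 dB coding-gain improvement.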

  1. Automatic Structure-Based Code Generation from Coloured Petri Nets

    DEFF Research Database (Denmark)

    Kristensen, Lars Michael; Westergaard, Michael

    2010-01-01

    Automatic code generation based on Coloured Petri Net (CPN) models is challenging because CPNs allow for the construction of abstract models that intermix control flow and data processing, making translation into conventional programming constructs difficult. We introduce Process-Partitioned CPNs....... The viability of our approach is demonstrated by applying it to automatically generate an Erlang implementation of the Dynamic MANET On-demand (DYMO) routing protocol specified by the Internet Engineering Task Force (IETF)....

  2. An Efficient Attack on a Code-Based Signature Scheme

    OpenAIRE

    Phesso, Aurélie; Tillich, Jean-Pierre

    2016-01-01

    International audience; Baldi et al. introduced in [BBC + 13] a novel code-based signature scheme. However, we prove here that some of the bits of the signatures are correlated in this scheme, and this allows an attack that recovers enough of the underlying secret structure to forge new signatures. This cryptanalysis was performed on the parameters devised for 80 bits of security and broke them with 100,000 signatures originating from the same secret key.

  3. Regulatory Safety Issues in the Structural Design Criteria of ASME Section III Subsection NH and for Very High Temperatures for VHTR & GEN IV

    Energy Technology Data Exchange (ETDEWEB)

    William J. O’Donnell; Donald S. Griffin

    2007-05-07

    The objective of this task is to identify issues relevant to ASME Section III, Subsection NH [1], and related Code Cases that must be resolved for licensing purposes for VHTGRs (Very High Temperature Gas Reactor concepts such as those of PBMR, Areva, and GA), and to identify the material models, design criteria, and analysis methods that need to be added to the ASME Code to cover the unresolved safety issues. Subsection NH was originally developed to provide structural design criteria and limits for elevated-temperature design of Liquid Metal Fast Breeder Reactor (LMFBR) systems and some gas-cooled systems. The U.S. Nuclear Regulatory Commission (NRC) and its Advisory Committee on Reactor Safeguards (ACRS) reviewed the design limits and procedures while reviewing the Clinch River Breeder Reactor (CRBR) for a construction permit in the late 1970s and early 1980s, and identified issues that needed resolution. In the years since then, the NRC and various contractors have evaluated the applicability of the ASME Code and Code Cases to high-temperature reactor designs such as the VHTGRs, and identified issues that need to be resolved to provide a regulatory basis for licensing. This report describes: (1) NRC and ACRS safety concerns raised during the licensing process of CRBR, (2) how some of these issues are addressed by the current Subsection NH of the ASME Code; and (3) the material models, design criteria, and analysis methods that need to be added to the ASME Code and Code Cases to cover unresolved regulatory issues for very high temperature service.

  4. Multiple Encryption-based Algorithm of Agricultural Product Trace Code

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    To establish a sound traceability system for agricultural products and guarantee their security, an algorithm is proposed to encrypt the trace code of agricultural products. The original trace code consists of 34 digits indicating such information as place of origin, name of product, date of production and authentication. An area code is used to indicate enterprise information. Because this increases the code length, an encryption algorithm is designed: coding techniques such as radix conversion and section division are applied to encrypt the place-of-origin and production-date codes, and the section identification code and authentication code are permuted and combined to produce a check code. Through multiple encryption and code-length compression, the 34 digits are compressed to 20 while preserving complete coding information. The shorter code length and stronger encryption enable the public to obtain information about agricultural products without consulting a professional database.
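The length-compression idea, re-encoding a decimal field in a larger radix, can be illustrated as follows; the field value and alphabet are invented for the example and are not the paper's actual 34-digit layout:

```python
# Toy illustration of radix-conversion compression (field layout invented,
# not the paper's scheme): re-encode a decimal field in base 36 so that
# fewer characters carry the same information.
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base36(n: int) -> str:
    s = ""
    while True:
        n, r = divmod(n, 36)
        s = ALPHABET[r] + s
        if n == 0:
            return s

date_code = 20120315              # hypothetical 8-digit production-date field
print(to_base36(date_code))       # 5 base-36 characters instead of 8 digits
```

Applying this kind of conversion field by field is one way a 34-digit code could shrink toward 20 characters without losing information, since each base-36 character carries more than 1.5 decimal digits of information.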

  5. JND measurements and wavelet-based image coding

    Science.gov (United States)

    Shen, Day-Fann; Yan, Loon-Shan

    1998-06-01

    Two major issues in image coding are the effective incorporation of human visual system (HVS) properties and an effective objective measure for evaluating image quality (OQM). In this paper, we treat the two issues in an integrated fashion. We build a JND model based on measurements of the just noticeable difference (JND) property of the HVS. We found that JND depends not only on the background intensity but also on the spatial frequency and pattern direction. The wavelet transform, due to its excellent simultaneous time (space)/frequency resolution, is the best choice for applying the JND model. We mathematically derive an OQM called JND_PSNR that is based on the JND property and the wavelet-decomposed subbands. JND_PSNR is more consistent with human perception and is recommended as an alternative to the PSNR or SNR. With the JND_PSNR in mind, we proceed to propose a wavelet- and JND-based codec called JZW. JZW quantizes the coefficients in each subband with a step size chosen according to the subband's importance to human perception. Many characteristics of JZW are discussed, and its performance is evaluated and compared with other well-known algorithms such as EZW, SPIHT and TCCVQ. Our algorithm has a 1-1.5 dB gain over SPIHT even when we use simple Huffman coding rather than the more efficient adaptive arithmetic coding.
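For reference, the conventional PSNR that JND_PSNR is meant to replace can be computed as below; the JND-weighted variant from the paper is not reproduced here:

```python
import numpy as np

# Standard PSNR definition: 10*log10(peak^2 / MSE). The paper's JND_PSNR
# additionally weights errors by per-subband JND thresholds (not shown).
def psnr(original, distorted, peak=255.0):
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

a = np.zeros((8, 8))
b = a.copy()
b[0, 0] = 16.0                    # a single-pixel error
print(f"PSNR = {psnr(a, b):.2f} dB")
```

The limitation motivating the paper is visible here: PSNR charges every squared error equally, whereas a JND-based measure would discount errors that fall below the visibility threshold of the HVS.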

  6. The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing

    Science.gov (United States)

    Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava

    2016-08-01

    This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different-order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structures. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves, which is shown to lead to major efficiency gains over unbalanced methods and over a previously used simpler balancing method.
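A minimal sketch of patch-based load balancing with a space-filling curve (our own illustration, not PSC's implementation): patches are ordered along a Morton (Z-order) curve, and the curve is cut into contiguous chunks of roughly equal total load.

```python
# Hedged sketch: order 2-D patches along a Morton (Z-order) space-filling
# curve, then cut the curve into contiguous chunks of roughly equal load.
# Neighbouring patches on the curve tend to be neighbours in space, which
# keeps communication local after balancing.
def morton(x, y, bits=8):
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return z

def balance(patch_loads, nprocs):
    # patch_loads: {(x, y): load}; returns {(x, y): assigned rank}
    order = sorted(patch_loads, key=lambda p: morton(*p))
    target, acc, rank, assign = sum(patch_loads.values()) / nprocs, 0.0, 0, {}
    for p in order:
        if acc >= target and rank < nprocs - 1:
            rank, acc = rank + 1, 0.0
        assign[p] = rank
        acc += patch_loads[p]
    return assign

# 4x4 grid of patches, diagonal patches twice as expensive (e.g. more particles)
loads = {(x, y): 1.0 + (x == y) for x in range(4) for y in range(4)}
ranks = balance(loads, 4)
```

Because each process receives a contiguous segment of the curve, rebalancing after load changes only shifts patches between neighbouring segments, which is part of why this approach scales well.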

  7. Research of Wavelet Based Multicarrier Modulation System with Near Shannon Limited Codes

    Institute of Scientific and Technical Information of China (English)

    ZHANG Haixia; YUAN Dongfeng; ZHAO Feng

    2005-01-01

    In this paper, using turbo codes and low-density parity-check (LDPC) codes as the channel coding schemes, wavelet based multicarrier modulation (WMCM) systems are proposed and investigated under different transmission scenarios. The bit error rate (BER) performance of these two near-Shannon-limit codes is simulated and compared for various code parameters. Simulation results show that turbo-coded WMCM (TCWMCM) performs better than LDPC-coded WMCM (LDPC-CWMCM) on both AWGN and Rayleigh fading channels when the two codes have the same code parameters.

  8. Optical image encryption based on real-valued coding and subtracting with the help of QR code

    Science.gov (United States)

    Deng, Xiaopeng

    2015-08-01

    A novel optical image encryption method based on real-valued coding and subtraction is proposed with the help of a quick response (QR) code. In the encryption process, the original image to be encoded is first transformed into the corresponding QR code, and then the QR code is encoded into two phase-only masks (POMs) using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone-collected results show that the method is feasible and has strong tolerance to noise, phase difference and the ratio between the intensities of the two decryption light beams.

  9. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    Science.gov (United States)

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A hyperspectral (HS) image provides observational power beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing "original pixel intensity"-based coding approaches using traditional image coders (e.g., JPEG2000) to "residual"-based approaches using a video coder, for better compression performance. Because HS images differ from traditional videos in their spectral characteristics and in the shape of the panchromatic imagery, a modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data where every pixel is considered a vector over the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors across bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band together with the immediately previous band when we apply the HEVC. Every spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of the rate-distortion performance of HS image compression. PMID:27695102

  10. DCT/DST-based transform coding for intra prediction in image/video coding.

    Science.gov (United States)

    Saxena, Ankur; Fernandes, Felix C

    2013-10-01

    In this paper, we present a DCT/DST-based transform scheme that applies either the conventional DCT or the type-7 DST for all the video-coding intra-prediction modes: vertical, horizontal, and oblique. Our approach is applicable to any block-based intra-prediction scheme in a codec that applies transforms separably along the horizontal and vertical directions. Previously, Han, Saxena, and Rose showed that for the intra-predicted residuals of the horizontal and vertical modes, the DST is the optimal transform, with performance close to the KLT. Here, we prove that this is indeed the case for the other, oblique modes. The optimal choice between DCT and DST is based on the intra-prediction mode and requires no additional signaling information or rate-distortion search. The DCT/DST scheme presented in this paper was adopted in the HEVC standardization in March 2011. Further simplifications, especially to reduce implementation complexity, which remove the mode-dependency between DCT and DST and simply always use the DST for 4 × 4 intra luma blocks, were adopted in the HEVC standard in July 2012. Simulation results for the DCT/DST algorithm are reported using the reference software of the ongoing HEVC standardization. Our results show that the DCT/DST scheme provides a significant BD-rate improvement over the conventional DCT-based scheme for intra prediction in video sequences.
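The type-7 DST mentioned above can be written down directly; the following floating-point construction (our derivation with orthonormal scaling, not HEVC's fixed-point integer approximation) shows the 4-point case used for intra residuals:

```python
import numpy as np

# DST-VII basis: T[k, n] = 2/sqrt(2N+1) * sin(pi*(2k+1)*(n+1)/(2N+1)).
# Its first basis vector grows away from the predicted boundary, matching
# how intra-prediction residuals typically grow with distance from the
# reference samples -- the property that makes it near-optimal here.
def dst7_matrix(N):
    k = np.arange(N).reshape(-1, 1)   # frequency (output) index
    n = np.arange(N).reshape(1, -1)   # spatial (input) index
    return 2.0 / np.sqrt(2 * N + 1) * np.sin(
        np.pi * (2 * k + 1) * (n + 1) / (2 * N + 1))

T = dst7_matrix(4)
residual = np.array([1.0, 2.0, 3.0, 4.0])  # ramp-like residual after intra prediction
coeffs = T @ residual                       # energy compacts into low frequencies
```

With this normalization the matrix is orthonormal, so the inverse transform is simply `T.T @ coeffs`.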

  11. Canonical Huffman code based full-text index

    Institute of Scientific and Technical Information of China (English)

    Yi Zhang; Zhili Pei; Jinhui Yang; Yanchun Liang

    2008-01-01

    Full-text indices are data structures that can be used to find any substring of a given string. Many full-text indices require more space than the original string. In this paper, we introduce the canonical Huffman code to the wavelet tree of a string T[1...n]. Compared with a Huffman-code-based wavelet tree, the memory space used to represent the shape of the wavelet tree is no longer needed. In the case of a large alphabet, this part of the memory is not negligible. The operations on the wavelet tree are also simpler and more efficient due to the canonical Huffman code. Based on the resulting structure, the multi-key rank and select functions can be performed using at most nH0 + |X|(lg lg n + lg n - lg |Σ|) + O(nH0) bits and in O(H0) time in the average case, where H0 is the zeroth-order empirical entropy of T. Finally, we present an efficient construction algorithm for this index, which is on-line and linear.
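The key property of canonical Huffman codes, that the codewords are determined by the code lengths alone so the tree shape need not be stored, can be sketched as follows (a generic textbook construction, not the paper's wavelet-tree index):

```python
# Canonical Huffman assignment: sort symbols by (length, symbol), then give
# consecutive integer codewords, left-shifting whenever the length grows.
# Only the lengths need to be stored to reproduce the codes exactly.
def canonical_codes(lengths):
    # lengths: {symbol: code length in bits} from any Huffman construction
    order = sorted(lengths, key=lambda s: (lengths[s], s))
    codes, code, prev_len = {}, 0, 0
    for sym in order:
        code <<= lengths[sym] - prev_len
        codes[sym] = format(code, f"0{lengths[sym]}b")
        prev_len = lengths[sym]
        code += 1
    return codes

print(canonical_codes({"a": 1, "b": 2, "c": 3, "d": 3}))
```

Because the mapping from lengths to codewords is deterministic, the wavelet-tree index above can drop the explicit tree-shape bits entirely, which is the space saving the abstract claims for large alphabets.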

  12. Wavelet-based zerotree coding of aerospace images

    Science.gov (United States)

    Franques, Victoria T.; Jain, Vijay K.

    1996-06-01

    This paper presents a wavelet based image coding method achieving high levels of compression. A multi-resolution subband decomposition system is constructed using quadrature mirror filters. Symmetric extension and windowing of the multi-scaled subbands are incorporated to minimize boundary effects. Next, the Embedded Zerotree Wavelet (EZW) coding algorithm is used for data compression. Eliminating the isolated-zero symbol for certain subbands leads to an improved EZW algorithm. Further compression is obtained with an adaptive arithmetic coder. We achieve a PSNR of 26.91 dB at a bit rate of 0.018 bits/pixel, 35.59 dB at 0.149 bits/pixel, and 43.05 dB at 0.892 bits/pixel for the aerospace image Refuel.

  13. Image coding based on energy-sorted wavelet packets

    Science.gov (United States)

    Kong, Lin-Wen; Lay, Kuen-Tsair

    1995-04-01

    The discrete wavelet transform performs multiresolution analysis, which effectively decomposes a digital image into components with different degrees of detail. In practice, it is usually implemented in the form of filter banks. If the filter banks are cascaded and both the low-pass and the high-pass components are further decomposed, a wavelet packet is obtained. The coefficients of the wavelet packet effectively represent subimages at different resolution levels. In the energy-sorted wavelet-packet decomposition, all subimages in the packet are sorted according to their energies. The most important subimages, as measured by their energy, are preserved and coded. By investigating the histogram of each subimage, it is found that the pixel values are well modelled by the Laplacian distribution. Therefore, Laplacian quantization is applied to quantize the subimages. Experimental results show that the image coding scheme based on wavelet packets achieves a high compression ratio while preserving satisfactory image quality.
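The energy-sorting step can be illustrated with a toy example (the paper's filter banks and Laplacian quantizer are not reproduced; the subbands here are synthetic):

```python
import numpy as np

# Toy illustration of the energy-sorting step: rank subbands by the sum of
# squared coefficients and keep only the most energetic ones for coding.
def sort_subbands_by_energy(subbands):
    # subbands: {name: coefficient array}
    energy = {name: float(np.sum(c.astype(float) ** 2))
              for name, c in subbands.items()}
    return sorted(energy, key=energy.get, reverse=True), energy

rng = np.random.default_rng(0)
subbands = {
    "LL": 10.0 * rng.standard_normal((4, 4)),  # approximation: most energy
    "LH": 2.0 * rng.standard_normal((4, 4)),
    "HL": 1.0 * rng.standard_normal((4, 4)),
    "HH": 0.1 * rng.standard_normal((4, 4)),
}
ranking, energy = sort_subbands_by_energy(subbands)
kept = ranking[:2]   # preserve and code only the most important subbands
```

Discarding the low-energy subbands outright is what buys the high compression ratio, at the cost of the fine detail those subbands carried.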

  14. Four Year-Olds Use Norm-Based Coding for Face Identity

    Science.gov (United States)

    Jeffery, Linda; Read, Ainsley; Rhodes, Gillian

    2013-01-01

    Norm-based coding, in which faces are coded as deviations from an average face, is an efficient way of coding visual patterns that share a common structure and must be distinguished by subtle variations that define individuals. Adults and school-aged children use norm-based coding for face identity but it is not yet known if pre-school aged…

  15. A practical approach for implementing risk-based inservice testing of pumps at nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, R.S. [Idaho National Engineering Lab., Idaho Falls, ID (United States); Maret, D.; Seniuk, P.; Smith, L.

    1996-12-01

    The American Society of Mechanical Engineers (ASME) Center for Research and Technology Development's (CRTD) Research Task Force on Risk-Based Inservice Testing has developed guidelines for risk-based inservice testing (IST) of pumps and valves. These guidelines are intended to help the ASME Operation and Maintenance (OM) Committee enhance plant safety while focusing appropriate testing resources on critical components. This paper describes a practical approach for implementing those guidelines for pumps at nuclear power plants. The approach relies on input, direction, and assistance from several entities, such as the ASME Code committees, the United States Nuclear Regulatory Commission (NRC), and the national laboratories, as well as industry groups and personnel with applicable expertise. Key parts of the risk-based IST process addressed here include: identification of important failure modes, identification of significant failure causes, assessment of the effectiveness of testing and maintenance activities, development of alternative testing and maintenance strategies, and assessment of the effectiveness of alternative testing strategies against present ASME Code requirements. Finally, the paper suggests a method of implementing this process in the ASME OM Code for pump testing.

  16. Penetration of ASM 981 in canine skin: a comparative study.

    Science.gov (United States)

    Gutzwiller, Meret E Ricklin; Reist, Martin; Persohn, Elke; Peel, John E; Roosje, Petra J

    2006-01-01

    ASM 981 has been developed for the topical treatment of inflammatory skin diseases. It specifically inhibits the production and release of pro-inflammatory cytokines. We measured the skin penetration of ASM 981 in canine skin and compared penetration in living and frozen skin. To make the penetration of ASM 981 visible in dog skin, tritium-labelled ASM 981 was applied to a living dog and to defrosted skin of the same dog. Using qualitative autoradiography, the radioactive molecules were detected in the lumen of the hair follicles down to the infundibulum, around the superficial parts of the hair follicles, and to a dermal depth of 200 to 500 µm. Activity could not be found in the deeper parts of the hair follicles, the dermis or the sebaceous glands. Penetration of ASM 981 into canine skin is low, and 24 hours after application it is spread evenly only in the upper third of the dermis. Penetration into frozen skin takes even longer than into living canine skin but shows the same distribution.

  17. Evaluation of Key Points of Phased Array Ultrasonic Testing in the Updated Edition of ASME Standards, Part 1: Two Different Methods Related to Acceptance Criteria

    Institute of Scientific and Technical Information of China (English)

    李衍

    2015-01-01

    This article describes the major standard practices on phased array ultrasonic testing (PAUT) for pressure equipment presented in the updated edition of the ASME Code (2013), Section V, Nondestructive Examination, Article 4. This first part evaluates the two weld-inspection methods, with their different acceptance criteria, specified by the referencing Code Section on manufacture and design: one comprises UT requirements with workmanship-based acceptance criteria, the other UT requirements with fracture-mechanics-based acceptance criteria. A common ground of both standard practices is the use of computer imaging (CI) techniques to inspect and evaluate the welded joints of pressure equipment; the second one specifically focuses on flaw characterization and sizing. The intention is to compare these practices with Chinese industrial standards and national conditions, find the gaps and correct errors in this field, so that Chinese enterprises applying the referencing ASME Code Section can rise to a higher level.

  18. Network Coding-Based Communications via the Controlled Quantum Teleportation

    Directory of Open Access Journals (Sweden)

    Ying Guo

    2013-02-01

    Full Text Available Inspired by the structure of network coding over the butterfly network, a framework for a quantum network coding scheme is investigated, which transmits two unknown quantum states crosswise over the butterfly quantum system using multi-photon non-maximally entangled GHZ states. The scheme contains a certain number of entanglement-qubit source nodes that teleport unknown quantum states to other nodes on the small-scale network, where each intermediate node can pass on its received quantum states to others via superdense coding. In order to transmit the unknown states deterministically, controlled quantum teleportation is adopted at the intermediate nodes. This makes it more convenient for legal nodes than in any previous teleportation scheme to transmit unknown quantum states to unknown participants in applications. It is shown that the intrinsic efficiency of transmissions approaches 100% in principle. The scheme is secure, based on the securely shared quantum channels between all nodes and the quantum-mechanical impossibility of local unitary transformations between non-maximally entangled GHZ states. Moreover, a generalized scheme is proposed for transmitting two multipartite entangled states.

  19. PetriCode: A Tool for Template-Based Code Generation from CPN Models

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge

    2014-01-01

    Code generation is an important part of model driven methodologies. In this paper, we present PetriCode, a software tool for generating protocol software from a subclass of Coloured Petri Nets (CPNs). The CPN subclass is comprised of hierarchical CPN models describing a protocol system at different...

  20. Visual Coding of Human Bodies: Perceptual Aftereffects Reveal Norm-Based, Opponent Coding of Body Identity

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J.

    2013-01-01

    Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this…

  1. Trace-Based Code Generation for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.-G.

    2009-01-01

    Paper Submitted for review at the Eighth International Conference on Generative Programming and Component Engineering. Model-based testing can be a powerful means to generate test cases for the system under test. However, creating a useful model for model-based testing requires expertise in the (fo

  3. Sparse coding based feature representation method for remote sensing images

    Science.gov (United States)

    Oguslu, Ender

    In this dissertation, we study a sparse coding based feature representation method for the classification of multispectral and hyperspectral images (HSI). Existing feature representation systems based on the sparse signal model are computationally expensive, requiring a convex optimization problem to be solved in order to learn a dictionary. A sparse coding feature representation framework for the classification of HSI is presented that alleviates the complexity of sparse coding through sub-band construction, dictionary learning, and encoding steps. In the framework, we construct the dictionary based upon the extracted sub-bands from the spectral representation of a pixel. In the encoding step, we utilize a soft-threshold function to obtain sparse feature representations for HSI. Experimental results showed that a randomly selected dictionary could be as effective as a dictionary learned from optimization. The new representation usually has a very high dimensionality, requiring substantial computational resources. In addition, the spatial information of the HSI data has not been included in the representation. Thus, we modify the framework by incorporating the spatial information of the HSI pixels and reducing the dimension of the new sparse representations. The enhanced model, called sparse coding based dense feature representation (SC-DFR), is integrated with a linear support vector machine (SVM) and a composite kernels SVM (CKSVM) classifier to discriminate different types of land cover. We evaluated the proposed algorithm on three well-known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit (SOMP) and image fusion and recursive filtering (IFRF). The results from the experiments showed that the proposed method can achieve better overall and average classification accuracies with a much more compact representation, leading to more efficient sparse models for HSI classification.
To further
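Our reading of the encoding step described above can be sketched as follows; the dictionary and dimensions are synthetic, and this is not the dissertation's exact pipeline:

```python
import numpy as np

# Hedged sketch of soft-threshold sparse encoding: correlate a feature
# vector with a (possibly random) dictionary, then shrink small responses
# to zero. This avoids solving a convex optimization per pixel.
def soft_threshold_encode(x, D, alpha):
    # x: feature vector; D: dictionary, one unit-norm atom per column
    z = D.T @ x
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)            # normalize the atoms
x = rng.standard_normal(20)               # stand-in for a spectral sub-band feature
code = soft_threshold_encode(x, D, alpha=0.5)
print(f"nonzeros: {np.count_nonzero(code)} of {code.size}")
```

The random dictionary here mirrors the abstract's finding that a randomly selected dictionary can be as effective as a learned one, while the threshold `alpha` controls the sparsity of the resulting code.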

  4. New Semantic Model for Authentication Protocols in ASMs

    Institute of Scientific and Technical Information of China (English)

    Rui Xue; Deng-Guo Feng

    2004-01-01

    A new semantic model in Abstract State Machines (ASMs) for authentication protocols is presented. It highlights Woo-Lam's ideas for authentication, which is the strongest notion in Lowe's definition hierarchy for entity authentication. Apart from the flexible and natural features in forming and analyzing protocols inherited from ASMs, the model defines both authentication and secrecy properties explicitly in first-order sentences as invariants. The process of proving security properties with respect to an authentication protocol blends the correctness and secrecy properties together to avoid the potential flaws which may arise when they are treated separately. The security of the revised Helsinki protocol is shown as a case study. The new model is different from previous ones in ASMs.

  5. A Network Coding Based Routing Protocol for Underwater Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xin Guan

    2012-04-01

    Full Text Available Due to the particularities of the underwater environment, several negative factors seriously interfere with the data transmission rate, reliability of data communication, communication range, network throughput and energy consumption of underwater sensor networks (UWSNs). Thus, when routing protocols for underwater sensor networks are studied, it is essential to give full consideration to node energy savings while maintaining quick, correct and effective data transmission and extending the network life cycle. In this paper, we propose a novel routing algorithm for UWSNs. To increase energy consumption efficiency and extend network lifetime, we propose a time-slot based routing algorithm (TSR). We designed a probability balanced mechanism and applied it to TSR. The theory of network coding is then introduced to meet the requirement of further reducing node energy consumption and extending network lifetime; hence, time-slot based balanced network coding (TSBNC) comes into being. We evaluated the proposed algorithm and compared it with other classical underwater routing protocols. The simulation results show that the proposed protocol can reduce the probability of node conflicts, shorten the routing construction process, balance the energy consumption of each node and effectively prolong the network lifetime.

  6. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  7. A high-speed BCI based on code modulation VEP

    Science.gov (United States)

    Bin, Guangyu; Gao, Xiaorong; Wang, Yijun; Li, Yun; Hong, Bo; Gao, Shangkai

    2011-04-01

    Recently, electroencephalogram-based brain-computer interfaces (BCIs) have attracted much attention in the fields of neural engineering and rehabilitation due to their noninvasiveness. However, the low communication speed of current BCI systems greatly limits their practical application. In this paper, we present a high-speed BCI based on code modulation of visual evoked potentials (c-VEP). Thirty-two target stimuli were modulated by a time-shifted binary pseudorandom sequence. A multichannel identification method based on canonical correlation analysis (CCA) was used for target identification. The online system achieved an average information transfer rate (ITR) of 108 ± 12 bits min-1 on five subjects with a maximum ITR of 123 bits min-1 for a single subject.
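The target-coding idea can be illustrated in a simplified form (a single-channel correlation stand-in for the paper's multichannel CCA, and a random binary sequence as a stand-in for a real m-sequence):

```python
import numpy as np

# Simplified c-VEP sketch: every target is tagged with a circular shift of
# one binary pseudorandom sequence; identification picks the shift whose
# template correlates best with the measured response. (The paper uses CCA
# across EEG channels instead of this plain correlation.)
def shifted_targets(seq, n_targets, shift):
    return [np.roll(seq, -i * shift) for i in range(n_targets)]

def identify(response, targets):
    scores = [np.corrcoef(response, t)[0, 1] for t in targets]
    return int(np.argmax(scores))

rng = np.random.default_rng(2)
seq = rng.integers(0, 2, 63).astype(float)    # stand-in for a 63-bit m-sequence
targets = shifted_targets(seq, 32, 2)         # 32 targets, shifted by 2 frames each
response = targets[7] + 0.3 * rng.standard_normal(63)  # noisy response to target 7
print(identify(response, targets))
```

Because all 32 targets reuse one sequence and differ only by a time shift, an m-sequence's nearly flat cyclic autocorrelation is what keeps the templates distinguishable in practice.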

  8. Saliency-Based Fidelity Adaptation Preprocessing for Video Coding

    Institute of Scientific and Technical Information of China (English)

    Shao-Ping Lu; Song-Hai Zhang

    2011-01-01

    In this paper, we present a video coding scheme which applies the technique of visual saliency computation to adjust image fidelity before compression. To extract visually salient features, we construct a spatio-temporal saliency map by analyzing the video using a combined bottom-up and top-down visual saliency model. We then use an extended bilateral filter, in which the local intensity and spatial scales are adjusted according to visual saliency, to adaptively alter the image fidelity. Our implementation is based on the H.264 video encoder JM12.0. Besides evaluating our scheme with the H.264 reference software, we also compare it to a more traditional foreground-background segmentation-based method and a foveation-based approach which employs Gaussian blurring. Our results show that the proposed algorithm can improve the compression ratio significantly while effectively preserving perceptual visual quality.

  9. A construction of quantum turbo product codes based on CSS-type quantum convolutional codes

    Science.gov (United States)

    Xiao, Hailin; Ni, Ju; Xie, Wu; Ouyang, Shan

    As in classical coding theory, turbo product codes (TPCs) built from serially concatenated block codes can approach the Shannon capacity limit and have low decoding complexity. However, special requirements in the quantum setting severely limit the structures of quantum turbo product codes (QTPCs). To design a good structure for QTPCs, we present a new construction of QTPCs with the interleaved serial concatenation of CSS(L1,L2)-type quantum convolutional codes (QCCs). First, CSS(L1,L2)-type QCCs are proposed by exploiting the theory of CSS-type quantum stabilizer codes and QCCs, and the description and analysis of the encoder circuit are greatly simplified in terms of Hadamard gates and C-NOT gates. Second, the interleaved coded matrix of QTPCs is derived by the quantum permutation SWAP gate definition. Finally, we prove the corresponding relation on the minimum Hamming distance of QTPCs associated with classical TPCs, and describe the state diagram of the encoder and decoder of QTPCs, which have a highly regular structure and a simple design idea.

  10. Implementation Of Decoders for LDPC Block Codes and LDPC Convolutional Codes Based on GPUs

    CERN Document Server

    Zhao, Yue

    2012-01-01

    With the use of the belief propagation (BP) decoding algorithm, low-density parity-check (LDPC) codes can achieve near-Shannon-limit performance. LDPC codes can accomplish bit error rates (BERs) as low as $10^{-15}$ even at a small bit-energy-to-noise-power-spectral-density ratio ($E_{b}/N_{0}$). In order to evaluate the error performance of LDPC codes, simulators running on central processing units (CPUs) are commonly used. However, the time taken to evaluate LDPC codes with very good error performance is excessive. For example, assuming 30 iterations are used in the decoder, our simulation results have shown that it takes a modern CPU more than 7 days to arrive at a BER of $10^{-6}$ for a code with length 18360. In this paper, efficient LDPC block-code decoders/simulators which run on graphics processing units (GPUs) are proposed. Both the standard BP decoding algorithm and the layered decoding algorithm are used. We also implement the decoder for LDPC convolutional codes (LDPCCC). The LDPCCC is derived from a pre-de...

  11. Flanged joints with contact outside the bolt circle: ASME Part B design rules

    Energy Technology Data Exchange (ETDEWEB)

    Rodabaugh, E. C.; Moore, S. E.

    1976-05-01

    The ASME Boiler and Pressure Vessel Code, Section VIII, Division 1, gives rules which are subdivided into "Part A" and "Part B". Part A covers flanged joints where contact between flanges occurs through a gasket located inside the bolt holes. Part B covers flanged joints with contact outside the bolt holes. This report (a) summarizes the theory for Part B flanged joints, (b) presents examples which show the significant differences between Part A flanged joints and Part B flanged joints, (c) presents the available test data relevant to the characteristics of Part B flanged joints, (d) gives listings of two computer programs which can be used to evaluate the characteristics of Part B flanged joints, and (e) gives recommendations for Code revisions and other aspects of Part B flanged-joint design.

  12. Code Based Analysis for Object-Oriented Systems

    Institute of Scientific and Technical Information of China (English)

    Swapan Bhattacharya; Ananya Kanjilal

    2006-01-01

    The basic features of object-oriented software make it difficult to apply traditional testing methods to object-oriented systems. The Control Flow Graph (CFG) is a well-known model used for identification of independent paths in procedural software. This paper highlights the problem of constructing CFGs in object-oriented systems and proposes a new model named Extended Control Flow Graph (ECFG) for code based analysis of Object-Oriented (OO) software. The ECFG is a layered CFG whose nodes refer to methods rather than statements. A new metric, Extended Cyclomatic Complexity (E-CC), is developed, which is analogous to McCabe's Cyclomatic Complexity (CC) and refers to the number of independent execution paths within the OO software. The different ways in which the CFGs of individual methods are connected in an ECFG are presented, and formulas for E-CC in these different cases are proposed. Finally, we consider an example in Java and, based on its ECFG, apply these cases to arrive at the E-CC of the total system; we also propose a methodology for calculating the basis set, i.e., the set of independent paths for the OO system, which will help in the creation of test cases for code testing.
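As background for E-CC, McCabe's formula for a single flow graph is CC = E − N + 2P (edges, nodes, connected components), the number of independent execution paths; a minimal sketch on a toy CFG (node names are illustrative):

```python
# McCabe's cyclomatic complexity CC = E - N + 2P, the base quantity that
# E-CC generalizes to layered, method-level graphs.

def cyclomatic_complexity(edges, num_nodes, num_components=1):
    """CC = E - N + 2P for a control flow graph."""
    return len(edges) - num_nodes + 2 * num_components

# CFG of `if (c) {A} else {B}; while (d) {C}`:
edges = [("entry", "if"), ("if", "A"), ("if", "B"), ("A", "join"),
         ("B", "join"), ("join", "while"), ("while", "C"),
         ("C", "while"), ("while", "exit")]
print(cyclomatic_complexity(edges, num_nodes=8))  # 9 - 8 + 2 = 3
```

CC = 3 matches the two decision points (the if and the while) plus one.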

  13. Block-based embedded color image and video coding

    Science.gov (United States)

    Nagaraj, Nithin; Pearlman, William A.; Islam, Asad

    2004-01-01

    Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known still grayscale image coders such as EZW, SPIHT and JPEG2000. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best known color coders, while the perceptual quality of its reconstruction is superior to that of SPIHT and JPEG2000. We then propose a moving picture based coding system called Motion-SPECK with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode, where every frame is coded at the same bit-rate, and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000, while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK, such as embeddedness, low computational complexity, highly efficient performance, fast decoding and low dynamic memory requirements. The intended applications of Motion-SPECK would be high-end and emerging video applications such as high-quality digital video recording systems, Internet video and medical imaging.

  14. A Novel User Authentication Scheme Based on QR-Code

    Directory of Open Access Journals (Sweden)

    Kuan-Chieh Liao

    2010-08-01

    Full Text Available User authentication is one of the fundamental procedures to ensure secure communications and the sharing of system resources over an insecure public network channel. Thus, a simple and efficient authentication mechanism is required for securing the network system in a real environment. In general, the password-based authentication mechanism provides the basic capability to prevent unauthorized access. In particular, the purpose of the one-time password is to make it more difficult to gain unauthorized access to restricted resources. Instead of using the password file as in conventional authentication systems, many researchers have devoted themselves to implementing various one-time password schemes using smart cards, time-synchronized tokens or short message service in order to reduce the risk of tampering and the maintenance cost. However, these schemes are impractical because the required hardware devices are far from ubiquitous, or because of their infrastructure requirements. To remedy these weaknesses, the attractive QR-code technique can be introduced into our one-time password authentication protocol. Unlike previous schemes, the proposed scheme based on QR codes not only eliminates the usage of the password verification table, but is also a cost-effective solution, since most internet users already have mobile phones. For this reason, instead of carrying around a separate hardware token for each security domain, the handiness of the mobile phone makes our approach more practical and convenient.
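As a hedged illustration of the one-time-password idea (not the paper's exact construction), a counter-based password can be derived from a shared secret with an HMAC; in a QR-code scheme the code would merely transport the challenge or response to the phone's camera.

```python
# Illustrative counter-based one-time password: server and phone share a
# secret, and each counter value yields a fresh short numeric password.
import hashlib
import hmac

def one_time_password(secret: bytes, counter: int, digits: int = 6) -> int:
    """Derive a numeric one-time password from a shared secret and counter."""
    message = counter.to_bytes(8, "big")
    digest = hmac.new(secret, message, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % (10 ** digits)

print(one_time_password(b"shared-secret", counter=1))
print(one_time_password(b"shared-secret", counter=2))  # changes with the counter
```

Because no password table is stored, compromising the server's verification data reveals nothing reusable, which is the property the scheme above also targets.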

  15. Multi-View Distributed Video Coding Based on Discrete Cosine Transform

    Directory of Open Access Journals (Sweden)

    Guanqun Liu

    2014-07-01

    Full Text Available To investigate the allocation scheme of multi-view distributed video coding (DVC), corresponding improvements to traditional multi-view DVC are proposed. Traditional multi-view DVC (Wyner-Ziv DVC) encodes all areas of a Wyner-Ziv frame indiscriminately, based on Turbo or LDPC codes. In this kind of encoding, the decoder cannot decode areas of intense motion accurately and sends more requests over the feedback channel, which lowers coding efficiency; the inaccurate decoding of such areas causes distortion in parts of the image. In this paper, a distributed video coding algorithm based on the discrete cosine transform (DCT) is proposed. The algorithm combines ROI decision criteria to separate areas of intense motion from the rest of the frame. For areas of intense motion, low-frequency DCT coefficients are extracted, as in the DCT-R algorithm, to assist the decoder; the decoder uses the already decoded low-frequency DCT coefficients to carry out bi-directional motion estimation. Simulation experiments verify the effectiveness of the improved multi-view DVC algorithm proposed in this paper.

  16. Coding technique with progressive reconstruction based on VQ and entropy coding applied to medical images

    Science.gov (United States)

    Martin-Fernandez, Marcos; Alberola-Lopez, Carlos; Guerrero-Rodriguez, David; Ruiz-Alzola, Juan

    2000-12-01

    In this paper we propose a novel lossless coding scheme for medical images that allows the final user to switch between a lossy and a lossless mode. This is done by means of a progressive reconstruction philosophy (which can be interrupted at will), so we believe that our scheme offers a way to trade off between the accuracy needed for medical diagnosis and the information reduction needed for storage and transmission. We combine vector quantization, run-length bit-plane coding and entropy coding. Specifically, the first step is a vector quantization procedure; the centroid codes are Huffman-coded making use of a set of probabilities that are calculated in the learning phase. The image is reconstructed at the coder in order to obtain the error image; this second image is divided into bit planes, which are then run-length and Huffman coded. A second statistical analysis is performed during the learning phase to obtain the parameters needed in this final stage. Our coder is currently trained for hand radiographs and fetal echographies. We compare our results for these two types of images to classical results on bit-plane coding and the JPEG standard. Our coder turns out to outperform both of them.
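The error-image stage can be sketched as bit-plane decomposition followed by run-length coding (toy residual values; the actual coder then Huffman-codes the runs with probabilities estimated in the learning phase):

```python
def bit_planes(values, n_bits=4):
    """Decompose non-negative residuals into bit planes, MSB plane first."""
    return [[(v >> b) & 1 for v in values] for b in range(n_bits - 1, -1, -1)]

def run_length(plane):
    """Encode a bit plane as (bit, run) pairs."""
    runs, prev, count = [], plane[0], 0
    for bit in plane:
        if bit == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = bit, 1
    runs.append((prev, count))
    return runs

residual = [0, 0, 1, 3, 0, 0, 0, 2]   # toy VQ error image (one row)
planes = bit_planes(residual)
print(run_length(planes[-1]))         # LSB plane: [(0, 2), (1, 2), (0, 4)]
```

Sending planes MSB-first is what makes the reconstruction progressive: decoding can stop after any plane and still yield a coarser but valid image.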

  17. Depth Measurement Based on Infrared Coded Structured Light

    Directory of Open Access Journals (Sweden)

    Tong Jia

    2014-01-01

    Full Text Available Depth measurement is a challenging problem in computer vision research. In this study, we first design a new grid pattern and develop a sequence coding and decoding algorithm to process the pattern. Second, we propose a linear fitting algorithm to derive the linear relationship between object depth and pixel shift. Third, we obtain depth information on an object based on this linear relationship. Moreover, 3D reconstruction is implemented based on the Delaunay triangulation algorithm. Finally, we utilize the regularity of the error curves to correct the systematic errors and improve the measurement accuracy. The experimental results show that the accuracy of depth measurement is related to the step length of the moving object.
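The linear-fitting step can be sketched with synthetic calibration data (the numbers below are illustrative, not from the paper):

```python
# Fit depth = a * pixel_shift + b by ordinary least squares, then use the
# line to estimate depth for a new observed shift of the coded pattern.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

shifts = [10, 20, 30, 40]           # observed pixel shifts (calibration)
depths = [50.0, 70.0, 90.0, 110.0]  # corresponding calibrated depths (cm)
a, b = fit_line(shifts, depths)
print(a * 25 + b)                   # depth estimate for a new 25-px shift: 80.0
```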

  18. Improved RB-HARQ scheme based on structured LDPC codes

    Institute of Scientific and Technical Information of China (English)

    WANG Wen-jun; LIN Yue-wei; YAN Yuan

    2007-01-01

    Reliability-based hybrid automatic repeat request (RB-HARQ) is a recently introduced approach to incremental-redundancy ARQ. In the RB-HARQ scheme, the bits to be retransmitted are adaptively selected at the receiver based on the estimated bit reliability. This can result in significant performance gain but requires huge overhead in the feedback channel. In this study, an improved RB-HARQ scheme (IRB-HARQ) for structured low-density parity-check codes is proposed, which simplifies the comparison operations needed to search for the bits to be retransmitted and outperforms the RB-HARQ scheme when the bit transmission power for the request messages on the feedback link is taken into account. Simulation results show that the IRB-HARQ scheme is more efficient and practical than the RB-HARQ scheme.
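The receiver-side selection can be sketched as picking the k least-reliable soft values (the LLRs below are illustrative; the scheme's contribution is precisely to simplify this search for structured LDPC codes):

```python
def bits_to_retransmit(llrs, k):
    """Indices of the k least-reliable bits, i.e. smallest |LLR|."""
    return sorted(range(len(llrs)), key=lambda i: abs(llrs[i]))[:k]

llrs = [4.2, -0.3, 2.9, 0.1, -3.8, -1.1]   # soft values after decoding
print(bits_to_retransmit(llrs, 2))          # [3, 1]
```

Only these indices are sent back, which is why the size and coding of the request message dominates the feedback-channel overhead discussed above.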

  19. Fingerprint indexing based on Minutia Cylinder-Code.

    Science.gov (United States)

    Cappelli, Raffaele; Ferrara, Matteo; Maltoni, Davide

    2011-05-01

    This paper proposes a new hash-based indexing method to speed up fingerprint identification in large databases. A Locality-Sensitive Hashing (LSH) scheme has been designed relying on Minutia Cylinder-Code (MCC), which proved to be very effective in mapping a minutiae-based representation (position/angle only) into a set of fixed-length transformation-invariant binary vectors. A novel search algorithm has been designed thanks to the derivation of a numerical approximation for the similarity between MCC vectors. Extensive experiments have been carried out to compare the proposed approach against 15 existing methods over all the benchmarks typically used for fingerprint indexing. In spite of the smaller set of features used (top-performing methods usually combine more features), the new approach outperforms existing ones in almost all of the cases.
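The indexing idea can be sketched with a generic bit-sampling LSH over fixed-length binary vectors (the hash family here is an illustrative stand-in; MCC's actual similarity approximation is more involved):

```python
# Bit-sampling LSH: vectors that agree on the sampled bit positions land in
# the same bucket, so only bucket members need full similarity scoring.
import random

def make_hashes(n_tables, bits_per_hash, vec_len, seed=7):
    """Each hash is a random selection of bit positions (assumed family)."""
    rng = random.Random(seed)
    return [tuple(rng.randrange(vec_len) for _ in range(bits_per_hash))
            for _ in range(n_tables)]

def build_index(vectors, hashes):
    tables = [{} for _ in hashes]
    for vid, vec in enumerate(vectors):
        for table, h in zip(tables, hashes):
            table.setdefault(tuple(vec[i] for i in h), set()).add(vid)
    return tables

def candidates(query, tables, hashes):
    """Union of the buckets the query falls into across all tables."""
    found = set()
    for table, h in zip(tables, hashes):
        found |= table.get(tuple(query[i] for i in h), set())
    return found

vectors = [[1, 0, 1, 1, 0, 0, 1, 0],
           [0, 1, 0, 0, 1, 1, 0, 1],
           [1, 0, 1, 1, 0, 1, 1, 0]]
hashes = make_hashes(n_tables=4, bits_per_hash=3, vec_len=8)
tables = build_index(vectors, hashes)
print(candidates(vectors[0], tables, hashes))  # always contains 0
```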

  20. Understanding the Long-Term Spectral Variability of Cygnus X-1 from BATSE and ASM Observations

    Science.gov (United States)

    Zdziarski, Andrzej A.; Poutanen, Juri; Paciesas, William S.; Wen, Linqing; Six, N. Frank (Technical Monitor)

    2002-01-01

    We present a spectral analysis of observations of Cygnus X-1 by the RXTE/ASM (1.5-12 keV) and CGRO/BATSE (20-300 keV), including about 1200 days of simultaneous data. We find a number of correlations between intensities and hardnesses in different energy bands from 1.5 keV to 300 keV. In the hard (low) spectral state, there is a negative correlation between the ASM 1.5-12 keV flux and the hardness at any energy. In the soft (high) spectral state, the ASM flux is positively correlated with the ASM hardness (as previously reported) but uncorrelated with the BATSE hardness. In both spectral states, the BATSE hardness correlates with the flux above 100 keV, while it shows no correlation with the flux in the 20-100 keV range. At the same time, there is clear correlation between the BATSE fluxes below and above 100 keV. In the hard state, most of the variability can be explained by softening the overall spectrum with a pivot at approximately 50 keV. The observations show that there has to be another, independent variability pattern of lower amplitude where the spectral shape does not change when the luminosity changes. In the soft state, the variability is mostly caused by a variable hard (Comptonized) spectral component of a constant shape superimposed on a constant soft blackbody component. These variability patterns are in agreement with the dependence of the rms variability on the photon energy in the two states. We interpret the observed correlations in terms of theoretical Comptonization models. In the hard state, the variability appears to be driven mostly by changing flux in seed photons Comptonized in a hot thermal plasma cloud with an approximately constant power supply. In the soft state, the variability is consistent with flares of hybrid, thermal/nonthermal, plasma with variable power above a stable cold disk. 
Also, based on broadband pointed observations simultaneous with those of the ASM and BATSE, we find the intrinsic bolometric luminosity increases by a

  1. Network Coding Parallelization Based on Matrix Operations for Multicore Architectures

    DEFF Research Database (Denmark)

    Wunderlich, Simon; Cabrera, Juan; Fitzek, Frank

    2015-01-01

    . Despite the fact that single core implementations show already comparable coding speeds with standard coding approaches, this paper pushes network coding to the next level by exploiting multicore architectures. The disruptive idea presented in the paper is to break with current software implementations...... and coding approaches and to adopt highly optimized dense matrix operations from the high performance computation field for network coding in order to increase the coding speed. The paper presents the novel coding approach for multicore architectures and shows coding speed gains on a commercial platform...... such as the Raspberry Pi2 with four cores in the order of up to one full magnitude. The speed increase gain is even higher than the number of cores of the Raspberry Pi2 since the newly introduced approach exploits the cache architecture way better than by-the-book matrix operations. Copyright © 2015 by the Institute...

  2. 14 CFR 330.31 - What data must air carriers submit concerning ASMs or RTMs?

    Science.gov (United States)

    2010-01-01

    ... combination passenger/cargo carrier, you must have submitted your August 2001 total completed ASM report to... correct an error that you document to the Department, you must not alter the ASM or RTM reports...

  3. Gröbner Bases, Coding, and Cryptography

    CERN Document Server

    Sala, Massimiliano; Perret, Ludovic

    2009-01-01

    Coding theory and cryptography allow secure and reliable data transmission, which is at the heart of modern communication. This book offers a comprehensive overview on the application of commutative algebra to coding theory and cryptography. It analyzes important properties of algebraic/geometric coding systems individually.

  4. DESIGN OF QUASI-CYCLIC LDPC CODES BASED ON EUCLIDEAN GEOMETRIES

    Institute of Scientific and Technical Information of China (English)

    Liu Yuanhua; Niu Xinliang; Wang Xinmei; Fan Jiulun

    2010-01-01

    A new method for constructing Quasi-Cyclic (QC) Low-Density Parity-Check (LDPC) codes based on Euclidean Geometry (EG) is presented. The proposed method results in a class of QC-LDPC codes with girth of at least 6, and the designed codes perform very close to the Shannon limit with iterative decoding. Simulations show that the designed QC-LDPC codes have almost the same performance as the existing EG-LDPC codes.

  5. An Efficient Image Compression Technique Based on Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Prof. Rajendra Kumar Patel

    2012-12-01

    Full Text Available The rapid growth of digital imaging applications, including desktop publishing, multimedia, teleconferencing, and high-definition visual media, has increased the need for effective and standardized image compression techniques. Digital images play a very important role in conveying detailed information. The key obstacle for many applications is the vast amount of data required to represent a digital image directly. The various processes of digitizing images to obtain them in the best quality, for clearer and more accurate information, lead to a requirement for more storage space and better storage and access mechanisms in the form of hardware or software. In this paper we concentrate mainly on this flaw, aiming to reduce the storage space while keeping the best image quality. State-of-the-art techniques can compress typical images from 1/10 to 1/50 of their uncompressed size without visibly affecting image quality. From our study we observe that there is a need for a good image compression technique which provides better reduction in terms of storage and quality. Arithmetic coding is one of the best ways of reducing the encoded data. So in this paper we propose an arithmetic coding with Walsh transform based image compression technique, which is an efficient way of achieving such reduction.
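The core of arithmetic coding can be sketched as interval narrowing under a fixed symbol model (the probabilities are illustrative; a real coder also handles renormalization and incremental bit output):

```python
# Each symbol shrinks [low, high) in proportion to its probability; any
# number inside the final interval identifies the whole message.

def arithmetic_interval(message, probs):
    cumulative, total = {}, 0.0
    for sym, p in probs.items():
        cumulative[sym] = (total, total + p)
        total += p
    low, high = 0.0, 1.0
    for sym in message:
        lo, hi = cumulative[sym]
        span = high - low
        low, high = low + span * lo, low + span * hi
    return low, high

probs = {"a": 0.6, "b": 0.3, "c": 0.1}
low, high = arithmetic_interval("aab", probs)
print(low, high)  # "aab" maps to [0.216, 0.324)
```

Likely symbols shrink the interval only a little, so they cost few bits: the interval width equals the product of the symbol probabilities, and -log2(width) is the message's ideal code length.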

  6. Spike-based population coding and working memory.

    Directory of Open Access Journals (Sweden)

    Martin Boerlin

    2011-02-01

    Full Text Available Compelling behavioral evidence suggests that humans can make optimal decisions despite the uncertainty inherent in perceptual or motor tasks. A key question in neuroscience is how populations of spiking neurons can implement such probabilistic computations. In this article, we develop a comprehensive framework for optimal, spike-based sensory integration and working memory in a dynamic environment. We propose that probability distributions are inferred spike-per-spike in recurrently connected networks of integrate-and-fire neurons. As a result, these networks can combine sensory cues optimally, track the state of a time-varying stimulus and memorize accumulated evidence over periods much longer than the time constant of single neurons. Importantly, we propose that population responses and persistent working memory states represent entire probability distributions and not only single stimulus values. These memories are reflected by sustained, asynchronous patterns of activity which make relevant information available to downstream neurons within their short time window of integration. Model neurons act as predictive encoders, only firing spikes which account for new information that has not yet been signaled. Thus, spike times signal deterministically a prediction error, contrary to rate codes in which spike times are considered to be random samples of an underlying firing rate. As a consequence of this coding scheme, a multitude of spike patterns can reliably encode the same information. This results in weakly correlated, Poisson-like spike trains that are sensitive to initial conditions but robust to even high levels of external neural noise. This spike train variability reproduces the one observed in cortical sensory spike trains, but cannot be equated to noise. On the contrary, it is a consequence of optimal spike-based inference. 
In contrast, we show that rate-based models perform poorly when implemented with stochastically spiking neurons.
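The predictive-encoder idea, with spikes fired only for signal the prediction does not yet explain, can be caricatured by a sigma-delta-like sketch (the threshold and leak values are illustrative assumptions, not the article's integrate-and-fire model):

```python
# A unit fires only when the error between the input and its running
# prediction exceeds a threshold, so spikes carry only new information.

def predictive_encode(signal, threshold=0.5, decay=0.8):
    prediction, spikes = 0.0, []
    for x in signal:
        if x - prediction > threshold:   # unexplained input: fire
            spikes.append(1)
            prediction += 1.0            # the spike updates the prediction
        else:
            spikes.append(0)
        prediction *= decay              # the prediction leaks between samples
    return spikes

sig = [0.0, 0.2, 1.4, 1.4, 1.4, 0.1]
print(predictive_encode(sig))  # [0, 0, 1, 1, 0, 0]
```

Note that the sustained input produces only two spikes: once the prediction tracks the stimulus, silence itself is informative, which is the sense in which spike timing is deterministic rather than a random sample of a rate.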

  7. Development of multi-physics code systems based on the reactor dynamics code DYN3D

    Energy Technology Data Exchange (ETDEWEB)

    Kliem, Soeren; Gommlich, Andre; Grahn, Alexander; Rohde, Ulrich [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany); Schuetze, Jochen [ANSYS Germany GmbH, Darmstadt (Germany); Frank, Thomas [ANSYS Germany GmbH, Otterfing (Germany); Gomez Torres, Armando M.; Sanchez Espinoza, Victor Hugo [Karlsruher Institut fuer Technologie (KIT), Eggenstein-Leopoldshafen (Germany)

    2011-07-15

    The reactor dynamics code DYN3D has been coupled with the CFD code ANSYS CFX and with the 3D thermal hydraulic core model FLICA4. In the coupling with ANSYS CFX, DYN3D calculates the neutron kinetics and the fuel behavior, including the heat transfer to the coolant. The physical data interface between the codes is the volumetric heat release rate into the coolant. In the coupling with FLICA4, only the neutron kinetics module of DYN3D is used. Fluid dynamics and related transport phenomena in the reactor's coolant, as well as fuel behavior, are calculated by FLICA4. The correctness of the coupling of DYN3D with both thermal hydraulic codes was verified by the calculation of different test problems. These test problems were set up in such a way that comparison with the DYN3D stand-alone code was possible. This included steady-state and transient calculations of a mini-core consisting of nine real-size PWR fuel assemblies with ANSYS CFX/DYN3D, as well as mini-core and full-core steady-state calculations using FLICA4/DYN3D. (orig.)

  8. After-School Math PLUS (ASM+) Final Evaluation Report

    Science.gov (United States)

    Academy for Educational Development, 2007

    2007-01-01

    This report summarizes findings from the Academy for Educational Development's (AED's) evaluation of After-School Math PLUS (ASM+). This program was designed to help students find the math in everyday experiences and create awareness about the importance of math skills for future career options. The evaluation was conducted by AED's Center for…

  9. Packet combining based on cross-packet coding

    Institute of Scientific and Technical Information of China (English)

    LIN DengSheng; XIAO Ming; LI ShaoQian

    2013-01-01

    We propose a packet combining scheme using cross-packet coding. With this coding scheme, one redundant packet can be used to ensure the error correction of multiple source packets; thus, the proposed scheme can increase the code rate. Moreover, the proposed coding scheme also has advantages in decoding complexity, in reducing undetectable errors (through the proposed low-complexity decoder) and in flexibility (it is applicable to channels with and without feedback). A theoretical analysis under the proposed low-complexity decoding algorithm is given to maximize the code rate by optimizing the number of source packets. Finally, we give numerical results to demonstrate the advantages of the proposed scheme in terms of code rate, compared to traditional packet combining without coding and to ARQ (automatic repeat-request) techniques.
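The simplest instance of one redundant packet protecting several source packets is an XOR parity; the proposed code is more general, so this is only a sketch of the underlying idea:

```python
# One XOR parity packet lets the receiver reconstruct any single lost
# source packet from the survivors.

def xor_packets(packets):
    """Bytewise XOR of equal-length packets."""
    out = bytearray(len(packets[0]))
    for packet in packets:
        for i, byte in enumerate(packet):
            out[i] ^= byte
    return bytes(out)

src = [b"pkt1", b"pkt2", b"pkt3"]
parity = xor_packets(src)           # the single redundant packet

# Suppose src[1] is lost; XOR the parity with the surviving packets.
recovered = xor_packets([parity, src[0], src[2]])
print(recovered)  # b'pkt2'
```

Three source packets protected by one parity packet give a code rate of 3/4, illustrating how covering more source packets with one redundant packet raises the rate, at the price of weaker protection.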

  10. Proposed Arabic Text Steganography Method Based on New Coding Technique

    Directory of Open Access Journals (Sweden)

    Assist. prof. Dr. Suhad M. Kadhem

    2016-09-01

    Full Text Available Steganography is one of the important fields of information security; it depends on hiding secret information in a cover medium (video, image, audio, or text) such that an unauthorized person fails to realize its existence. One of the lossless data compression techniques used for files that contain much redundant data is run-length encoding (RLE). Sometimes the RLE output will be expanded rather than compressed, and this is the main problem of RLE. In this paper we use a new coding method whose output contains sequences of ones with few zeros, so the modified RLE that we propose in this paper will be suitable for compression. Finally, we employ the modified RLE output for steganography purposes, based on Unicode and non-printed characters, to hide the secret information in an Arabic text.
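The RLE expansion problem mentioned above is easy to demonstrate: run-length pairs compress long runs but expand alternating data, which is why the authors first recode the input into a run-friendly form:

```python
def rle(s):
    """Classic run-length encoding as (char, run) pairs."""
    out, prev, count = [], s[0], 0
    for ch in s:
        if ch == prev:
            count += 1
        else:
            out.append((prev, count))
            prev, count = ch, 1
    out.append((prev, count))
    return out

print(rle("aaaabbbcc"))  # [('a', 4), ('b', 3), ('c', 2)] -- compresses
print(rle("abababab"))   # 8 pairs for 8 characters -- expands
```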

  11. Fire-safety engineering and performance-based codes

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt

    Fire-safety Engineering is written as a textbook for engineering students at universities and other institutions of higher education that teach in the area of fire. The book can also be used as a work of reference for consulting engineers, building product manufacturers, contractors, building...... project administrators, etc. The book deals with the following topics: • Historical presentation on the subject of fire • Legislation and building project administration • European fire standardization • Passive and active fire protection • Performance-based Codes • Fire-safety Engineering • Fundamental...... and respiratory physiology • Combustion and natural fires • Explosion theory • Fire Chemistry • Fire Extinction Chemistry and Physics • Evacuation and human behaviour during a fire • Sensitivity and risk analysis • Fire Models • Emission and Radiation Theory...

  12. Whole Language versus Code-Based Skills and Interactional Patterns in Singapore's Early Literacy Program

    Science.gov (United States)

    Vaish, Viniti

    2014-01-01

    This paper analyzes whole language and code-based skills approaches in early literacy and the specific patterns of interaction present in both approaches. Nineteen hours of video data were coded to analyze the nature of whole language versus code-based skills instruction and document the allocation of time spent on each approach in a reading…

  13. Spectrum of SMPD1 mutations in Asian-Indian patients with acid sphingomyelinase (ASM)-deficient Niemann-Pick disease.

    Science.gov (United States)

    Ranganath, Prajnya; Matta, Divya; Bhavani, Gandham SriLakshmi; Wangnekar, Savita; Jain, Jamal Mohammed Nurul; Verma, Ishwar C; Kabra, Madhulika; Puri, Ratna Dua; Danda, Sumita; Gupta, Neerja; Girisha, Katta M; Sankar, Vaikom H; Patil, Siddaramappa J; Ramadevi, Akella Radha; Bhat, Meenakshi; Gowrishankar, Kalpana; Mandal, Kausik; Aggarwal, Shagun; Tamhankar, Parag Mohan; Tilak, Preetha; Phadke, Shubha R; Dalal, Ashwin

    2016-10-01

    Acid sphingomyelinase (ASM)-deficient Niemann-Pick disease is an autosomal recessive lysosomal storage disorder caused by biallelic mutations in the SMPD1 gene. To date, around 185 mutations have been reported in patients with ASM-deficient NPD world-wide, but the mutation spectrum of this disease in India has not yet been reported. The aim of this study was to ascertain the mutation profile in Indian patients with ASM-deficient NPD. We sequenced SMPD1 in 60 unrelated families affected with ASM-deficient NPD. A total of 45 distinct pathogenic sequence variants were found, of which 14 were known and 31 were novel. The variants included 30 missense, 4 nonsense, and 9 frameshift (7 single base deletions and 2 single base insertions) mutations, 1 indel, and 1 intronic duplication. The pathogenicity of the novel mutations was inferred with the help of the mutation prediction software MutationTaster, SIFT, PolyPhen-2, PROVEAN, and HANSA. The effects of the identified sequence variants on the protein structure were studied using the structure modeled with the help of the SWISS-MODEL workspace program. The p.(Arg542*) (c.1624C>T) mutation was the most commonly identified mutation, found in 22% (26 out of 120) of the alleles tested, but haplotype analysis for this mutation did not identify a founder effect for the Indian population. To the best of our knowledge, this is the largest study on mutation analysis of patients with ASM-deficient Niemann-Pick disease reported in the literature and also the first study on the SMPD1 gene mutation spectrum in India. © 2016 Wiley Periodicals, Inc.

  14. A Secure Code-Based Authentication Scheme for RFID Systems

    Directory of Open Access Journals (Sweden)

    Noureddine Chikouche

    2015-08-01

    Full Text Available Two essential problems are still posed in terms of Radio Frequency Identification (RFID) systems: security and resource limitation. Recently, in 2014, Li et al. proposed a mutual authentication scheme for RFID systems based on the Quasi Cyclic-Moderate Density Parity Check (QC-MDPC) McEliece cryptosystem. This cryptosystem is designed to reduce the key sizes. In this paper, we find that this scheme does not provide the untraceability and forward secrecy properties. Furthermore, we propose an improved version of this scheme to eliminate the existing vulnerabilities of the studied scheme. It is based on the QC-MDPC McEliece cryptosystem with the plaintext padded by a random bit-string. Our work also includes a security comparison between our improved scheme and different code-based RFID authentication schemes. We prove the secrecy and mutual authentication properties with the AVISPA (Automated Validation of Internet Security Protocols and Applications) tools. Concerning performance, our scheme is suitable for low-cost tags with resource limitations.

  15. Retargetable Code Generation based on Structural Processor Descriptions

    OpenAIRE

    Leupers, Rainer; Marwedel, Peter

    1998-01-01

    Design automation for embedded systems comprising both hardware and software components demands code generators integrated into electronic CAD systems. These code generators provide the necessary link between software synthesis tools in HW/SW codesign systems and embedded processors. General-purpose compilers for standard processors are often insufficient, because they do not provide flexibility with respect to different target processors and also suffer from inferior code quality....

  16. Entropy-Based Bounds On Redundancies Of Huffman Codes

    Science.gov (United States)

    Smyth, Padhraic J.

    1992-01-01

    Report presents extension of theory of redundancy of binary prefix codes of Huffman type, including derivation of a variety of bounds expressed in terms of the entropy of the source and the size of the alphabet. Recent developments yielded bounds on the redundancy of Huffman codes in terms of the probabilities of various components in the source alphabet. In practice, redundancies of optimal prefix codes are often closer to 0 than to 1.
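The redundancy bounds discussed above compare the average length of an optimal Huffman code with the source entropy. A minimal Python sketch (using a hypothetical four-symbol source, not data from the report) that computes both quantities and their difference:

```python
import heapq
import math

def huffman_lengths(probs):
    """Code lengths of an optimal binary Huffman code for the given probabilities."""
    # Heap items: (probability, unique counter, indices of symbols in the subtree)
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:           # every merge adds one bit to all leaves below it
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, counter, s1 + s2))
        counter += 1
    return lengths

probs = [0.4, 0.3, 0.2, 0.1]                                     # hypothetical source
L = sum(p * l for p, l in zip(probs, huffman_lengths(probs)))    # average code length
H = -sum(p * math.log2(p) for p in probs)                        # source entropy
redundancy = L - H                                               # lies in [0, 1)
print(round(L, 3), round(H, 3), round(redundancy, 3))
```

For this source the redundancy comes out near 0.05, illustrating the report's observation that optimal prefix codes are often much closer to 0 than to 1.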

  17. Parameter subset selection for the dynamic calibration of activated sludge models (ASMs): experience versus systems analysis

    DEFF Research Database (Denmark)

    Ruano, MV; Ribes, J; de Pauw, DJW

    2007-01-01

    In this work we address the issue of parameter subset selection within the scope of activated sludge model calibration. To this end, we evaluate two approaches: (i) systems analysis and (ii) experience-based approach. The evaluation has been carried out using a dynamic model (ASM2d) calibrated...... based approaches which excluded them from their analysis. Systems analysis reveals that parameter significance ranking and size of the identifiable parameter subset depend on the information content of data available for calibration. However, it suffers from heavy computational demand. In contrast...

  18. Fatigue life prediction for weld lines in heavy freight carbody based on the ASME standard

    Institute of Scientific and Technical Information of China (English)

    谢素明; 周晓坤; 李向伟; 李晓峰

    2012-01-01

    To find a method that can accurately predict the fatigue life of weld lines in heavy freight car bodies at the design stage, the fatigue of a 616 armor steel T-joint was evaluated using the analysis methods provided by various standards. Comparison with experimental results shows that the equivalent structural stress method of the ASME standard predicts weld fatigue life with high accuracy and a clear advantage. To improve the fatigue life of the welds in a heavy-haul coal gondola body, a finite element model of the car body including the key weld lines was built, and the fatigue life of these welds was predicted using the equivalent structural stress method and the AAR load spectrum; the predicted weak locations essentially coincide with the locations where fatigue cracks actually occur in service. Based on the structural stress distribution along the welds, an improved bolster structure was proposed that increases the fatigue life of the key car-body welds by 1.7 times.

  19. Improvement of QR Code Recognition Based on Pillbox Filter Analysis

    Directory of Open Access Journals (Sweden)

    Jia-Shing Sheu

    2013-04-01

    Full Text Available The objective of this paper is to improve the recognition of blurred QR code images through pillbox filter analysis. QR code images can be captured by digital video cameras, and many factors contribute to QR code decoding failure, such as low image quality; focus is an important factor affecting quality. This study examines out-of-focus QR code images and aims to improve the recognition of their contents. Many studies have used the pillbox filter (circular averaging filter) to simulate an out-of-focus image, and this method is also used in this investigation. A blurred QR code image is separated into nine levels. In the experiment, four different quantitative approaches are used to reconstruct and decode an out-of-focus QR code image, and the nine reconstructed QR code images are then compared. The final experimental results indicate improvements in identification.
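The circular averaging (pillbox) filter used to simulate defocus can be sketched as follows; the kernel radius and the tiny checkerboard test pattern are hypothetical stand-ins for a real QR image, not the paper's data:

```python
def pillbox_kernel(radius):
    """Pillbox kernel: 1 inside the disc, 0 outside, normalized to sum to 1."""
    size = 2 * radius + 1
    k = [[1.0 if (x - radius) ** 2 + (y - radius) ** 2 <= radius ** 2 else 0.0
          for x in range(size)] for y in range(size)]
    total = sum(map(sum, k))
    return [[v / total for v in row] for row in k]

def convolve(img, kernel):
    """Naive 2D convolution with zero padding; models out-of-focus blur."""
    r = len(kernel) // 2
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx] * kernel[dy + r][dx + r]
            out[y][x] = acc
    return out

# A tiny binary checkerboard stands in for a QR module grid (hypothetical data).
img = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
blurred = convolve(img, pillbox_kernel(2))
```

Varying the kernel radius gives progressively stronger defocus, which is one way to produce the paper's nine blur levels.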

  20. Improvement of QR Code Recognition Based on Pillbox Filter Analysis

    OpenAIRE

    Jia-Shing Sheu; Kai-Chung Teng

    2013-01-01

    The objective of this paper is to perform the innovation design for improving the recognition of a captured QR code image with blur through the Pillbox filter analysis. QR code images can be captured by digital video cameras. Many factors contribute to QR code decoding failure, such as the low quality of the image. Focus is an important factor that affects the quality of the image. This study discusses the out-of-focus QR code image and aims to improve the recognition of the conte...

  1. Calibration Methods for Reliability-Based Design Codes

    DEFF Research Database (Denmark)

    Gayton, N.; Mohamed, A.; Sørensen, John Dalsgaard

    2004-01-01

    The calibration methods are applied to define the optimal code format according to some target safety levels. The calibration procedure can be seen as a specific optimization process where the control variables are the partial factors of the code. Different methods are available in the literature...

  2. Preserving Envelope Efficiency in Performance Based Code Compliance

    Energy Technology Data Exchange (ETDEWEB)

    Thornton, Brian A. [Thornton Energy Consulting (United States); Sullivan, Greg P. [Efficiency Solutions (United States); Rosenberg, Michael I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Baechler, Michael C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-06-20

    The City of Seattle 2012 Energy Code (Seattle 2014), one of the most progressive in the country, is under revision for its 2015 edition. Additionally, city personnel participate in the development of the next generation of the Washington State Energy Code and the International Energy Code. Seattle has pledged carbon neutrality by 2050, including buildings, transportation, and other sectors. The United States Department of Energy (DOE), through Pacific Northwest National Laboratory (PNNL), provided technical assistance to Seattle in understanding the implications of one potential direction for its code development: limiting trade-offs in which long-lived building envelope components less stringent than the prescriptive code envelope requirements are offset by better-than-code but shorter-lived lighting and heating, ventilation, and air-conditioning (HVAC) components through the total building performance modeled energy compliance path. Weaker building envelopes can permanently limit building energy performance even as lighting and HVAC components are upgraded over time, because retrofitting the envelope is less likely and more expensive. Weaker building envelopes may also increase the required size, cost, and complexity of HVAC systems and may adversely affect occupant comfort. This report presents the results of this technical assistance. The use of modeled energy code compliance to trade off envelope components against shorter-lived building components is not unique to Seattle, and the lessons and possible solutions described in this report have implications for other jurisdictions and energy codes.

  3. Direct GPS P-Code Acquisition Method Based on FFT

    Institute of Scientific and Technical Information of China (English)

    LI Hong; LU Mingquan; FENG Zhenming

    2008-01-01

    Recently, direct acquisition of GPS P-code has received considerable attention to enhance the anti-jamming and anti-spoofing capabilities of GPS receivers. This paper describes a P-code acquisition method that uses block searches with large-scale FFT to search code phases and carrier frequency offsets in parallel. To limit memory use, especially when implemented in hardware, only the largest correlation result with its position information was preserved after searching a block of resolution cells in both the time and frequency domains. A second search was used to solve the code phase slip problem induced by the code frequency offset. Simulation results demonstrate that the probability of detection is above 0.99 for carrier-to-noise density ratios in excess of 40 dB-Hz when the predetection integration time is 0.8 ms and 6 non-coherent integrations are used in the analysis.
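The core of such an acquisition scheme, searching every code phase at once by computing a circular correlation with FFTs, can be sketched as follows. This is a noise-free toy with a random ±1 chip sequence standing in for the P-code; the block-search bookkeeping and frequency-offset search of the paper are omitted:

```python
import cmath
import random

def fft(x):
    """Radix-2 Cooley-Tukey FFT (len(x) must be a power of two)."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

def ifft(x):
    n = len(x)
    return [v.conjugate() / n for v in fft([u.conjugate() for u in x])]

def acquire_phase(received, code):
    """All code phases tested in parallel: circular correlation via FFT, pick the peak."""
    R, C = fft(received), fft(code)
    corr = ifft([r * c.conjugate() for r, c in zip(R, C)])
    mags = [abs(v) for v in corr]
    return mags.index(max(mags))

random.seed(1)
n = 64
code = [random.choice([-1.0, 1.0]) for _ in range(n)]  # stand-in PRN chips
shift = 17
received = [code[(i - shift) % n] for i in range(n)]   # delayed replica, noise-free
print(acquire_phase(received, code))                   # recovers the shift, 17
```

One FFT-sized block thus replaces n separate correlations, which is what makes the parallel phase search affordable.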

  4. A Trustability Metric for Code Search based on Developer Karma

    CERN Document Server

    Gysin, Florian S

    2010-01-01

    The promise of search-driven development is that developers will save time and resources by reusing external code in their local projects. To efficiently integrate this code, users must be able to trust it, thus trustability of code search results is just as important as their relevance. In this paper, we introduce a trustability metric to help users assess the quality of code search results and therefore ease the cost-benefit analysis they undertake trying to find suitable integration candidates. The proposed trustability metric incorporates both user votes and cross-project activity of developers to calculate a "karma" value for each developer. Through the karma value of all its developers a project is ranked on a trustability scale. We present JBender, a proof-of-concept code search engine which implements our trustability metric and we discuss preliminary results from an evaluation of the prototype.

  5. An Efficient Retransmission Based on Network Coding with Unicast Flows

    CERN Document Server

    Zhou, Zhiheng; Tan, Yuanquan; Wang, Xing

    2010-01-01

    Recently, the network coding technique has emerged as a promising approach to support reliable transmission over lossy wireless channels. Existing protocols in which users take no account of the encoded packets they already hold during coding or decoding operations are expensive and inefficient. This paper studies the impact of encoded packets on reliable unicast network coding through theoretical analysis. In our approach, receivers not only store the encoded packets they overhear but also report this information to their neighbors, so that users can take encoded packets into account in both coding decisions and decoding operations. Moreover, we propose a redistribution algorithm that maximizes coding opportunities and achieves better retransmission efficiency. Finally, theoretical analysis and simulation results for a wheel network illustrate the improvement in retransmission efficiency due to the encoded packets.
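The basic benefit of retransmitting encoded packets can be illustrated with the classic XOR example (the packet contents here are hypothetical): a single coded retransmission repairs two different losses at two receivers:

```python
def xor_bytes(a, b):
    """XOR two equal-length packets byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

# Receiver A lost p1 but overheard p2; receiver B lost p2 but overheard p1.
p1, p2 = b"pkt-one!", b"pkt-two!"

# Instead of retransmitting both packets, the sender broadcasts their XOR once.
coded = xor_bytes(p1, p2)

# Each receiver XORs the coded packet with the packet it already holds.
recovered_p1 = xor_bytes(coded, p2)   # receiver A recovers p1
recovered_p2 = xor_bytes(coded, p1)   # receiver B recovers p2
```

This is why knowing which (encoded) packets neighbors already hold matters: it determines which combinations are decodable and hence which retransmissions are worth sending.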

  6. A Silent Revolution: From Sketching to Coding--A Case Study on Code-Based Design Tool Learning

    Science.gov (United States)

    Xu, Song; Fan, Kuo-Kuang

    2017-01-01

    Along with the rise of information technology, Computer Aided Design activities are becoming more modern and more complex, but learning how to operate these new design tools has become the main problem facing each designer. This study aimed to identify the problems encountered during the code-based design tool learning period of…

  7. Occupational self-coding and automatic recording (OSCAR): a novel web-based tool to collect and code lifetime job histories in large population-based studies.

    Science.gov (United States)

    De Matteis, Sara; Jarvis, Deborah; Young, Heather; Young, Alan; Allen, Naomi; Potts, James; Darnton, Andrew; Rushton, Lesley; Cullinan, Paul

    2017-03-01

    Objectives The standard approach to the assessment of occupational exposures is through the manual collection and coding of job histories. This method is time-consuming and costly and makes it potentially unfeasible to perform high quality analyses on occupational exposures in large population-based studies. Our aim was to develop a novel, efficient web-based tool to collect and code lifetime job histories in the UK Biobank, a population-based cohort of over 500 000 participants. Methods We developed OSCAR (occupations self-coding automatic recording) based on the hierarchical structure of the UK Standard Occupational Classification (SOC) 2000, which allows individuals to collect and automatically code their lifetime job histories via a simple decision-tree model. Participants were asked to find each of their jobs by selecting appropriate job categories until they identified their job title, which was linked to a hidden 4-digit SOC code. For each occupation a job title in free text was also collected to estimate Cohen's kappa (κ) inter-rater agreement between SOC codes assigned by OSCAR and an expert manual coder. Results OSCAR was administered to 324 653 UK Biobank participants with an existing email address between June and September 2015. Complete 4-digit SOC-coded lifetime job histories were collected for 108 784 participants (response rate: 34%). Agreement between the 4-digit SOC codes assigned by OSCAR and the manual coder for a random sample of 400 job titles was moderately good [κ=0.45, 95% confidence interval (95% CI) 0.42-0.49], and improved when broader job categories were considered (κ=0.64, 95% CI 0.61-0.69 at a 1-digit SOC-code level). Conclusions OSCAR is a novel, efficient, and reasonably reliable web-based tool for collecting and automatically coding lifetime job histories in large population-based studies. Further application in other research projects for external validation purposes is warranted.
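Cohen's kappa, used above to compare OSCAR's automatic SOC codes against a manual coder, corrects raw agreement for agreement expected by chance. A minimal sketch with hypothetical 4-digit codes (not the study's data):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: (observed - expected) / (1 - expected) agreement."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    fa, fb = Counter(codes_a), Counter(codes_b)
    # Chance agreement: product of each code's marginal frequencies.
    expected = sum(fa[c] * fb[c] for c in fa) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical 4-digit SOC codes: automatic tool vs. expert manual coder.
oscar  = ["2315", "5434", "2315", "9139", "5434", "2315"]
manual = ["2315", "5434", "2424", "9139", "5434", "2315"]
print(round(cohens_kappa(oscar, manual), 3))
```

Repeating the computation on truncated codes (e.g. the first digit only) reproduces the study's pattern of higher agreement at broader category levels.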

  8. Supporting Situated Learning Based on QR Codes with Etiquetar App: A Pilot Study

    Science.gov (United States)

    Camacho, Miguel Olmedo; Pérez-Sanagustín, Mar; Alario-Hoyos, Carlos; Soldani, Xavier; Kloos, Carlos Delgado; Sayago, Sergio

    2014-01-01

    EtiquetAR is an authoring tool for supporting the design and enactment of situated learning experiences based on QR tags. Practitioners use etiquetAR for creating, managing and personalizing collections of QR codes with special properties: (1) codes can have more than one link pointing at different multimedia resources, (2) codes can be updated…

  9. A code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check codes

    Science.gov (United States)

    Bai, Cheng-lin; Cheng, Zhi-hui

    2016-09-01

    In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio (SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes to study the polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system performance in the cases of quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) modes. The simulation results indicate that this algorithm can enlarge frequency and phase offset estimation ranges and enhance accuracy of the system greatly, and the bit error rate (BER) performance of the system is improved effectively compared with that of the system employing the traditional NB-LDPC code-aided carrier synchronization algorithm.

  10. Batse/Sax and Batse/RXTE-ASM Joint Spectral Studies of GRBs

    Science.gov (United States)

    Paciesas, William S.

    2002-01-01

    We proposed to make joint spectral analysis of gamma-ray bursts (GRBs) in the BATSE data base that are located within the fields of view of either the BeppoSAX wide field cameras (WFCs) or the RXTE all-sky monitor (ASM). The very broad-band coverage obtained in this way would facilitate various studies of GRB spectra that are difficult to perform with BATSE data alone. Unfortunately, the termination of the CGRO mission in June 2000 was not anticipated at the time of the proposal, and the sample of common events turned out to be smaller than we would have liked.

  11. Raman-based distributed temperature sensor using simplex code and gain controlled EDFA

    Science.gov (United States)

    Bassan, F. R.; Penze, R. S.; Leonardi, A. A.; Fracarolli, J. P. V.; Floridia, C.; Rosolem, J. B.; Fruett, F.

    2015-09-01

    In this work we present a comparison between simplex-coded and optically amplified simplex-coded Raman-based Distributed Temperature Sensing (DTS). A performance increase is demonstrated using an erbium-doped fiber amplifier (EDFA) with a proper gain control scheme that allows a DTS to operate with simplex coding. Using a 63-bit simplex code and a gain-controlled EDFA, we demonstrated temperature resolution and dynamic range improvements of 16 °C @ 10 km and 4 dB, respectively.
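The simplex codes used in such sensors are derived from Hadamard matrices; below is a sketch of the standard Sylvester construction together with the textbook averaging-gain formula (q+1)/(2·√q) for a q-bit simplex code. The 0/1 mapping convention and the decibel expression are common choices, not necessarily the paper's, and the theoretical gain is not the same quantity as the measured 4 dB dynamic-range improvement:

```python
import math

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix; n must be a power of two."""
    H = [[1]]
    while len(H) < n:
        H = [row * 2 for row in H] + [row + [-v for v in row] for row in H]
    return H

def simplex_matrix(q):
    """q x q S-matrix: drop the first row/column of H_(q+1), then map -1 -> 1 (pulse on)."""
    H = hadamard(q + 1)
    return [[1 if H[i][j] == -1 else 0 for j in range(1, q + 1)]
            for i in range(1, q + 1)]

S = simplex_matrix(7)
# Each codeword fires (q+1)/2 = 4 of the 7 pulse slots; averaging over the
# decoded measurements yields the simplex coding gain (q+1) / (2*sqrt(q)).
gain_db_63 = 10 * math.log10((63 + 1) / (2 * math.sqrt(63)))  # ~6 dB for 63 bits
```

Launching several pulses per codeword and decoding by matrix inversion is what improves SNR without raising the peak power, which is the point of pairing the code with a gain-controlled EDFA.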

  12. Improved Fast Fourier Transform Based Method for Code Accuracy Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Tae Wook; Jeong, Jae Jun [Pusan National University, Busan (Korea, Republic of); Choi, Ki Yong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    The capability of the proposed method is discussed. In this study, the limitations of the FFTBM were analyzed. The FFTBM produces quantitatively different results due to its frequency dependence. Because the problem is intensified by including a lot of high frequency components, a new method using a reduced cut-off frequency was proposed. The results of the proposed method show that the shortcomings of FFTBM are considerably relieved. Among them, the fast Fourier transform based method (FFTBM) introduced in 1990 has been widely used to evaluate a code uncertainty or accuracy. Prosek et al., (2008) identified its drawbacks, the so-called 'edge effect'. To overcome the problems, an improved FFTBM by signal mirroring (FFTBM-SM) was proposed and it has been used up to now. In spite of the improvement, the FFTBM-SM yielded different accuracy depending on the frequency components of a parameter, such as pressure, temperature and mass flow rate. Therefore, it is necessary to reduce the frequency dependence of the FFTBMs. In this study, the deficiencies of the present FFTBMs are analyzed and a new method is proposed to mitigate its frequency dependence.

  13. Automatic code generation from the OMT-based dynamic model

    Energy Technology Data Exchange (ETDEWEB)

    Ali, J.; Tanaka, J.

    1996-12-31

    The OMT object-oriented software development methodology suggests creating three models of the system, i.e., object model, dynamic model and functional model. We have developed a system that automatically generates implementation code from the dynamic model. The system first represents the dynamic model as a table and then generates executable Java language code from it. We used inheritance for super-substate relationships. We considered that transitions relate to states in a state diagram exactly as operations relate to classes in an object diagram. In the generated code, each state in the state diagram becomes a class and each event on a state becomes an operation on the corresponding class. The system is implemented and can generate executable code for any state diagram. This makes the role of the dynamic model more significant and the job of designers even simpler.
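The mapping the authors describe, where each state becomes a class and each event on a state becomes an operation on that class, can be sketched in Python from a tabular dynamic model. Their system emits Java and handles full state diagrams with super-substate inheritance; the two-state transition table here is a hypothetical miniature:

```python
# Tabular form of a hypothetical dynamic model: (current_state, event) -> next_state
TABLE = {
    ("Idle", "start"): "Running",
    ("Running", "pause"): "Idle",
    ("Running", "stop"): "Done",
}

def make_state_classes(table):
    """Generate one class per state; each event becomes a method returning an
    instance of the next state's class, mirroring the paper's code generation."""
    states = {s for s, _ in table} | set(table.values())
    classes = {name: type(name, (), {}) for name in states}
    for (state, event), nxt in table.items():
        # Default-argument trick captures the target state for each generated method.
        setattr(classes[state], event, lambda self, n=nxt: classes[n]())
    return classes

classes = make_state_classes(TABLE)
s = classes["Idle"]()
s = s.start()   # Idle --start--> Running
s = s.stop()    # Running --stop--> Done
print(type(s).__name__)
```

Sending an event the current state does not define raises AttributeError, which is the table-driven analogue of an illegal transition.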

  14. GPU-based parallel clustered differential pulse code modulation

    Science.gov (United States)

    Wu, Jiaji; Li, Wenze; Kong, Wanqiu

    2015-10-01

    Hyperspectral remote sensing technology is widely used in marine remote sensing, geological exploration, and atmospheric and environmental remote sensing. Owing to its rapid development, the resolution of hyperspectral images has increased greatly and their data size has grown accordingly. To reduce storage and transmission costs, lossless compression of hyperspectral images has become an important research topic. In recent years, many algorithms have been proposed to reduce the redundancy between different spectra; among them, the most classical and extensible is the Clustered Differential Pulse Code Modulation (C-DPCM) algorithm. The algorithm has three parts: it first clusters all spectral lines and trains linear predictors for each band; it then uses these predictors to predict pixels and obtains the residual image by subtracting the predicted image from the original; finally, it encodes the residual image. However, calculating the predictors is time-consuming. To improve processing speed, we propose a parallel C-DPCM based on CUDA (Compute Unified Device Architecture) with a GPU. General-purpose computing on GPUs has developed greatly in recent years, with GPU capacity improving rapidly through increases in the number of processing units and storage control units. CUDA is a parallel computing platform and programming model created by NVIDIA that gives developers direct access to the virtual instruction set and memory of the parallel computational elements in GPUs. Our core idea is to compute the predictors in parallel. By respectively adopting global memory, shared memory, and register memory, we achieve a decent speedup.
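The prediction/residual step at the heart of DPCM is easy to show in miniature. Below is a sketch using a simple previous-sample predictor on a hypothetical spectral line; the paper's C-DPCM instead clusters spectral lines and trains a linear predictor per band, but the lossless round trip works the same way:

```python
def dpcm_encode(samples):
    """Previous-sample predictor: residual r[i] = x[i] - x[i-1] (r[0] = x[0]).
    Residuals cluster near zero for smooth data, so they entropy-code well."""
    prev = 0
    out = []
    for x in samples:
        out.append(x - prev)
        prev = x
    return out

def dpcm_decode(residuals):
    """Invert the predictor by accumulating residuals."""
    prev = 0
    out = []
    for r in residuals:
        prev += r
        out.append(prev)
    return out

line = [100, 102, 103, 103, 101, 98, 97]   # hypothetical spectral line
res = dpcm_encode(line)                    # small values around zero
assert dpcm_decode(res) == line            # lossless round trip
```

Because each pixel's predictor is independent of the others, the prediction stage parallelizes naturally, which is what the GPU implementation exploits.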

  15. Flowgen: Flowchart-Based Documentation for C++ Codes

    OpenAIRE

    Kosower, David A.; Lopez-Villarejo, J. J.

    2014-01-01

    We present the Flowgen tool, which generates flowcharts from annotated C++ source code. The tool generates a set of interconnected high-level UML activity diagrams, one for each function or method in the C++ sources. It provides a simple and visual overview of complex implementations of numerical algorithms. Flowgen is complementary to the widely-used Doxygen documentation tool. The ultimate aim is to render complex C++ computer codes accessible, and to enhance collaboration between programme...

  16. Efficient tracker based on sparse coding with Euclidean local structure-based constraint

    Institute of Scientific and Technical Information of China (English)

    WANG Hongyuan; ZHANG Ji; CHEN Fuhua

    2016-01-01

    Sparse coding (SC) based visual tracking (l1-tracker) is gaining increasing attention, and many related algorithms have been developed. In these algorithms, each candidate region is sparsely represented as a set of target templates. However, the structure connecting these candidate regions is usually ignored. Lu proposed an NLSSC-tracker with non-local self-similarity sparse coding to address this issue, which has a high computational cost. In this study, we propose a Euclidean local-structure constraint based sparse coding tracker with a smoothed Euclidean local structure. With this tracker, the optimization procedure is transformed into a small-scale l1-optimization problem, significantly reducing the computational cost. Extensive experimental results on visual tracking demonstrate the effectiveness and efficiency of the proposed algorithm.

  17. Mode-dependent templates and scan order for H.264/AVC-based intra lossless coding.

    Science.gov (United States)

    Gu, Zhouye; Lin, Weisi; Lee, Bu-Sung; Lau, Chiew Tong; Sun, Ming-Ting

    2012-09-01

    In H.264/advanced video coding (AVC), lossless coding and lossy coding share the same entropy coding module. However, the entropy coders in the H.264/AVC standard were originally designed for lossy video coding and do not yield adequate performance for lossless video coding. In this paper, we analyze the problem with the current lossless coding scheme and propose a mode-dependent template (MD-template) based method for intra lossless coding. By exploring the statistical redundancy of the prediction residual in the H.264/AVC intra prediction modes, more zero coefficients are generated. By designing a new scan order for each MD-template, the scanned coefficient sequence fits the H.264/AVC entropy coders better. A fast implementation algorithm is also designed. With little increase in computation, experimental results confirm that the proposed fast algorithm achieves about 7.2% bit saving compared with the current H.264/AVC fidelity range extensions high profile.

  18. A Distinguisher-Based Attack of a Homomorphic Encryption Scheme Relying on Reed-Solomon Codes

    CERN Document Server

    Gauthier, Valérie; Tillich, Jean-Pierre

    2012-01-01

    Bogdanov and Lee suggested a homomorphic public-key encryption scheme based on error correcting codes. The underlying public code is a modified Reed-Solomon code obtained from inserting a zero submatrix in the Vandermonde generating matrix defining it. The columns that define this submatrix are kept secret and form a set $L$. We give here a distinguisher that detects if one or several columns belong to $L$ or not. This distinguisher is obtained by considering the code generated by component-wise products of codewords of the public code (the so called "square code"). This operation is applied to punctured versions of this square code obtained by picking a subset $I$ of the whole set of columns. It turns out that the dimension of the punctured square code is directly related to the cardinality of the intersection of $I$ with $L$. This allows an attack which recovers the full set $L$ and which can then decrypt any ciphertext.

  19. A MATLAB based 3D modeling and inversion code for MT data

    Science.gov (United States)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

    The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.

  20. Experts on ASME BPE: Promoting the Application of ASME BPE in Asia

    Institute of Scientific and Technical Information of China (English)

    Tony Cirillo

    2007-01-01

    ASME BPE is an international industry standard whose fields of application include the production of biological products, pharmaceuticals, and personal care products; certification work is currently carried out in more than 30 countries worldwide. However, the rapidly growing economies of Asia have so far rarely participated. The ASME BPE committee therefore very much hopes to recruit members from Asian countries and involve them in drafting the standard, so as to expand its field of use.

  1. Assays for in vitro monitoring of human airway smooth muscle (ASM) and human pulmonary arterial vascular smooth muscle (VSM) cell migration.

    Science.gov (United States)

    Goncharova, Elena A; Goncharov, Dmitry A; Krymskaya, Vera P

    2006-01-01

    Migration of human pulmonary vascular smooth muscle (VSM) cells contributes to vascular remodeling in pulmonary arterial hypertension and atherosclerosis. Evidence also indicates that, in part, migration of airway smooth muscle (ASM) cells may contribute to airway remodeling associated with asthma. Here we describe migration of VSM and ASM cells in vitro using Transwell or Boyden chamber assays. Because dissecting signaling mechanisms regulating cell migration requires molecular approaches, our protocol also describes how to assess migration of transfected VSM and ASM cells. Transwell or Boyden chamber assays can be completed in approximately 8 h and include plating of serum-deprived VSM or ASM cell suspension on membrane precoated with collagen, migration of cells toward chemotactic gradient and visual (Transwell) or digital (Boyden chamber) analysis of membrane. Although the Transwell assay is easy, the Boyden chamber assay requires hands-on experience; however, both assays are reliable cell-based approaches providing valuable information on how chemotactic and inflammatory factors modulate VSM and ASM migration.

  2. 3D Scan-Based Wavelet Transform and Quality Control for Video Coding

    Directory of Open Access Journals (Sweden)

    Parisot Christophe

    2003-01-01

    Full Text Available Wavelet coding has been shown to achieve better compression than DCT coding and moreover allows scalability. The 2D DWT can easily be extended to 3D and thus applied to video coding. However, 3D subband coding of video suffers from two drawbacks: the first is the amount of memory required for coding large 3D blocks; the second is the loss of temporal quality due to the temporal splitting of the sequence. In fact, 3D block-based video coders produce jerks, which appear at the temporal borders of blocks during video playback. In this paper, we propose a new temporal scan-based wavelet transform method for video coding that combines the advantages of wavelet coding (performance, scalability) with acceptably reduced memory requirements, no additional CPU complexity, and no jerks. We also propose an efficient quality allocation procedure to ensure constant quality over time.
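The pairwise average/difference step underlying such transforms, which scan-based coders apply along the temporal axis, can be sketched with a one-level Haar DWT on a hypothetical 1D signal (the paper's scan-based scheme and quality allocation are beyond this sketch):

```python
import math

def haar_step(x):
    """One level of the Haar DWT: pairwise averages (lowpass) and differences (highpass)."""
    s = math.sqrt(2)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Perfectly reconstruct the signal from one Haar decomposition level."""
    s = math.sqrt(2)
    x = []
    for a, d in zip(approx, detail):
        x.extend([(a + d) / s, (a - d) / s])
    return x

sig = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]   # hypothetical frame-to-frame samples
a, d = haar_step(sig)
rec = haar_inverse(a, d)
assert all(abs(u - v) < 1e-9 for u, v in zip(rec, sig))
```

Recursing on the approximation band gives the multi-level decomposition; a scan-based variant applies this along time without buffering the whole sequence, which is the memory saving the paper targets.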

  3. Performance of FSO-OFDM based on BCH code

    Directory of Open Access Journals (Sweden)

    Jiao Xiao-lu

    2016-01-01

    Full Text Available In contrast to the traditional OOK (on-off keying) system, an FSO-OFDM system can resist atmospheric scattering and improve spectrum utilization effectively. Owing to the instability of the atmospheric channel, the system is affected by various factors, resulting in a high BER. The BCH code has good error-correcting ability, particularly at short and medium code lengths, where its performance is close to the theoretical value; it can detect burst errors as well as correct random errors. Therefore, a BCH code is applied to the system to reduce the BER. Finally, a semi-physical simulation was conducted with MATLAB. The simulation results show that at a BER of 10-2, OFDM outperforms OOK by 4 dB. Under different weather conditions (extension rain, advection fog, dust days), at a BER of 10-5, BCH(255,191) channel coding outperforms the uncoded system by 4~5 dB. In all, OFDM technology and the BCH code can reduce the system BER.

  4. Ternary Tree and Clustering Based Huffman Coding Algorithm

    Directory of Open Access Journals (Sweden)

    Pushpa R. Suri

    2010-09-01

    Full Text Available This study focuses on the use of a ternary tree rather than a binary tree for Huffman coding. A new two-pass algorithm for encoding Huffman ternary tree codes was implemented, with the aim of finding the codeword length of each symbol. Huffman encoding is a two-pass problem: the first pass collects the letter frequencies, which are then used to create the Huffman tree. Since char values range from -128 to 127, the data are stored as unsigned chars so that the range is 0 to 255. The frequency table is written to the output file; characters are then read from the input file, their codes are looked up, and the encoding is written to the output file. Once a Huffman code has been generated, data may be encoded simply by replacing each symbol with its code. To reduce memory size and speed up finding the codeword length of a symbol in a Huffman tree, we propose a memory-efficient data structure to represent the codeword lengths of the Huffman ternary tree.
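A ternary Huffman coder merges the three least-frequent nodes at each step, first padding with zero-weight dummy symbols so that every merge consumes exactly three nodes. A sketch with hypothetical frequencies (the paper's memory-efficient length structure is not reproduced here):

```python
import heapq

def ternary_huffman(freqs):
    """Build a ternary Huffman code; returns {symbol: codeword over digits 0/1/2}."""
    heap = []
    counter = 0
    for sym, f in freqs.items():
        heap.append((f, counter, {sym: ""}))
        counter += 1
    # Pad with zero-weight dummies so node count n satisfies (n - 1) % 2 == 0.
    while (len(heap) - 1) % 2 != 0:
        heap.append((0, counter, {}))
        counter += 1
    heapq.heapify(heap)
    while len(heap) > 1:
        merged, total = {}, 0
        for digit in "012":                 # prepend one ternary digit per subtree
            f, _, codes = heapq.heappop(heap)
            total += f
            for sym, c in codes.items():
                merged[sym] = digit + c
        heapq.heappush(heap, (total, counter, merged))
        counter += 1
    return heap[0][2]

codes = ternary_huffman({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
syms = list(codes.values())
# Prefix-free: no codeword is a prefix of another.
assert not any(x != y and y.startswith(x) for x in syms for y in syms)
```

The codeword length of each symbol is simply `len(codes[sym])`, which is the quantity the paper's data structure is designed to retrieve cheaply.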

  5. A wavelet-based quadtree driven stereo image coding

    Science.gov (United States)

    Bensalma, Rafik; Larabi, Mohamed-Chaker

    2009-02-01

    In this work, a new stereo image coding technique is proposed. The new approach integrates the coding of the residual image with that of the disparity map, the latter being computed in the wavelet transform domain. The motivation for using this transform is that it imitates some properties of the human visual system (HVS), particularly the decomposition into perceptual channels; using the wavelet transform therefore allows better preservation of perceptual image quality. To estimate the disparity map, a quadtree segmentation is applied in each wavelet frequency band, which has the advantage of minimizing the entropy. Dyadic squares in the subbands of the target image that have no match in the reference image constitute the residual, which is coded with an arithmetic codec. The results obtained are evaluated using the SSIM and PSNR criteria.
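
As a hedged illustration of the quadtree segmentation step, the sketch below (our own construction, not the authors' codec) recursively splits a square band into four dyadic sub-squares until each block is homogeneous under a simple value-range test; a real implementation would use an entropy- or matching-based criterion as described above.

```python
def quadtree(band, x, y, size, thresh, min_size, out):
    """Recursively split a square region of `band` into four dyadic
    sub-squares until each block is homogeneous (value range <= thresh)."""
    vals = [band[j][i] for j in range(y, y + size) for i in range(x, x + size)]
    if size <= min_size or max(vals) - min(vals) <= thresh:
        out.append((x, y, size))               # homogeneous leaf block
        return
    half = size // 2
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        quadtree(band, x + dx, y + dy, half, thresh, min_size, out)
```

On a uniform band the segmentation stops at a single leaf; a localized outlier triggers splitting only along the path containing it, which is what keeps the representation compact.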

  6. Research of RA Coding Algorithm Based on AWGN Channel

    Directory of Open Access Journals (Sweden)

    Xianzhong Chen

    2012-04-01

    Full Text Available In order to study the performance of RA codes and the impact of iteration count and code length on decoding behavior, we carried out simulation analyses of RA, LDPC, and Turbo codes with different parameters, varying the code length, rate, and number of iterations and analyzing the resulting signal-to-noise ratio behavior. Comparing the three schemes, the simulations show that RA codes approach the Shannon limit under maximum-likelihood decoding. The bit error rate decreases as the message length grows, and performance approaches channel capacity; likewise, as the number of iterations increases, the bit error rate decreases and performance improves. The research shows that RA codes have advantages in both complexity and performance, and wide application prospects.
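
BER-versus-SNR curves like those discussed above are typically produced by Monte-Carlo simulation. The toy sketch below is our illustration of that measurement loop only; it simulates uncoded BPSK over AWGN, not RA/LDPC/Turbo decoding.

```python
import math
import random

def ber_bpsk_awgn(ebn0_db, n_bits=100_000, seed=1):
    """Monte-Carlo bit error rate of uncoded BPSK over an AWGN channel."""
    rng = random.Random(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))    # noise std dev for unit-energy bits
    errors = 0
    for _ in range(n_bits):
        bit = rng.getrandbits(1)
        tx = 1.0 if bit else -1.0        # BPSK mapping
        rx = tx + rng.gauss(0.0, sigma)  # AWGN channel
        errors += (rx >= 0) != bool(bit) # hard-decision detector
    return errors / n_bits
```

A coded simulation would insert the encoder before the BPSK mapping and the iterative decoder after the channel, then count errors on the recovered message bits instead.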

  7. Semantic-preload video model based on VOP coding

    Science.gov (United States)

    Yang, Jianping; Zhang, Jie; Chen, Xiangjun

    2013-03-01

    In recent years, in order to reduce the semantic gap that exists between high-level semantics and the low-level features of video when humans interpret images or video, most work has taken the approach of video annotation downstream of the signal, i.e., attaching labels to content already in a video database. Few have pursued the alternative: using limited interaction and comprehensive segmentation (including optical techniques) at the front end of video capture (i.e., at the camera), together with video semantics analysis, domain-specific concept sets (i.e., ontologies), shooting scripts, and scene task descriptions, to apply semantic descriptions at different levels that enrich the attributes of video objects and image regions. This yields a new video model based on Video Object Plane (VOP) coding. The model has potentially intelligent features, carries a large amount of metadata, and embeds intermediate-level semantic concepts into every object. This paper focuses on the latter approach and presents a framework for the new model, provisionally named the Semantic-Preload Video Model (SPVM or VMoSP). The model addresses how to label video objects and image regions in real time, typically with intermediate-level semantic labels, placing this work upstream of the signal (i.e., at the video capture and production stage). The paper also analyzes the hierarchical structure of video, dividing it into nine semantic levels that apply only to the video production process, and points out that the semantic tagging (i.e., semantic preloading) discussed here refers only to the four middle levels.

  8. A New Solution of Distributed Disaster Recovery Based on Raptor Code

    Science.gov (United States)

    Deng, Kai; Wang, Kaiyun; Ma, Danyang

    Traditional disaster recovery based on simple replication suffers from high cost, low data availability under multi-node storage, and poor intrusion tolerance; this paper therefore puts forward a distributed disaster recovery scheme based on raptor codes. The article introduces the principle of raptor codes, analyses their coding advantages, and compares the proposed solution with traditional ones in terms of redundancy, data availability, and intrusion tolerance. The results show that the raptor-code-based distributed disaster recovery solution achieves higher data availability as well as better intrusion tolerance at lower redundancy.
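
Raptor codes are built on LT (fountain) codes. As a hedged illustration of the erasure-coding idea behind such schemes, and not the paper's actual construction, the sketch below encodes data blocks as random XOR combinations and recovers them with a peeling decoder; a real raptor code adds a precode and a carefully designed degree distribution.

```python
import random

def lt_encode(blocks, n_packets, seed=7):
    """LT-style fountain encoding: each packet is the XOR of a random
    subset of source blocks (toy uniform degree distribution)."""
    rng = random.Random(seed)
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        idx = frozenset(rng.sample(range(k), rng.randint(1, k)))
        val = 0
        for i in idx:
            val ^= blocks[i]
        packets.append((idx, val))
    return packets

def lt_decode(packets, k):
    """Peeling decoder: repeatedly substitute recovered blocks and
    resolve packets whose remaining degree is one."""
    pending = [[set(idx), val] for idx, val in packets]
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for p in pending:
            for i in list(p[0]):
                if i in known:               # substitute a recovered block
                    p[0].discard(i)
                    p[1] ^= known[i]
            if len(p[0]) == 1:               # degree-1 packet recovers a block
                i = next(iter(p[0]))
                if i not in known:
                    known[i] = p[1]
                    progress = True
    return [known.get(i) for i in range(k)]
```

Storing the packets across independent nodes is what yields the availability and intrusion-tolerance properties discussed above: any sufficiently large subset of surviving packets lets the decoder rebuild the data.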

  9. Flowgen: Flowchart-based documentation for C++ codes

    Science.gov (United States)

    Kosower, David A.; Lopez-Villarejo, J. J.

    2015-11-01

    We present the Flowgen tool, which generates flowcharts from annotated C++ source code. The tool generates a set of interconnected high-level UML activity diagrams, one for each function or method in the C++ sources. It provides a simple and visual overview of complex implementations of numerical algorithms. Flowgen is complementary to the widely-used Doxygen documentation tool. The ultimate aim is to render complex C++ computer codes accessible, and to enhance collaboration between programmers and algorithm or science specialists. We describe the tool and a proof-of-concept application to the VINCIA plug-in for simulating collisions at CERN's Large Hadron Collider.

  10. Flowgen: Flowchart-Based Documentation for C++ Codes

    CERN Document Server

    Kosower, David A

    2014-01-01

    We present the Flowgen tool, which generates flowcharts from annotated C++ source code. The tool generates a set of interconnected high-level UML activity diagrams, one for each function or method in the C++ sources. It provides a simple and visual overview of complex implementations of numerical algorithms. Flowgen is complementary to the widely-used Doxygen documentation tool. The ultimate aim is to render complex C++ computer codes accessible, and to enhance collaboration between programmers and algorithm or science specialists. We describe the tool and a proof-of-concept application to the VINCIA plug-in for simulating collisions at CERN's Large Hadron Collider.

  11. Efficient RTL-based code generation for specified DSP C-compiler

    Science.gov (United States)

    Pan, Qiaohai; Liu, Peng; Shi, Ce; Yao, Qingdong; Zhu, Shaobo; Yan, Li; Zhou, Ying; Huang, Weibing

    2001-12-01

    A C-compiler is a basic tool for most embedded-systems programmers: it transforms the ideas and algorithms in an application (expressed as C source code) into machine code executable by the target processor. Our research was to develop an optimizing C-compiler for a specified 16-bit DSP. As one of the most important parts of the C-compiler, the code generator's efficiency and performance directly affect the resulting target assembly code. Thus, to improve the performance of the C-compiler, we constructed an efficient code generator based on RTL, an intermediate language used in GNU CC. The code generator accepts RTL as its main input, takes advantage of features specific to RTL and to the specified DSP's architecture, and generates compact assembly code for the DSP. In this paper, the features of RTL are briefly introduced first, and the basic principles behind the code generator are then presented in detail. Following these principles, the paper discusses the architecture of the code generator, including syntax tree construction/reconstruction, basic RTL instruction extraction, behavior description at the RTL level, and instruction description at the assembly level. The optimization strategies used in the code generator for producing compact assembly code are also given. Finally, we conclude that a C-compiler using this code generator achieves the expected efficiency.

  12. Research on lithography based on the digital coding-mask technique

    Science.gov (United States)

    Xu, Yanqiang; Luo, Ningning; Zhang, Zhimin; Bai, Lu; Gao, Yiqing

    2016-10-01

    A digital coding-mask technique based on digital micro-mirror devices (DMD) is proposed in this paper. The fundamental principle of the technique is to modulate the incident light intensity by adjusting the transmittance of the units on the coding mask, where the transmittance is controlled by the apertures on those units. Lohmann's III coding method and the error-diffusion coding method are employed to code the mask, and a wavelet transformation is used to suppress the background noise of the mask image. Real-time control of the digital coding-mask image is achieved by loading the coded mask image onto the DMD, which is driven by a computer. The digital coding-mask technique takes full advantage of the DMD's strengths, such as real-time operation and flexibility. In addition, the technique helps address mask aberration caused by nonlinear effects in the projection and exposure process, and it can use optimization algorithms to suppress the background noise of the coding-mask images, improving the quality of the photoresist relief structure.
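
As an illustration of the error-diffusion coding mentioned above (our sketch, using the classic Floyd-Steinberg weights rather than necessarily the paper's exact variant), the following binarizes a gray-level mask while pushing each pixel's quantization error onto its unprocessed neighbors, so the local mean transmittance is approximately preserved.

```python
def error_diffusion(image, threshold=128):
    """Floyd-Steinberg error diffusion: binarize a gray-level image while
    pushing each pixel's quantization error onto unprocessed neighbors."""
    h, w = len(image), len(image[0])
    img = [[float(v) for v in row] for row in image]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            new = 255 if img[y][x] >= threshold else 0
            out[y][x] = new
            err = img[y][x] - new
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1][x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1][x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1][x + 1] += err * 1 / 16
    return out
```

Each binary output pixel corresponds to an open or closed aperture on a coding-mask unit; the diffusion step is what makes the aperture density track the desired gray-level transmittance.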

  13. A Low-Jitter Wireless Transmission Based on Buffer Management in Coding-Aware Routing

    Directory of Open Access Journals (Sweden)

    Cunbo Lu

    2015-08-01

    Full Text Available It is significant to reduce packet jitter for real-time applications in a wireless network. Existing coding-aware routing algorithms use the opportunistic network coding (ONC) scheme in their packet coding algorithms. The ONC scheme never delays packets to wait for the arrival of a future coding opportunity, and the loss of some potential coding opportunities may degrade the contribution of network coding to jitter performance. In addition, most existing coding-aware routing algorithms assume that all flows participating in the network have equal rates, which is unrealistic, since multi-rate environments are common. To overcome these problems and extend coding-aware routing to multi-rate scenarios, we present, from the viewpoint of data transmission, a low-jitter wireless transmission algorithm based on buffer management (BLJCAR), which schedules packets at the coding node according to a queue-length-based threshold policy instead of the regular ONC policy used in existing coding-aware routing algorithms. BLJCAR is a unified framework covering both the single-rate and the multi-rate case. Simulation results show that the BLJCAR algorithm embedded in coding-aware routing outperforms the traditional ONC policy in terms of jitter, packet delivery delay, packet loss ratio, and network throughput under network congestion at various traffic rates.

  14. Statistical re-evaluation of the ASME K_IC and K_IR fracture toughness reference curves

    Energy Technology Data Exchange (ETDEWEB)

    Wallin, K.; Rintamaa, R. [Valtion Teknillinen Tutkimuskeskus, Espoo (Finland)

    1998-11-01

    Historically the ASME reference curves have been treated as representing absolute deterministic lower-bound curves of fracture toughness. In reality, this is not the case. They represent only deterministic lower-bound curves to a specific set of data, which represent a certain probability range. A recently developed statistical lower-bound estimation method called the 'Master curve' has been proposed as a candidate for a new lower-bound reference curve concept. From a regulatory point of view, the Master curve is somewhat problematic in that it does not claim to be an absolute deterministic lower bound, but corresponds to a specific theoretical failure probability that can be chosen freely based on the application. In order to be able to substitute the old ASME reference curves with lower-bound curves based on the Master curve concept, the inherent statistical nature (and confidence level) of the ASME reference curves must be revealed. To estimate the true inherent level of safety represented by the reference curves, the original data base was re-evaluated with statistical methods and compared to an analysis based on the Master curve concept. The analysis reveals that the 5% lower-bound Master curve has the same inherent degree of safety as originally intended for the K_IC reference curve. Similarly, the 1% lower-bound Master curve corresponds to the K_IR reference curve. (orig.)

  15. High Pitch Delay Resolution Technique for Tonal Language Speech Coding Based on Multi-Pulse Based Code Excited Linear Prediction Algorithm

    Directory of Open Access Journals (Sweden)

    Suphattharachai Chomphan

    2011-01-01

    Full Text Available Problem statement: In spontaneous speech communication, speech coding is an important process, since the quality of the coded speech depends on the efficiency of the speech coding algorithm. For tonal languages, in which tone plays an important role in both the naturalness and the intelligibility of speech, tone must be treated appropriately. Approach: This study proposes a modification of the flexible Multi-Pulse based Code Excited Linear Predictive (MP-CELP) coder with multiple bitrates and bitrate scalability for tonal-language speech in multimedia applications. The coder consists of a core coder and bitrate-scalable tools. High Pitch Delay Resolution (HPDR) is applied to the adaptive codebook of the core coder to improve tonal-language speech quality. The bitrate-scalable tool employs multi-stage excitation coding based on an embedded-coding approach, with the multi-pulse excitation codebook at each stage produced adaptively from the excitation signal selected at the previous stage. Results: The experimental results show that the speech quality of the proposed coder improves on that of the conventional coder without pitch-resolution adaptation. Conclusion: The study provides strong evidence for further applying the proposed technique in speech coding systems and other speech processing technologies.

  16. Computer code for double beta decay QRPA based calculations

    Energy Technology Data Exchange (ETDEWEB)

    Barbero, C. A.; Mariano, A. [Departamento de Física, Facultad de Ciencias Exactas, Universidad Nacional de La Plata, La Plata, Argentina and Instituto de Física La Plata, CONICET, La Plata (Argentina); Krmpotić, F. [Instituto de Física La Plata, CONICET, La Plata, Argentina and Instituto de Física Teórica, Universidade Estadual Paulista, São Paulo (Brazil); Samana, A. R.; Ferreira, V. dos Santos [Departamento de Ciências Exatas e Tecnológicas, Universidade Estadual de Santa Cruz, BA (Brazil); Bertulani, C. A. [Department of Physics, Texas A and M University-Commerce, Commerce, TX (United States)

    2014-11-11

    The computer code developed by our group some years ago for the evaluation of nuclear matrix elements, within the QRPA and PQRPA nuclear structure models, involved in neutrino-nucleus reactions, muon capture and β± processes, is extended to also include nuclear double beta decay.

  17. A novel chaotic encryption scheme based on arithmetic coding

    Energy Technology Data Exchange (ETDEWEB)

    Mi Bo [Department of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China)], E-mail: mi_bo@163.com; Liao Xiaofeng; Chen Yong [Department of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China)

    2008-12-15

    In this paper, a novel chaotic encryption scheme based on the combination of arithmetic coding and the logistic map is presented. The plaintexts are encrypted and compressed by an arithmetic coder whose mapping intervals are changed irregularly according to a keystream derived from the chaotic map and the plaintext. The performance and security of the scheme are also studied experimentally and theoretically in detail.
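
A minimal sketch of the keystream side of such a scheme (our illustration; it replaces the arithmetic-coder coupling with a plain XOR, so it shows only the chaotic-keystream idea, not the paper's cipher):

```python
def logistic_keystream(x0, n, r=3.99):
    """Byte keystream from the chaotic logistic map x -> r*x*(1-x)."""
    x = x0
    out = bytearray()
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def chaotic_xor(data, x0):
    """XOR data with a logistic-map keystream (encryption == decryption)."""
    ks = logistic_keystream(x0, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

The initial condition x0 acts as the key; in the paper's scheme the map instead perturbs the arithmetic coder's interval mapping, so encryption and compression happen in one pass.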

  18. Honor Codes: Evidence Based Strategies for Improving Academic Integrity

    Science.gov (United States)

    Tatum, Holly; Schwartz, Beth M.

    2017-01-01

    Although there is evidence of cheating at all levels of education, institutions often do not implement or design integrity policies, such as honor codes, to prevent and adjudicate academic dishonesty. Further, faculty members rarely discuss academic integrity expectations or policies with their students. When cheating does occur, faculty members…

  19. Language-Based Security for Malicious Mobile Code

    Science.gov (United States)

    2007-09-30

    Algol), and the Berkeley SDS-940 system employed object-code rewriting as part of its system profiler. More recently, the SPIN [5], Vino [52, 47], and...1993. [52] E. Yasuhiro, J. Gwertzman, M. Seltzer, C. Small, Keith A. Smith, and D. Tang. VINO: The 1994 fall harvest. Technical Report TR-34-94

  20. Convolutional Network Coding Based on Matrix Power Series Representation

    CERN Document Server

    Guo, Wangmei; Sun, Qifu Tyler

    2011-01-01

    In this paper, convolutional network coding is formulated by means of matrix power series representation of the local encoding kernel (LEK) matrices and global encoding kernel (GEK) matrices to establish its theoretical fundamentals for practical implementations. From the encoding perspective, the GEKs of a convolutional network code (CNC) are shown to be uniquely determined by its LEK matrix $K(z)$ if $K_0$, the constant coefficient matrix of $K(z)$, is nilpotent. This will simplify the CNC design because a nilpotent $K_0$ suffices to guarantee a unique set of GEKs. Besides, the relation between coding topology and $K(z)$ is also discussed. From the decoding perspective, the main theme is to justify that the first $L+1$ terms of the GEK matrix $F(z)$ at a sink $r$ suffice to check whether the code is decodable at $r$ with delay $L$ and to start decoding if so. The concomitant decoding scheme avoids dealing with $F(z)$, which may contain infinite terms, as a whole and hence reduces the complexity of decodabil...

  1. Code Synchronization Algorithm Based on Segment Correlation in Spread Spectrum Communication

    Directory of Open Access Journals (Sweden)

    Aohan Li

    2015-10-01

    Full Text Available Spread Spectrum (SPSP) communication is the theoretical basis of Direct Sequence Spread Spectrum (DSSS) transceiver technology. Spreading codes, modulation, demodulation, carrier synchronization and code synchronization are the core parts of DSSS transceivers. This paper focuses on the code synchronization problem in SPSP communications. A novel code synchronization algorithm based on segment correlation is proposed. The proposed algorithm can effectively deal with the misjudgment caused by unreasonable data acquisition times, which may leave DSSS receivers unable to restore the transmitted signals. Simulation results show the feasibility of a DSSS transceiver design based on the proposed code synchronization algorithm. Finally, the communication functions of the DSSS transceiver based on the proposed algorithm are implemented on a Field Programmable Gate Array (FPGA).
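
A hedged sketch of segment-based correlation for code acquisition (our construction, not the paper's algorithm): correlating in segments and summing magnitudes prevents a data transition inside the correlation window from cancelling the whole correlation peak.

```python
def segment_correlate(rx, pn, shift, n_seg=4):
    """Sum the magnitudes of per-segment correlations so a data-bit flip
    inside the window cannot cancel the whole correlation peak."""
    n = len(pn)
    seg = n // n_seg
    total = 0.0
    for s in range(n_seg):
        part = sum(rx[(shift + i) % n] * pn[i]
                   for i in range(s * seg, (s + 1) * seg))
        total += abs(part)
    return total

def acquire(rx, pn, n_seg=4):
    """Return the code phase maximizing the segment-correlation metric."""
    return max(range(len(pn)), key=lambda sh: segment_correlate(rx, pn, sh, n_seg))
```

Once the phase is acquired, the receiver can despread by multiplying the aligned PN sequence against the incoming chips.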

  2. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    Science.gov (United States)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement maximum likelihood decoding (MLD) of a code with reduced decoding complexity. The best known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research on trellis representations of block codes, by contrast, remained inactive for many years, for two major reasons. First, most coding theorists at the time believed that block codes did not have simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, many coding theorists believed that algebraic decoding was the only way to decode these codes. These two beliefs seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications, and led to a general belief that block codes are inferior to convolutional codes and hence not useful. Chapter 2 gives a brief review of linear block codes, providing the essential background for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and
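
The Viterbi algorithm mentioned above can be shown in a few dozen lines. The sketch below is illustrative, using the standard rate-1/2 (7,5) convolutional code rather than any code from the text; it performs hard-decision maximum likelihood decoding over the code trellis.

```python
def conv_encode(bits, gens=(0b111, 0b101), k=3):
    """Rate-1/2 convolutional encoder (generators 7,5 octal), zero-terminated."""
    state, out = 0, []
    for b in bits + [0] * (k - 1):              # flush back to the zero state
        state = ((state << 1) | b) & ((1 << k) - 1)
        for g in gens:
            out.append(bin(state & g).count("1") % 2)
    return out

def viterbi_decode(received, n_bits, gens=(0b111, 0b101), k=3):
    """Hard-decision Viterbi decoding over the code trellis."""
    n_states = 1 << (k - 1)
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)       # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for t in range(n_bits + k - 1):
        r = received[2 * t: 2 * t + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):                    # branch for input bit b
                full = ((s << 1) | b) & ((1 << k) - 1)
                ns = full & (n_states - 1)
                expected = [bin(full & g).count("1") % 2 for g in gens]
                m = metric[s] + sum(e != x for e, x in zip(expected, r))
                if m < new_metric[ns]:          # keep the survivor path
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[0][:n_bits]                    # zero-termination ends in state 0
```

Each trellis stage keeps only one survivor per state, which is exactly the complexity reduction the trellis representation buys over exhaustive codeword comparison.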

  3. ASM-024, a piperazinium compound, promotes the in vitro relaxation of β2-adrenoreceptor desensitized tracheas.

    Science.gov (United States)

    Israël-Assayag, Evelyne; Beaulieu, Marie-Josée; Cormier, Yvon

    2015-01-01

    Inhaled β2-adrenoreceptor agonists are widely used in asthma and chronic obstructive pulmonary disease (COPD) for relief of bronchoconstriction. β2-Adrenoreceptor agonists relax airway smooth muscle cells via cyclic adenosine monophosphate (cAMP)-mediated pathways. However, prolonged stimulation induces functional desensitization of the β2-adrenoreceptors (β2-AR), potentially leading to reduced clinical efficacy with chronic or prolonged administration. ASM-024, a small synthetic molecule in clinical-stage development, has shown activity at the level of nicotinic receptors and possibly at the muscarinic level, and presents anti-inflammatory and bronchodilator properties. Aerosolized ASM-024 reduces airway resistance in mice and promotes in vitro relaxation of tracheal and bronchial preparations from animal and human tissues. ASM-024 increased the in vitro relaxation response to a maximally effective concentration of short-acting β2-agonists in dog and human bronchi. Although the precise mechanisms by which ASM-024 promotes airway smooth muscle (ASM) relaxation remain unclear, we hypothesized that ASM-024 would attenuate and/or abrogate agonist-induced contraction and remain effective despite β2-AR tachyphylaxis. β2-AR tachyphylaxis was induced with salbutamol, salmeterol and formoterol on guinea pig tracheas. The addition of ASM-024 concentration-dependently relaxed intact or β2-AR-desensitized tracheal rings precontracted with methacholine. ASM-024 did not induce any elevation of intracellular cAMP in isolated smooth muscle cells; moreover, blockade of the cAMP pathway with an adenylate cyclase inhibitor had no significant effect on ASM-024-induced guinea pig trachea relaxation. Collectively, these findings show that ASM-024 elicits relaxation of β2-AR-desensitized tracheal preparations and suggest that ASM-024 mediates smooth muscle relaxation through a different target and signaling pathway than β2-adrenergic receptor agonists. These findings suggest ASM-024

  4. [Ca2+]i oscillations in ASM: relationship with persistent airflow obstruction in asthma.

    Science.gov (United States)

    Sweeney, David; Hollins, Fay; Gomez, Edith; Saunders, Ruth; Challiss, R A John; Brightling, Christopher E

    2014-07-01

    The cause of airway smooth muscle (ASM) hypercontractility in asthma is not fully understood. The relationship of the frequency of spontaneous intracellular calcium oscillations in ASM to asthma severity was investigated. Oscillations were increased in subjects with impaired lung function; they were abolished by extracellular calcium removal, attenuated by caffeine, and unaffected by verapamil or nitrendipine. Whether modulation of these increased spontaneous intracellular calcium oscillations in ASM from patients with impaired lung function represents a therapeutic target warrants further investigation.

  5. Theory and Practice of Non-Binary Graph-Based Codes: A Combinatorial View

    OpenAIRE

    Amiri, Behzad

    2015-01-01

    We are undergoing a revolution in data. The ever-growing amount of information in our world has created an unprecedented demand for ultra-reliable, affordable, and resource-efficient data storage systems. Error-correcting codes, as a critical component of any memory device, will play a crucial role in the future of data storage. One particular class of error-correcting codes, known as graph-based codes, has drawn significant attention in both academia and industry. Graph-based codes offer s...

  6. QR code based noise-free optical encryption and decryption of a gray scale image

    Science.gov (United States)

    Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-03-01

    In optical encryption systems, speckle noise is a major obstacle to obtaining high-quality decrypted images. This problem can be addressed by employing a QR-code-based noise-free scheme. Previous works have optically encrypted a few characters or a short expression using QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal that the proposed method can encrypt and decrypt an input image correctly.

  7. A NEW DESIGN METHOD OF CDMA SPREADING CODES BASED ON MULTI-RATE UNITARY FILTER BANK

    Institute of Scientific and Technical Information of China (English)

    Bi Jianxin; Wang Yingmin; Yi Kechu

    2001-01-01

    It is well known that multi-valued CDMA spreading codes can be designed by means of a pair of mirror multi-rate filter banks based on some optimization criterion. This paper shows that there exists a theoretical bound on the performance of their circulating (periodic) correlation properties, given by an explicit expression. Based on this analysis, a maximum-entropy criterion is proposed for designing such codes. Computer simulation results suggest that the resulting codes outperform conventional binary balanced Gold codes in an asynchronous CDMA system.
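
The circulating (periodic) correlation property discussed above can be computed directly. The sketch below (our illustration, not the paper's design procedure) evaluates the periodic correlation of ±1 sequences; for a length-7 m-sequence the off-peak autocorrelation is the well-known constant -1.

```python
def circular_correlation(a, b):
    """Periodic (circulating) cross-correlation of two equal-length
    ±1 sequences, evaluated at every cyclic shift."""
    n = len(a)
    return [sum(a[i] * b[(i + tau) % n] for i in range(n)) for tau in range(n)]
```

Design procedures like the one in the paper search for code sets whose cross-correlation values, computed this way, stay as flat as the theoretical bound allows.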

  8. A Unidirectional Split-key Based Signature Protocol with Encrypted Function in Mobile Code Environment

    Institute of Scientific and Technical Information of China (English)

    MIAOFuyou; YANGShoubao; XIONGYan; HUABei; WANGXingfu

    2005-01-01

    In mobile code environments, signing private keys are liable to be exposed, and visited hosts are susceptible to attack by all kinds of malicious mobile code; a signer therefore often sends remote nodes mobile code containing an encrypted signature function to complete a signature. The paper first presents a unidirectional split-key scheme for private key protection based on RSA, which is simpler and more secure than secret sharing, and then proposes a split-key based signature protocol with encrypted function, which is traceable, undeniable, and resistant to malicious hosts. Security analysis shows that the protocol can effectively protect the signing private key and complete secure signatures in a mobile code environment.
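
An additive split of an RSA private exponent can be sketched in a few lines (our toy illustration with textbook-sized parameters, not the protocol's actual scheme): since m^d1 · m^d2 = m^(d1+d2) mod n, two partial signatures combine into an ordinary RSA signature without either party ever holding the full key.

```python
import random

def make_split_rsa_key(seed=11):
    """Toy RSA key (textbook parameters p=61, q=53, e=17) with the private
    exponent split additively between two parties: d = d1 + d2."""
    p, q, e = 61, 53, 17
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)                     # modular inverse (Python 3.8+)
    d1 = random.Random(seed).randrange(1, d)
    d2 = d - d1                             # neither share reveals d alone
    return n, e, d1, d2

def partial_sign(m, d_share, n):
    return pow(m, d_share, n)

def combine(s1, s2, n):
    # m^d1 * m^d2 = m^(d1+d2) = m^d (mod n): a full RSA signature
    return (s1 * s2) % n
```

Demo-sized primes only; a real deployment would use moduli of 2048 bits or more, and the protocol additionally encrypts the signature function evaluated at the remote host.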

  9. Galactic Cosmic Ray Event-Based Risk Model (GERM) Code

    Science.gov (United States)

    Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.

    2013-01-01

    This software describes the transport and energy deposition of galactic cosmic rays passing through astronaut tissues during space travel, or of heavy ion beams in patients undergoing cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly, without unverified assumptions about the shape of the probability density function. Prior-art transport codes calculate the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, a transport code must describe the temporal and microspatial density functions needed to correlate DNA and oxidative damage with non-targeted effects such as signaling and bystander effects; these are ignored by, or impossible in, the prior art. The GERM code provides scientists with data interpretation for experiments; modeling of the beam line, shielding of target samples, and sample holders; and estimation of the basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities are linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water-equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution for a specified cellular area, cell survival curves, and DNA damage yields per cell. The GERM code also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material. The GERM code makes the numerical estimates of basic

  10. Opportunistic quantum network coding based on quantum teleportation

    Science.gov (United States)

    Shang, Tao; Du, Gang; Liu, Jian-wei

    2016-04-01

    It seems impossible to endow a quantum network with opportunistic characteristics, given that a quantum channel cannot be overheard without disturbance. In this paper, we propose an opportunistic quantum network coding scheme that takes full advantage of the channel characteristics of quantum teleportation. Concretely, it uses the quantum channel for secure transmission of quantum states and can detect eavesdroppers by means of quantum channel verification, while it uses the classical channel both for opportunistic listening to neighbor states and for opportunistic coding by broadcasting measurement outcomes. Analysis results show that our scheme can reduce the number of transmissions over classical channels for relay nodes and can effectively defend against classical passive attacks and quantum active attacks.

  11. Security Concerns and Countermeasures in Network Coding Based Communications Systems

    DEFF Research Database (Denmark)

    Talooki, Vahid; Bassoli, Riccardo; Roetter, Daniel Enrique Lucani

    2015-01-01

    This survey paper shows the state of the art in security mechanisms, where a deep review of the current research and the status of this topic is carried out. We start by introducing network coding and its variety of applications in enhancing current traditional networks. In particular, we analyze two key protocol types, namely, state-aware and stateless protocols, specifying the benefits and disadvantages of each one of them. We also present the key security assumptions of network coding (NC) systems as well as a detailed analysis of the security goals and threats, both passive and active. This paper also presents a detailed taxonomy and a timeline of the different NC security mechanisms and schemes reported in the literature. Current proposed security mechanisms and schemes for NC in the literature are then classified. Finally a timeline of these mechanisms and schemes is presented.

  12. Depth-based coding of MVD data for 3D video extension of H.264/AVC

    Science.gov (United States)

    Rusanovskyy, Dmytro; Hannuksela, Miska M.; Su, Wenyi

    2013-06-01

    This paper describes a novel approach of using depth information for advanced coding of associated video data in Multiview Video plus Depth (MVD)-based 3D video systems. As a possible implementation of this concept, we describe two coding tools that were developed for an H.264/AVC-based 3D video codec in response to the Moving Picture Experts Group (MPEG) Call for Proposals (CfP). These tools are Depth-based Motion Vector Prediction (DMVP) and Backward View Synthesis Prediction (BVSP). Simulation results conducted under the JCT-3V/MPEG 3DV Common Test Conditions show that the tools proposed in this paper reduce the bit rate of coded video data by 15% average delta bit rate reduction, which results in 13% total bit rate savings for the MVD data over the state-of-the-art MVC+D coding. Moreover, the concept of depth-based coding of video presented in this paper was further developed by MPEG 3DV and JCT-3V, and this work resulted in even higher compression efficiency, bringing about 20% total delta bit rate reduction for coded MVD data over the reference MVC+D coding. Considering these significant gains, the proposed coding approach can be beneficial for the development of new 3D video coding standards.

  13. Resistor-logic demultiplexers for nanoelectronics based on constant-weight codes.

    Science.gov (United States)

    Kuekes, Philip J; Robinett, Warren; Roth, Ron M; Seroussi, Gadiel; Snider, Gregory S; Stanley Williams, R

    2006-02-28

    The voltage margin of a resistor-logic demultiplexer can be improved significantly by basing its connection pattern on a constant-weight code. Each distinct code determines a unique demultiplexer, and therefore a large family of circuits is defined. We consider using these demultiplexers for building nanoscale crossbar memories, and determine the voltage margin of the memory system based on a particular code. We determine a purely code-theoretic criterion for selecting codes that will yield memories with large voltage margins, which is to minimize the ratio of the maximum to the minimum Hamming distance between distinct codewords. For the specific example of a 64 × 64 crossbar, we discuss what codes provide optimal performance for a memory.
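The code-theoretic selection criterion described above is easy to experiment with. The sketch below is a hypothetical illustration, not taken from the paper: it enumerates constant-weight codewords and computes the max-to-min Hamming distance ratio that is to be minimized; the length-4, weight-2 code and the hand-picked subset are toy choices.

```python
from itertools import combinations

def constant_weight_codewords(n, w):
    """All binary codewords of length n with Hamming weight exactly w."""
    words = []
    for ones in combinations(range(n), w):
        word = [0] * n
        for i in ones:
            word[i] = 1
        words.append(tuple(word))
    return words

def distance_ratio(code):
    """Ratio of max to min pairwise Hamming distance (smaller is better)."""
    dists = [sum(a != b for a, b in zip(x, y))
             for x, y in combinations(code, 2)]
    return max(dists) / min(dists)

# The full weight-2 code of length 4 versus a subset with equal pairwise distances.
full = constant_weight_codewords(4, 2)                  # 6 codewords
subset = [(1, 1, 0, 0), (1, 0, 1, 0), (1, 0, 0, 1)]    # all pairwise distances = 2
print(distance_ratio(full), distance_ratio(subset))     # 2.0 1.0
```

By the criterion above, the subset (ratio 1.0) would yield a larger voltage margin than the full code (ratio 2.0), at the cost of fewer addressable outputs.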

  14. Context-Based 2D-VLC Entropy Coder in AVS Video Coding Standard

    Institute of Scientific and Technical Information of China (English)

    Qiang Wang; De-Bin Zhao; Wen Gao

    2006-01-01

    In this paper, a Context-based 2D Variable Length Coding (C2DVLC) method for coding the transformed residuals in the AVS video coding standard is presented. One feature of C2DVLC is the use of multiple 2D-VLC tables; another is the use of simple Exponential-Golomb codes. C2DVLC employs context-based adaptive multiple-table coding to exploit the statistical correlation between the DCT coefficients of a block for higher coding efficiency. Exp-Golomb codes are applied to code the pairs of run-lengths of zero coefficients and the nonzero coefficients for lower storage requirements. C2DVLC is a low-complexity coder in terms of both computational time and memory requirements. The experimental results show that C2DVLC gains 0.34 dB on average for the tested videos when compared with the traditional 2D-VLC coding method like that used in MPEG-2. Compared with CAVLC in H.264/AVC, C2DVLC shows similar coding efficiency.
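Order-0 Exponential-Golomb codes of the kind C2DVLC applies to (run, level) pairs are simple to generate. The following sketch is a generic illustration of the code family, not AVS reference code:

```python
def exp_golomb_encode(n):
    """Order-0 Exp-Golomb codeword for a non-negative integer n."""
    bits = bin(n + 1)[2:]            # binary representation of n + 1
    return '0' * (len(bits) - 1) + bits  # zero prefix of length len(bits)-1

def exp_golomb_decode(code):
    """Inverse mapping: count leading zeros, read that many more bits."""
    zeros = len(code) - len(code.lstrip('0'))
    return int(code[zeros:], 2) - 1

for n in range(5):
    print(n, exp_golomb_encode(n))
# 0 '1', 1 '010', 2 '011', 3 '00100', 4 '00101'
```

Small values get short codewords, which matches the skewed statistics of run/level pairs; no code table needs to be stored, which is the storage advantage the abstract mentions.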

  15. Status asthmaticus in children (Estado asmático en niños).

    Directory of Open Access Journals (Sweden)

    Camilo Cañas

    2009-10-01

    Full Text Available Status asthmaticus is a condition seen relatively frequently in emergency departments; the mainstays of its treatment are steroids and β-agonists. However, when the patient does not respond favorably to treatment, other alternatives must be used, such as subcutaneous adrenaline, anticholinergics, aminophylline, magnesium sulfate, helium, anesthetic gases, etc. Only 5% of severe asthma cases require mechanical ventilation, but in such cases mortality may reach 13%.

  16. Application study of piecewise context-based adaptive binary arithmetic coding combined with modified LZC

    Science.gov (United States)

    Su, Yan; Jun, Xie Cheng

    2006-08-01

    An algorithm combining LZC and arithmetic coding for image compression is presented, and both theoretical deduction and simulation results prove its correctness and feasibility. Exploiting the characteristics of context-based adaptive binary arithmetic coding and entropy, LZC was modified to cooperate with the optimized piecewise arithmetic coder; the algorithm improves the compression ratio without any additional time consumption compared to the traditional method.

  17. Interleaver Design Method for Turbo Codes Based on Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    Tan Ying; Sun Hong; Zhou Huai-bei

    2004-01-01

    This paper describes a new interleaver construction technique for turbo codes. The technique searches for as many pseudo-random interleaving patterns as possible under a given condition using genetic algorithms (GAs). The new interleavers retain the superiority of S-random interleavers, and this construction technique reduces the time taken to generate pseudo-random interleaving patterns under a given condition. The results obtained indicate that the new interleavers yield equal or better performance than S-random interleavers. Compared to the S-random interleaver, this design requires a lower level of computational complexity.
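For context, the S-random baseline against which the GA-designed interleavers are compared can be built by greedy random search. The sketch below is a minimal illustration of that classic construction (not the paper's GA method); the length 64 and spread s = 4 are arbitrary choices.

```python
import random

def s_random_interleaver(n, s, max_restarts=1000, seed=0):
    """Greedy S-random construction: accept a position only if it differs by
    more than s from each of the last s accepted positions; restart on dead ends."""
    rng = random.Random(seed)
    for _ in range(max_restarts):
        pool = list(range(n))
        rng.shuffle(pool)
        perm = []
        for _ in range(n):
            for k, cand in enumerate(pool):
                if all(abs(cand - p) > s for p in perm[-s:]):
                    perm.append(pool.pop(k))
                    break
            else:
                break  # dead end: restart with a fresh shuffle
        if len(perm) == n:
            return perm
    raise RuntimeError("no S-random interleaver found; lower s")

pi = s_random_interleaver(64, 4)
print(pi[:8])
```

The greedy search typically succeeds for s below roughly sqrt(n/2), which is why the restart loop rarely triggers here.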

  18. Optical antialiasing filters based on complementary Golay codes.

    Science.gov (United States)

    Leger, J R; Schuler, J; Morphis, N; Knowlden, R

    1997-07-10

    An optical filter that has an ideal response for removing aliasing noise from a sampled imaging system is described. The all-phase filter uses complementary Golay codes to achieve an optimum low-pass transfer function with no sidelobes. A computer model shows that the optical system has the expected performance in the ideal case, but degrades somewhat with wavelength variations and image aberrations. An experimental demonstration of the filter shows the optical transfer function performance and the response to imagery with a sampled detector.
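The sidelobe-free property rests on the defining identity of complementary Golay pairs: their aperiodic autocorrelations sum to zero at every nonzero lag. A quick check with a standard length-4 pair:

```python
def acorr(seq, k):
    """Aperiodic autocorrelation of seq at lag k."""
    return sum(seq[i] * seq[i + k] for i in range(len(seq) - k))

# A standard length-4 Golay complementary pair (entries are +/-1).
a = [1, 1, 1, -1]
b = [1, 1, -1, 1]
sums = [acorr(a, k) + acorr(b, k) for k in range(4)]
print(sums)  # [8, 0, 0, 0]: flat except at lag 0, i.e. no sidelobes
```

Individually each sequence has nonzero sidelobes; only the pair's summed autocorrelation is an ideal delta, which is what the all-phase filter exploits.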

  19. Study Of Coded Based Mechanism In WSN System

    Directory of Open Access Journals (Sweden)

    Kaksha S.Thakare

    2016-04-01

    Full Text Available Wireless sensor networks (WSNs) are an emerging technology with great potential to be employed in critical situations like battlefields and in commercial applications such as building and traffic surveillance, habitat monitoring, smart homes, and many other scenarios. One of the major challenges wireless sensor networks face today is quality of service (QoS). In order to ensure data security and the quality of service required by an application in an energy-efficient way, we propose a mechanism for QoS routing with a coding and selective encryption scheme for WSNs. Our approach provides reliable and secure data transmission and can adapt to the resource constraints of WSNs.

  20. Quantum secret sharing based on quantum error-correcting codes

    Institute of Scientific and Technical Information of China (English)

    Zhang Zu-Rong; Liu Wei-Tao; Li Cheng-Zu

    2011-01-01

    Quantum secret sharing (QSS) is a procedure for sharing classical or quantum information by using quantum states. This paper presents how to use a [2k-1, 1, k] quantum error-correcting code (QECC) to implement a quantum (k, 2k-1) threshold scheme. It also takes advantage of classical enhancement of the [2k-1, 1, k] QECC to establish a QSS scheme which can share classical information and quantum information simultaneously. Because the information is encoded into a QECC, these schemes can prevent intercept-resend attacks and can be implemented over some noisy channels.

  1. Multiple description scalable video coding based on 3D lifted wavelet transform

    Institute of Scientific and Technical Information of China (English)

    JIANG Gang-yi; YU Mei; YU Zhou; YE Xi-en; ZHANG Wen-qin; KIM Yong-deak

    2006-01-01

    In this work, a new method to deal with the unconnected pixels in motion compensated temporal filtering (MCTF) is presented, designed to improve the performance of 3D lifted wavelet coding. Furthermore, multiple description scalable coding (MDSC) is investigated, and novel MDSC schemes based on 3D wavelet coding are proposed, using the lifting implementation of temporal filtering. The proposed MDSC schemes avoid the mismatch problem in multiple description video coding and offer high scalability and robustness for video transmission. Experimental results show that the proposed schemes are feasible and effective.

  2. An Efficient Soft Decoder of Block Codes Based on Compact Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmed Azouaoui

    2012-09-01

    Full Text Available Soft-decision decoding is an NP-hard problem of great interest to developers of communication systems. We present an efficient soft-decision decoder of linear block codes based on the compact genetic algorithm (cGA) and compare its performance with various other decoding algorithms, including the Shakeel algorithm. The proposed algorithm uses the dual code, in contrast to the Shakeel algorithm, which uses the code itself. Hence, this new approach reduces the decoding complexity of high-rate codes. The complexity and an optimized version of this new algorithm are also presented and discussed.
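The compact GA underlying the decoder maintains a probability vector instead of a full population: two individuals are sampled, they compete, and the winner pulls the vector toward itself by 1/population-size per bit. The sketch below runs the generic cGA on the toy OneMax problem (maximizing the number of ones), not the decoder's actual codeword fitness; the bit length and virtual population size are arbitrary.

```python
import random

def cga_onemax(bits=32, pop=50, seed=1, max_iters=20000):
    """Compact GA on OneMax: evolve a per-bit probability vector."""
    rng = random.Random(seed)
    p = [0.5] * bits
    for _ in range(max_iters):
        a = [1 if rng.random() < pi else 0 for pi in p]
        b = [1 if rng.random() < pi else 0 for pi in p]
        winner, loser = (a, b) if sum(a) >= sum(b) else (b, a)
        for i in range(bits):
            if winner[i] != loser[i]:
                step = (1 / pop) if winner[i] == 1 else -(1 / pop)
                p[i] = min(1.0, max(0.0, p[i] + step))
        if all(pi in (0.0, 1.0) for pi in p):
            break  # vector has converged to a single candidate
    return [round(pi) for pi in p]

best = cga_onemax()
print(sum(best))
```

The memory footprint is one probability per bit rather than a whole population, which is the property that makes the cGA attractive inside a decoder's search loop.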

  3. Warped Discrete Cosine Transform-Based Low Bit-Rate Block Coding Using Image Downsampling

    Directory of Open Access Journals (Sweden)

    Ertürk Sarp

    2007-01-01

    Full Text Available This paper presents warped discrete cosine transform (WDCT)-based low bit-rate block coding using image downsampling. While the WDCT aims to improve the performance of the conventional DCT by frequency warping, it has only been applicable to high bit-rate coding applications because of the overhead required to define the parameters of the warping filter. Recently, low bit-rate block coding based on image downsampling prior to block coding, followed by upsampling after the decoding process, was proposed to improve the compression performance of low bit-rate block coders. This paper demonstrates that superior performance can be achieved if the WDCT is used in conjunction with image downsampling-based block coding for low bit-rate applications.

  4. gevolution: a cosmological N-body code based on General Relativity

    CERN Document Server

    Adamek, Julian; Durrer, Ruth; Kunz, Martin

    2016-01-01

    We present a new N-body code, gevolution, for the evolution of large scale structure in the Universe. Our code is based on a weak field expansion of General Relativity and calculates all six metric degrees of freedom in Poisson gauge. N-body particles are evolved by solving the geodesic equation which we write in terms of a canonical momentum such that it remains valid also for relativistic particles. We validate the code by considering the Schwarzschild solution and, in the Newtonian limit, by comparing with the Newtonian N-body code Gadget-2. We then proceed with a simulation of large scale structure in a Universe with massive neutrinos where we study the gravitational slip induced by the neutrino shear stress. The code can be extended to include different kinds of dark energy or modified gravity models and going beyond the usually adopted quasi-static approximation. Our code is publicly available.

  5. Proposing a Web-Based Tutorial System to Teach Malay Language Braille Code to the Sighted

    Science.gov (United States)

    Wah, Lee Lay; Keong, Foo Kok

    2010-01-01

    The "e-KodBrailleBM Tutorial System" is a web-based tutorial system which is specially designed to teach, facilitate and support the learning of Malay Language Braille Code to individuals who are sighted. The targeted group includes special education teachers, pre-service teachers, and parents. Learning Braille code involves memorisation…

  6. Probabilistic hypergraph based hash codes for social image search

    Institute of Scientific and Technical Information of China (English)

    Yi XIE; Hui-min YU; Roland HU

    2014-01-01

    With the rapid development of the Internet, recent years have seen the explosive growth of social media. This brings great challenges in performing efficient and accurate image retrieval on a large scale. Recent work shows that using hashing methods to embed high-dimensional image features and tag information into Hamming space provides a powerful way to index large collections of social images. By learning hash codes through a spectral graph partitioning algorithm, spectral hashing (SH) has shown promising performance among various hashing approaches. However, it is incomplete to model the relations among images only by pairwise simple graphs, which ignore higher-order relationships. In this paper, we utilize a probabilistic hypergraph model to learn hash codes for social image retrieval. A probabilistic hypergraph model offers a higher-order representation among social images by connecting more than two images in one hyperedge. Unlike a normal hypergraph model, a probabilistic hypergraph model considers not only the grouping information, but also the similarities between vertices in hyperedges. Experiments on Flickr image datasets verify the performance of our proposed approach.
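Whatever model produces the hash codes, retrieval in Hamming space reduces to ranking by XOR popcount. A minimal sketch with made-up 8-bit codes (the database entries and query are illustrative, not learned codes):

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary codes stored as ints."""
    return bin(a ^ b).count("1")

# Toy database of 8-bit hash codes (hypothetical values).
db = {"img1": 0b10110100, "img2": 0b10110101, "img3": 0b01001011}
query = 0b10110110
ranked = sorted(db, key=lambda k: hamming(db[k], query))
print(ranked)  # ['img1', 'img2', 'img3']
```

Because the distance is a single XOR plus popcount, scanning millions of codes is feasible; this is the efficiency argument for embedding images into Hamming space in the first place.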

  7. CONSTRUCTION OF REGULAR LDPC LIKE CODES BASED ON FULL RANK CODES AND THEIR ITERATIVE DECODING USING A PARITY CHECK TREE

    Directory of Open Access Journals (Sweden)

    H. Prashantha Kumar

    2011-09-01

    Full Text Available Low density parity check (LDPC) codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close to the theoretical Shannon limit for a memoryless channel. LDPC codes are finding increasing use in applications like LTE networks, digital television, high-density data storage systems, deep space communication systems, etc. Several algebraic and combinatorial methods are available for constructing LDPC codes. In this paper we discuss a novel low-complexity algebraic method for constructing regular LDPC-like codes derived from full rank codes. We demonstrate that by employing these codes over AWGN channels, coding gains in excess of 2 dB over uncoded systems can be realized when soft iterative decoding using a parity check tree is employed.
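Iterative decoding on a parity-check structure can be illustrated with the classic hard-decision bit-flipping algorithm: repeatedly flip the bit involved in the most unsatisfied checks. The matrix below is a toy example for illustration, not one of the paper's full-rank-derived codes, and hard-decision flipping is a simpler stand-in for the paper's soft decoder.

```python
def bit_flip_decode(H, r, max_iters=20):
    """Hard-decision bit-flipping: flip the bit that appears in the most
    unsatisfied parity checks until the syndrome is zero."""
    c = list(r)
    m, n = len(H), len(H[0])
    for _ in range(max_iters):
        syndrome = [sum(H[i][j] * c[j] for j in range(n)) % 2 for i in range(m)]
        if not any(syndrome):
            return c
        # vote: number of unsatisfied checks touching each bit
        votes = [sum(syndrome[i] for i in range(m) if H[i][j]) for j in range(n)]
        c[votes.index(max(votes))] ^= 1
    return c

# Toy sparse parity-check matrix (illustrative only).
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
received = [0, 1, 0, 0, 0, 0]   # all-zero codeword with a single bit error
print(bit_flip_decode(H, received))  # [0, 0, 0, 0, 0, 0]
```

Here bit 1 participates in the two unsatisfied checks, so it is flipped first and the syndrome clears in one iteration.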

  8. Fault Tolerant Parallel Filters Based On Bch Codes

    Directory of Open Access Journals (Sweden)

    K.Mohana Krishna

    2015-04-01

    Full Text Available Digital filters are used in signal processing and communication systems. In some cases, the reliability of those systems is critical, and fault tolerant filter implementations are needed. Over the years, many techniques that exploit the filters’ structure and properties to achieve fault tolerance have been proposed. As technology scales, it enables more complex systems that incorporate many filters. In those complex systems, it is common that some of the filters operate in parallel, for example, by applying the same filter to different input signals. Recently, a simple technique that exploits the presence of parallel filters to achieve multiple fault tolerance has been presented. In this brief, that idea is generalized to show that parallel filters can be protected using Bose– Chaudhuri–Hocquenghem codes (BCH in which each filter is the equivalent of a bit in a traditional ECC. This new scheme allows more efficient protection when the number of parallel filters is large.
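The key observation behind such schemes is that linear filters commute with addition, so a redundant filter applied to the sum of the inputs acts as a parity check across the parallel outputs. The sketch below shows this single-parity special case (the brief generalizes it to BCH codes over many filters); the taps and inputs are arbitrary illustrative values.

```python
def fir(h, x):
    """Direct-form FIR filter: y[n] = sum_k h[k] * x[n-k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k, hk in enumerate(h):
            if 0 <= n - k < len(x):
                y[n] += hk * x[n - k]
    return y

def mismatch(y_check, y1, y2, y3, tol=1e-9):
    """Compare the redundant filter output against the sum of the three outputs."""
    return any(abs(p - (a + b + c)) > tol
               for p, a, b, c in zip(y_check, y1, y2, y3))

h = [0.5, 0.25, 0.25]                         # shared filter taps (illustrative)
x1, x2, x3 = [1, 2, 3], [0, 1, 0], [2, 0, 1]
y1, y2, y3 = fir(h, x1), fir(h, x2), fir(h, x3)
y_check = fir(h, [a + b + c for a, b, c in zip(x1, x2, x3)])  # parity filter

print(mismatch(y_check, y1, y2, y3))          # False: all filters fault-free
y2[1] += 1.0                                  # inject a fault in one output
print(mismatch(y_check, y1, y2, y3))          # True: the parity filter catches it
```

With one parity filter only detection is possible; using an ECC such as BCH across a larger bank of filters, as in the brief, allows the faulty filter to be located and its output corrected.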

  9. An Interpolation Procedure for List Decoding Reed--Solomon codes Based on Generalized Key Equations

    CERN Document Server

    Zeh, Alexander; Augot, Daniel

    2011-01-01

    The key step of syndrome-based decoding of Reed-Solomon codes up to half the minimum distance is to solve the so-called Key Equation. List decoding algorithms, capable of decoding beyond half the minimum distance, are based on interpolation and factorization of multivariate polynomials. This article provides a link between syndrome-based decoding approaches based on Key Equations and the interpolation-based list decoding algorithms of Guruswami and Sudan for Reed-Solomon codes. The original interpolation conditions of Guruswami and Sudan are reformulated in terms of a set of Key Equations. These equations provide a structured homogeneous linear system of Block-Hankel form that can be solved by an adaptation of the Fundamental Iterative Algorithm. For an $(n,k)$ Reed-Solomon code, a multiplicity $s$ and a list size $\ell$, our algorithm has time complexity $O(\ell s^4 n^2)$.

  10. Development and Validation of Generalized Lifting Line Based Code for Wind Turbine Aerodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Grasso, F.; Garrel, A. van; Schepers, J.G. [ECN Wind Energy, Petten (Netherlands)

    2011-01-15

    In order to accurately model large, advanced and efficient wind turbines, more reliable and realistic aerodynamic simulation tools are necessary. Most of the available codes are based on blade element momentum theory; these codes are fast but not well suited to properly describe the physics of wind turbines. On the other hand, computational fluid dynamics codes, in which the full Navier-Stokes equations are implemented, require strong expertise and a great deal of computer time to perform analyses. A code based on a generalized form of Prandtl's lifting line in combination with a free vortex wake has been developed at the Energy research Centre of the Netherlands. In the present work, the development of this new code is presented, together with results from numerical-experimental comparisons. The final part of the work is dedicated to the analysis of innovative configurations like winglets and curved blades.

  11. Research of multi-path routing based on network coding in space information networks

    Directory of Open Access Journals (Sweden)

    Yu Geng

    2014-06-01

    Full Text Available A multi-path routing algorithm based on network coding is proposed to combat the long propagation delay and high bit error rate of space information networks. On the basis of traditional multi-path routing, the algorithm uses a random linear network coding strategy to code data packets. The number of coded packets is determined jointly by the status of the next-hop link and the number of packets currently received from the upstream node. The algorithm improves the retransmission and cache mechanisms by using the redundancy introduced by network coding. Meanwhile, the algorithm also adopts a delay-based flow distribution strategy to balance network load. Simulation results show that the proposed routing algorithm can effectively improve the packet delivery rate, reduce packet delay, and enhance network performance.
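Random linear network coding of the kind the relay applies can be sketched over GF(2): each coded packet is a random XOR of the originals tagged with its coefficient vector, and the receiver recovers the originals by Gaussian elimination once enough independent combinations arrive. A minimal illustration (packet contents and counts are made up; real systems typically use GF(2^8) coefficients):

```python
import random

def rlnc_encode(packets, rng):
    """One coded packet: a random nonzero GF(2) combination of the originals."""
    coeffs = [rng.randint(0, 1) for _ in packets]
    if not any(coeffs):
        coeffs[rng.randrange(len(packets))] = 1
    payload = 0
    for c, p in zip(coeffs, packets):
        if c:
            payload ^= p
    return coeffs, payload

def rlnc_decode(coded, k):
    """Gauss-Jordan elimination over GF(2); None until rank k is reached."""
    rows = [list(c) + [p] for c, p in coded]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][col]), None)
        if pivot is None:
            return None
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]
    return [rows[i][k] for i in range(k)]

rng = random.Random(7)
packets = [0b1011, 0b0110, 0b1100]   # three original payloads (toy values)
coded, out = [], None
for _ in range(20):                   # keep sending until decodable
    coded.append(rlnc_encode(packets, rng))
    out = rlnc_decode(coded, 3)
    if out is not None:
        break
print(out)
```

Because any coded packet is useful to any receiver, redundancy can be added link by link without coordinating which specific packet was lost, which is what improves the retransmission and cache behavior described above.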

  12. A Novel Error Correcting System Based on Product Codes for Future Magnetic Recording Channels

    CERN Document Server

    Van, Vo Tam

    2012-01-01

    We propose a novel construction of product codes for high-density magnetic recording based on binary low-density parity check (LDPC) codes and the binary image of Reed-Solomon (RS) codes. Moreover, two novel algorithms are proposed to decode the codes in the presence of both AWGN errors and scattered hard errors (SHEs). Simulation results show that at a bit error rate (BER) of approximately 10^-8, our method improves the error performance by approximately 1.9 dB compared with a hard-decision decoder for RS codes of the same length and code rate. For the mixed error channel including random noise and SHEs, the signal-to-noise ratio (SNR) is set at 5 dB and 150 to 400 SHEs are randomly generated. The bit error performance of the proposed product code shows a significant improvement over that of equivalent random LDPC codes or a serial concatenation of LDPC and RS codes.

  13. User-inspired design methodology using Affordance Structure Matrix (ASM) for construction projects

    Directory of Open Access Journals (Sweden)

    Maheswari J. Uma

    2017-01-01

    Full Text Available Traditionally, the design phase of construction projects is often performed with incomplete and inaccurate user preferences. This is due to inefficiencies in the methodologies used for capturing user requirements, which can subsequently lead to inconsistencies and result in a non-optimised end-result. Iterations and subsequent rework due to such design inefficiencies are among the major reasons for unsuccessful project delivery, as they impact project performance measures such as time and cost, among others. Existing design theories and practice are primarily based on functional requirements. Function-based design deals with the design of the artifact alone, which may yield favourable or unfavourable consequences for the design artifact. However, incorporating other interactions, such as those between user and designer, is necessary for an optimised end-result. Hence, the objective of this research work is to devise a systematic design methodology considering all three interactions among users, designers and artefacts for improved design efficiency. In this study, the theory of affordances is applied in a case project that involves the design of an offshore facility. A step-by-step methodology for developing an Affordance Structure Matrix (ASM), which integrates the House of Quality (HOQ) and the Design Structure Matrix (DSM), is proposed that can effectively capture user requirements. HOQ is a popular quality management tool for capturing client requirements, and DSM is a matrix-based tool that can capture the interdependency among design entities. The proposed methodology utilises the strengths of both tools, as DSM complements HOQ in the process. In this methodology, different affordances such as AUA (Artifact-User-Affordance), AAA (Artifact-Artifact-Affordance) and DDA (Designer-Designer-Affordance) are captured systematically. Affordance is considered to be user-driven in this context, which is in contrast to prevailing design

  14. A Review & Assessment of Current Operating Conditions Allowable Stresses in ASME Section III Subsection NH

    Energy Technology Data Exchange (ETDEWEB)

    R. W. Swindeman

    2009-12-14

    The current operating condition allowable stresses provided in ASME Section III, Subsection NH were reviewed for consistency with the criteria used to establish the stress allowables and with the allowable stresses provided in ASME Section II, Part D. It was found that the S_o values in ASME III-NH were consistent with the S values in ASME II-D for the five materials of interest. However, it was found that 0.80 S_r was less than S_o at some temperatures for four of the materials. Only the values for alloy 800H appeared to be consistent with the criteria on which S_o values are established. With the intent of undertaking a more detailed evaluation of issues related to the allowable stresses in ASME III-NH, the availability of databases for the five materials was reviewed and augmented databases were assembled.

  15. Multiple frequencies sequential coding for SSVEP-based brain-computer interface.

    Directory of Open Access Journals (Sweden)

    Yangsong Zhang

    Full Text Available BACKGROUND: The steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) has become one of the most promising modalities for a practical noninvasive BCI system. Owing both to the limited refresh rate of liquid crystal display (LCD) or cathode ray tube (CRT) monitors, and to the specific physiological response property that only a very small number of stimuli at certain frequencies can evoke strong SSVEPs, the available frequencies for SSVEP stimuli are limited. Therefore, it may not be possible to code enough targets with the traditional frequency coding protocols, which poses a big challenge for the design of a practical SSVEP-based BCI. This study aimed to provide an innovative coding method to tackle this problem. METHODOLOGY/PRINCIPAL FINDINGS: In this study, we present a novel protocol termed multiple frequencies sequential coding (MFSC) for SSVEP-based BCI. In MFSC, multiple frequencies are sequentially used in each cycle to code the targets. To fulfill the sequential coding, each cycle is divided into several coding epochs, and during each epoch, a certain frequency is used. Different frequencies or the same frequency can be presented in the coding epochs, and different epoch sequences correspond to different targets. To show the feasibility of MFSC, we used two frequencies to realize four targets and carried out an offline experiment. The current study shows that: (1) MFSC is feasible and efficient; (2) the performance of an SSVEP-based BCI based on MFSC can be comparable to some existing systems. CONCLUSIONS/SIGNIFICANCE: The proposed protocol could potentially implement many more targets with the limited available frequencies compared with the traditional frequency coding protocol. The efficiency of the new protocol was confirmed with real data. We propose that an SSVEP-based BCI under MFSC might be a promising choice in the future.
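The combinatorial gain of MFSC is straightforward: with f usable frequencies and m coding epochs per cycle, f^m targets can be distinguished. A sketch of the codebook for the two-frequency, four-target setting of the offline experiment (the 10 Hz and 12 Hz values are illustrative assumptions, not taken from the study):

```python
from itertools import product

def mfsc_codebook(freqs, epochs):
    """Each target is coded by a sequence of stimulus frequencies, one per epoch."""
    return list(product(freqs, repeat=epochs))

# Two frequencies, two epochs per cycle -> four distinguishable targets.
codebook = mfsc_codebook([10.0, 12.0], 2)
print(len(codebook), codebook)
```

With traditional frequency coding, two usable frequencies give only two targets; sequential coding squares (or more generally exponentiates) the target count at the cost of longer stimulation cycles.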

  16. Application of grammar-based codes for lossless compression of digital mammograms

    Science.gov (United States)

    Li, Xiaoli; Krishnan, Srithar; Ma, Ngok-Wah

    2006-01-01

    A newly developed grammar-based lossless source coding theory and its implementation was proposed in 1999 and 2000, respectively, by Yang and Kieffer. The code first transforms the original data sequence into an irreducible context-free grammar, which is then compressed using arithmetic coding. In the study of grammar-based coding for mammography applications, we encountered two issues: processing time and limited number of single-character grammar G variables. For the first issue, we discover a feature that can simplify the matching subsequence search in the irreducible grammar transform process. Using this discovery, an extended grammar code technique is proposed and the processing time of the grammar code can be significantly reduced. For the second issue, we propose to use double-character symbols to increase the number of grammar variables. Under the condition that all the G variables have the same probability of being used, our analysis shows that the double- and single-character approaches have the same compression rates. By using the methods proposed, we show that the grammar code can outperform three other schemes: Lempel-Ziv-Welch (LZW), arithmetic, and Huffman on compression ratio, and has similar error tolerance capabilities as LZW coding under similar circumstances.

  17. A NOVEL CONSTRUCTION OF QUANTUM LDPC CODES BASED ON CYCLIC CLASSES OF LINES IN EUCLIDEAN GEOMETRIES

    Institute of Scientific and Technical Information of China (English)

    Cao Dong; Song Yaoliang; Zhao Shengmei

    2012-01-01

    The dual-containing (or self-orthogonal) formalism of Calderbank-Shor-Steane (CSS) codes provides a universal connection between a classical linear code and a quantum error-correcting code (QECC). We propose a novel class of quantum low density parity check (LDPC) codes constructed from cyclic classes of lines in Euclidean Geometry (EG). The corresponding parity check matrix has a quasi-cyclic structure that can be encoded flexibly and satisfies the requirement of a dual-containing quantum code. Taking advantage of the quasi-cyclic structure, we use a structured approach to construct the Generalized Parity Check Matrix (GPCM). This new class of quantum codes has a higher code rate, a sparser check matrix, and exactly one four-cycle in each pair of two rows. Experimental results show that the proposed quantum codes, such as EG(2,q)Ⅱ-QECC and EG(3,q)Ⅱ-QECC, perform better than other EG-based methods over the depolarizing channel when decoded iteratively with the sum-product algorithm.
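The dual-containing condition mentioned above amounts to requiring H·Hᵀ = 0 over GF(2). The sketch below checks it for the [7,4] Hamming parity-check matrix, the textbook case whose CSS construction yields the Steane [[7,1,3]] code; this is an illustrative example, not one of the paper's EG-based matrices.

```python
def self_orthogonal(H):
    """True if H * H^T = 0 over GF(2): every pair of rows (including a row
    with itself) has even overlap, i.e. the CSS dual-containing condition."""
    return all(
        sum(a * b for a, b in zip(r1, r2)) % 2 == 0
        for r1 in H for r2 in H
    )

# Parity-check matrix of the [7,4] Hamming code; its self-orthogonality is
# what makes the Steane [[7,1,3]] quantum code possible via CSS.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(self_orthogonal(H))  # True
```

Any classical code passing this check can serve as both the X- and Z-check code of a CSS construction, which is why the paper engineers its EG-based quasi-cyclic matrices to satisfy it.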

  18. A PIC Based Code for Studying Space Charge Effects in Spiral Inflector

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    1 Modeling and code development. In this work we developed a new iterative numerical model, based on conventional PIC methods, for simulating intense particle transport in a spiral inflector. Fig. 1 is the flow diagram of the iterative algorithm.

  19. Reduced-Complexity Decoder of Long Reed-Solomon Codes Based on Composite Cyclotomic Fourier Transforms

    CERN Document Server

    Wu, Xuebin

    2011-01-01

    Long Reed-Solomon (RS) codes are desirable for digital communication and storage systems due to their improved error performance, but the high computational complexity of their decoders is a key obstacle to their adoption in practice. As discrete Fourier transforms (DFTs) can evaluate a polynomial at multiple points, efficient DFT algorithms are promising in reducing the computational complexities of syndrome based decoders for long RS codes. In this paper, we first propose partial composite cyclotomic Fourier transforms (CCFTs) and then devise syndrome based decoders for long RS codes over large finite fields based on partial CCFTs. The new decoders based on partial CCFTs achieve a significant saving of computational complexities for long RS codes. Since partial CCFTs have modular and regular structures, the new decoders are suitable for hardware implementations. To further verify and demonstrate the advantages of partial CCFTs, we implement in hardware the syndrome computation block for a $(2720, 2550)$ sho...

  20. Auto Code Generation for Simulink-Based Attitude Determination Control System

    Science.gov (United States)

    MolinaFraticelli, Jose Carlos

    2012-01-01

    This paper details the work done to auto-generate C code from a Simulink-based Attitude Determination Control System (ADCS) for use on target platforms. NASA Marshall engineers have developed an ADCS Simulink simulation to be used as a component of the flight software of a satellite. The generated code can be used for carrying out hardware-in-the-loop testing of satellite components in a convenient manner, with easily tunable parameters. Due to the nature of embedded hardware components such as microcontrollers, this simulation code cannot be used directly, as it is, on the target platform and must first be converted into C code; this process is known as auto code generation. In order to generate C code from this simulation, it must be modified to follow specific standards set in place by the auto code generation process. Some of these modifications include changing certain simulation models into their atomic representations, which can introduce new complications into the simulation: the execution order of the models can change as a result of these modifications. Great care must be taken to maintain a working simulation that can also be used for auto code generation. After modifying the ADCS simulation for the auto code generation process, it is shown that the difference between the output data of the former and that of the latter is within acceptable bounds. Thus, the process can be considered a success, since all the output requirements are met. Based on these results, it can be argued that the generated C code can be used effectively on any desired platform as long as it meets the specific memory requirements established in the Simulink model.

  1. Code qualification of structural materials for AFCI advanced recycling reactors.

    Energy Technology Data Exchange (ETDEWEB)

    Natesan, K.; Li, M.; Majumdar, S.; Nanstad, R.K.; Sham, T.-L. (Nuclear Engineering Division); (ORNL)

    2012-05-31

    ) and the Power Reactor Innovative Small Module (PRISM), the NRC/Advisory Committee on Reactor Safeguards (ACRS) raised numerous safety-related issues regarding elevated-temperature structural integrity criteria. Most of these issues remain unresolved today. These critical licensing reviews provide a basis for the evaluation of underlying technical issues for future advanced sodium-cooled reactors. Major materials performance issues and high-temperature design methodology issues pertinent to the ARR are addressed in the report. The report is organized as follows: the ARR reference design concepts proposed by the Argonne National Laboratory and four industrial consortia are reviewed first, followed by a summary of the major code qualification and licensing issues for the ARR structural materials. The available database is presented for the ASME Code-qualified structural alloys (e.g., 304 and 316 stainless steels, 2.25Cr-1Mo, and mod.9Cr-1Mo), including physical properties, tensile properties, impact properties and fracture toughness, creep, fatigue, creep-fatigue interaction, microstructural stability during long-term thermal aging, material degradation in sodium environments, and effects of neutron irradiation for both base metals and weld metals. An assessment of modified versions of Type 316 SS, i.e., Type 316LN and its Japanese version, 316FR, was conducted to provide a perspective for codification of 316LN or 316FR in Subsection NH. The current status and data availability of four new advanced alloys, i.e., NF616, NF616+TMT, NF709, and HT-UPS, are also addressed to identify the R&D needs for their code qualification for ARR applications. For both conventional and new alloys, issues related to high-temperature design methodology are described to address the needs for improvements for the ARR design and licensing. Assessments have shown that there are significant data gaps for the full qualification and licensing of the ARR structural materials.

  2. Quantitative Characterization of Super-Resolution Infrared Imaging Based on Time-Varying Focal Plane Coding

    Science.gov (United States)

    Wang, X.; Yuan, Y.; Zhang, J.; Chen, Y.; Cheng, Y.

    2014-10-01

    High-resolution infrared images have long been the goal of infrared imaging systems. In this paper, a super-resolution infrared imaging method using a time-varying coded mask is proposed based on focal plane coding and compressed sensing theory. The basic idea of this method is to set a coded mask on the focal plane of the optical system so that the same scene can be sampled repeatedly under a time-varying control coding strategy; the super-resolution image is then reconstructed by a sparse optimization algorithm. The simulation results are quantitatively evaluated with the Peak Signal-to-Noise Ratio (PSNR) and the Modulation Transfer Function (MTF), which illustrate the effect of the compressed measurement coefficient r and the coded mask resolution m on the reconstructed image quality. The results show that the proposed method improves infrared imaging quality effectively, which will be helpful for the practical design of new high-resolution infrared imaging systems.

  3. Context-based lossless image compression with optimal codes for discretized Laplacian distributions

    Science.gov (United States)

    Giurcaneanu, Ciprian Doru; Tabus, Ioan; Stanciu, Cosmin

    2003-05-01

    Lossless image compression has become an important research topic, especially in relation with the JPEG-LS standard. Recently, the techniques known for designing optimal codes for sources with infinite alphabets have been applied for the quantized Laplacian sources which have probability mass functions with two geometrically decaying tails. Due to the simple parametric model of the source distribution, the Huffman iterations can be carried out analytically, using the concept of reduced source, and the final codes are obtained as a sequence of very simple arithmetic operations, avoiding the need to store coding tables. We propose the use of these (optimal) codes in conjunction with context-based prediction, for noiseless compression of images. To reduce further the average code length, we design Escape sequences to be employed when the estimation of the distribution parameter is unreliable. Results on standard test files show improvements in compression ratio when comparing with JPEG-LS.
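The codes obtained from such analytic Huffman iterations on geometric tails are, in the one-sided case, the classical Golomb codes. A short sketch of the textbook construction, with parameter m and the truncated-binary remainder rule (this illustrates the code family, not the paper's two-sided variant):

```python
def golomb_encode(n, m):
    """Golomb code for a nonnegative integer n with parameter m:
    unary quotient, then a truncated-binary remainder. Optimal for
    geometric sources (the one-sided analogue of the discretized
    Laplacian case discussed above)."""
    q, r = divmod(n, m)
    bits = "1" * q + "0"              # unary part, terminated by 0
    b = m.bit_length()
    if m & (m - 1) == 0:              # m a power of two: plain Rice code
        return bits if m == 1 else bits + format(r, f"0{b - 1}b")
    cutoff = (1 << b) - m             # small remainders get b-1 bits
    if r < cutoff:
        return bits + format(r, f"0{b - 1}b")
    return bits + format(r + cutoff, f"0{b}b")
```

For example, with m = 3 the codes for 0, 1, 2, 3 are 00, 010, 011, 100, exactly the reduced-source Huffman result for a geometric distribution with the matching parameter.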

  4. QoS Based Capacity Enhancement for WCDMA Network with Coding Scheme

    CERN Document Server

    Ayyappan, K; 10.5121/vlsic.2010.1102

    2010-01-01

    The wide-band code division multiple access (WCDMA) based 3G and beyond cellular mobile wireless networks are expected to provide a diverse range of multimedia services to mobile users with guaranteed quality of service (QoS). Serving the diverse quality of service requirements of these networks necessitates new radio resource management strategies for effective utilization of network resources with coding schemes. Call admission control (CAC) is a significant component in wireless networks to guarantee quality of service requirements and also to enhance network resilience. In this paper, capacity enhancement for a WCDMA network with a convolutional coding scheme is discussed and compared with a block code and with no coding scheme, to achieve a better balance between resource utilization and quality of service provisioning. The model of this network is valid for real-time (RT) and non-real-time (NRT) services having different data rates. Simulation results demonstrate the effectiveness of the network using co...

  5. State injection, lattice surgery, and dense packing of the deformation-based surface code

    Science.gov (United States)

    Nagayama, Shota; Satoh, Takahiko; Van Meter, Rodney

    2017-01-01

    Resource consumption of the conventional surface code is expensive, in part due to the need to separate the defects that create the logical qubit far apart on the physical qubit lattice. We propose that instantiating the deformation-based surface code using superstabilizers will make it possible to detect short error chains connecting the superstabilizers, allowing us to place logical qubits close together. Additionally, we demonstrate the process of conversion from the defect-based surface code, which works as arbitrary state injection, and a lattice-surgery-like controlled not (cnot) gate implementation that requires fewer physical qubits than the braiding cnot gate. Finally, we propose a placement design for the deformation-based surface code and analyze its resource consumption; large-scale quantum computation requires (25d^2 + 170d + 289)/4 physical qubits per logical qubit, where d is the code distance of the standard surface code, whereas the planar code requires 16d^2 - 16d + 4 physical qubits per logical qubit, for a reduction of about 50%.

  6. Mean shift based log-Gabor wavelet image coding

    Institute of Scientific and Technical Information of China (English)

    LI Ji-liang; FANG Xiang-zhong; HOU Jun

    2007-01-01

    In this paper, we propose a sparse overcomplete image approximation method based on the ideas of overcomplete log-Gabor wavelets, mean shift, and energy concentration. The proposed approximation method selects the necessary wavelet coefficients with a mean shift based algorithm and concentrates energy on the selected coefficients. It can sparsely approximate the original image, and converges faster than the existing local competition based method. We then propose a new compression scheme based on the above approximation method. The scheme has compression performance similar to JPEG 2000. The images decoded with the proposed compression scheme appear more pleasant to the human eye than those decoded with JPEG 2000.

  7. A System Call Randomization Based Method for Countering Code-Injection Attacks

    Directory of Open Access Journals (Sweden)

    Zhaohui Liang

    2009-10-01

    Full Text Available Code-injection attacks pose a serious threat to today's Internet. Existing code-injection attack defense methods have deficiencies in performance overhead and effectiveness. To this end, we propose a method that uses system call randomization to counter code-injection attacks, based on the instruction set randomization idea. System calls must be used when injected code performs its actions. By randomizing the system calls of the target process, an attacker who does not know the key to the randomization algorithm will inject code that is not randomized in the same way as the target process and is therefore invalid for the corresponding de-randomized module. The injected code will fail to execute without calling system calls correctly. Moreover, with an extended compiler, our method randomizes source code during compilation and randomizes binary executable files by feature matching. Our experiments on a prototype show that our method can effectively counter a variety of code-injection attacks with low overhead.
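The scheme can be sketched abstractly as a keyed permutation of the syscall-number space: code built with the key uses remapped numbers, while injected code using the standard numbers de-randomizes into invalid calls. A toy model of just the mapping (a real implementation patches the kernel dispatch path and rewrites call sites; all names here are illustrative):

```python
import random

def syscall_table(key, n=64):
    """Keyed permutation of syscall numbers 0..n-1: the 'randomization'
    applied to a process compiled with knowledge of the key."""
    perm = list(range(n))
    random.Random(key).shuffle(perm)
    return perm

def derandomize(table):
    """Inverse mapping applied at dispatch time: only calls issued with
    the correctly randomized numbers map back to real syscalls."""
    inv = [0] * len(table)
    for orig, rand in enumerate(table):
        inv[rand] = orig
    return inv
```

An injected payload that issues the standard number for, say, `execve` is routed through the inverse table and lands on an arbitrary (almost certainly wrong) syscall, which is the failure mode the abstract describes.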

  8. Application of the French codes to the pressurized thermal shocks assessment

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Mingya; Wang, Rong Shan; Yu, Weiwei; Lu, Feng; Zhang, Guo Dong; Xue, Fei; Chen, Zhilin [Suzhou Nuclear Power Research Institute, Life Management Center, Suzhou (China); Qian, Guian [Paul Scherrer Institute, Nuclear Energy and Safety Department, Villigen (Switzerland); Shi, Jinhua [Amec Foster Wheeler, Clean Energy Department, Gloucester (United Kingdom)

    2016-12-15

    The integrity of a reactor pressure vessel (RPV) related to pressurized thermal shocks (PTSs) has been extensively studied. This paper introduces an integrity assessment of an RPV subjected to a PTS transient based on the French codes. In the USA, the 'screening criterion' for maximum allowable embrittlement of RPV material is developed based on probabilistic fracture mechanics. However, in the French RCC-M and RSE-M codes, which are developed based on deterministic fracture mechanics, there is no 'screening criterion'. In this paper, the methodologies in the RCC-M and RSE-M codes that are used for PTS analysis are first discussed. The bases of the French codes are compared with the ASME and FAVOR codes. A case study is also presented. The results show that the method in the RCC-M code that accounts for the influence of cladding on the stress intensity factor (SIF) may be nonconservative. The SIF almost doubles if the weld residual stress is considered. The approaches included in the codes differ in many aspects, which may result in significant differences in the assessment results. Therefore, harmonization of the codes for the long-term operation of nuclear power plants is needed.

  9. Proof-Carrying Code Based Tool for Secure Information Flow of Assembly Programs

    Directory of Open Access Journals (Sweden)

    Abdulrahman Muthana

    2009-01-01

    Full Text Available Problem statement: How can a host (the code consumer) determine with certainty that a downloaded program received from an untrusted source (the code producer) will maintain the confidentiality of the data it manipulates and is safe to install and execute? Approach: The approach adopted for verifying that a downloaded program will not leak confidential data to unauthorized parties was based on the concept of Proof-Carrying Code (PCC). A mobile program (in its assembly form) was analyzed for information flow security based on the concept of proof-carrying code. The security policy was centered on a type system for analyzing information flows within assembly programs based on the notion of noninterference. Results: A verification tool for verifying assembly programs for information flow security was built. The tool certifies SPARC assembly programs for secure information flow by statically analyzing the program based on the idea of Proof-Carrying Code (PCC). The tool operates directly on the machine code, requiring only that the inputs and outputs of the code be annotated with security levels, and provides a Windows user interface enabling users to control the verification process. The proofs that an untrusted program does not leak sensitive information are generated and checked on the host machine and, if they are valid, the untrusted program can be installed and executed safely. Conclusion: By basing the proof-carrying code infrastructure on an information flow analysis type system, a sufficient assurance of protecting confidential data manipulated by the mobile program can be obtained. This assurance comes from the fact that type systems provide a sufficient guarantee of protecting confidentiality.

  10. Fractal Video Coding Using Fast Normalized Covariance Based Similarity Measure

    Directory of Open Access Journals (Sweden)

    Ravindra E. Chaudhari

    2016-01-01

    Full Text Available A fast normalized covariance based similarity measure for fractal video compression with quadtree partitioning is proposed in this paper. To increase the speed of fractal encoding, a simplified expression for the covariance between range and overlapped domain blocks within a search window is implemented in the frequency domain. All the covariance coefficients are normalized by the standard deviation of the overlapped domain blocks, and these are efficiently calculated in one computation by two different approaches, namely FFT based and sum table based. The results of these two approaches are compared and are almost equal to each other in all aspects except the memory requirement. Based on the proposed simplified similarity measure, gray level transformation parameters are computationally modified and isometry transformations are performed using the rotation/reflection properties of the IFFT. Quadtree decomposition is used for partitioning the larger range block size, that is, 16 × 16, based on a target level of motion compensated prediction error. Experimental results show that the proposed method can increase the encoding speed and compression ratio by 66.49% and 9.58%, respectively, as compared to the NHEXS method, with an increase in PSNR of 0.41 dB. Compared to H.264, the proposed method can save 20% of compression time with marginal variation in PSNR and compression ratio.
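The sum-table approach mentioned above is the standard integral-image trick: two cumulative tables give any window's sum and sum of squares in O(1), from which the mean and standard deviation used for normalization follow. A minimal sketch (function names are illustrative):

```python
import numpy as np

def sum_tables(img):
    """Integral images of img and img**2, zero-padded on the top/left
    so windowed sums need no edge cases."""
    s = np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)
    s2 = np.cumsum(np.cumsum(img.astype(np.float64) ** 2, axis=0), axis=1)
    return np.pad(s, ((1, 0), (1, 0))), np.pad(s2, ((1, 0), (1, 0)))

def window_mean_std(s, s2, y, x, n):
    """Mean and standard deviation of the n x n window with top-left
    corner (y, x), each in O(1) via four table lookups."""
    area = n * n
    tot = s[y + n, x + n] - s[y, x + n] - s[y + n, x] + s[y, x]
    tot2 = s2[y + n, x + n] - s2[y, x + n] - s2[y + n, x] + s2[y, x]
    mean = tot / area
    var = max(tot2 / area - mean ** 2, 0.0)
    return mean, var ** 0.5
```

Precomputing the two tables once per frame makes every overlapped domain block's normalization constant-time, which is where the encoding speedup comes from.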

  11. High-Capacity Quantum Secure Direct Communication Based on Quantum Hyperdense Coding with Hyperentanglement

    Institute of Scientific and Technical Information of China (English)

    WANG Tie-Jun; LI Tao; DU Fang-Fang; DENG Fu-Guo

    2011-01-01

    We present a quantum hyperdense coding protocol with hyperentanglement in polarization and spatial-mode degrees of freedom of photons first and then give the details for a quantum secure direct communication (QSDC) protocol based on this quantum hyperdense coding protocol. This QSDC protocol has the advantage of having a higher capacity than the quantum communication protocols with a qubit system. Compared with the QSDC protocol based on superdense coding with d-dimensional systems, this QSDC protocol is more feasible as the preparation of a high-dimension quantum system is more difficult than that of a two-level quantum system at present.
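For reference, ordinary two-qubit dense coding, of which the hyperdense protocol is the multi-degree-of-freedom generalization, can be checked with a small statevector calculation (a textbook illustration, not the hyperentangled protocol of the paper):

```python
import numpy as np

# Single-qubit Paulis; two-qubit states are indexed as |q0 q1>.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def dense_code(bits):
    """Encode two classical bits with one local operation on half of a
    shared Bell pair, then decode with a Bell-basis measurement."""
    phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # |00> + |11>
    op = np.eye(2, dtype=complex)
    if bits[0]:
        op = Z @ op          # phase flip encodes the first bit
    if bits[1]:
        op = X @ op          # bit flip encodes the second bit
    state = np.kron(op, I2) @ phi
    bell = np.array([
        [1, 0, 0, 1],        # decodes to (0, 0)
        [1, 0, 0, -1],       # (1, 0)
        [0, 1, 1, 0],        # (0, 1)
        [0, 1, -1, 0],       # (1, 1)
    ], dtype=complex) / np.sqrt(2)
    probs = np.abs(bell @ state) ** 2
    outcomes = [(0, 0), (1, 0), (0, 1), (1, 1)]
    return outcomes[int(np.argmax(probs))]
```

Each of the four encodings lands deterministically on a distinct Bell state, so two bits travel per transmitted qubit; hyperdense coding repeats this trick in a second degree of freedom to raise the capacity further.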

  12. A Steganography Based on CT-CDMA Communication Scheme Using Complete Complementary Codes

    CERN Document Server

    Kojima, Tetsuya

    2010-01-01

    It has been shown that complete complementary codes can be applied into some communication systems like approximately synchronized CDMA systems because of its good correlation properties. CT-CDMA is one of the communication systems based on complete complementary codes. In this system, the information data of the multiple users can be transmitted by using the same set of complementary codes through a single frequency band. In this paper, we propose to apply CT-CDMA systems into a kind of steganography. It is shown that a large amount of secret data can be embedded in the stego image by the proposed method through some numerical experiments using color images.

  13. A new approach to information coding and protection based on the theory of matroids

    Directory of Open Access Journals (Sweden)

    V. Borshevich

    1994-06-01

    Full Text Available A new approach to the coding and protection of information in computer and telecommunication systems is proposed and discussed. It is based on the mathematical apparatus of the theory of matroids, which, in combination with the randomization method, makes it possible to protect large volumes of information against corruption while maintaining considerably high coding/decoding speed even on the PC platform. The proposed approach opens the way to a new class of codes usable for recovering large amounts of information with a high degree of accuracy.

  14. Particle-in-Cell Codes for plasma-based particle acceleration

    CERN Document Server

    Pukhov, Alexander

    2016-01-01

    Basic principles of particle-in-cell (PIC) codes, with the main application to plasma-based acceleration, are discussed. The ab initio full electromagnetic relativistic PIC codes provide the most reliable description of plasmas. Their properties are considered in detail. Representing the most fundamental model, the full PIC codes are computationally expensive. Plasma-based acceleration is a multi-scale problem with very disparate scales. The smallest scale is the laser or plasma wavelength (from one to a hundred microns) and the largest scale is the acceleration distance (from a few centimeters to meters or even kilometers). The Lorentz-boost technique allows the scale disparity to be reduced, at the cost of complicating the simulations and causing unphysical numerical instabilities in the code. Another possibility is to use the quasi-static approximation, where the disparate scales are separated analytically.
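The innermost kernel of any PIC cycle is charge deposition from particles onto the grid. A minimal 1D cloud-in-cell (linear-weighting) sketch with periodic boundaries, illustrative only and not drawn from any particular production code:

```python
import numpy as np

def deposit(positions, grid_n, L, q=1.0):
    """Linear (cloud-in-cell) charge deposition in 1D with periodic
    boundaries: each particle splits its charge q between the two
    nearest grid cells in proportion to its distance from them."""
    rho = np.zeros(grid_n)
    dx = L / grid_n
    xi = positions / dx
    i0 = np.floor(xi).astype(int) % grid_n
    frac = xi - np.floor(xi)
    np.add.at(rho, i0, q * (1.0 - frac))          # left neighbor
    np.add.at(rho, (i0 + 1) % grid_n, q * frac)   # right neighbor (wrapped)
    return rho / dx                               # charge density
```

Linear weighting conserves total charge exactly, which is why it (and its higher-order relatives) is the standard deposition step before the field solve and particle push.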

  15. Costs and Advantages of Object-Based Image Coding with Shape-Adaptive Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Cagnazzo Marco

    2007-01-01

    Full Text Available Object-based image coding is drawing great attention for the many opportunities it offers to high-level applications. In terms of rate-distortion performance, however, its value is still uncertain, because the gains provided by an accurate image segmentation are balanced by the inefficiency of coding objects of arbitrary shape, with losses that depend on both the coding scheme and the object geometry. This work aims at measuring rate-distortion costs and gains for a wavelet-based shape-adaptive encoder similar to the shape-adaptive texture coder adopted in MPEG-4. The analysis of the rate-distortion curves obtained in several experiments provides insight about what performance gains and losses can be expected in various operative conditions and shows the potential of such an approach for image coding.

  16. Regulation of dynein-mediated autophagosomes trafficking by ASM in CASMCs.

    Science.gov (United States)

    Xu, Ming; Zhang, Qiufang; Li, Pin-Lan; Nguyen, Thaison; Li, Xiang; Zhang, Yang

    2016-01-01

    Acid sphingomyelinase (ASM; gene symbol Smpd1) has been shown to play a crucial role in autophagy maturation by controlling lysosomal fusion with autophagosomes in coronary arterial smooth muscle cells (CASMCs). However, the underlying molecular mechanism by which ASM controls autophagolysosomal fusion remains unknown. In primary cultured CASMCs, lysosomal Ca(2+) release induced by 7-ketocholesterol (7-Ket, an atherogenic stimulus and autophagy inducer) was markedly attenuated by ASM deficiency or TRPML1 gene silencing, suggesting that ASM signaling is required for TRPML1 channel activity and subsequent lysosomal Ca(2+) release. In these CASMCs, ASM deficiency or TRPML1 gene silencing markedly inhibited 7-Ket-induced dynein activation. In addition, 7-Ket-induced autophagosome trafficking, an event associated with lysosomal Ca(2+) release and dynein activity, was significantly inhibited in ASM-deficient (Smpd1(-/-)) CASMCs compared to that in Smpd1(+/+) CASMCs. Finally, overexpression of TRPML1 proteins restored 7-Ket-induced lysosomal Ca(2+) release and autophagosome trafficking in Smpd1(-/-) CASMCs. Collectively, these results suggest that ASM plays a critical role in regulating lysosomal TRPML1-Ca(2+) signaling and subsequent dynein-mediated autophagosome trafficking, which underlies its role in controlling autophagy maturation in CASMCs under atherogenic stimulation.

  17. Do Performance-Based Codes Support Universal Design in Architecture?

    DEFF Research Database (Denmark)

    Grangaard, Sidse; Frandsen, Anne Kathrine

    2016-01-01

    understanding of accessibility and UD is directly related to buildings like hospitals and care centers. When the objective is both innovative and inclusive architecture, the request of a performance-based model should be followed up by a knowledge enhancement effort in the building sector. Bloom´s taxonomy...

  18. Code generation based on formal BURS theory and heuristic search

    NARCIS (Netherlands)

    Nymeyer, Albert; Katoen, Joost P.

    BURS theory provides a powerful mechanism to efficiently generate pattern matches in a given expression tree. BURS, which stands for bottom-up rewrite system, is based on term rewrite systems, to which costs are added. We formalise the underlying theory, and derive an algorithm that computes all

  20. 78 FR 37885 - Approval of American Society of Mechanical Engineers' Code Cases

    Science.gov (United States)

    2013-06-24

    ... Mechanical Engineers' Code Cases; Proposed Rule. Federal Register / Vol. 78, No. 121 / Monday, June 24... American Society of Mechanical Engineers' Code Cases AGENCY: Nuclear Regulatory Commission. ACTION... revised Code Cases published by the American Society of Mechanical Engineers (ASME). This proposed...

  1. STUDY OF CODING GENERATOR BASED ON IN-SYSTEM PROGRAMMING TECHNIQUE AND DEVICES

    Institute of Scientific and Technical Information of China (English)

    Liu Duren; Jin Yajing; Ren Zhichun

    2002-01-01

    This paper presents a design for a coding waveform generator controlled by a microcomputer or single-chip microcomputer that realizes arbitrary coding waveform combinations based on In-System Programming (ISP) techniques and High Density Programmable Logic Devices (HDPLDs), using a latch register, a control counter, and an easily expanded PS (parallel-in & serial-out) shift register array. This scheme overcomes some shortcomings of past schemes, so that the hardware design can be realized by means of software.

  2. Safety relief valves according new requirements of EN (PED) versus AD/TRD or ASME

    Energy Technology Data Exchange (ETDEWEB)

    Foellmer, B.; Schnettler, A. [Bopp and Reuther, Mannheim (Germany)

    2004-07-01

    In Europe, only pressure safety relief valves which conform to the Pressure Equipment Directive (PED) 97/23/EC may be used. They are classified in PED Category IV, and a Notified Body validates the fulfilment of the PED requirements in accordance with a selected conformity evaluation procedure also drawn from the PED. The harmonized standards or other technical reference works are stated in a manufacturer's declaration of conformity, which is supplied with the safety relief valve at delivery. Only this ultimately makes it possible to establish the basis used for CE certification and the certified properties which can be derived from it. The CE symbol on the identification plate alone does not supply sufficient information for this purpose. A comparative assessment in this article of the harmonized EN standards against the AD and TRD technical rules discloses differences in the certified properties and the applications for spring-loaded safety relief valves. The ASME code is also included in the assessment, since it plays a significant role at least outside Europe. (orig.)

  3. Model based code generation for distributed embedded systems

    OpenAIRE

    Raghav, Gopal; Gopalswamy, Swaminathan; Radhakrishnan, Karthikeyan; Hugues, Jérôme; Delange, Julien

    2010-01-01

    Embedded systems are becoming increasingly complex and more distributed. Cost and quality requirements necessitate reuse of the functional software components for multiple deployment architectures. An important step is the allocation of software components to hardware. During this process the differences between the hardware and application software architectures must be reconciled. In this paper we discuss an architecture driven approach involving model-based techniques to resolve these diff...

  4. Gray Code ADC Based on an Analog Neural Circuit

    Directory of Open Access Journals (Sweden)

    L. Michaeli

    1995-04-01

    Full Text Available In this paper a new neural ADC design is presented, based on the idea of replacing all functional components needed in the ADC block scheme by a simple connection of neurons. Transformation of the ADC functional scheme into an analog neural structure and its computer simulation is one of the main results of this paper. Furthermore, a discrete-component prototype of the proposed A/D converter is discussed and experimental results are also given.
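The property a Gray code output stage relies on, that adjacent codes differ in exactly one bit, is easy to verify in software with the standard binary-reflected construction (an illustration of the code itself, not of the neural circuit):

```python
def to_gray(n):
    """Binary-reflected Gray code of a nonnegative integer."""
    return n ^ (n >> 1)

def from_gray(g):
    """Inverse transform: XOR-accumulate the shifted code back to binary."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

Because neighboring analog levels map to codes one bit apart, a conversion error at a threshold corrupts at most one bit, which is the reason ADCs favor Gray output over plain binary.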

  5. ASME Non-Nuclear Authorization Certification and Recertification Requirements of Nondestructive Testing

    Institute of Scientific and Technical Information of China (English)

    金磊

    2015-01-01

    In the process of ASME certification and recertification, nondestructive testing, as part of quality assurance, is an important link. This paper introduces the relevant requirements for nondestructive testing in the 2013 edition of the ASME Code as they arise in ASME certification and recertification for non-nuclear applications, as a reference for technical personnel.

  6. An Examination of the Performance Based Building Code on the Design of a Commercial Building

    Directory of Open Access Journals (Sweden)

    John Greenwood

    2012-11-01

    Full Text Available The Building Code of Australia (BCA) is the principal code under which building approvals in Australia are assessed. The BCA adopted performance-based solutions for building approvals in 1996. Performance-based codes are based upon a set of explicit objectives, stated in terms of a hierarchy of requirements beginning with key general objectives. With this in mind, the research presented in this paper aims to analyse the impact of the introduction of the performance-based code within Western Australia to gauge the effect and usefulness of alternative design solutions in commercial construction, using a case study project. The research revealed that there are several advantages to the use of alternative designs and that all parties are, in general, in favour of the performance-based building code of Australia. It is suggested that changes to the assessment process to streamline the alternative design path are needed for greater use of the performance-based alternative. With appropriate quality control measures, minor variations to the deemed-to-satisfy provisions could easily be managed by the current and future building surveying profession.

  7. Demonstration of Emulator-Based Bayesian Calibration of Safety Analysis Codes: Theory and Formulation

    Directory of Open Access Journals (Sweden)

    Joseph P. Yurko

    2015-01-01

    Full Text Available System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation has come from development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration processes used here with Markov Chain Monte Carlo (MCMC) sampling feasible. This work uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognition-type model. This "function factorization" Gaussian Process (FFGP) model allows overcoming limitations present in standard GP emulators, thereby improving both accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using a Method of Manufactured Solution is performed to illustrate key properties of the FFGP based process.
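A plain GP emulator, the building block that the FFGP model extends, fits a kernel to code input/output pairs and then predicts outputs cheaply at new inputs. A minimal sketch with an RBF kernel (hyperparameters and function names are illustrative; the function-factorization extension is not reproduced here):

```python
import numpy as np

def rbf(a, b, ls=1.0, var=1.0):
    """Squared-exponential (RBF) kernel between two 1D input arrays."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def gp_fit_predict(x_train, y_train, x_test, noise=1e-6):
    """Standard GP regression: posterior mean and covariance at x_test
    given training runs of the (slow) code at x_train."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = rbf(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov
```

In a calibration loop, each MCMC proposal queries this cheap posterior mean instead of rerunning the system code, which is what makes sampling-based Bayesian calibration tractable.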

  8. Lossy to lossless object-based coding of 3-D MRI data.

    Science.gov (United States)

    Menegaz, Gloria; Thiran, Jean-Philippe

    2002-01-01

    We propose a fully three-dimensional (3-D) object-based coding system exploiting the diagnostic relevance of the different regions of the volumetric data for rate allocation. The data are first decorrelated via a 3-D discrete wavelet transform. The implementation via the lifting steps scheme allows integer-to-integer mapping, enabling lossless coding, and facilitates the definition of the object-based inverse transform. The coding process assigns disjoint segments of the bitstream to the different objects, which can be independently accessed and reconstructed at any up-to-lossless quality. Two fully 3-D coding strategies are considered: embedded zerotree coding (EZW-3D) and multidimensional layered zero coding (MLZC), both generalized for region of interest (ROI)-based processing. In order to avoid artifacts along region boundaries, some extra coefficients must be encoded for each object. This gives rise to an overhead in the bitstream with respect to the case where the volume is encoded as a whole. The amount of such extra information depends on both the filter length and the decomposition depth. The system is characterized on a set of head magnetic resonance images. Results show that MLZC and EZW-3D have competitive performances. In particular, the best MLZC mode outperforms the other state-of-the-art techniques on one of the datasets for which results are available in the literature.

  9. Vision-based fast location of multi-bar code in any direction

    Science.gov (United States)

    Lin, Sheng-Xin; Zhao, Xiao-Fang; Liu, Hua-Zhu

    2017-07-01

    The automatic location of the bar code is a key step in a bar code image recognition system. The generalization of traditional bar code localization algorithms is severely limited by their requirements on both the direction and the quality of the bar code, and most of them are aimed only at single-barcode localization. In this paper, we propose a novel multi-barcode location algorithm for arbitrary directions based on the accumulation of linear gray values. First, the line coordinates of the barcode region are determined by an image normalized cross-correlation algorithm. Then the cumulative distribution of gray values along the center line is used to analyze the barcode boundary and to determine the number of bar codes within the region. Finally, the precise position of the barcode region is obtained. The experiments demonstrate that the proposed method can identify all the bar codes in an area and automatically locate bar codes in any direction.
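The normalized cross-correlation step used to seed the region search can be sketched directly (a straightforward reference implementation, not the paper's optimized variant):

```python
import numpy as np

def ncc(image, template):
    """Normalized cross-correlation map over all valid template positions.
    Values are in [-1, 1]; a peak near 1 marks a candidate match."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    h, w = image.shape
    out = np.zeros((h - th + 1, w - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            win = image[y:y + th, x:x + tw]
            wz = win - win.mean()
            denom = tn * np.sqrt((wz ** 2).sum())
            if denom > 0:
                out[y, x] = (wz * t).sum() / denom
    return out
```

Because both window and template are mean-subtracted and scale-normalized, the score is invariant to local brightness and contrast, which is what makes it usable on unevenly lit barcode images.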

  10. Do Performance-Based Codes Support Universal Design in Architecture?

    Science.gov (United States)

    Grangaard, Sidse; Frandsen, Anne Kathrine

    2016-01-01

    The research project 'An analysis of the accessibility requirements' studies how Danish architectural firms experience the accessibility requirements of the Danish Building Regulations, and it examines their opinions on how future regulative models can support innovative and inclusive design - Universal Design (UD). The empirical material consists of input from six workshops, to which all 700 Danish architectural firms were invited, as well as eight group interviews. The analysis shows that the current prescriptive requirements are criticized for being too homogeneous, and possibilities for differentiation and zoning are requested. Consequently, a majority of professionals are interested in a performance-based model, because they believe such a model will support 'accessibility zoning': flexibility achieved through different levels of accessibility in different parts of a building, according to how each part performs. The common understanding of accessibility and UD is directly related to buildings like hospitals and care centers. When the objective is both innovative and inclusive architecture, the call for a performance-based model should be accompanied by a knowledge-enhancement effort in the building sector. Bloom's taxonomy of educational objectives is suggested as a tool for such a boost. The research project was financed by the Danish Transport and Construction Agency.

  11. Assessing the performance of a parallel MATLAB-based 3D convection code

    Science.gov (United States)

    Kirkpatrick, G. J.; Hasenclever, J.; Phipps Morgan, J.; Shi, C.

    2008-12-01

    We are currently building 2D and 3D MATLAB-based parallel finite element codes for mantle convection and melting. The codes use the MATLAB implementation of core MPI commands (e.g. Send, Receive, Broadcast) for message passing between computational subdomains. We have found that code development and algorithm testing are much faster in MATLAB than in our previous work coding in C or FORTRAN; this code was built from scratch with only 12 man-months of effort. The one extra cost with respect to C coding on a Beowulf cluster is the parallel MATLAB license for a >4-core cluster. Here we present preliminary results on the efficiency of MPI messaging in MATLAB on a small 4-machine, 16-core, 32 GB RAM Intel Q6600 processor-based cluster. Our code implements fully parallelized preconditioned conjugate gradients with a multigrid preconditioner. Our parallel viscous flow solver is currently 20% slower for a 1,000,000-DOF problem on a single core in 2D than the direct-solve MILAMIN MATLAB viscous flow solver. We have tested both continuous and discontinuous pressure formulations. We test with various configurations of network hardware, CPU speeds, and memory using our own and MATLAB's built-in cluster profiler. So far we have only explored relatively small (up to 1.6 GB RAM) test problems. We find that, with our current code and Intel memory controller bandwidth limitations, we can only get ~2.3 times the performance out of 4 cores compared with 1 core per machine. Even for these small problems, the code runs faster with message passing between 4 machines with one core each than on 1 machine with 4 cores and internal messaging (1.29x slower), or on 1 core (2.15x slower). It surprised us that for 2D ~1 GB-sized problems with only 3 multigrid levels, the direct solve on the coarsest mesh consumes time comparable to the iterative solve on the finest mesh - a penalty that is greatly reduced either by using a 4th multigrid level or by using an iterative solve at the coarsest grid level. We plan to
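
    As a minimal stand-in for the preconditioned conjugate gradient solver described above, here is a Jacobi-preconditioned CG in Python/NumPy (a sketch of the algorithm family only; the authors' code uses a multigrid preconditioner and MATLAB MPI, neither of which is reproduced here):

```python
import numpy as np

def pcg(A, b, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients with a Jacobi (diagonal) preconditioner."""
    M_inv = 1.0 / np.diag(A)        # Jacobi preconditioner: inverse of diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test system: 1-D Laplacian (a toy analogue of a discretized flow operator)
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b)
assert np.linalg.norm(A @ x - b) < 1e-8
```

    Swapping the Jacobi step for a multigrid V-cycle is what turns this into the solver family the abstract benchmarks.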

  12. The ASM Curriculum Guidelines for Undergraduate Microbiology: A Case Study of the Advocacy Role of Societies in Reform Efforts.

    Science.gov (United States)

    Horak, Rachel E A; Merkel, Susan; Chang, Amy

    2015-05-01

    A number of national reports, including Vision and Change in Undergraduate Biology Education: A Call to Action, have called for drastic changes in how undergraduate biology is taught. To that end, the American Society for Microbiology (ASM) has developed new Curriculum Guidelines for undergraduate microbiology that outline a comprehensive curriculum for any undergraduate introductory microbiology course or program of study. Designed to foster enduring understanding of core microbiology concepts, the Guidelines work synergistically with backwards course design to focus teaching on student-centered goals and priorities. In order to qualitatively assess how the ASM Curriculum Guidelines are used by educators and learn more about the needs of microbiology educators, the ASM Education Board distributed two surveys to the ASM education community. In this report, we discuss the results of these surveys (353 responses). We found that the ASM Curriculum Guidelines are being implemented in many different types of courses at all undergraduate levels. Educators indicated that the ASM Curriculum Guidelines were very helpful when planning courses and assessments. We discuss some specific ways in which the ASM Curriculum Guidelines have been used in undergraduate classrooms. The survey identified some barriers that microbiology educators faced when trying to adopt the ASM Curriculum Guidelines, including lack of time, lack of financial resources, and lack of supporting resources. Given the self-reported challenges to implementing the ASM Curriculum Guidelines in undergraduate classrooms, we identify here some activities related to the ASM Curriculum Guidelines that the ASM Education Board has initiated to assist educators in the implementation process.

  13. Novel video coding algorithm based on 3D-binDCT

    Institute of Scientific and Technical Information of China (English)

    NI Wei; GUO Bao-long; YANG Liu

    2005-01-01

    In this paper we propose a three-dimensional multiplierless discrete cosine transform (DCT) with lifting scheme called 3D-binDCT. Based on 3D-binDCT, a novel video coding algorithm without motion estimation/compensation is proposed. It uses the 3D-binDCT to exploit spatial or temporal redundancy. The computation of binDCT needs only shift and addition operations, so the computational complexity is minimized. DC coefficient prediction, a modified scan mode and arithmetic coding techniques are also adopted. Extensive simulation results show that the proposed coding scheme provides higher coding efficiency and improved visual quality, and it is easy to realize in software and hardware.

  14. Space-time Block Codes Based on Quasi-Orthogonal Designs

    Institute of Scientific and Technical Information of China (English)

    李正权; 胡光锐; 单红梅

    2004-01-01

    New space-time block codes based on quasi-orthogonal designs are put forward. First, the channel model is formulated. Then the connection between orthogonal/quasi-orthogonal designs and space-time block codes is explored. Finally, we simulate transmission at 4 bits/s/Hz and 6 bits/s/Hz over eight transmit antennas, using the rate-3/4 quasi-orthogonal space-time block code and the rate-1/2 full-diversity orthogonal space-time block code. Simulation results show that full transmission rate is more important at very low signal-to-noise ratio (SNR) and high bit error probability (BEP), while full diversity is more important at very high SNR and low BEP.
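
    The 4-antenna quasi-orthogonal building block behind such designs (the well-known Jafarkhani-style construction from two Alamouti blocks) can be sketched as follows; the paper's rate-3/4 eight-antenna code is a larger construction not reproduced here:

```python
# Quasi-orthogonal 4x4 code from two 2x2 Alamouti blocks:
#   C = [[ A,  B ],
#        [-B*, A*]]   (* = elementwise complex conjugate)
def alamouti(s1, s2):
    return [[s1, s2], [-s2.conjugate(), s1.conjugate()]]

def quasi_orthogonal(s1, s2, s3, s4):
    A, B = alamouti(s1, s2), alamouti(s3, s4)
    conj = lambda M: [[x.conjugate() for x in row] for row in M]
    negc = lambda M: [[-x.conjugate() for x in row] for row in M]
    nB, cA = negc(B), conj(A)
    return [A[0] + B[0], A[1] + B[1], nB[0] + cA[0], nB[1] + cA[1]]

C = quasi_orthogonal(1 + 1j, 1 + 0j, 0 + 1j, 2 + 0j)
col = lambda j: [C[r][j] for r in range(4)]
ip = lambda u, v: sum(a * b.conjugate() for a, b in zip(u, v))
assert abs(ip(col(0), col(1))) < 1e-12   # this column pair is orthogonal
assert abs(ip(col(0), col(3))) > 1e-6    # this pair is not: hence "quasi"-orthogonal
```

    The non-orthogonal column pairs are exactly what allows full rate (4 symbols in 4 slots) at the cost of full diversity, matching the rate/diversity trade-off discussed in the abstract.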

  15. Image coding based on maximum entropy partitioning for identifying improbable intensities related to facial expressions

    Indian Academy of Sciences (India)

    SEBA SUSAN; NANDINI AGGARWAL; SHEFALI CHAND; AYUSH GUPTA

    2016-12-01

    In this paper we investigate information-theoretic image coding techniques that assign longer codes to improbable, imprecise and non-distinct intensities in the image. When applied to cropped facial images of subjects with different facial expressions, the variable-length coding techniques highlight the set of low-probability intensities that characterize the facial expression, such as the creases in the forehead, the widening of the eyes, and the opening and closing of the mouth. A new coding scheme based on maximum entropy partitioning is proposed in our work, particularly to identify the improbable intensities related to different emotions. The improbable intensities, when used as a mask, decode the facial expression correctly, providing an effective platform for future emotion categorization experiments.
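
    The principle of assigning longer codes to improbable intensities is exactly what a Huffman code built over the intensity histogram does; a small sketch with a toy histogram (illustration of variable-length coding only, not the paper's maximum entropy partitioning):

```python
import heapq
from collections import Counter

def huffman_lengths(freqs):
    """Return the Huffman code length for each symbol given its frequency."""
    heap = [(f, i, (sym,)) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freqs}
    uid = len(heap)
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)   # merge the two rarest groups;
        f2, _, s2 = heapq.heappop(heap)   # their symbols gain one code bit
        for s in s1 + s2:
            lengths[s] += 1
        heapq.heappush(heap, (f1 + f2, uid, s1 + s2))
        uid += 1
    return lengths

# toy intensity histogram: intensity value -> pixel count
hist = Counter({200: 500, 128: 300, 90: 150, 40: 40, 10: 10})
L = huffman_lengths(hist)
assert L[10] > L[200]   # the rare intensity gets the longer code
```

    Masking pixels whose code length exceeds a threshold would select exactly the low-probability intensities the abstract associates with expression features.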

  16. Mining discriminative class codes for multi-class classification based on minimizing generalization errors

    Science.gov (United States)

    Eiadon, Mongkon; Pipanmaekaporn, Luepol; Kamonsantiroj, Suwatchai

    2016-07-01

    Error Correcting Output Codes (ECOC) have emerged as one of the most promising techniques for solving multi-class classification. In the ECOC framework, a multi-class problem is decomposed into several binary ones with a coding design scheme. Despite this, finding a suitable multi-class decomposition scheme is still an open research question in machine learning. In this work, we propose a novel multi-class coding design method to mine effective and compact class codes for multi-class classification. For a given n-class problem, the method decomposes the classes into subsets by embedding a structure of binary trees. We put forward a novel splitting criterion based on minimizing generalization errors across the classes. Then, a greedy search procedure is applied to explore the optimal tree structure for the problem domain. We run experiments on many multi-class UCI datasets. The experimental results show that our proposed method achieves better classification performance than the common ECOC design methods.
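
    The core ECOC mechanics can be shown with a minimal hand-made codebook and Hamming-distance decoding (the paper's tree-based code design is not reproduced; this only illustrates why redundant dichotomies can correct binary-classifier mistakes):

```python
# toy ECOC: 4 classes encoded with 6 binary dichotomies (one column = one
# binary classifier's target labels; decoding picks the nearest codeword)
code_book = {
    0: (0, 0, 0, 1, 1, 1),
    1: (0, 1, 1, 0, 0, 1),
    2: (1, 0, 1, 0, 1, 0),
    3: (1, 1, 0, 1, 0, 0),
}

def decode(bits):
    """Nearest codeword by Hamming distance."""
    return min(code_book,
               key=lambda c: sum(b != cb for b, cb in zip(bits, code_book[c])))

assert decode((0, 0, 0, 1, 1, 1)) == 0   # exact codeword
assert decode((0, 0, 1, 1, 1, 1)) == 0   # one binary classifier wrong: corrected
```

    In practice each column is the output of a trained binary classifier; a code with a larger minimum Hamming distance between rows tolerates more per-column errors.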

  17. Novel video coding algorithm based on 3D-binDCT

    Science.gov (United States)

    Ni, Wei; Guo, Bao-Long; Yang, Liu

    2005-11-01

    In this paper we propose a three-dimensional multiplierless discrete cosine transform (DCT) with lifting scheme called 3D-binDCT. Based on 3D-binDCT, a novel video coding algorithm without motion estimation/compensation is proposed. It uses the 3D-binDCT to exploit spatial or temporal redundancy. The computation of binDCT needs only shift and addition operations, so the computational complexity is minimized. DC coefficient prediction, a modified scan mode and arithmetic coding techniques are also adopted. Extensive simulation results show that the proposed coding scheme provides higher coding efficiency and improved visual quality, and it is easy to realize in software and hardware.
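
    The shift-and-add flavor of binDCT can be sketched with a single multiplierless lifting butterfly; the dyadic coefficients below are illustrative stand-ins, not the actual binDCT tables:

```python
# One lifting butterfly with dyadic (shift-implemented) coefficients.
# Three lifting steps approximate a plane rotation as used in DCT butterflies;
# integer in, integer out, exactly invertible -> no multipliers needed.
def lift_rotate(x0, x1):
    x1 = x1 - (x0 >> 1)   # predict step, coefficient 1/2 as a right-shift
    x0 = x0 + (x1 >> 1)   # update step
    x1 = x1 - (x0 >> 1)   # second predict step
    return x0, x1

def lift_rotate_inv(x0, x1):
    x1 = x1 + (x0 >> 1)   # undo the steps in reverse order
    x0 = x0 - (x1 >> 1)
    x1 = x1 + (x0 >> 1)
    return x0, x1

a, b = lift_rotate(100, 37)
assert lift_rotate_inv(a, b) == (100, 37)   # exactly invertible despite rounding
```

    Chaining such butterflies along each of the three dimensions is, in spirit, how a 3-D multiplierless transform is assembled from shifts and additions alone.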

  18. Memory-efficient contour-based region-of-interest coding of arbitrarily large images

    Science.gov (United States)

    Sadaka, Nabil G.; Abousleman, Glen P.; Karam, Lina J.

    2007-04-01

    In this paper, we present a memory-efficient, contour-based, region-of-interest (ROI) algorithm designed for ultra-low-bit-rate compression of very large images. The proposed technique is integrated into a user-interactive wavelet-based image coding system in which multiple ROIs of any shape and size can be selected and coded efficiently. The coding technique compresses region-of-interest and background (non-ROI) information independently by allocating more bits to the selected targets and fewer bits to the background data. This allows the user to transmit large images at very low bandwidths with lossy/lossless ROI coding, while preserving the background content to a certain level for contextual purposes. Extremely large images (e.g., 65,000 × 65,000 pixels) with multiple large ROIs can be coded with minimal memory usage by using intelligent ROI tiling techniques. The foreground information at the encoder/decoder is independently extracted for each tile without adding extra ROI side information to the bit stream. The arbitrary ROI contour is down-sampled and differential chain coded (DCC) for efficient transmission. ROI wavelet masks for each tile are generated and processed independently to handle any size image and any shape/size of overlapping ROIs. The resulting system dramatically reduces the data storage and transmission bandwidth requirements for large digital images with multiple ROIs.
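
    The differential chain coding (DCC) step mentioned above can be sketched for an 8-connected contour; the Freeman direction table below is one common convention, assumed here for illustration:

```python
# 8-connected Freeman chain code, then differential encoding of the codes.
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def chain_code(points):
    """Freeman code (0..7) for each step between consecutive contour points."""
    return [DIRS.index((x1 - x0, y1 - y0))
            for (x0, y0), (x1, y1) in zip(points, points[1:])]

def differential(codes):
    # successive differences mod 8: smooth contours yield small, repetitive
    # values that are cheap to entropy-code
    return [codes[0]] + [(b - a) % 8 for a, b in zip(codes, codes[1:])]

square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]   # unit square contour
cc = chain_code(square)
assert cc == [0, 2, 4, 6]
assert differential(cc) == [0, 2, 2, 2]   # constant turning -> constant symbol
```

    The constant differential stream for a regular contour is what makes DCC attractive before arithmetic or run-length coding of the ROI boundary.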

  19. A new hybrid coding for protein secondary structure prediction based on primary structure similarity.

    Science.gov (United States)

    Li, Zhong; Wang, Jing; Zhang, Shunpu; Zhang, Qifeng; Wu, Wuming

    2017-03-16

    The coding pattern of protein can greatly affect the prediction accuracy of protein secondary structure. In this paper, a novel hybrid coding method based on the physicochemical properties of amino acids and tendency factors is proposed for the prediction of protein secondary structure. The principal component analysis (PCA) is first applied to the physicochemical properties of amino acids to construct a 3-bit-code, and then the 3 tendency factors of amino acids are calculated to generate another 3-bit-code. Two 3-bit-codes are fused to form a novel hybrid 6-bit-code. Furthermore, we make a geometry-based similarity comparison of the protein primary structure between the reference set and the test set before the secondary structure prediction. We finally use the support vector machine (SVM) to predict those amino acids which are not detected by the primary structure similarity comparison. Experimental results show that our method achieves a satisfactory improvement in accuracy in the prediction of protein secondary structure.
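
    One plausible sketch of the PCA-to-3-bit-code step is to threshold the signs of each amino acid's top-3 principal component scores; the property matrix below is random toy data, and the sign-threshold rule is our assumption, not necessarily the paper's exact construction:

```python
import numpy as np

# toy stand-in for a 20-amino-acid x 7-physicochemical-property matrix
rng = np.random.default_rng(0)
props = rng.normal(size=(20, 7))

X = props - props.mean(axis=0)                 # center the properties
_, _, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:3].T                          # project onto top-3 PCs
codes = (scores > 0).astype(int)               # 3-bit code per amino acid

assert codes.shape == (20, 3)
assert set(codes.ravel()) <= {0, 1}            # each entry is a single bit
```

    Concatenating this 3-bit code with a second 3-bit code derived from tendency factors would give the 6-bit hybrid representation the abstract describes.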

  20. Prediction Method for Image Coding Quality Based on Differential Information Entropy

    Directory of Open Access Journals (Sweden)

    Xin Tian

    2014-02-01

    Full Text Available To meet the requirement of quality-based image coding, an approach to predicting the quality of image coding based on differential information entropy is proposed. First, some typical prediction approaches are introduced, and the differential information entropy is reviewed. Taking JPEG2000 as an example, the relationship between differential information entropy and the objective assessment indicator PSNR at a fixed compression ratio is established via data fitting, where the fitting constraint is to minimize the average error. Next, the relationship among differential information entropy, compression ratio and PSNR at various compression ratios is constructed, and this relationship is used as an indicator to predict image coding quality. Finally, the proposed approach is compared with some traditional approaches. The experiments show that differential information entropy has a better linear relationship with image coding quality than image activity does. The conclusion is that the proposed approach is capable of predicting image coding quality at low compression ratios with small errors, and can be widely applied in a variety of real-time space image coding systems thanks to its simplicity.
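
    A small sketch of one plausible reading of "differential information entropy": the Shannon entropy of first differences of pixel values (illustrative only; the paper's exact definition may differ):

```python
import math

def shannon_entropy(values):
    """Shannon entropy (bits) of the empirical distribution of values."""
    n = len(values)
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def differential_image(img):
    # horizontal first differences; their entropy serves as the predictor
    return [row[i + 1] - row[i] for row in img for i in range(len(row) - 1)]

img = [[10, 12, 12, 40], [10, 11, 13, 41]]   # toy 2x4 image
h = shannon_entropy(differential_image(img))
assert h > 0
assert shannon_entropy([5, 5, 5]) == 0.0     # constant signal: zero entropy
```

    Fitting a line from this entropy (at a given compression ratio) to measured PSNR is then the data-fitting step the abstract describes.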

  1. Wavelet-based image coding using saliency map

    Science.gov (United States)

    Vargic, Radoslav; Kučerová, Júlia; Polec, Jaroslav

    2016-11-01

    Visual information is very important in human perception of the surrounding world. During observation of a scene, some image parts are more salient than others. This fact is conventionally addressed using the regions-of-interest approach. We present an approach that captures saliency information on a per-pixel basis, using one continuous saliency map for the whole image, which is used directly in the lossy image compression algorithm. Although the notion of a region is no longer necessary for the encoding/decoding part of the algorithm, the resulting method can, by its nature, efficiently emulate large numbers of regions of interest with varying significance. We provide a reference implementation of this approach based on the set partitioning in hierarchical trees (SPIHT) algorithm and show that the proposed method is effective and has the potential to achieve significantly better results than the original SPIHT algorithm. The approach is not limited to the SPIHT algorithm and can be coupled with, e.g., JPEG 2000 as well.
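
    One simple way to fold a continuous per-pixel saliency map into an embedded coder is to pre-scale transform coefficients by saliency before bit-plane coding; this sketch illustrates that idea only and is not the paper's actual SPIHT modification:

```python
# Pre-scale coefficients with a continuous saliency map in [0, 1]:
# salient pixels are amplified, so an embedded bit-plane coder refines
# them earlier and they survive coarse quantization longer.
def weight_coefficients(coeffs, saliency, strength=2.0):
    return [[c * (1.0 + strength * s) for c, s in zip(crow, srow)]
            for crow, srow in zip(coeffs, saliency)]

coeffs   = [[4.0, -2.0], [1.0, 8.0]]
saliency = [[1.0,  0.0], [0.5, 0.0]]
w = weight_coefficients(coeffs, saliency)
assert w[0][0] == 12.0 and w[0][1] == -2.0   # salient pixel amplified, other untouched
```

    The decoder divides by the same weights, so the scheme needs the saliency map (or its compressed form) on both sides.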

  2. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe the codes succinctly using Gröbner bases.

  3. SPATIALLY SCALABLE RESOLUTION IMAGE CODING METHOD WITH MEMORY OPTIMIZATION BASED ON WAVELET TRANSFORM

    Institute of Scientific and Technical Information of China (English)

    Wang Na; Zhang Li; Zhou Xiao'an; Jia Chuanying; Li Xia

    2005-01-01

    This letter exploits fundamental characteristics of a wavelet-transformed image to form progressive octave-based spatial resolutions. Each wavelet subband is coded with a zeroblock and quadtree partitioning ordering scheme using a memory optimization technique. The proposed method is of low complexity and efficient for Internet plug-in software.

  4. A 3-layer coding scheme for biometry template protection based on spectral minutiae

    NARCIS (Netherlands)

    Shao, X.; Xu, H.; Veldhuis, Raymond N.J.; Slump, Cornelis H.

    2011-01-01

    Spectral Minutiae (SM) representation enables the combination of minutiae-based fingerprint recognition systems with template protection schemes based on fuzzy commitment, but it requires error-correcting codes that can handle high bit error rates (i.e. above 40%). In this paper, we propose a

  5. Comparisons of ANS, ASME, AWS, and NFPA standards cited in the NRC standard review plan, NUREG-0800, and related documents

    Energy Technology Data Exchange (ETDEWEB)

    Ankrum, A.R.; Bohlander, K.L.; Gilbert, E.R.; Spiesman, J.B. [Pacific Northwest Lab., Richland, WA (United States)

    1995-11-01

    This report provides the results of comparisons of the cited and latest versions of ANS, ASME, AWS and NFPA standards cited in the NRC Standard Review Plan for the Review of Safety Analysis Reports for Nuclear Power Plants (NUREG 0800) and related documents. The comparisons were performed by Battelle Pacific Northwest Laboratories in support of the NRC's Standard Review Plan Update and Development Program. Significant changes to the standards, from the cited version to the latest version, are described and discussed in a tabular format for each standard. Recommendations for updating each citation in the Standard Review Plan are presented. Technical considerations and suggested changes are included for related regulatory documents (i.e., Regulatory Guides and the Code of Federal Regulations) citing the standard. The results and recommendations presented in this document have not been subjected to NRC staff review.

  6. Body balance in asthmatic and non-asthmatic children and adolescents

    OpenAIRE

    Marta Cristina Rodrigues da Silva; Sara Teresinha Corazza; Juliana Izabel Katzer; Carlos Bolli Mota; Juliana Côrrea Soares

    2013-01-01

    The objective was to analyze and compare body balance in asthmatic and non-asthmatic children and adolescents. The study group comprised 24 subjects aged 7 to 14 years, divided into two groups: an asthmatic group and a control group. Body balance was assessed using a force platform under two conditions, eyes open and eyes closed, with three randomized trials lasting 30 seconds each. The results showed a significant difference between the...

  7. Nine-year-old children use norm-based coding to visually represent facial expression.

    Science.gov (United States)

    Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian

    2013-10-01

    Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average expression. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that, like adults, children's coding of facial expressions is norm-based.

  8. Code Syntax-Comparison Algorithm Based on Type-Redefinition-Preprocessing and Rehash Classification

    Directory of Open Access Journals (Sweden)

    Baojiang Cui

    2011-08-01

    Full Text Available Code comparison technology plays an important role in software security protection and plagiarism detection. There are currently five main approaches to plagiarism detection: file-attribute-based, text-based, token-based, syntax-based and semantic-based. The first three approaches have their own limitations, while the syntax-based technique suffers from limited detection ability and low efficiency, so none of these approaches meets the requirements of large-scale software plagiarism detection. Based on our prior research, we propose an algorithm for type-redefinition plagiarism detection, which can detect simple type redefinition, repeating-pattern redefinition, and redefinition of types with pointers. This paper also proposes a code syntax-comparison algorithm based on rehash classification, which enhances the node storage structure of the syntax tree and greatly improves efficiency.
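
    To make the token-based baseline concrete, here is a toy token-shingle similarity check (illustrative of the token-based approach the abstract contrasts against; the paper's syntax-tree and rehash-classification algorithm is not reproduced):

```python
import re

def tokens(src):
    """Split source into identifiers and single punctuation tokens."""
    return re.findall(r"[A-Za-z_]\w*|\S", src)

def shingles(toks, k=3):
    """Set of all k-grams of consecutive tokens."""
    return {tuple(toks[i:i + k]) for i in range(len(toks) - k + 1)}

def similarity(a, b, k=3):
    """Jaccard similarity of token shingles: 1.0 = identical token stream."""
    sa, sb = shingles(tokens(a), k), shingles(tokens(b), k)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

orig   = "int sum(int a, int b) { return a + b; }"
copied = "int add(int x, int y) { return x + y; }"   # renamed identifiers
assert similarity(orig, orig) == 1.0
assert 0.0 <= similarity(orig, copied) < 1.0
```

    Renaming identifiers already defeats much of this measure, which is why syntax-based comparison (as in the paper) is needed for stronger plagiarism detection.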

  9. SpECTRE: A Task-based Discontinuous Galerkin Code for Relativistic Astrophysics

    CERN Document Server

    Kidder, Lawrence E; Foucart, Francois; Schnetter, Erik; Teukolsky, Saul A; Bohn, Andy; Deppe, Nils; Diener, Peter; Hébert, François; Lippuner, Jonas; Miller, Jonah; Ott, Christian D; Scheel, Mark A; Vincent, Trevor

    2016-01-01

    We introduce a new relativistic astrophysics code, SpECTRE, that combines a discontinuous Galerkin method with a task-based parallelism model. SpECTRE's goal is to achieve more accurate solutions for challenging relativistic astrophysics problems such as core-collapse supernovae and binary neutron star mergers. The robustness of the discontinuous Galerkin method allows for the use of high-resolution shock capturing methods in regions where (relativistic) shocks are found, while exploiting high-order accuracy in smooth regions. A task-based parallelism model allows efficient use of the largest supercomputers for problems with a heterogeneous workload over disparate spatial and temporal scales. We argue that the locality and algorithmic structure of discontinuous Galerkin methods will exhibit good scalability within a task-based parallelism framework. We demonstrate the code on a wide variety of challenging benchmark problems in (non)-relativistic (magneto)-hydrodynamics. We demonstrate the code's scalability i...

  10. Effective wavelet-based compression method with adaptive quantization threshold and zerotree coding

    Science.gov (United States)

    Przelaskowski, Artur; Kazubek, Marian; Jamrogiewicz, Tomasz

    1997-10-01

    An efficient image compression technique, especially for medical applications, is presented. Dyadic wavelet decomposition using the Antonini and Villasenor filter banks is followed by adaptive space-frequency quantization and zerotree-based entropy coding of wavelet coefficients. Threshold selection and uniform quantization are made on the basis of a spatial variance estimate built on the lowest-frequency subband data set. The threshold value for each coefficient is evaluated as a linear function of a 9th-order binary context. After quantization, zerotree construction, pruning and arithmetic coding are applied for efficient lossless data coding. The presented compression method is less complex than the most effective EZW-based techniques but achieves comparable compression efficiency. Specifically, our method has efficiency similar to SPIHT in MR image compression, slightly better for CT images, and significantly better in US image compression. Thus the compression efficiency of the presented method is competitive with the best published algorithms in the literature across diverse classes of medical images.

  11. An improved method for image compression: context coding based on the shortest code length

    Institute of Scientific and Technical Information of China (English)

    罗迪; 张金鹏

    2014-01-01

    This paper introduces an adaptive shortest-code-length entropy coding method: context modeling is performed with weighted conditional probability distributions, and the image is then encoded with arithmetic coding to achieve compression.

  12. High Level Analysis, Design and Validation of Distributed Mobile Systems with CoreASM

    Science.gov (United States)

    Farahbod, R.; Glässer, U.; Jackson, P. J.; Vajihollahi, M.

    System design is a creative activity calling for abstract models that facilitate reasoning about the key system attributes (desired requirements and resulting properties) so as to ensure these attributes are properly established prior to actually building a system. We explore here the practical side of using the abstract state machine (ASM) formalism in combination with the CoreASM open source tool environment for high-level design and experimental validation of complex distributed systems. Emphasizing the early phases of the design process, a guiding principle is to support freedom of experimentation by minimizing the need for encoding. CoreASM has been developed and tested building on a broad scope of applications, spanning computational criminology, maritime surveillance and situation analysis. We critically reexamine here the CoreASM project in light of three different application scenarios.

  13. VLSI Architectures for Sliding-Window-Based Space-Time Turbo Trellis Code Decoders

    Directory of Open Access Journals (Sweden)

    Georgios Passas

    2012-01-01

    Full Text Available The VLSI implementation of SISO-MAP decoders used for traditional iterative turbo coding has been investigated in the literature. In this paper, a complete architectural model of a space-time turbo code receiver that includes elementary decoders is presented. These architectures are based on newly proposed building blocks such as a recursive add-compare-select-offset (ACSO) unit and A-, B-, Γ-, and LLR output calculation modules. Measurements of the complexity and decoding delay of several sliding-window-technique-based MAP decoder architectures, together with a proposed parameter set, lead to defining equations and a comparison between those architectures.

  14. Context-based coding of bilevel images enhanced by digital straight line analysis

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2006-01-01

    , or segmentation maps are also encoded efficiently. The algorithm is not targeted at document images with text, which can be coded efficiently with dictionary-based techniques as in JBIG2. The scheme is based on a local analysis of the digital straightness of the causal part of the object boundary, which is used in the context definition for arithmetic encoding. Tested on individual images of standard TV resolution binary shapes and the binary layers of a digital map, the proposed algorithm outperforms PWC, JBIG, JBIG2, and MPEG-4 CAE. On the binary shapes, the code lengths are reduced by 21%, 27%, 28%, and 41...

  15. Introduction into scientific work methods-a necessity when performance-based codes are introduced

    DEFF Research Database (Denmark)

    Dederichs, Anne; Sørensen, Lars Schiøtt

    The introduction of performance-based codes in Denmark in 2004 requires new competences from people working with different aspects of fire safety in the industry and the public sector. This abstract presents an attempt at reducing problems with handling and analysing the mathematical methods and CFD models when applying performance-based codes. This is done within the educational program "Master of Fire Safety Engineering" at the Department of Civil Engineering at the Technical University of Denmark. It was found that the students had general problems with academic methods. Therefore, a new

  16. Bounds on Subspace Codes Based on Subspaces of Type (m,1) in Singular Linear Space

    Directory of Open Access Journals (Sweden)

    You Gao

    2014-01-01

    Full Text Available The sphere-packing bound, Singleton bound, Wang-Xing-Safavi-Naini bound, Johnson bound, and Gilbert-Varshamov bound on the subspace codes (n+l, M, d, (m,1))_q based on subspaces of type (m,1) in the singular linear space F_q^(n+l) over the finite field F_q are presented. Then, we prove that codes based on subspaces of type (m,1) in singular linear space attain the Wang-Xing-Safavi-Naini bound if and only if they are certain Steiner structures in F_q^(n+l).

  17. A Contourlet-Based Embedded Image Coding Scheme on Low Bit-Rate

    Science.gov (United States)

    Song, Haohao; Yu, Songyu

    Contourlet transform (CT) is a new image representation method that can efficiently represent contours and textures in images. However, CT is an overcomplete transform with a redundancy factor of 4/3. If it is applied to image compression straightforwardly, the encoding bit-rate may increase to meet a given distortion. This fact has made it difficult for the coding community to develop CT-based image compression techniques with satisfactory performance. In this paper, we analyze the distribution of significant contourlet coefficients in different subbands and propose a new contourlet-based embedded image coding (CEIC) scheme for low bit-rates. The well-known wavelet-based embedded image coding (WEIC) algorithms such as EZW, SPIHT and SPECK can be easily integrated into the proposed scheme by constructing a virtual low-frequency subband, modifying the coding framework of WEIC algorithms according to the structure of contourlet coefficients, and adopting a high-efficiency significant-coefficient scanning scheme for the CEIC scheme. The proposed CEIC scheme provides an embedded bit-stream, which is desirable in heterogeneous networks. Our experiments demonstrate that the proposed scheme achieves better compression performance at low bit-rates. Furthermore, thanks to the contourlet adopted in the proposed scheme, more contours and textures in the coded images are preserved, ensuring superior subjective quality.

  18. Single-intensity-recording optical encryption technique based on phase retrieval algorithm and QR code

    Science.gov (United States)

    Wang, Zhi-peng; Zhang, Shuai; Liu, Hong-zhao; Qin, Yi

    2014-12-01

    Based on a phase retrieval algorithm and QR codes, a new optical encryption technology that needs to record only one intensity distribution is proposed. In this encryption process, the QR code is first generated from the information to be encrypted; the generated QR code is then placed in the input plane of a 4-f system and subjected to double random phase encryption. Since only one intensity distribution in the output plane is recorded as the ciphertext, the encryption process is greatly simplified. In the decryption process, the corresponding QR code is retrieved using a phase retrieval algorithm. A priori information about the QR code is used as a support constraint in the input plane, which helps solve the stagnation problem. The original information can be recovered without distortion by scanning the QR code. The encryption process can be implemented either optically or digitally, while the decryption process uses a digital method. In addition, the security of the proposed optical encryption technology is analyzed. Theoretical analysis and computer simulations show that this optical encryption system is invulnerable to various attacks and suitable for harsh transmission conditions.

  19. A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX.

    Science.gov (United States)

    Jabbari, Keyvan; Seuntjens, Jan

    2014-07-01

    An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport; the MCNPX code was used to generate the tracks. A set of data including the particle track was produced for each material (water, air, lung tissue, bone, and soft tissue). The code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code was evaluated against MCNPX as a reference code. While an analytical pencil-beam transport algorithm shows large errors (up to 10%) near small high-density heterogeneities, our dose calculations and isodose distributions deviated from the MCNPX results by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX: with the fast MC code developed in this work, it takes less than 2 minutes to calculate the dose for 10^6 particles on an Intel Core 2 Duo 2.66 GHz desktop computer.
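    A toy version of the pre-generated-track idea: a proton track is sampled once in a reference material (water) and then reused in other materials by rescaling the step lengths with the relative density. The materials, densities, step sizes, and straggling model below are illustrative assumptions, not data from the paper or from MCNPX.

```python
import random

# Pre-generated-track sketch: sample one reference track, then replay it
# in other materials with density-scaled step lengths.

random.seed(1)
REL_DENSITY = {"water": 1.0, "lung": 0.3, "bone": 1.8}   # illustrative

def pregenerate_track(energy_mev, step_mev=10.0):
    """One reference track in water: (step_length_cm, deposited_mev) pairs."""
    track, e = [], energy_mev
    while e > 0:
        dep = min(min(step_mev, e) * random.uniform(0.8, 1.2), e)
        track.append((0.5, dep))        # nominal 0.5 cm step in water
        e -= dep
    return track

def replay(track, material):
    """Reuse the water track: denser material -> shorter steps, same energies."""
    scale = 1.0 / REL_DENSITY[material]
    return [(s * scale, dep) for s, dep in track]

track = pregenerate_track(200.0)        # 200 MeV proton, as in the paper
in_bone = replay(track, "bone")
total_dep = sum(dep for _, dep in in_bone)
range_bone = sum(s for s, _ in in_bone)
range_water = sum(s for s, _ in track)
```

    The speedup in the paper comes from exactly this kind of reuse: the expensive physics is paid once per material and energy when the track library is built, and each subsequent particle history is a cheap table replay.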

  20. A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX

    Directory of Open Access Journals (Sweden)

    Keyvan Jabbari

    2014-01-01

    Full Text Available An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport; the MCNPX code was used to generate the tracks. A set of data including the particle track was produced for each material (water, air, lung tissue, bone, and soft tissue). The code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code was evaluated against MCNPX as a reference code. While an analytical pencil-beam transport algorithm shows large errors (up to 10%) near small high-density heterogeneities, our dose calculations and isodose distributions deviated from the MCNPX results by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX: with the fast MC code developed in this work, it takes less than 2 minutes to calculate the dose for 10^6 particles on an Intel Core 2 Duo 2.66 GHz desktop computer.

  1. QoS Based Capacity Enhancement for WCDMA Network with Coding Scheme

    Directory of Open Access Journals (Sweden)

    K.AYYAPPAN

    2010-03-01

    Full Text Available The wideband code division multiple access (WCDMA) based 3G and beyond cellular mobile wireless networks are expected to provide a diverse range of multimedia services to mobile users with guaranteed quality of service (QoS). Serving the diverse QoS requirements of these networks necessitates new radio resource management strategies for effective utilization of network resources together with coding schemes. Call admission control (CAC) is a significant component in wireless networks to guarantee quality of service requirements and also to enhance network resilience. In this paper, capacity enhancement for a WCDMA network with a convolutional coding scheme is discussed and compared with a block code and with no coding, to achieve a better balance between resource utilization and quality of service provisioning. The network model is valid for real-time (RT) and non-real-time (NRT) services having different data rates. Simulation results demonstrate the effectiveness of the network using convolutional codes in terms of capacity enhancement and the QoS of voice and video services.
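    A back-of-the-envelope view of how coding gain turns into admitted users in CDMA. The bound N ≤ 1 + (W/R)/(Eb/N0) is the classical single-cell CDMA capacity limit (other-cell interference and voice activity factors ignored); the chip rate, voice rate, Eb/N0 requirement, and coding gain below are illustrative assumptions, not values from the paper.

```python
# Single-cell CDMA uplink capacity bound, with and without coding gain.

W = 3.84e6   # WCDMA chip rate, chips/s
R = 12.2e3   # AMR voice bit rate, bit/s

def capacity(eb_n0_db):
    """Maximum simultaneous users for a required Eb/N0 given in dB."""
    eb_n0 = 10 ** (eb_n0_db / 10.0)
    return int(1 + (W / R) / eb_n0)

required_db = 5.0        # assumed uncoded Eb/N0 requirement
coding_gain_db = 2.0     # assumed convolutional coding gain

uncoded = capacity(required_db)
coded = capacity(required_db - coding_gain_db)
```

    In this toy setting a 2 dB coding gain increases the admissible voice-user count by more than half, which is the qualitative effect the paper measures when comparing convolutional coding against block coding and no coding.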

  2. GPU-accelerated 3D neutron diffusion code based on finite difference method

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Q.; Yu, G.; Wang, K. [Dept. of Engineering Physics, Tsinghua Univ. (China)

    2012-07-01

    The finite difference method, as a traditional numerical solution to the neutron diffusion equation, is considered simpler and more precise than coarse-mesh nodal methods, but its wide application has been limited by the huge memory and long computation times it requires. In recent years, the concept of general-purpose computing on GPUs has provided a powerful computational engine for scientific research. In this study, a GPU-accelerated multi-group 3D neutron diffusion code based on the finite difference method was developed. First, a clean-sheet neutron diffusion code (3DFD-CPU) was written in C++ on the CPU architecture, and later ported to GPUs under NVIDIA's CUDA platform (3DFD-GPU). The IAEA 3D PWR benchmark problem was calculated in the numerical test, where three different codes, including the original CPU-based sequential code, a HYPRE (High Performance Preconditioners)-based diffusion code, and CITATION, were used as counterpoints to test the efficiency and accuracy of the GPU-based program. The results demonstrate both high efficiency and adequate accuracy of the GPU implementation for the neutron diffusion equation. A speedup factor of about 46 was obtained using an NVIDIA GeForce GTX470 GPU card against a 2.50 GHz Intel Quad Q9300 CPU processor. Compared with the HYPRE-based code running in parallel on an 8-core tower server, a speedup of about 2 could still be observed. More encouragingly, without any mathematical acceleration technique, the GPU implementation ran about 5 times faster than CITATION, which was itself accelerated by the SOR method and Chebyshev extrapolation. (authors)
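    The finite-difference discretization underlying such a code is compact enough to sketch. Below is a one-group, 2-D fixed-source version (the paper's code is multi-group and 3-D) solved with plain Jacobi sweeps, the kind of embarrassingly parallel update that maps well to GPUs; all material data are illustrative, not the IAEA benchmark.

```python
import numpy as np

# One-group 2-D neutron diffusion, -D*lap(phi) + sig_a*phi = S, on a
# uniform grid with the 5-point stencil, solved by Jacobi iteration.
# The Jacobi update of every interior cell is independent, which is
# exactly why this scheme parallelizes well on a GPU.

n, h = 32, 1.0                 # grid size, mesh spacing (cm)
D, sig_a = 1.0, 0.02           # diffusion coefficient, absorption (1/cm)
S = np.zeros((n, n))
S[n // 2, n // 2] = 1.0        # point source at the centre

phi = np.zeros((n, n))
for _ in range(2000):
    new = np.zeros_like(phi)
    # sum of the four neighbours of every interior cell
    nbr = (phi[:-2, 1:-1] + phi[2:, 1:-1] + phi[1:-1, :-2] + phi[1:-1, 2:])
    # phi_ij = (D/h^2 * neighbours + S_ij) / (4D/h^2 + sig_a)
    new[1:-1, 1:-1] = (D / h**2 * nbr + S[1:-1, 1:-1]) / (4 * D / h**2 + sig_a)
    phi = new                  # boundary cells stay at zero flux

peak = phi[n // 2, n // 2]
```

    The flux peaks at the source and decays monotonically toward the zero-flux boundary; a production code replaces Jacobi with a preconditioned Krylov or SOR/Chebyshev scheme, which is the comparison the paper reports against HYPRE and CITATION.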

  3. CONSTRUCTION OF NONSYSTEMATIC LOW-DENSITY PARITY-CHECK CODES BASED ON SYMMETRIC BALANCED INCOMPLETE BLOCK DESIGN

    Institute of Scientific and Technical Information of China (English)

    Lin Dengsheng; Li Qiang; Li Shaoqian

    2008-01-01

    This paper studies nonsystematic Low-Density Parity-Check (LDPC) codes based on Symmetric Balanced Incomplete Block Design (SBIBD). First, it is concluded that the performance degradation of nonsystematic linear block codes is bounded by the average row weight of the generalized inverses of their generator matrices and the code rate. Then a class of nonsystematic LDPC codes constructed based on SBIBD is presented. Their characteristics include: both generator matrices and parity-check matrices are sparse and cyclic, which makes them simple to encode and decode; and codes of almost arbitrary rate can be easily constructed, so they are rate-compatible codes. Because the generator matrices have sparse generalized inverses, the performance of the proposed codes is only 0.15 dB away from that of traditional systematic LDPC codes.

  4. Low-Complexity Multiple Description Coding of Video Based on 3D Block Transforms

    Directory of Open Access Journals (Sweden)

    Norkin Andrey

    2007-01-01

    Full Text Available The paper presents a multiple description (MD) video coder based on three-dimensional (3D) transforms. Two balanced descriptions are created from a video sequence. In the encoder, the video sequence is represented in the form of a coarse sequence approximation (shaper), included in both descriptions, and a residual sequence (details), which is split between the two descriptions. The shaper is obtained by block-wise pruned 3D-DCT. The residual sequence is coded by 3D-DCT or a hybrid, LOT+DCT, 3D transform. The coding scheme is targeted to mobile devices. It has low computational complexity and improved robustness of transmission over unreliable networks. The coder is able to work at very low redundancies. The coding scheme is simple, yet it outperforms some MD coders based on motion-compensated prediction, especially in the low-redundancy region. The margin is up to 3 dB for reconstruction from one description.

  5. Low-Complexity Multiple Description Coding of Video Based on 3D Block Transforms

    Directory of Open Access Journals (Sweden)

    Andrey Norkin

    2007-02-01

    Full Text Available The paper presents a multiple description (MD) video coder based on three-dimensional (3D) transforms. Two balanced descriptions are created from a video sequence. In the encoder, the video sequence is represented in the form of a coarse sequence approximation (shaper), included in both descriptions, and a residual sequence (details), which is split between the two descriptions. The shaper is obtained by block-wise pruned 3D-DCT. The residual sequence is coded by 3D-DCT or a hybrid, LOT+DCT, 3D transform. The coding scheme is targeted to mobile devices. It has low computational complexity and improved robustness of transmission over unreliable networks. The coder is able to work at very low redundancies. The coding scheme is simple, yet it outperforms some MD coders based on motion-compensated prediction, especially in the low-redundancy region. The margin is up to 3 dB for reconstruction from one description.

  6. A code-based approach for labeling in complex irregular regions

    Institute of Scientific and Technical Information of China (English)

    Zhi-long LI; Jun-jie CAO; Xiu-ping LIU; Zhi-xun SU

    2009-01-01

    Labeling information in a complex irregular region is a procedure that occurs frequently in the sheet metal and furniture industries and is beneficial for parts management. A fast code-based labeler (FCBL) is proposed in this paper to accomplish this task. The region is first discretized and then encoded by the Freeman encoding technique, which captures the 2D regional information in 1D codes with redundancies omitted. We enhance the encoding scheme to make it more suitable for our complex problem. Based on the codes, searching algorithms are designed that can be extended with customized constraints. In addition, by introducing a smart optimal direction estimation, the labeling speed and accuracy of FCBL are significantly improved. Experiments with a large range of real data obtained from industrial factories demonstrate the stability and millisecond-level speed of FCBL. The proposed method has been integrated into a shipbuilding CAD system, where it plays a very important role in the ship parts labeling process.
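    The Freeman encoding the abstract relies on is a standard technique and easy to show in its basic 8-direction form: a discrete boundary is reduced to a start point plus one 3-bit symbol per step. This generic encoder/decoder pair illustrates only the representation; the FCBL-specific enhancements and search algorithms are not reproduced.

```python
# 8-direction Freeman chain code: symbols 0..7, counter-clockwise from East.
MOVES = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
CODE = {m: i for i, m in enumerate(MOVES)}

def freeman_encode(points):
    """Chain-code a path of 8-connected integer points: start + direction symbols."""
    codes = [CODE[(x1 - x0, y1 - y0)]
             for (x0, y0), (x1, y1) in zip(points, points[1:])]
    return points[0], codes

def freeman_decode(start, codes):
    """Rebuild the path from the start point and direction symbols."""
    pts, (x, y) = [start], start
    for c in codes:
        dx, dy = MOVES[c]
        x, y = x + dx, y + dy
        pts.append((x, y))
    return pts

# a unit square traversed counter-clockwise
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
start, codes = freeman_encode(square)
```

    The compression the abstract mentions is visible even here: a closed boundary of n steps needs one coordinate pair and n symbols of 3 bits each, instead of n full coordinate pairs.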

  7. Feature-based coding system: a new way of characterizing hypnosis styles.

    Science.gov (United States)

    Varga, Katalin; Kekecs, Zoltán

    2015-01-01

    In this pilot study, the authors introduce a new system to assess hypnosis style. The Feature-Based Coding System (FBCS) was applied to 24 standard individual hypnosis sessions, which were videotaped and coded according to both a previous coding system and the new one. In addition, both subjects and hypnotists filled in the Archaic Involvement Measure (AIM), the Phenomenology of Consciousness Inventory (PCI), and the Dyadic Interactional Harmony Questionnaire (DIH). The interrater agreement of FBCS was good, and the construct Maternal-Paternal Axis had good internal consistency (α = .95). Construct validity was also supported by the findings. Based on these results, a larger-scale study is warranted to further establish the reliability and usefulness of this tool.

  8. Optical information encryption based on incoherent superposition with the help of the QR code

    Science.gov (United States)

    Qin, Yi; Gong, Qiong

    2014-01-01

    In this paper, a novel optical information encryption approach is proposed with the help of the QR code. The method is based on the concept of incoherent superposition, which we introduce for the first time. The information to be encrypted is first transformed into the corresponding QR code, and the QR code is then encrypted analytically into two phase-only masks by use of the intensity superposition of two diffraction wave fields. The proposed method has several advantages over the previous interference-based method, such as a higher security level, better robustness against noise attack, a more relaxed working condition, and so on. Numerical simulation results and results captured with an actual smartphone are shown to validate our proposal.

  9. Serial Min-max Decoding Algorithm Based on Variable Weighting for Nonbinary LDPC Codes

    Directory of Open Access Journals (Sweden)

    Zhongxun Wang

    2013-09-01

    Full Text Available In this paper, we analyze the min-max decoding algorithm for nonbinary LDPC (low-density parity-check) codes and propose a serial min-max decoding algorithm. Combining this with weighted processing of the variable node messages, we finally propose a serial min-max decoding algorithm based on variable weighting for nonbinary LDPC codes. Simulations indicate that at a bit error rate of 10^-3, compared with the serial min-max decoding algorithm, the traditional min-max decoding algorithm, and the traditional min-sum algorithm, the serial min-max decoding algorithm based on variable weighting offers additional coding gains of 0.2 dB, 0.8 dB, and 1.4 dB, respectively, in an additive white Gaussian noise channel under binary phase shift keying modulation.

  10. GCP: Gossip-based Code Propagation for Large-scale Mobile Wireless Sensor Networks

    CERN Document Server

    Busnel, Yann; Fleury, Eric; Kermarrec, Anne-Marie

    2007-01-01

    Wireless sensor networks (WSN) have recently received increasing interest. They are now expected to be deployed for long periods of time, thus requiring software updates. Updating the software code automatically on a huge number of sensors is a tremendous task, as "by hand" updates can obviously not be considered, especially when all participating sensors are embedded on mobile entities. In this paper, we investigate an approach to automatically update software in mobile sensor-based applications when no localization mechanism is available. We leverage the peer-to-peer cooperation paradigm to achieve a good trade-off between reliability and scalability of code propagation. More specifically, we present the design and evaluation of GCP (Gossip-based Code Propagation), a distributed software update algorithm for mobile wireless sensor networks. GCP relies on two different mechanisms (piggy-backing and forwarding control) to improve significantly the load balance without sacrificing on the propagatio...

  11. Location Based Service in Indoor Environment Using Quick Response Code Technology

    Science.gov (United States)

    Hakimpour, F.; Zare Zardiny, A.

    2014-10-01

    Today, with the extensive use of smart mobile phones, larger screens, and the enrichment of mobile phones with Global Positioning System (GPS) technology, location based services (LBS) are considered by public users more than ever. Based on their position, users can receive desired information from different LBS providers. An LBS system generally includes five main parts: mobile devices, a communication network, a positioning system, a service provider, and a data provider. Many advances have been made in each of these parts; however, user positioning, especially in indoor environments, remains an essential and critical issue in LBS. It is well known that GPS performs too poorly inside buildings to provide usable indoor positioning. On the other hand, current indoor positioning technologies such as RFID or WiFi networks need different hardware and software infrastructures. In this paper, we propose a new method to overcome these challenges using Quick Response (QR) Code technology. A QR Code is a 2D barcode with a matrix structure consisting of black modules arranged in a square grid. Scanning and retrieving data from a QR Code is possible with any camera-enabled mobile phone simply by installing barcode reader software. This paper reviews the capabilities of QR Code technology and then discusses the advantages of using QR Codes in an indoor LBS (ILBS) system in comparison to other technologies. Finally, some prospects of using QR Codes are illustrated through the implementation of a scenario. The most important advantages of using this technology in ILBS are easy implementation, low cost, quick data retrieval, the possibility of printing QR Codes on different products, and no need for complicated hardware and software infrastructures.

  12. Spatial coding-based approach for partitioning big spatial data in Hadoop

    Science.gov (United States)

    Yao, Xiaochuang; Mokbel, Mohamed F.; Alarabi, Louai; Eldawy, Ahmed; Yang, Jianyu; Yun, Wenju; Li, Lin; Ye, Sijing; Zhu, Dehai

    2017-09-01

    Spatial data partitioning (SDP) plays a powerful role in distributed storage and parallel computing for spatial data. However, the skewed distribution of spatial data and the varying volume of spatial vector objects make it a significant challenge to ensure both optimal performance of spatial operations and data balance in the cluster. To tackle this problem, we propose a spatial-coding-based approach for partitioning big spatial data in Hadoop. This approach first compresses the whole body of big spatial data into a sensing information set (SIS) based on a spatial coding matrix, including spatial code, size, count, and other information. The SIS is then employed to build a spatial partitioning matrix, which is finally used to split all spatial objects into different partitions in the cluster. With our approach, neighbouring spatial objects can be partitioned into the same block, while data skew in the Hadoop distributed file system (HDFS) is minimized. The presented approach is compared against random-sampling-based partitioning in a case study, with three measurement standards: spatial index quality, data skew in HDFS, and range query performance. The experimental results show that our method based on the spatial coding technique can improve the query performance of big spatial data as well as the data balance in HDFS. We implemented and deployed this approach in Hadoop, and it is also able to efficiently support any other distributed big spatial data system.
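    A spatial-coding partitioner in miniature: each object gets a Z-order (Morton) code by interleaving the bits of its grid cell, and contiguous code ranges become partitions. This illustrates the general idea of code-based partitioning with locality preservation; the paper's sensing-information-set construction and its Hadoop integration are beyond this sketch.

```python
# Z-order (Morton) coding and range-based partitioning of 2-D points.

def morton(x, y, bits=8):
    """Interleave the bits of (x, y) into a single Z-order code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return code

def partition(objects, num_parts, bits=8):
    """Sort by Morton code, then cut into near-equal contiguous chunks."""
    ranked = sorted(objects, key=lambda p: morton(p[0], p[1], bits))
    size = -(-len(ranked) // num_parts)          # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

objs = [(x, y) for x in range(16) for y in range(16)]
parts = partition(objs, 4)
```

    Because the Z-order curve keeps nearby cells close in code space, neighbouring objects tend to land in the same partition, while cutting the sorted code list into equal chunks keeps the partitions balanced, the two goals the abstract identifies.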

  13. Definition of the basic DEMO tokamak geometry based on systems code studies

    Energy Technology Data Exchange (ETDEWEB)

    Meszaros, Botond, E-mail: botond.meszaros@efda.org [EFDA Power Plant Physics and Technology, Garching (Germany); Bachmann, Christian [EFDA Power Plant Physics and Technology, Garching (Germany); Kemp, Richard [CCFE, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Federici, Gianfranco [EFDA Power Plant Physics and Technology, Garching (Germany)

    2015-10-15

    Highlights: • The definition of the DEMO 2D geometry has been introduced. • A methodology to derive the DEMO radial and vertical builds from the PROCESS systems code results has been defined. • Other 2D and 3D geometrical assumptions required to create a sensible 3D configuration model of DEMO have been defined. - Abstract: This paper describes the methodology that has been developed and applied to derive the principal geometry of the main DEMO tokamak systems, in particular the radial and vertical cross section based on the systems code output parameters, while exact parameters are described elsewhere [1]. This procedure reviews the analysis of the radial and vertical build provided by the system code to verify critical integration interfaces, e.g. missing or too large gaps and/or insufficient thickness of components, and updates these dimensions based on results of more detailed analyses (e.g. neutronics, plasma scenario modelling, etc.) that were carried out outside of the system code in the past years. As well as providing a 3D configuration model of the DEMO tokamak for integrated engineering analysis, the results can also be used to refine the systems code model. This method, subject to continuous refinement, controls the derivation of the main machine parameters and ensures their coherence vis-à-vis a number of agreed controlled physics and engineering assumptions.

  14. Exact-Repair Minimum Bandwidth Regenerating Codes Based on Evaluation of Linearized Polynomials

    CERN Document Server

    Xie, Hongmei

    2012-01-01

    In this paper, we propose two new constructions of exact-repair minimum bandwidth regenerating (exact-MBR) codes. Both constructions obtain the encoded symbols by first treating the message vector over GF(q) as a linearized polynomial and then evaluating it over an extension field GF(q^m). The evaluation points are chosen so that the encoded symbols at any node are conjugates of each other, while corresponding symbols of different nodes are linearly dependent with respect to GF(q). These properties ensure that data repair can be carried out over the base field GF(q), instead of the matrix inversion over the extension field required by some existing exact-MBR codes. To the best of our knowledge, this approach is novel in the construction of exact-MBR codes. One of our constructions leads to exact-MBR codes with arbitrary parameters. These exact-MBR codes have higher data reconstruction complexities but lower data repair complexities than their counterparts based on the product-matrix approach; hence they may be suit...
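    The central algebraic object here, a linearized polynomial, is easy to demonstrate concretely: over GF(2^m), L(x) = a_0·x + a_1·x² + a_2·x⁴ + … is GF(2)-linear, so L(u + v) = L(u) + L(v). The sketch below verifies this in GF(2^8) using the AES reduction polynomial, chosen purely as a convenient modulus; the regenerating-code construction itself is not reproduced.

```python
# GF(2^8) arithmetic and a linearized-polynomial evaluation check.

MOD = 0x11B  # x^8 + x^4 + x^3 + x + 1 (the AES field polynomial)

def gf_mul(a, b):
    """Carry-less multiply in GF(2^8), reducing modulo MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= MOD
        b >>= 1
    return r

def gf_pow(a, n):
    """Square-and-multiply exponentiation in GF(2^8)."""
    r = 1
    while n:
        if n & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        n >>= 1
    return r

def linearized_eval(coeffs, x):
    """Evaluate L(x) = sum_i coeffs[i] * x^(2^i); '+' in the field is XOR."""
    acc = 0
    for i, a in enumerate(coeffs):
        acc ^= gf_mul(a, gf_pow(x, 2 ** i))
    return acc

coeffs = [0x03, 0x1D, 0x7F]           # arbitrary message symbols a_0, a_1, a_2
u, v = 0x57, 0xA6
lhs = linearized_eval(coeffs, u ^ v)  # L(u + v)
rhs = linearized_eval(coeffs, u) ^ linearized_eval(coeffs, v)
```

    The additivity L(u + v) = L(u) + L(v) follows because each term x ↦ x^(2^i) is a Frobenius power, and it is this linearity over the base field that lets repair proceed over GF(q) without extension-field matrix inversion.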

  15. A parallel code based on the discontinuous Galerkin method on three-dimensional unstructured meshes for the MHD equations

    Science.gov (United States)

    Li, Xujing; Zheng, Weiying

    2016-10-01

    A new parallel code based on the discontinuous Galerkin (DG) method for hyperbolic conservation laws on three-dimensional unstructured meshes has recently been developed. This code can be used for simulations of the MHD equations, which are very important in magnetically confined plasma research. The main challenges of MHD simulation in fusion include the complex geometry of the configurations, such as plasma in tokamaks, possibly discontinuous solutions, and large-scale computing. Our newly developed code is based on three-dimensional unstructured meshes, i.e. tetrahedra, which makes it flexible for arbitrary geometries. Second-order polynomials are used on each element and an HWENO-type limiter is applied. Accuracy tests show that our scheme reaches the desired third-order accuracy, and a nonlinear shock test demonstrates that the code can capture sharp shock transitions. Moreover, one of the advantages of DG compared with classical finite element methods is that the matrices to be solved are localized on each element, making parallelization easy. Several simulations, including kink instabilities in toroidal geometry, are presented. Supported by the Chinese National Magnetic Confinement Fusion Science Program 2015GB110003.

  16. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product-type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  17. Development of libraries for ORIGEN2 code based on JENDL-3.2

    Energy Technology Data Exchange (ETDEWEB)

    Suyama, Kenya; Katakura, Jun-ichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ishikawa, Makoto; Ohkawachi, Yasushi

    1998-03-01

    The JNDC working group 'Nuclide Generation Evaluation' has launched a project to produce libraries for the ORIGEN2 code based on the latest nuclear data library, JENDL-3.2, for current designs of LWR and FBR fuels. Many of these libraries are under validation. (author)

  18. Picture data compression coder using subband/transform coding with a Lempel-Ziv-based coder

    Science.gov (United States)

    Glover, Daniel R. (Inventor)

    1995-01-01

    Digital data coders/decoders are used extensively in video transmission. A digitally encoded video signal is separated into subbands. Separating the video into subbands allows transmission at low data rates. Once the data is separated into these subbands it can be coded and then decoded by statistical coders such as the Lempel-Ziv based coder.
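    The pipeline can be shown in miniature: a one-level integer Haar (S-transform) split separates the signal into a low band and a mostly empty high band, and a Lempel-Ziv-based coder (zlib here, standing in merely as a readily available LZ-family stand-in for the patent's coder) then compresses each subband. The test signal and parameters are illustrative.

```python
import zlib

# One-level lossless integer Haar (S-transform) split followed by
# Lempel-Ziv compression of each subband.

signal = [(i // 8) % 16 for i in range(4096)]   # slowly varying test signal

def s_transform(data):
    """Per-pair floor-average (low band) and difference (high band); lossless."""
    low = [(a + b) >> 1 for a, b in zip(data[::2], data[1::2])]
    high = [a - b for a, b in zip(data[::2], data[1::2])]
    return low, high

def s_inverse(low, high):
    """Exact inverse of s_transform."""
    out = []
    for l, h in zip(low, high):
        b = l - (h >> 1)
        out.extend([b + h, b])
    return out

low, high = s_transform(signal)
raw_size = len(zlib.compress(bytes(signal)))
sub_size = (len(zlib.compress(bytes(low))) +
            len(zlib.compress(bytes([h & 0xFF for h in high]))))
```

    For smooth signals the high band is dominated by zeros, which is precisely the redundancy a statistical coder exploits; this is the low-data-rate effect the patent attributes to coding the subbands rather than the raw samples.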

  19. On Rational Interpolation-Based List-Decoding and List-Decoding Binary Goppa Codes

    DEFF Research Database (Denmark)

    Beelen, Peter; Høholdt, Tom; Nielsen, Johan Sebastian Rosenkilde;

    2013-01-01

    We derive the Wu list-decoding algorithm for generalized Reed–Solomon (GRS) codes by using Gröbner bases over modules and the Euclidean algorithm as the initial algorithm instead of the Berlekamp–Massey algorithm. We present a novel method for constructing the interpolation polynomial fast. We give...

  20. Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code

    Science.gov (United States)

    Taherkhani, Ahmad; Malmi, Lauri

    2013-01-01

    In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concept of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…

  1. Sets of disjoint snakes based on a Reed-Muller code and covering the hypercube

    NARCIS (Netherlands)

    Van Zanten, A.J.; Haryanto, L.

    2008-01-01

    A snake-in-the-box code (or snake) of word length n is a simple circuit in an n-dimensional cube Q n , with the additional property that any two non-neighboring words in the circuit differ in at least two positions. To construct such snakes a straightforward, non-recursive method is developed based

  2. Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code

    Science.gov (United States)

    Taherkhani, Ahmad; Malmi, Lauri

    2013-01-01

    In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concept of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…

  3. Solid Warehouse Material Management System Based on ERP and Bar Code Technology

    Institute of Scientific and Technical Information of China (English)

    ZHANG Cheng; WANG Jie; YUAN Bing; WU Chao; HU Qiao-dan

    2004-01-01

    This paper presents a manufacturing material management system based on ERP, combined with industrial bar code information collection and material management. Extensive research is carried out on the system structure and function model, and a detailed application scheme is given.

  4. On the Security of Digital Signature Schemes Based on Error-Correcting Codes

    NARCIS (Netherlands)

    Xu, Sheng-bo; Doumen, J.M.; van Tilborg, Henk

    We discuss the security of digital signature schemes based on error-correcting codes. Several attacks to the Xinmei scheme are surveyed, and some reasons given to explain why the Xinmei scheme failed, such as the linearity of the signature and the redundancy of public keys. Another weakness is found

  5. Preliminary In-vivo Results For Spatially Coded Synthetic Transmit Aperture Ultrasound Based On Frequency Division

    DEFF Research Database (Denmark)

    Gran, Fredrik; Hansen, Kristoffer Lindskov; Jensen, Jørgen Arendt;

    2006-01-01

    This paper investigates the possibility of using spatial coding based on frequency division for in-vivo synthetic transmit aperture (STA) ultrasound imaging. When using spatial encoding for STA, it is possible to use several transmitters simultaneously and separate the signals at the receiver. Th...

  6. Sets of disjoint snakes based on a Reed-Muller code and covering the hypercube

    NARCIS (Netherlands)

    Van Zanten, A.J.; Haryanto, L.

    2008-01-01

    A snake-in-the-box code (or snake) of word length n is a simple circuit in an n-dimensional cube Q n , with the additional property that any two non-neighboring words in the circuit differ in at least two positions. To construct such snakes a straightforward, non-recursive method is developed based

  7. An optimized context-based adaptive binary arithmetic coding algorithm in progressive H.264 encoder

    Science.gov (United States)

    Xiao, Guang; Shi, Xu-li; An, Ping; Zhang, Zhao-yang; Gao, Ge; Teng, Guo-wei

    2006-05-01

    Context-based Adaptive Binary Arithmetic Coding (CABAC) is a new entropy coding method introduced in H.264/AVC that is highly efficient for video coding. In this method, the probability of the current symbol is estimated using a carefully designed adaptive context model that approaches the statistical characteristics of the source, and an arithmetic coding mechanism then largely removes the inter-symbol redundancy. Compared with the UVLC method of the prior standard, CABAC is more complex but reduces the bit rate efficiently. Based on a thorough analysis of the CABAC encoding and decoding methods, this paper proposes two methods, a sub-table method and a stream-reuse method, to improve the encoding efficiency of the H.264 JM reference code. In JM, the CABAC function produces the bits of every syntax element one by one, and the repeated multiplications in the CABAC function make it inefficient; the proposed algorithm creates tables beforehand and then produces all bits of a syntax element at once. Also in JM, intra-prediction and inter-prediction mode selection with different criteria is based on a rate-distortion optimization (RDO) model, one parameter of which is the bit rate produced by the CABAC operator. After mode selection, the CABAC stream is discarded and recalculated for the output stream. The proposed stream-reuse algorithm stores the stream created during mode selection in memory and reuses it in the encoding function. Experimental results show that the proposed algorithms achieve on average 17 to 78 MSEL higher speed for QCIF and CIF sequences, respectively, compared with the original JM algorithm, at the cost of only a little memory space. CABAC was realized in our progressive H.264 encoder.

  8. Antropometria e mastigação em crianças asmáticas Anthropometry and chewing in asthmatic children

    Directory of Open Access Journals (Sweden)

    Daniele Andrade da Cunha

    2009-01-01

    Full Text Available PURPOSES: to characterize facial anthropometry patterns in asthmatic children; to identify the presence of facial asymmetries in asthmatic and non-asthmatic children; and to relate the side of masticatory predominance to the presence of facial asymmetry in asthmatic and non-asthmatic children. METHODS: 60 children aged 6 to 10 years took part in the study. Of these, 30 had a chart diagnosis of moderate or severe asthma, and 30 children without asthma formed the control group. Facial anthropometric and masticatory evaluations of these children were carried out. RESULTS: the facial anthropometric measurements revealed no significant differences between the asthmatic and non-asthmatic groups. Regarding the presence of facial asymmetries, these were observed in the control group as well as in the asthmatic group. The predominant masticatory pattern in both groups was simultaneous bilateral chewing, and no significant association was found between facial asymmetry and the side of masticatory predominance. CONCLUSION: no significant differences were found between the control and asthmatic groups regarding the anthropometric measurements. Facial asymmetry was observed in both groups, and the simultaneous bilateral masticatory pattern predominated in both; however, no significant relation was observed between facial asymmetry and the side of masticatory predominance.

  9. A restructuring proposal based on MELCOR for severe accident analysis code development

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sun Hee; Song, Y. M.; Kim, D. H. [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-03-01

In order to develop a template based on the existing MELCOR code, the data saving and transferring methods currently used in MELCOR are addressed first. A naming convention for the constructed module is then suggested, and an automatic program to convert old variables into new derived-type variables has been developed. Finally, a restructured module for the SPR package has been developed for application to MELCOR. The current MELCOR code reserves fixed-size storage for four different data types and manages variable-sized data within the storage limit by stacking the data of the packages, using pointers to identify the variables belonging to each package. This technique makes the meaning of the variables difficult to grasp and wastes memory. The features of FORTRAN90, however, make it possible to allocate storage dynamically and to use user-defined data types, which led to the development of a restructured module for the SPR package. The developed module allows efficient memory handling and makes the code easier to understand. The template has been validated by comparing the results of the modified code with those from the existing code, and it is confirmed that the results are the same. The template for the SPR package suggested in this report points toward extending the template to the entire code. It is expected that the template will accelerate domestication of the code thanks to direct understanding of each variable and easy implementation of modified or newly developed models. 3 refs., 15 figs., 16 tabs. (Author)
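The contrast the abstract describes (pointer offsets into fixed shared storage versus named, dynamically allocated derived types) can be sketched outside FORTRAN90. The following Python sketch is a cross-language analogy only; the field names (`temperatures`, `pressures`) and offsets are hypothetical, not taken from MELCOR:

```python
from dataclasses import dataclass, field

# Old style: one flat fixed-size array per data type, with hand-maintained
# pointer offsets per package (offsets here are hypothetical).
flat_real = [0.0] * 6
SPR_TEMP_PTR, SPR_PRES_PTR = 0, 3

def old_get_temperatures():
    # Meaning of the slice is only known through offset bookkeeping.
    return flat_real[SPR_TEMP_PTR:SPR_TEMP_PTR + 3]

# Restructured style: a named, self-describing record allocated on demand,
# analogous to a FORTRAN90 derived type with allocatable components.
@dataclass
class SprPackage:
    temperatures: list = field(default_factory=list)
    pressures: list = field(default_factory=list)

spr = SprPackage(temperatures=[300.0, 310.0, 320.0],
                 pressures=[1.0e5, 1.1e5, 1.2e5])
print(spr.temperatures[1])   # meaning is explicit; no offset arithmetic
```

The named fields make each variable's meaning directly readable, which is the gain the report attributes to the restructured module.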

  10. gSeaGen: a GENIE-based code for neutrino telescopes

    CERN Document Server

    Distefano, Carla

    2016-01-01

The gSeaGen code is a GENIE-based application to generate neutrino-induced events in an underwater neutrino detector. The gSeaGen code is able to generate events induced by all neutrino flavours, taking into account topological differences between track-type and shower-like events. The neutrino interaction is simulated taking into account the density and the composition of the media surrounding the detector. The main features of gSeaGen will be presented together with some examples of its application within ANTARES and KM3NeT.

  11. gSeaGen: A GENIE-based code for neutrino telescopes

    Directory of Open Access Journals (Sweden)

    Distefano Carla

    2016-01-01

Full Text Available The gSeaGen code is a GENIE-based application to generate neutrino-induced events in an underwater neutrino detector. The gSeaGen code is able to generate events induced by all neutrino flavours, taking into account topological differences between track-type and shower-like events. The neutrino interaction is simulated taking into account the density and the composition of the media surrounding the detector. The main features of gSeaGen will be presented together with some examples of its application within ANTARES and KM3NeT.

  12. A Network-Coding Based Event Diffusion Protocol for Wireless Mesh Networks

    Science.gov (United States)

    Beraldi, Roberto; Alnuweiri, Hussein

Publish/subscribe is a well-known and powerful distributed programming paradigm with many potential applications. In this paper we consider the central problem of any pub/sub implementation, namely event dissemination, in the case of a Wireless Mesh Network. We propose a protocol based on non-trivial forwarding mechanisms that employ network coding as a central tool for supporting adaptive event dissemination while exploiting the broadcast nature of wireless transmissions. Our results show that network coding provides significant improvements to event diffusion compared with standard blind dissemination solutions, namely flooding and gossiping.
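The gain network coding offers over blind forwarding can be seen in the classic two-receiver XOR example; this is a minimal sketch of the general idea, not the paper's protocol:

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two events to disseminate; receiver R1 already holds p1, R2 already holds p2.
p1 = b"event-A"
p2 = b"event-B"

# One coded broadcast replaces two unicasts over the shared wireless medium.
coded = xor_bytes(p1, p2)

# Each receiver XORs out what it already knows to recover the missing event.
recovered_at_r1 = xor_bytes(coded, p1)   # R1 decodes p2
recovered_at_r2 = xor_bytes(coded, p2)   # R2 decodes p1
assert recovered_at_r1 == p2 and recovered_at_r2 == p1
```

The saving comes precisely from the broadcast nature of wireless transmission that the abstract highlights: one transmission serves both receivers.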

  13. Block-Based Adaptive Vector Lifting Schemes for Multichannel Image Coding

    Directory of Open Access Journals (Sweden)

    Amel Benazza-Benyahia

    2007-04-01

Full Text Available We are interested in lossless and progressive coding of multispectral images. To this end, nonseparable vector lifting schemes are used in order to exploit the spatial and the interchannel similarities simultaneously. The involved operators are adapted to the image contents thanks to block-based procedures grounded on an entropy optimization criterion. A vector encoding technique derived from EZW allows us to further improve the efficiency of the proposed approach. Simulation tests performed on remote sensing images show that a significant gain in terms of bit rate is achieved by the resulting adaptive coding method with respect to the non-adaptive one.
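The lossless property of lifting schemes comes from exactly invertible predict/update steps in integer arithmetic. A minimal 1-D Haar-style lifting sketch (without the paper's vector and adaptive operators) shows the perfect-reconstruction guarantee:

```python
def lift_forward(x):
    """Integer lifting: split into even/odd samples, predict, update."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]            # predict step
    approx = [e + (d >> 1) for e, d in zip(even, detail)]  # update step
    return approx, detail

def lift_inverse(approx, detail):
    """Undo the update and predict steps in reverse order."""
    even = [a - (d >> 1) for a, d in zip(approx, detail)]
    odd = [d + e for e, d in zip(even, detail)]
    x = []
    for e, o in zip(even, odd):
        x.extend([e, o])
    return x

x = [5, 7, 3, 4, 10, 10, 0, 2]
a, d = lift_forward(x)
assert lift_inverse(a, d) == x   # exact reconstruction -> lossless coding
```

Because every step is an integer add/subtract that the inverse repeats exactly, no rounding error accumulates, which is what makes lifting suitable for lossless and progressive coding.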

  14. Block-Based Adaptive Vector Lifting Schemes for Multichannel Image Coding

    Directory of Open Access Journals (Sweden)

    Pesquet Jean-Christophe

    2007-01-01

Full Text Available We are interested in lossless and progressive coding of multispectral images. To this end, nonseparable vector lifting schemes are used in order to exploit the spatial and the interchannel similarities simultaneously. The involved operators are adapted to the image contents thanks to block-based procedures grounded on an entropy optimization criterion. A vector encoding technique derived from EZW allows us to further improve the efficiency of the proposed approach. Simulation tests performed on remote sensing images show that a significant gain in terms of bit rate is achieved by the resulting adaptive coding method with respect to the non-adaptive one.

  15. Fountain code-based error control scheme for dimmable visible light communication systems

    Science.gov (United States)

    Feng, Lifang; Hu, Rose Qingyang; Wang, Jianping; Xu, Peng

    2015-07-01

In this paper, a novel error control scheme using Fountain codes is proposed for on-off keying (OOK) based visible light communication (VLC) systems. With Fountain codes, feedback needs to be sent back to the transmitter only when transmitted messages have been successfully recovered, yielding improved transmission efficiency, reduced protocol complexity and relatively little wireless link-layer delay. By employing scrambling techniques and complementing symbols, the fewest complementing symbols are needed to support arbitrary dimming target values, and the entropy of the encoded message is increased.
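The reason feedback is needed only on full recovery is the rateless nature of Fountain codes: the receiver collects coded symbols until a peeling decoder resolves all source blocks. A toy sketch with hand-picked coded symbols (a real LT code would draw symbol degrees from a robust soliton distribution):

```python
def peel_decode(coded, k):
    """Toy fountain-code peeling decoder.

    coded: list of (index_set, xor_value) pairs, one per received symbol;
    k: number of source blocks. Known blocks are substituted into the
    remaining symbols until degree-1 symbols reveal new blocks.
    """
    coded = [[set(s), v] for s, v in coded]
    out = [None] * k
    changed = True
    while changed:
        changed = False
        for entry in coded:
            s = entry[0]
            for i in list(s):                 # subtract already-known blocks
                if out[i] is not None:
                    s.discard(i)
                    entry[1] ^= out[i]
            if len(s) == 1:                   # degree-1 symbol: block recovered
                i = s.pop()
                if out[i] is None:
                    out[i] = entry[1]
                    changed = True
    return out

# Source blocks 3, 14, 15; each received symbol records which blocks were
# XORed together and the resulting value.
blocks = [3, 14, 15]
received = [({0}, 3), ({0, 1}, 3 ^ 14), ({1, 2}, 14 ^ 15)]
assert peel_decode(received, 3) == blocks
```

Any sufficiently large set of such symbols decodes the message, so the transmitter can keep emitting symbols until a single "done" acknowledgement arrives.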

  16. Adaptive efficient video transmission over the Internet based on congestion control and RS coding

    Institute of Scientific and Technical Information of China (English)

    黄伟红; 张福炎; 孙正兴

    2002-01-01

An approach based on adaptive congestion control and adaptive error recovery with RS (Reed-Solomon) coding is presented for efficient video transmission over the Internet. Featuring weighted moving-average rate control and TCP-friendliness, AVSP, a novel adaptive video streaming protocol, is designed with adjustable rate-control parameters so as to respond quickly to QoS fluctuations during video transmission over the Internet. Combined with the congestion control policy, an adaptive RS-coding error recovery scheme with variable parameters is presented to enhance the robustness of MPEG video transmission over the Internet within the total system bandwidth.
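The principle of packet-level erasure recovery can be illustrated with a single XOR parity packet. This is a much-simplified stand-in for RS coding (real RS(n, k) tolerates n-k losses; this sketch tolerates exactly one):

```python
def add_parity(packets):
    """Append one XOR parity packet over equal-length data packets."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return packets + [parity]

def recover(received):
    """received: packet list with exactly one entry set to None (the loss).
    XOR of all surviving packets reconstructs the missing one."""
    lost = received.index(None)
    rec = bytes(len(next(p for p in received if p is not None)))
    for p in received:
        if p is not None:
            rec = bytes(a ^ b for a, b in zip(rec, p))
    out = list(received)
    out[lost] = rec
    return out[:-1]                      # drop the parity packet

frames = [b"frm1", b"frm2", b"frm3"]
sent = add_parity(frames)
sent[1] = None                           # one frame lost in the network
assert recover(sent) == frames
```

Making the code parameters adaptive, as the paper does with RS, means trading more parity packets for resilience when congestion feedback indicates higher loss.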

  17. Dataset for petroleum based stock markets and GAUSS codes for SAMEM.

    Science.gov (United States)

    Khalifa, Ahmed A A; Bertuccelli, Pietro; Otranto, Edoardo

    2017-02-01

This article includes a unique data set of balanced daily (Monday, Tuesday and Wednesday) observations of oil and natural gas volatility and of the stock markets of the oil-rich economies Saudi Arabia, Qatar, Kuwait, Abu Dhabi, Dubai, Bahrain and Oman, over the period spanning Oct. 18, 2006-July 30, 2015. Additionally, we have included unique GAUSS codes for estimating the spillover asymmetric multiplicative error model (SAMEM) with an application to petroleum-based stock markets. The data, the model and the codes have many applications in business and social science.

  18. Dataset for petroleum based stock markets and GAUSS codes for SAMEM

    Directory of Open Access Journals (Sweden)

    Ahmed A.A. Khalifa

    2017-02-01

Full Text Available This article includes a unique data set of balanced daily (Monday, Tuesday and Wednesday) observations of oil and natural gas volatility and of the stock markets of the oil-rich economies Saudi Arabia, Qatar, Kuwait, Abu Dhabi, Dubai, Bahrain and Oman, over the period spanning Oct. 18, 2006-July 30, 2015. Additionally, we have included unique GAUSS codes for estimating the spillover asymmetric multiplicative error model (SAMEM) with an application to petroleum-based stock markets. The data, the model and the codes have many applications in business and social science.

  19. Image fractal coding algorithm based on complex exponent moments and minimum variance

    Science.gov (United States)

    Yang, Feixia; Ping, Ziliang; Zhou, Suhua

    2017-02-01

Image fractal coding achieves very high compression ratios, but its main problem is slow coding. An algorithm based on Complex Exponent Moments (CEM) and minimum variance is proposed to speed up fractal coding. The definition of CEM and its FFT algorithm are presented, and the multi-distorted invariance of CEM is discussed; this invariance fits the fractal property of an image. The optimal matching pairs of range blocks and domain blocks in an image are determined by minimizing the variance of their CEM. Theoretical analysis and experimental results show that the algorithm dramatically reduces the iteration time and speeds up the image encoding and decoding process.

  20. Real-Coded Quantum-Inspired Genetic Algorithm-Based BP Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Jianyong Liu

    2015-01-01

Full Text Available A method using the real-coded quantum-inspired genetic algorithm (RQGA) to optimize the weights and thresholds of a BP neural network is proposed, to overcome the defect that the gradient descent method makes the algorithm fall easily into local optima during learning. The quantum genetic algorithm (QGA) has good global optimization ability, but the conventional QGA is based on binary coding, and the encoding and decoding processes slow down the calculation. RQGA is therefore introduced to explore the search space, and an improved variable learning rate is adopted to train the BP neural network. Simulation tests show that the proposed algorithm rapidly converges to solutions conforming to the constraint conditions.
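The advantage of real coding that the abstract cites (no binary encode/decode step) can be sketched with a minimal real-coded GA. This toy sketch optimizes a sphere objective rather than BP-network training error, and uses plain (not quantum-inspired) operators:

```python
import random

def real_coded_ga(fitness, dim, pop_size=30, gens=80, seed=7):
    """Minimal real-coded GA: elitism, tournament selection, arithmetic
    crossover, Gaussian mutation, all directly on real-valued genes."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        elite = min(pop, key=fitness)
        nxt = [elite]                                 # keep the best individual
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 3), key=fitness)  # tournament selection
            b = min(rng.sample(pop, 3), key=fitness)
            child = [x + rng.random() * (y - x) for x, y in zip(a, b)]
            child = [g + rng.gauss(0, 0.1) if rng.random() < 0.2 else g
                     for g in child]                  # real-valued mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Toy objective: minimize the sphere function (optimum at the origin). In the
# paper's setting, fitness would instead be the BP network's training error
# over its weights and thresholds.
best = real_coded_ga(lambda w: sum(g * g for g in w), dim=3)
assert sum(g * g for g in best) < 0.5
```

Every operator here manipulates floating-point genes directly, which is the calculation-speed argument made against binary-coded QGA.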

  1. ERSN-OpenMC, a Java-based GUI for OpenMC Monte Carlo code

    Directory of Open Access Journals (Sweden)

    Jaafar EL Bakkali

    2016-07-01

Full Text Available OpenMC is a new Monte Carlo particle transport simulation code focused on solving two types of neutronics problems: k-eigenvalue criticality fission source problems and external fixed fission source problems. OpenMC does not have a graphical user interface; one is provided by our Java-based application named ERSN-OpenMC. The main feature of this application is to give users an easy-to-use and flexible graphical interface to build better and faster simulations, with less effort and great reliability. Additionally, this graphical tool was developed with several features, such as the ability to automate the building process of the OpenMC code and related libraries, and users are given the freedom to customize their installation of this Monte Carlo code. A full description of the ERSN-OpenMC application is presented in this paper.

  2. Design and implementation of H.264 based embedded video coding technology

    Science.gov (United States)

    Mao, Jian; Liu, Jinming; Zhang, Jiemin

    2016-03-01

In this paper, an embedded system for remote online video monitoring was designed and developed to capture and record real-time circumstances in an elevator. To improve the efficiency of video acquisition and processing, the system uses the Samsung S5PV210 chip, which integrates a graphics processing unit, as the core processor, and the video is encoded in H.264 format for efficient storage and transmission. Based on the S5PV210 chip, hardware video coding technology was researched, which is more efficient than software coding. Running tests proved that hardware video coding can obviously reduce the cost of the system and obtain smoother video display. It can be widely applied for security supervision [1].

  3. REVERSE DESIGN APPROACH FOR MECHANISM TRAJECTORY BASED ON CODE-CHAINS MATCHING

    Institute of Scientific and Technical Information of China (English)

    ZHANG Shuyou; YI Guodong; XU Xiaofeng

    2007-01-01

To address the problem of mechanism reverse design, a method based on the matching of trajectory code-chains is presented. The motion trajectory of a mechanism is described with a code-chain, which is normalized to simplify geometric transformation operations. Geometric transformation formulas for scale, mirror and rotation of trajectory code-chains are defined, and the reverse design of a mechanism trajectory is realized through the analysis and solution of similarity matching between the desired trajectory and the predefined trajectory. An algorithm program and prototype system for mechanism-trajectory reverse design are developed. Application samples show that the method removes the restriction of trajectory patterns in matching, meets the demand of partial matching, and overcomes the influence of geometric transformations of the trajectory on the reverse design of the mechanism.
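Describing a trajectory as a code-chain and normalizing it against geometric transformations can be sketched with the classic 8-direction Freeman chain code and its rotation-invariant first difference; this is a simplification standing in for the paper's own code-chain formulas:

```python
# 8 grid directions, counter-clockwise from (1, 0).
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def chain_code(points):
    """Freeman chain code of a polyline sampled on unit grid steps."""
    code = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        code.append(DIRS.index((x1 - x0, y1 - y0)))
    return code

def normalize(code):
    """First-difference normalization: invariant to rotations by multiples
    of 45 degrees, the kind of transformation independence matching needs."""
    return [(b - a) % 8 for a, b in zip(code, code[1:])]

square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
rotated = [(0, 0), (0, 1), (-1, 1), (-1, 0), (0, 0)]  # same square, rotated 90 deg
assert chain_code(square) == [0, 2, 4, 6]
assert normalize(chain_code(square)) == normalize(chain_code(rotated))
```

Matching normalized code-chains rather than raw trajectories is what lets a desired trajectory be found regardless of how the predefined one is oriented.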

  4. Simulation of 2D Kinetic Effects in Plasmas using the Grid Based Continuum Code LOKI

    Science.gov (United States)

    Banks, Jeffrey; Berger, Richard; Chapman, Tom; Brunner, Stephan

    2016-10-01

    Kinetic simulation of multi-dimensional plasma waves through direct discretization of the Vlasov equation is a useful tool to study many physical interactions and is particularly attractive for situations where minimal fluctuation levels are desired, for instance, when measuring growth rates of plasma wave instabilities. However, direct discretization of phase space can be computationally expensive, and as a result there are few examples of published results using Vlasov codes in more than a single configuration space dimension. In an effort to fill this gap we have developed the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The code is designed to reduce the cost of phase-space computation by using fully 4th order accurate conservative finite differencing, while retaining excellent parallel scalability that efficiently uses large scale computing resources. In this poster I will discuss the algorithms used in the code as well as some aspects of their parallel implementation using MPI. I will also overview simulation results of basic plasma wave instabilities relevant to laser plasma interaction, which have been obtained using the code.

  5. Fast Image Coding Algorithm Using Indirect-Index Codebook Based on SMVQ

    Institute of Scientific and Technical Information of China (English)

    Bin-Bin Xia; An-Hong Wang; Chin-Chen Chang; Li Liu

    2016-01-01

Side-match vector quantization (SMVQ) achieves better compression performance than vector quantization (VQ) in image coding due to its exploitation of the dependence between adjacent pixels. However, SMVQ has the disadvantage of requiring excessive coding time. Therefore, this paper proposes a fast image coding algorithm using an indirect-index codebook based on SMVQ (IIC-SMVQ) to reduce the coding time. Two codebooks, named the indirect-index codebook (II-codebook) and the entire-state codebook (ES-codebook), are trained and utilized. The II-codebook is trained with the Linde-Buzo-Gray (LBG) algorithm from side-match information, while the ES-codebook is generated from the clustered residual blocks on the basis of the II-codebook. According to the relationship between these two codebooks, a codeword in the II-codebook can be regarded as an indicator to construct a fast search path, which guides quick determination of the state codebook from the ES-codebook to encode the to-be-encoded block. The experimental results confirm that the coding time of the proposed scheme is shorter than that of the previous SMVQ.

  6. A Radiation Chemistry Code Based on the Green's Function of the Diffusion Equation

    Science.gov (United States)

    Plante, Ianik; Wu, Honglu

    2014-01-01

Stochastic radiation track structure codes are of great interest for space radiation studies and hadron therapy in medicine. These codes are used for many purposes, notably for microdosimetry and DNA damage studies. In the last two decades, they have also been used with the Independent Reaction Times (IRT) method in the simulation of chemical reactions, to calculate the yield of various radiolytic species produced during the radiolysis of water and in chemical dosimeters. Recently, we have developed a Green's function based code to simulate reversible chemical reactions with an intermediate state, which yielded results in excellent agreement with those obtained by using the IRT method. This code was also used to simulate the interaction of particles with membrane receptors. We are in the process of including this program in the Monte Carlo track structure code Relativistic Ion Tracks (RITRACKS). This addition should greatly expand the capabilities of RITRACKS, notably to simulate DNA damage by both the direct and indirect effect.

  7. Binary Large Object-Based Approach for QR Code Detection in Uncontrolled Environments

    Directory of Open Access Journals (Sweden)

    Omar Lopez-Rincon

    2017-01-01

Full Text Available Quick Response (QR) barcode detection in non-arbitrary environments is still a challenging task despite many existing applications for finding 2D symbols. The main disadvantage of recent applications for QR code detection is low performance for rotated and distorted single or multiple symbols in images with variable illumination and the presence of noise. In this paper, a particular solution for QR code detection in uncontrolled environments is presented. The proposal consists in recognizing geometrical features of the QR code using a binary large object (BLOB) based algorithm with subsequent iterative filtering of the QR symbol position detection patterns, which does not require complex processing or the training of classifiers frequently used for these purposes. High precision and speed are achieved by adaptive threshold binarization of integral images. In contrast to well-known scanners, which fail to detect QR codes with medium to strong blurring, significant nonuniform illumination, considerable symbol deformation, and noise, the proposed technique provides a high recognition rate of 80%-100% at a speed compatible with real-time applications. In particular, the speed varies from 200 ms to 800 ms per single or multiple QR codes detected simultaneously in images with resolutions from 640 × 480 to 4080 × 2720, respectively.
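Adaptive threshold binarization over integral images, the step credited for the method's precision and speed, can be sketched as follows; the window size and bias are illustrative parameters, not values from the paper:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        run = 0
        for x in range(w):
            run += img[y][x]
            ii[y][x] = run + (ii[y - 1][x] if y else 0)
    return ii

def adaptive_binarize(img, win=3, bias=0):
    """Threshold each pixel against its local window mean, computed in O(1)
    per pixel from the integral image. The local threshold is what keeps dark
    QR modules separable under uneven illumination."""
    h, w = len(img), len(img[0])
    ii = integral_image(img)
    def area(y0, x0, y1, x1):            # inclusive rectangle sum
        s = ii[y1][x1]
        if y0: s -= ii[y0 - 1][x1]
        if x0: s -= ii[y1][x0 - 1]
        if y0 and x0: s += ii[y0 - 1][x0 - 1]
        return s
    out = [[0] * w for _ in range(h)]
    r = win // 2
    for y in range(h):
        for x in range(w):
            y0, x0 = max(0, y - r), max(0, x - r)
            y1, x1 = min(h - 1, y + r), min(w - 1, x + r)
            n = (y1 - y0 + 1) * (x1 - x0 + 1)
            # Compare pixel * n with the window sum to avoid division.
            out[y][x] = 1 if img[y][x] * n > area(y0, x0, y1, x1) + bias else 0
    return out

img = [[10, 10, 10], [10, 50, 10], [10, 10, 10]]
out = adaptive_binarize(img)
assert out[1][1] == 1 and out[0][0] == 0   # bright center survives, flat area does not
```

Because each window sum costs four table lookups regardless of window size, the per-pixel cost is constant, which is where the real-time speed comes from.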

  8. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    Science.gov (United States)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, serve as a secret key shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. The measurement results from the GI optical system's bucket detector constitute the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image using the GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. In the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.

  9. Analysis of Cooperative Networks Based on WiMAX LDPC Code

    Directory of Open Access Journals (Sweden)

    M.B. Khan

    2014-11-01

Full Text Available This study focuses on the performance analysis of cooperative communication networks based on WiMAX Low Density Parity Check (LDPC) codes. The capacity-approaching LDPC coding technique is used together with the coding-gain method Bit-Interleaved Coded Modulation with Iterative Decoding (BICM-ID). Different fading environments are analyzed to counter the challenges in wireless communication, and solutions are provided for the drawbacks of multiple-input multiple-output (MIMO) technology. Relays are used in cooperative communication networks to increase range and link reliability at lower transmit power: once the transmitted signal loses strength it is amplified at the relay node, and when it suffers from noise it is also decoded at the relay node, which increases link reliability. LDPC codes with iterative decoding attain BER performance within a small number of decibels of the Shannon limit. This performance analysis opens the way for WiMAX technology to be used with cooperative networks employing LDPC codes. The above-mentioned communication system provides rate, range and reliability at lower cost, lower complexity and lower transmit power.

  10. A coded structured light system based on primary color stripe projection and monochrome imaging.

    Science.gov (United States)

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-10-14

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy.
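The time-multiplexing Gray-code strategy used as the comparison baseline projects one stripe pattern per bit; because adjacent projector columns differ in a single bit, a decoding error at a stripe boundary shifts the result by at most one column. A minimal sketch:

```python
def to_gray(n):
    """Binary-reflected Gray code of integer n."""
    return n ^ (n >> 1)

def from_gray(g):
    """Inverse Gray mapping by cumulative XOR."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def stripe_patterns(n_bits):
    """One projected pattern per bit: pattern[b][col] is the stripe intensity
    (0 or 1) shown at projector column col for bit b, MSB first."""
    cols = 1 << n_bits
    return [[(to_gray(c) >> (n_bits - 1 - b)) & 1 for c in range(cols)]
            for b in range(n_bits)]

# A camera pixel observing the bit sequence across the n_bits exposures can
# recover which projector column illuminated it.
pats = stripe_patterns(4)
col = 9
seen = [pats[b][col] for b in range(4)]
decoded = from_gray(int("".join(map(str, seen)), 2))
assert decoded == col
```

This is the scheme the colored-stripe method is benchmarked against: it needs n_bits projected images, which is exactly the image count the paper's spatio-temporal coding seeks to reduce.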

  11. A Coded Structured Light System Based on Primary Color Stripe Projection and Monochrome Imaging

    Directory of Open Access Journals (Sweden)

    Armando Viviano Razionale

    2013-10-01

    Full Text Available Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy.

  12. Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network

    Directory of Open Access Journals (Sweden)

    Kai Lin

    2016-07-01

    Full Text Available With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses the functions of data fusion, multi-channel and network coding to improve the data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC. The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S evidence theory to classify the sensor nodes into different classes according to the data content. By using the result of the classification, the CMNC algorithm also provides the channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared to other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods.
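The fusion-driven classification model rests on Dempster's rule of combination; a minimal sketch over two hypothetical node classes (the class names stand in for the paper's content-based classes):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions whose focal elements are
    frozensets; mass assigned to conflicting (empty-intersection) pairs is
    discarded and the rest renormalized."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}

# Two sensor nodes reporting evidence over classes {'hi', 'lo'}.
A, B = frozenset({'hi'}), frozenset({'lo'})
m1 = {A: 0.8, B: 0.2}
m2 = {A: 0.6, B: 0.4}
fused = dempster_combine(m1, m2)
assert abs(fused[A] - 0.48 / 0.56) < 1e-9   # agreement on 'hi' is reinforced
```

Combining the two reports concentrates mass on the class both nodes support, which is the basis for assigning nodes to classes before channel assignment and coding.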

  13. A wavelet packet based block-partitioning image coding algorithm with rate-distortion optimization

    Institute of Scientific and Technical Information of China (English)

    YANG YongMing; XU Chao

    2008-01-01

    As an elegant generalization of wavelet transform, wavelet packet (WP) provides an effective representation tool for adaptive waveform analysis. Recent work shows that image-coding methods based on WP decomposition can achieve significant gain over those based on a usual wavelet transform. However, most of the work adopts a tree-structured quantization scheme, which is a successful technique for wavelet image coding, but not appropriate for WP subbands. This paper presents an image-coding algorithm based on a rate-distortion optimized wavelet packet decomposition and on an intraband block-partitioning scheme. By encoding each WP subband separately with the block-partitioning algorithm and the JPEG2000 context modeling, the proposed algorithm naturally avoids the difficulty in defining parent-offspring relationships for the WP coefficients, which has to be faced when adopting the tree-structured quantization scheme. The experimental results show that the proposed algorithm significantly outperforms SPIHT and JPEG2000 schemes and also surpasses state-of-the-art WP image coding algorithms, in terms of both PSNR and visual quality.

  14. Further validation of liquid metal MHD code for unstructured grid based on OpenFOAM

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Jingchao; Chen, Hongli, E-mail: hlchen1@ustc.edu.cn; He, Qingyun; Ye, Minyou

    2015-11-15

Highlights: • A specific correction scheme has been adopted to correct the results for non-orthogonal meshes. • The MHD code developed on the OpenFOAM platform has been validated by benchmark cases under uniform and non-uniform magnetic fields in round and rectangular ducts. • ALEX experimental results have been used to validate the MHD code based on OpenFOAM. - Abstract: In fusion liquid metal blankets, complex geometries involving contractions, expansions, bends, and manifolds are very common. The characteristics of liquid metal flow in these geometries are significant. In order to extend the magnetohydrodynamic (MHD) solver developed on the OpenFOAM platform to complex geometries, an MHD solver based on unstructured meshes has been implemented. The adoption of non-orthogonal correction techniques in the solver makes it possible to process non-orthogonal meshes in complex geometries. The present paper focuses on the validation of the code under critical conditions. One analytical benchmark case and two experimental benchmark cases were used to validate the code. Benchmark case I is MHD flow in a circular pipe with arbitrary electric conductivity of the walls in a uniform magnetic field. Benchmark cases II and III are experimental cases of 3D laminar steady MHD flow under a fringing magnetic field. In all these cases, the numerical results match the benchmarks well.

  15. nRC: non-coding RNA Classifier based on structural features.

    Science.gov (United States)

    Fiannaca, Antonino; La Rosa, Massimo; La Paglia, Laura; Rizzo, Riccardo; Urso, Alfonso

    2017-01-01

Non-coding RNAs (ncRNAs) are small non-coding sequences involved in gene expression regulation of many biological processes and diseases. The recent discovery of a large set of different ncRNAs with biologically relevant roles has opened the way to developing methods able to discriminate between the different ncRNA classes. Moreover, the lack of knowledge about the complete mechanisms of regulative processes, together with the development of high-throughput technologies, has required bioinformatics tools to provide biologists and clinicians with a deeper comprehension of the functional roles of ncRNAs. In this work, we introduce a new ncRNA classification tool, nRC (non-coding RNA Classifier). Our approach is based on feature extraction from the ncRNA secondary structure together with a supervised classification algorithm implementing a deep learning architecture based on convolutional neural networks. We tested our approach on the classification of 13 different ncRNA classes and obtained classification scores using the most common statistical measures; in particular, we reach accuracy and sensitivity scores of about 74%. The proposed method outperforms other similar classification methods based on secondary-structure features and machine learning algorithms, including the RNAcon tool that, to date, is the reference classifier. The nRC tool is freely available as a docker image at https://hub.docker.com/r/tblab/nrc/. The source code of the nRC tool is also available at https://github.com/IcarPA-TBlab/nrc.

  16. Relative-Residual-Based Dynamic Schedule for Belief Propagation Decoding of LDPC Codes

    Institute of Scientific and Technical Information of China (English)

    Huang Jie; Zhang Lijun

    2011-01-01

Two Relative-Residual-based Dynamic Schedules (RRDS) for Belief Propagation (BP) decoding of Low-Density Parity-Check (LDPC) codes are proposed, in which the Variable-Node RRDS (VN-RRDS) is a greediness-reduced version of the Check-Node RRDS (CN-RRDS). An RRDS processes only the variable (or check) node with the maximum relative residual among all variable (or check) nodes in each decoding iteration, thus keeping less greediness and decreased complexity in comparison with the edge-based Variable-to-Check Residual Belief Propagation (VC-RBP) algorithm. Moreover, VN-RRDS first propagates the message with the largest residual based on all check equations. For different types of LDPC codes, simulation results show that the convergence rate of RRDS is higher than that of VC-RBP while keeping very low computational complexity. Furthermore, VN-RRDS achieves faster convergence as well as better performance than CN-RRDS.

  17. Complete Multiple Description Mesh-Based Video Coding Scheme and Its Performance

    Institute of Scientific and Technical Information of China (English)

    Yang-Li Wang; Cheng-Ke Wu

    2005-01-01

    This paper proposes a multiple description (MD) mesh-based motion coding method, which generates two descriptions for mesh-based motion by subsampling the nodes of a right-angled triangular mesh and dividing them into two groups. Motion vectors associated with the mesh nodes in each group are transmitted over distinct channels. With the nodes in each group, two other regular triangular meshes besides the original one can be constructed, and three different prediction images can be reconstructed according to the descriptions available. The proposed MD mesh-based motion coding method is then combined with the pairwise correlating transform (PCT), and a complete MD video coding scheme is proposed. Further measures are taken to reduce the mismatch between the encoder and decoder that occurs when only one description is received and the decoder reconstruction differs from the encoder's. The performance of the proposed scheme is evaluated using computer simulations, and the results show that, compared to Reibman's MD transform coding (MDTC) method, the proposed scheme achieves better redundancy rate-distortion (RRD) performance. In packet-loss scenarios, the proposed scheme outperforms the MDTC method.

  18. On scalable lossless video coding based on sub-pixel accurate MCTF

    Science.gov (United States)

    Yea, Sehoon; Pearlman, William A.

    2006-01-01

    We propose two approaches to scalable lossless coding of motion video. They achieve an SNR-scalable bitstream up to lossless reconstruction based upon subpixel-accurate MCTF-based wavelet video coding. The first approach is based upon a two-stage encoding strategy where a lossy reconstruction layer is augmented by a following residual layer in order to obtain (nearly) lossless reconstruction. The key advantages of our approach include an 'on-the-fly' determination of the bit budget distribution between the lossy and the residual layers, freedom to use almost any progressive lossy video coding scheme as the first layer, and an added feature of near-lossless compression. The second approach capitalizes on the fact that we can maintain the invertibility of MCTF with arbitrary sub-pixel accuracy even in the presence of an extra truncation step for lossless reconstruction, thanks to the lifting implementation. Experimental results show that the proposed schemes achieve compression ratios not obtainable by intra-frame coders such as Motion JPEG-2000, owing to their inter-frame coding nature. They are also shown to outperform the state-of-the-art non-scalable inter-frame coder H.264 (JM) lossless mode, with the added benefit of bitstream embeddedness.

  19. Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network.

    Science.gov (United States)

    Lin, Kai; Wang, Di; Hu, Long

    2016-07-01

    With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses the functions of data fusion, multi-channel and network coding to improve the data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC). The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S) evidence theory to classify the sensor nodes into different classes according to the data content. By using the result of the classification, the CMNC algorithm also provides the channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared to other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods.
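The fusion-driven classification step above rests on Dempster-Shafer evidence combination. As a rough illustration of the underlying rule only (not the CMNC algorithm itself; the frame of discernment and mass values below are invented), two sensors' basic probability assignments can be fused as:

```python
# Dempster's rule of combination for two mass functions over the same
# frame of discernment. Illustrative sketch: the CMNC paper's actual
# fusion model and focal elements are not reproduced here.
from itertools import product

def combine(m1, m2):
    """Combine two basic probability assignments (dicts of frozenset -> mass)."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Normalize by the non-conflicting mass (1 - K)
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Two sensors reporting evidence over hypothetical classes {A, B}
A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.6, B: 0.1, A | B: 0.3}
m2 = {A: 0.5, B: 0.2, A | B: 0.3}
fused = combine(m1, m2)
```

After combination, the mass concentrated on class A grows, which is how agreeing sensors reinforce a classification.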

  20. Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes

    CERN Document Server

    Pinsky, L; Ferrari, A; Sala, P; Carminati, F; Brun, R

    2001-01-01

    This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be usef...

  1. A framework for evaluating wavelet based watermarking for scalable coded digital item adaptation attacks

    Science.gov (United States)

    Bhowmik, Deepayan; Abhayaratne, Charith

    2009-02-01

    A framework for evaluating wavelet based watermarking schemes against scalable coded visual media content adaptation attacks is presented. The framework, Watermark Evaluation Bench for Content Adaptation Modes (WEBCAM), aims to facilitate controlled evaluation of wavelet based watermarking schemes under MPEG-21 part-7 digital item adaptations (DIA). WEBCAM accommodates all major wavelet based watermarking in single generalised framework by considering a global parameter space, from which the optimum parameters for a specific algorithm may be chosen. WEBCAM considers the traversing of media content along various links and required content adaptations at various nodes of media supply chains. In this paper, the content adaptation is emulated by the JPEG2000 coded bit stream extraction for various spatial resolution and quality levels of the content. The proposed framework is beneficial not only as an evaluation tool but also as design tool for new wavelet based watermark algorithms by picking and mixing of available tools and finding the optimum design parameters.

  2. Low Complexity DCT-based DSC approach forHyperspectral Image Compression with Arithmetic Code

    Directory of Open Access Journals (Sweden)

    Meena Babu Vallakati

    2012-09-01

    Full Text Available This paper proposes a low-complexity codec for lossy compression of a sample hyperspectral image. These images have two kinds of redundancy: (1) spatial and (2) spectral. A discrete cosine transform (DCT)-based Distributed Source Coding (DSC) paradigm with an arithmetic code for low complexity is introduced. A set-partitioning-based approach is applied to reorganize the DCT coefficients into a wavelet-like tree structure, since set partitioning operates on wavelet transforms, and to extract the sign, refinement, and significance bitplanes. The extracted refinement bits are arithmetic encoded, and a low-density parity-check (LDPC)-based Slepian-Wolf coder implements the DSC strategy. Experimental results for SAMSON (Spectroscopic Aerial Mapping System with Onboard Navigation) data show that the proposed scheme achieves good peak signal-to-noise ratio and compression for the water cube compared to the building, land, or forest cubes.

  3. A Brain Computer Interface for Robust Wheelchair Control Application Based on Pseudorandom Code Modulated Visual Evoked Potential

    DEFF Research Database (Denmark)

    Mohebbi, Ali; Engelsholm, Signe K.D.; Puthusserypady, Sadasivan

    2015-01-01

    In this pilot study, a novel and minimalistic Brain Computer Interface (BCI) based wheelchair control application was developed. The system was based on pseudorandom code modulated Visual Evoked Potentials (c-VEPs). The visual stimuli in the scheme were generated based on the Gold code...

  4. 78 FR 79363 - Hazardous Materials: Adoption of ASME Code Section XII and the National Board Inspection Code

    Science.gov (United States)

    2013-12-30

    ... tank. Manufacturers will only elect to utilize Section XII if it makes business sense. II...'s compliance and enforcement programs. Within the United States, the most common modes of... practices, improved materials, advances in welding, examination and testing. Notably, fracture mechanics did...

  5. 46 CFR 56.01-5 - Adoption of ASME B31.1 for power piping, and other standards.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 2 2010-10-01 2010-10-01 false Adoption of ASME B31.1 for power piping, and other... ENGINEERING PIPING SYSTEMS AND APPURTENANCES General § 56.01-5 Adoption of ASME B31.1 for power piping, and... accordance with ASME B31.1 (incorporated by reference; see 46 CFR 56.01-2), as limited, modified, or...

  6. Projeto de vaso de pressão segundo norma ASME e análise pelo método dos elementos finitos

    OpenAIRE

    SILVA, Adson Beserra da.

    2015-01-01

    This work aimed to design a pressure vessel according to the ASME code, analyze it using both the analytical method and the Finite Element Method, and compare the results obtained by the two methods. Two pressure vessels were dimensioned following specifications from a virtual system defined only for this study, applying those specifications to ASME Section VIII Divisions I and II. Based on these determinations, the vessels were modeled in 3D using the ...

  7. Wavelet transform and Huffman coding based electrocardiogram compression algorithm: Application to telecardiology

    Science.gov (United States)

    Chouakri, S. A.; Djaafri, O.; Taleb-Ahmed, A.

    2013-08-01

    We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at transmission over a telecommunication channel. The proposed ECG compression algorithm is built on the use of the wavelet transform, leading to low/high frequency component separation; high-order-statistics-based thresholding, using a level-adjusted kurtosis value, to denoise the ECG signal; and a linear predictive coding filter applied to the wavelet coefficients, producing a lower-variance signal. The latter is coded using Huffman encoding, yielding an optimal coding length in terms of the average number of bits per sample. At the receiver end, with the assumption of an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, inverse linear predictive coding filtering, and the inverse discrete wavelet transform, leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested on a set of ECG records extracted from the MIT-BIH Arrhythmia Database, including different cardiac anomalies as well as normal ECG signals. The obtained results are evaluated in terms of compression ratio and mean square error, which are, respectively, around 1:8 and 7%. Beyond the numerical evaluation, visual inspection demonstrates the high quality of the ECG signal restitution, where the different ECG waves are recovered correctly.
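The final stage of the pipeline described above is Huffman entropy coding. A minimal sketch of Huffman codebook construction (standard-library Python; the symbol stream and tie-breaking policy are illustrative, not taken from the paper):

```python
# Build a prefix-free Huffman code from a symbol histogram, then encode.
import heapq
from collections import Counter

def huffman_code(data):
    """Return a prefix-free code {symbol: bitstring} for the input sequence."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap of (weight, tiebreak, tree); a tree is a symbol or a (left, right) pair
    heap = [(w, i, s) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")  # left branch emits a 0 bit
            walk(tree[1], prefix + "1")  # right branch emits a 1 bit
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

codes = huffman_code("aaaabbc")
encoded = "".join(codes[s] for s in "aaaabbc")
```

Frequent symbols get shorter codewords, so the 7-symbol example encodes in 10 bits instead of 14 with a fixed 2-bit code.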

  8. SFCVQ and EZW coding method based on Karhunen-Loeve transformation and integer wavelet transformation

    Science.gov (United States)

    Yan, Jingwen; Chen, Jiazhen

    2007-03-01

    A new hyperspectral image compression method of spectral feature classification vector quantization (SFCVQ) and embedded zero-tree of wavelet (EZW) based on Karhunen-Loeve transformation (KLT) and integer wavelet transformation is presented. In comparison with other methods, this method not only keeps the characteristics of high compression ratio and easy real-time transmission, but also has the advantage of high computation speed. After lifting-based integer wavelet and SFCVQ coding are introduced, a system of nearly lossless compression of hyperspectral images is designed. KLT is used to remove the correlation of spectral redundancy as a one-dimensional (1D) linear transform, and SFCVQ coding is applied to enhance the compression ratio. The two-dimensional (2D) integer wavelet transformation is adopted for the decorrelation of 2D spatial redundancy. The EZW coding method is applied to compress data in the wavelet domain. Experimental results show that in comparison with the method of wavelet SFCVQ (WSFCVQ), the method of improved BiBlock zero tree coding (IBBZTC) and the method of feature spectral vector quantization (FSVQ), the peak signal-to-noise ratio (PSNR) of this method can improve by over 9 dB, and the total compression performance is improved greatly.

  9. Wavelet-Based Geometry Coding for Three Dimensional Mesh Using Space Frequency Quantization

    Directory of Open Access Journals (Sweden)

    Shymaa T. El-Leithy

    2009-01-01

    Full Text Available Problem statement: Recently, 3D objects have been used in several applications such as internet games, virtual reality, and scientific visualization. These applications require real-time rendering and fast transmission of large objects over the internet. However, due to bandwidth limitations, the compression and streaming of 3D objects is still an open research problem. Approach: A novel procedure for the compression and coding of 3-Dimensional (3-D) semi-regular meshes using the wavelet transform was introduced. The procedure was based on Space Frequency Quantization (SFQ), which was used to minimize the distortion error of the reconstructed mesh under a given bit-rate constraint. Results: Experiments were carried out over five datasets with different mesh density and irregularity. Results were evaluated using the peak signal-to-noise ratio as an error measurement. Experiments showed that the 3D SFQ coder outperforms the Progressive Geometry Coder (PGC) in terms of the quality of compressed meshes. Conclusion: A pure 3D geometry coding algorithm based on wavelets was introduced. The proposed procedure showed its superiority over state-of-the-art coding techniques. Moreover, the bit-stream can be truncated at any point and still decode meshes of reasonable visual quality.

  10. SFCVQ and EZW coding method based on Karhunen-Loeve transformation and integer wavelet transformation

    Institute of Scientific and Technical Information of China (English)

    Jingwen Yan; Jiazhen Chen

    2007-01-01

    A new hyperspectral image compression method of spectral feature classification vector quantization (SFCVQ) and embedded zero-tree of wavelet (EZW) based on Karhunen-Loeve transformation (KLT) and integer wavelet transformation is presented. In comparison with other methods, this method not only keeps the characteristics of high compression ratio and easy real-time transmission, but also has the advantage of high computation speed. After lifting-based integer wavelet and SFCVQ coding are introduced, a system of nearly lossless compression of hyperspectral images is designed. KLT is used to remove the correlation of spectral redundancy as a one-dimensional (1D) linear transform, and SFCVQ coding is applied to enhance the compression ratio. The two-dimensional (2D) integer wavelet transformation is adopted for the decorrelation of 2D spatial redundancy. The EZW coding method is applied to compress data in the wavelet domain. Experimental results show that in comparison with the method of wavelet SFCVQ (WSFCVQ), the method of improved BiBlock zero tree coding (IBBZTC) and the method of feature spectral vector quantization (FSVQ), the peak signal-to-noise ratio (PSNR) of this method can improve by over 9 dB, and the total compression performance is improved greatly.

  11. Hash-Based Line-by-Line Template Matching for Lossless Screen Image Coding.

    Science.gov (United States)

    Xiulian Peng; Jizheng Xu

    2016-12-01

    Template matching (TM) was proposed in the literature a decade ago to efficiently remove non-local redundancies within an image without transmitting any overhead of displacement vectors. However, the large computational complexity introduced at both the encoder and the decoder, especially for a large search range, limits its widespread use. This paper proposes hash-based line-by-line template matching (hLTM) for lossless screen image coding, where non-local redundancy commonly exists in text and graphics parts. By hash-based search, it can largely reduce the search complexity of template matching without accuracy degradation. Besides, line-by-line template matching increases prediction accuracy by using a fine granularity. Experimental results show that hLTM can significantly reduce both the encoding and decoding complexities, by 68 and 23 times, respectively, compared with traditional TM with a search radius of 128. Moreover, compared with the High Efficiency Video Coding screen content coding test model SCM-1.0, it largely improves coding efficiency, with up to 12.68% bit savings on screen content rich in text and graphics.
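As a toy illustration of the core idea of hashing a pixel template to find earlier occurrences cheaply (the template length, one-dimensional context, and hit/miss bookkeeping below are assumptions for the sketch, not the SCM/hLTM algorithm):

```python
# Predict each pixel from a hash table keyed by its K-pixel left context.
# On repetitive text-like content, most contexts recur and predict exactly.
K = 3  # template length (assumed for the sketch)

def predict_row(row):
    """Return (predictions, hits): hash-lookup predictions for each pixel."""
    table = {}          # template tuple -> pixel last seen after that template
    preds, hits = [], 0
    for i, px in enumerate(row):
        ctx = tuple(row[max(0, i - K):i])
        guess = table.get(ctx)
        if guess == px:
            hits += 1   # template match: pixel could be coded as a cheap flag
        preds.append(guess)
        table[ctx] = px  # update the table with the observed pixel
    return preds, hits

# Text/graphics rows repeat patterns, so many templates recur exactly
row = [1, 2, 3, 9, 1, 2, 3, 9, 1, 2, 3, 9]
preds, hits = predict_row(row)
```

The dictionary lookup replaces an exhaustive search over all earlier positions, which is where the complexity reduction comes from.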

  12. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive-sampling-based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; however, the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; 2) the result remains a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered multiple descriptions of the original image, so the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
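A minimal sketch of the encoder's measurement step as described above: polyphase down-sampling where each retained pixel is replaced by a local random binary convolution of its neighborhood. The kernel size, seed handling, and normalization below are assumptions, not the paper's implementation:

```python
# Down-sample an image by 2 in each direction, replacing the low-pass
# pre-filter with a per-location random binary kernel.
import random

def local_random_measurements(img, seed=7, k=3):
    """Return a half-resolution grid of local random binary measurements."""
    rng = random.Random(seed)       # a shared seed lets a decoder re-generate kernels
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, 2):        # polyphase: keep every other row/column
        row = []
        for x in range(0, w, 2):
            kern = [[rng.randint(0, 1) for _ in range(k)] for _ in range(k)]
            norm = sum(map(sum, kern)) or 1
            acc = 0
            for dy in range(k):
                for dx in range(k):
                    yy, xx = min(y + dy, h - 1), min(x + dx, w - 1)  # clamp at border
                    acc += kern[dy][dx] * img[yy][xx]
            row.append(acc // norm)  # keep measurements in the original pixel range
        out.append(row)
    return out

img = [[(x + y) % 256 for x in range(8)] for y in range(8)]
meas = local_random_measurements(img)
```

Because the output is still an ordinary half-size image, it can be fed to any standard codec, as the abstract notes.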

  13. Postprocessing of interframe coded images based on convex projection and regularization

    Science.gov (United States)

    Joung, Shichang; Kim, Sungjin; Paik, Joon-Ki

    2000-04-01

    In order to reduce blocking artifacts in inter-frame coded images, we propose a new image restoration algorithm that directly processes differential images before reconstruction. We note that blocking artifacts in inter-frame coded images are caused by both the 8 x 8 DCT and 16 x 16 macroblock-based motion compensation, while those of intra-coded images are caused by the 8 x 8 DCT only. Based on this observation, we propose a new degradation model for differential images and a corresponding restoration algorithm that utilizes additional constraints and convex sets for discontinuities inside blocks. The proposed restoration algorithm is a modified version of standard regularization that incorporates spatially adaptive lowpass filtering with consideration of edge directions by utilizing a subset of the DCT coefficients. Most video coding standards adopt a hybrid structure of block-based motion compensation and the block discrete cosine transform (BDCT). For this reason, blocking artifacts occur on both block boundaries and block interiors. For more complete removal of both kinds of blocking artifacts, the restored differential image must satisfy two constraints: directional discontinuities on block boundaries and in block interiors. These constraints have been used to define convex sets for restoring differential images.

  14. Reusable amine-based structural motifs for greenhouse gas (CO2) fixation.

    Science.gov (United States)

    Dalapati, Sasanka; Jana, Sankar; Saha, Rajat; Alam, Md Akhtarul; Guchhait, Nikhil

    2012-07-01

    A series of compounds with an amine-based structural motif (ASM) have been synthesized for efficient atmospheric CO(2) fixation. The H-bonded ASM-bicarbonate complexes were formed with an in situ generated HCO(3)(-) ion. The complexes have been characterized by IR, (13)C NMR, and X-ray single-crystal structural analysis. The ASM-bicarbonate salts have been converted back to pure ASMs in quantitative yield under mild conditions for recycling.

  15. Property-based Code Slicing for Efficient Verification of OSEK/VDX Operating Systems

    Directory of Open Access Journals (Sweden)

    Mingyu Park

    2012-12-01

    Full Text Available Testing is a de facto verification technique in industry, but it is insufficient for identifying subtle issues due to its optimistic incompleteness. On the other hand, model checking is a powerful technique that supports comprehensiveness, and is thus suitable for the verification of safety-critical systems. However, it generally requires more knowledge and costs more than testing. This work attempts to take advantage of both techniques to achieve integrated and efficient verification of OSEK/VDX-based automotive operating systems. We propose property-based environment generation and model extraction techniques using static code analysis, which can be applied to both model checking and testing. The technique is automated and applied to an OSEK/VDX-based automotive operating system, Trampoline. Comparative experiments using random testing and model checking for the verification of assertions in the Trampoline kernel code show how our environment generation and abstraction approach can be utilized for efficient fault detection.

  16. Region-Based Fractal Image Coding with Freely-Shaped Partition

    Institute of Scientific and Technical Information of China (English)

    SUN Yunda; ZHAO Yao; YUAN Baozong

    2004-01-01

    In Fractal Image Coding (FIC), a partitioning of the original image into ranges and domains is required, which greatly affects the coding performance. Usually, the more adaptive the partition is to the image content, the higher the performance that can be achieved. Some region-based fractal coders (RBFC) using a split-and-merge strategy can achieve better adaptivity and performance compared with traditional rectangular block partitions; however, the regions still have linear contours. In this paper, we present a Freely-Shaped Region-Based Fractal Coder (FS-RBFC) using a two-step partitioning, i.e., coarse partitioning based on fractal dimension and fine partitioning based on region growth, which yields freely shaped regions. Our highly image-adaptive scheme achieves a better rate-distortion curve than the conventional scheme, and even more visually pleasing results at the same performance.

  17. Machine-vision-based bar code scanning for long-range applications

    Science.gov (United States)

    Banta, Larry E.; Pertl, Franz A.; Rosenecker, Charles; Rosenberry-Friend, Kimberly A.

    1998-10-01

    Bar code labeling of products has become almost universal in most industries. However, in the steel industry, problems with high temperatures, harsh physical environments, and the large sizes of the products and material handling equipment have slowed the implementation of bar-code-based systems at the hot end of the mill. Typical laser-based bar code scanners have maximum scan distances of only about 15 feet. Longer-distance models have been developed that require the use of retroreflective paper labels, but the labels must be very large, are expensive, and cannot withstand the heat and physical abuse of the steel mill environment. Furthermore, it is often difficult to accurately point a hand-held scanner at targets in bright sunlight or at long distances. An automated product tag reading system based on CCD cameras and computer image processing has been developed by West Virginia University and demonstrated at the Weirton Steel Corporation. The system performs both the pointing and reading functions. A video camera is mounted on a pan/tilt head and connected to a personal computer through a frame grabber board. The computer analyzes the images and can identify product ID tags in a wide-angle scene. It controls the camera to point at each tag and zoom in for a close-up picture. The close-ups are analyzed, and the program reads both a bar code and the corresponding alphanumeric code on the tag. This paper describes the camera pointing and bar code reading functions of the algorithm. A companion paper describes the OCR functions.

  18. A Code Level Based Programmer Assessment and Selection Criterion Using Metric Tools

    Directory of Open Access Journals (Sweden)

    Ezekiel U. Okike

    2014-11-01

    Full Text Available This study presents a code-level measurement of computer programs developed by computer programmers, using the Chidamber and Kemerer Java Metrics (CKJM) tool and the Myers-Briggs Type Indicator (MBTI) instrument. Identifying potential computer programmers using personality trait factors alone does not seem to be the best approach without a code-level measurement of the quality of their programs; hence the need for a methodology that measures both the personality traits of programmers and the code-level quality of the programs they develop. This is the focus of this study. In this experiment, a set of Java-based programming tasks was given to 33 student programmers who could confidently use the Java programming language. The code developed by these students was analyzed for quality using the CKJM tool. Cohesion, coupling, and number of public methods (NPM) metrics were used in the study. These three metrics were chosen from the CKJM suite because they are useful for assessing well-designed code: high cohesion in the range [0,1] and low coupling imply well-designed code, and the number of public methods in a well-designed class is typically less than 5 when cohesion is in the range [0,1]. Results from this study show that 19 of the 33 programmers developed good, cohesive programs while 14 did not. Further analysis related the personality traits of the programmers to the number of good programs they wrote. Programmers with Introverted Sensing Thinking Judging (ISTJ) traits produced the highest number of good programs, followed by Introverted iNtuitive Thinking Perceiving (INTP), Introverted iNtuitive Feeling Perceiving (INFP), and Extroverted Sensing Thinking Judging (ESTJ).
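The cohesion measure discussed above comes from the Chidamber-Kemerer suite; its classic LCOM variant can be sketched from a method-to-fields mapping as below (the example classes are invented, and this is a simplified stand-in for the CKJM tool's computation):

```python
# CK LCOM (lack of cohesion in methods): count method pairs that share no
# instance field (P) versus pairs that share at least one (Q); LCOM = max(P - Q, 0).
from itertools import combinations

def lcom(method_fields):
    """Compute LCOM from a dict mapping method name -> set of fields it uses."""
    p = q = 0
    for f1, f2 in combinations(method_fields.values(), 2):
        if f1 & f2:
            q += 1  # pair shares a field: contributes cohesion
        else:
            p += 1  # disjoint pair: contributes lack of cohesion
    return max(p - q, 0)

# A cohesive class: every method touches the shared field 'x'
cohesive = {"getX": {"x"}, "setX": {"x"}, "move": {"x", "y"}}
# A scattered class: three methods on three unrelated fields
scattered = {"a": {"x"}, "b": {"y"}, "c": {"z"}}
```

A result of 0 indicates a cohesive class, while larger values flag classes whose methods operate on disjoint state.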

  19. FPGA-based LDPC-coded APSK for optical communication systems.

    Science.gov (United States)

    Zou, Ding; Lin, Changyu; Djordjevic, Ivan B

    2017-02-20

    In this paper, with the aid of mutual information and generalized mutual information (GMI) capacity analyses, it is shown that geometrically shaped APSK that mimics an optimal Gaussian distribution with equiprobable signaling, together with the corresponding Gray-mapping rules, can approach the Shannon limit more closely than conventional quadrature amplitude modulation (QAM) over a certain range of FEC overhead for both 16-APSK and 64-APSK. Field programmable gate array (FPGA) based LDPC-coded APSK emulation is conducted on block-interleaver-based and bit-interleaver-based systems; the results verify a significant improvement in hardware-efficient bit-interleaver-based systems. In bit-interleaver-based emulation, LDPC-coded 64-APSK outperforms 64-QAM, in terms of symbol signal-to-noise ratio (SNR), by 0.1 dB, 0.2 dB, and 0.3 dB at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz, respectively. Emulation shows that LDPC-coded 64-APSK at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz is 1.6 dB, 1.7 dB, and 2.2 dB away from the GMI capacity.

  20. SecureQEMU: Emulation-Based Software Protection Providing Encrypted Code Execution and Page Granularity Code Signing

    Science.gov (United States)

    2008-12-01

    may return to an instruction that jumps to the address of that register [10, 42]. This technique is known as trampolining. In both the above scenarios...disabled, an attacker may be able to trampoline out of that module to execute arbitrary code [29]. Furthermore, the attacker may be able to modify...point to the module of the driver, a KLR may be hooking the driver. However, there are techniques to trampoline (jump, call, etc.) off the kernel's module

  1. NCSA: A New Protocol for Random Multiple Access Based on Physical Layer Network Coding

    CERN Document Server

    Bui, Huyen Chi; Boucheret, Marie-Laure

    2010-01-01

    This paper introduces a random multiple access method for satellite communications, named Network Coding-based Slotted Aloha (NCSA). The goal is to improve the diversity of data bursts on a slotted-ALOHA-like channel by means of error correcting codes and Physical-layer Network Coding (PNC). This scheme can be considered a generalization of Contention Resolution Diversity Slotted Aloha (CRDSA), where the replicas of that system are replaced by the different parts of a single codeword of an error correcting code. The performance of this scheme is first studied through a density evolution approach. Simulations then confirm the CRDSA results by showing that, for a time frame of $400$ slots, the achievable total throughput is greater than $0.7\times C$, where $C$ is the maximal throughput achieved by a centralized scheme. This paper is a first analysis of the proposed scheme, which opens several perspectives. The most promising approach is to integrate collided bursts into the decoding process in order to im...
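For context on the baseline that CRDSA-style schemes improve upon, a plain slotted-ALOHA simulation recovers the classical throughput ceiling of about 1/e successful packets per slot at unit offered load (the user count, transmit probability, and slot count below are illustrative):

```python
# Simulate slotted ALOHA: a slot succeeds only when exactly one user transmits.
import math
import random

def slotted_aloha_throughput(users, p, slots, seed=1):
    """Fraction of slots carrying exactly one transmission (a success)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(slots):
        sent = sum(rng.random() < p for _ in range(users))
        ok += (sent == 1)   # >1 senders collide, 0 senders is an idle slot
    return ok / slots

# Offered load G = users * p = 1.0, near the classical optimum
thr = slotted_aloha_throughput(users=50, p=0.02, slots=20000)
theory = 1.0 * math.exp(-1.0)   # S = G * e^{-G} at G = 1, about 0.368
```

The $0.7\times C$ figure quoted for NCSA should be read against this roughly 0.37 ceiling of the uncoded channel.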

  2. Iterative channel decoding of FEC-based multiple-description codes.

    Science.gov (United States)

    Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B

    2012-03-01

    Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.

  3. A novel approach of an absolute coding pattern based on Hamiltonian graph

    Science.gov (United States)

    Wang, Ya'nan; Wang, Huawei; Hao, Fusheng; Liu, Liqiang

    2017-02-01

    In this paper, a novel coding pattern for an optical absolute rotary encoder is presented. The concept is based on the principle of the absolute encoder: finding a unique sequence that ensures an unambiguous shaft position at any angle. We design a single-ring and an n-by-2 matrix absolute encoder coding pattern by using variations of the Hamiltonian graph principle. Twelve encoding bits are used in the single ring, read by a linear-array CCD, to achieve a 1080-position cyclic encoding. Besides, a 2-by-2 matrix is used as a unit in the 2-track disk to achieve a 16-bit encoding pattern using an area-array CCD sensor (as a sample). Finally, a higher resolution can be gained by electronic subdivision of the signals. Compared with the conventional Gray or binary code pattern (for a 2^n resolution), this new pattern has a higher resolution (2^n*n) with fewer coding tracks, which means it can lead to a smaller encoder, which is essential in industrial production.
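
The unique-window idea behind such single-track patterns can be sketched with a De Bruijn sequence, which corresponds to a Hamiltonian cycle on the De Bruijn graph. This is an illustrative stand-in, not the paper's actual pattern: every length-n window of the sequence is distinct, so a single read of n bits fixes the absolute position.

```python
def de_bruijn(k, n):
    """Alphabet-k De Bruijn sequence of order n via Lyndon words
    (the standard FKM construction)."""
    a = [0] * k * n
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

def windows_unique(seq, n):
    """Check that every cyclic length-n window occurs exactly once."""
    ext = seq + seq[:n - 1]
    wins = [tuple(ext[i:i + n]) for i in range(len(seq))]
    return len(set(wins)) == len(wins)

track = de_bruijn(2, 4)           # 16-bit single-track pattern
assert len(track) == 16
assert windows_unique(track, 4)   # any 4-bit read is an absolute position
```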

  4. Towards a spectrum-based bar code for identification of weakly fluorescent microparticles

    Science.gov (United States)

    Petrášek, Zdeněk; Wiedemann, Jens; Schwille, Petra

    2014-03-01

    Spectrally resolved detection of fluorescent probes can be used to identify multiple labeled target molecules in an unknown mixture. We study how the spectral shape, the experimental noise, and the number of spectral detection channels affect the success of identifying weakly fluorescent beads on the basis of their emission spectra. The proposed formalism allows one to estimate the performance of the spectral identification procedure with a given set of spectral codes from the reference spectra alone. We constructed a simple prism-based setup for spectral detection and demonstrate that seven distinct but overlapping spectral codes, realized by combining up to three fluorescent dyes bound to a single bead in a barcode-based manner, can be reliably identified. The procedure allows correct identification even in the presence of known autofluorescence background stronger than the actual signal.

  5. Algorithm for image retrieval based on edge gradient orientation statistical code.

    Science.gov (United States)

    Zeng, Jiexian; Zhao, Yonggang; Li, Weiye; Fu, Xiang

    2014-01-01

    The image edge gradient direction not only contains important shape information but is also simple and of low complexity. Since edge gradient direction histograms and the edge direction autocorrelogram are not rotation invariant, we put forward an image retrieval algorithm based on an edge gradient orientation statistical code (hereinafter referred to as EGOSC), transferring the statistical treatment of eight-neighborhood chain-code edge directions to the statistics of the edge gradient direction. First, we construct the n-direction vector and apply a maximal-summation restriction to the EGOSC to ensure that the algorithm is effectively rotation invariant. Then, we use the Euclidean distance of the edge gradient direction entropy to measure shape similarity, so that the method is insensitive to scaling, color, and illumination changes. The experimental results and the algorithm analysis demonstrate that the algorithm can be used for content-based image retrieval and yields good retrieval results.

  6. Introduction into scientific work methods-a necessity when performance-based codes are introduced

    DEFF Research Database (Denmark)

    Dederichs, Anne; Sørensen, Lars Schiøtt

    The introduction of performance-based codes in Denmark in 2004 requires new competences from people working with different aspects of fire safety in industry and the public sector. This abstract presents an attempt at reducing the problems with handling and analysing the mathematical methods and CFD models that arise when applying performance-based codes. This is done within the educational program "Master of Fire Safety Engineering" at the Department of Civil Engineering at the Technical University of Denmark. It was found that the students had general problems with academic methods. Therefore, a new educational module was introduced as a result of this investigation. The course is positioned in the program prior to the work on the final project. In the course, a mini project is carried out that gives the students extra training in academic methods.

  7. Distributed Data Storage in Large-Scale Sensor Networks Based on LT Codes

    CERN Document Server

    Jafarizadeh, Saber

    2012-01-01

    This paper proposes an algorithm for increasing data persistency in large-scale sensor networks. In the scenario considered here, k out of n nodes sense the phenomenon and produce k information packets. Due to the usually hazardous environment and limited resources, e.g. energy, sensors in the network are vulnerable. Also, due to the large size of the network, gathering information at a few central hubs is not feasible. Flooding is not a desired option either, due to the limited memory of each node. Therefore the best approach to increasing data persistency is to propagate data throughout the network by random walks. The algorithm proposed here is based on distributed LT (Luby Transform) codes and benefits from the low complexity of LT encoding and decoding. In previous algorithms the essential global information (e.g., n and k) is estimated from graph statistics, which requires excessive transmissions. In our proposed algorithm, these values are obtained without additional transmissions. Also the mix...
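
The LT encode/decode machinery the abstract relies on can be sketched as follows. The degree distribution is a simplifying assumption (uniform, instead of the robust soliton distribution real LT codes use), and the packets are plain integers treated as bit vectors.

```python
import random

def lt_encode(packets, num_out, rng):
    """Each coded packet XORs a random subset of the source packets.
    A uniform degree is used here purely for illustration."""
    k = len(packets)
    coded = []
    for _ in range(num_out):
        d = rng.randint(1, k)
        idx = rng.sample(range(k), d)
        val = 0
        for i in idx:
            val ^= packets[i]
        coded.append((set(idx), val))
    return coded

def lt_decode(coded, k):
    """Peeling decoder: substitute known packets until degree-1 coded
    packets reveal new source packets; unknown slots come back as None."""
    coded = [[set(s), v] for s, v in coded]
    known = {}
    changed = True
    while changed and len(known) < k:
        changed = False
        for item in coded:
            s, v = item
            for i in [i for i in s if i in known]:
                s.discard(i)
                v ^= known[i]
            item[1] = v
            if len(s) == 1:
                (i,) = s
                if i not in known:
                    known[i] = v
                    changed = True
    return [known.get(i) for i in range(k)]

# Hand-built example: peeling resolves the packets one by one.
coded = [({0}, 5), ({0, 1}, 12), ({1, 2}, 7)]
assert lt_decode(coded, 3) == [5, 9, 14]

# Random encoding as in a sensor network; full recovery depends on the draw.
rng = random.Random(0)
coded_rand = lt_encode([3, 1, 4, 1, 5, 9, 2, 6], 24, rng)
partial = lt_decode(coded_rand, 8)
```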

  8. Underwater Acoustic Communication Based on Pattern Time Delay Shift Coding Scheme

    Institute of Scientific and Technical Information of China (English)

    YIN Jing-wei; HUI Jun-ying; HUI Juan; YAO Zhi-xiang; WANG Yi-lin

    2006-01-01

    Underwater acoustic communication based on the Pattern Time Delay Shift Coding (PDS) scheme is studied. In the PDS scheme, the time delay shift values of the pattern encode the digital information, a form of Pulse Position Modulation (PPM). The duty cycle of the PDS scheme is small, so it economizes transmit power. By using different patterns for code division and different frequencies for channel division, the communication system is capable of mitigating the inter-symbol interference (ISI) caused by the multipath channel. The data rate of communication is 1000 bits/s in an 8 kHz bandwidth. The receiver separates the channels by means of band-pass filters and performs decoding with 4 copy-correlators that estimate the time delay shift value. Based on theoretical analysis and numerical simulations, the PDS scheme is shown to be a robust and effective approach for underwater acoustic communication.
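
The delay-shift encoding and copy-correlator decoding described above can be sketched in a toy baseband form. The chip pattern, slot length, and delay set below are invented for illustration, not the paper's parameters, and the channel is noiseless.

```python
PATTERN = [1, 1, -1, 1, -1, -1, 1, -1]   # assumed +/-1 chip pattern
SLOT = 32                                 # samples per symbol slot
DELAYS = [0, 6, 12, 18]                   # 4 delay values -> 2 bits per symbol

def modulate(symbols):
    """Place the pattern inside each slot at the delay encoding the symbol."""
    sig = []
    for s in symbols:
        slot = [0] * SLOT
        for i, c in enumerate(PATTERN):
            slot[DELAYS[s] + i] = c
        sig.extend(slot)
    return sig

def demodulate(sig):
    """Copy-correlator bank: correlate each slot with the pattern at every
    candidate delay and pick the peak."""
    out = []
    for k in range(0, len(sig), SLOT):
        slot = sig[k:k + SLOT]
        scores = [sum(slot[d + i] * c for i, c in enumerate(PATTERN))
                  for d in DELAYS]
        out.append(scores.index(max(scores)))
    return out

msg = [0, 3, 1, 2, 2]
assert demodulate(modulate(msg)) == msg
```

With delays spaced wider than the pattern's off-peak correlation, the peak at the correct delay (here 8) dominates any partial overlap, which is what makes the scheme robust.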

  9. Stiefel Manifold and TCQ based on Unit Memory Coding for MIMO System

    Directory of Open Access Journals (Sweden)

    Vijey Thayananthan

    2014-02-01

    Full Text Available Multiple Input Multiple Output (MIMO) systems have been analyzed with a number of quantization techniques. In this short communication, problems of performance and accuracy are investigated for a quantization technique based on the Stiefel Manifold (SM). To address these problems, a suitable Trellis Coded Quantization (TCQ) based on Unit Memory (UM) coding is studied and applied to the SM of MIMO components as a novel approach. The anticipated result is improved bit error performance, an overall improvement of the feedback link between the transmitter and receiver of MIMO. In conclusion, this research not only reduces the quantization problems on the SM but also improves the performance and accuracy of the limited-rate feedback used in MIMO systems.

  10. NetCoDer: A Retransmission Mechanism for WSNs Based on Cooperative Relays and Network Coding

    Directory of Open Access Journals (Sweden)

    Odilson T. Valle

    2016-05-01

    Full Text Available Some of the most difficult problems to deal with when using Wireless Sensor Networks (WSNs are related to the unreliable nature of communication channels. In this context, the use of cooperative diversity techniques and the application of network coding concepts may be promising solutions to improve the communication reliability. In this paper, we propose the NetCoDer scheme to address this problem. Its design is based on merging cooperative diversity techniques and network coding concepts. We evaluate the effectiveness of the NetCoDer scheme through both an experimental setup with real WSN nodes and a simulation assessment, comparing NetCoDer performance against state-of-the-art TDMA-based (Time Division Multiple Access retransmission techniques: BlockACK, Master/Slave and Redundant TDMA. The obtained results highlight that the proposed NetCoDer scheme clearly improves the network performance when compared with other retransmission techniques.

  11. Property study of integer wavelet transform lossless compression coding based on lifting scheme

    Science.gov (United States)

    Xie, Cheng Jun; Yan, Su; Xiang, Yang

    2006-01-01

    In this paper, the algorithm, and its improvement, of integer wavelet transform combining SPIHT and arithmetic coding for image lossless compression is studied. The experimental results show that if the number of vanishing moments of the low-pass filter is fixed, the improvement in compression is not evident, provided the integer wavelet transform is invertible and the energy-focusing property increases monotonically with transform scale. For the same wavelet basis, the number of vanishing moments of the low-pass filter is more important than that of the high-pass filter in improving image compression. Integer wavelet transform lossless compression coding based on the lifting scheme is unrelated to the entropy of the image; the compression effect depends on the energy-focusing property of the image transform.
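
The lifting construction underlying such integer wavelet transforms can be illustrated with the LeGall 5/3 transform (a generic sketch, not necessarily the filters studied in the paper): an integer predict step followed by an integer update step, both exactly invertible, which is what makes lossless coding possible.

```python
def fwd53(x):
    """One level of the integer 5/3 wavelet via lifting (even-length input).
    Predict: detail d[i] = odd sample minus the average of its even neighbours.
    Update:  approx a[i] = even sample plus smoothed details."""
    n = len(x)
    d = [x[2 * i + 1] - ((x[2 * i] + x[min(2 * i + 2, n - 2)]) >> 1)
         for i in range(n // 2)]
    a = [x[2 * i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2)
         for i in range(n // 2)]
    return a, d

def inv53(a, d):
    """Exact inverse: undo the update step, then the predict step.
    The same integer expressions are recomputed, so rounding cancels."""
    n = 2 * len(a)
    x = [0] * n
    for i in range(len(a)):
        x[2 * i] = a[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2)
    for i in range(len(d)):
        x[2 * i + 1] = d[i] + ((x[2 * i] + x[min(2 * i + 2, n - 2)]) >> 1)
    return x

sig = [3, 7, 1, 8, 2, 9, 4, 6]
a, d = fwd53(sig)
assert inv53(a, d) == sig     # lossless: perfect integer reconstruction
```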

  12. Inferential multi-spectral image compression based on distributed source coding

    Science.gov (United States)

    Wu, Xian-yun; Li, Yun-song; Wu, Cheng-ke; Kong, Fan-qiang

    2008-08-01

    Based on analyses of interferential multispectral imagery (IMI), a new compression algorithm based on distributed source coding is proposed. There are apparent push motions between the IMI sequences, and the relative shift between two images is detected by a block-match algorithm at the encoder. Our algorithm estimates the rate of each bitplane with the estimated side-information frame, and then adopts a ROI coding algorithm in which a rate-distortion lifting procedure is carried out in the rate-allocation stage. Using our algorithm, the FBC can be removed from the traditional scheme. The compression algorithm developed in the paper can obtain up to 3 dB gain compared with JPEG2000, and significantly reduces complexity and storage consumption compared with 3D-SPIHT, at the cost of a slight degradation in PSNR.

  13. Improved Data Transmission Scheme of Network Coding Based on Access Point Optimization in VANET

    Directory of Open Access Journals (Sweden)

    Zhe Yang

    2014-01-01

    Full Text Available VANET is a hot spot of intelligent transportation research. For vehicle users, file sharing and content distribution through roadside access points (APs) as well as through the vehicular ad hoc network (VANET) itself have become an important complement to the cellular network. So AP deployment is one of the key issues for improving the communication performance of VANET. In this paper, an access point optimization method is proposed based on the particle swarm optimization algorithm. The transmission performance of the routing protocol with random linear network coding, before and after the access point optimization, is analyzed. The simulation results show that the optimization model greatly affects the VANET transmission performance based on network coding: it can enhance the delivery rate by 25% and 14% and reduce the average transmission delay by 38% and 33%.
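
Random linear network coding, the transport mechanism mentioned above, can be sketched over GF(2): each coded packet carries a random XOR combination of the source packets together with its coefficient vector, and a destination decodes by Gaussian elimination once it has full rank. This is a generic sketch, not the paper's protocol.

```python
import random

def rlnc_encode(packets, m, rng):
    """Emit m coded packets, each a random GF(2) combination of the sources."""
    k = len(packets)
    coded = []
    for _ in range(m):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            coeffs[rng.randrange(k)] = 1      # avoid the useless all-zero row
        val = 0
        for c, p in zip(coeffs, packets):
            if c:
                val ^= p
        coded.append((coeffs, val))
    return coded

def rlnc_decode(coded, k):
    """Gaussian elimination over GF(2); returns None if rank < k."""
    basis = [None] * k                        # basis[i]: row with pivot i
    for c, v in coded:
        c = c[:]
        for i in range(k):
            if c[i]:
                if basis[i] is None:
                    basis[i] = (c, v)
                    break
                bc, bv = basis[i]
                c = [a ^ b for a, b in zip(c, bc)]
                v ^= bv
    if any(b is None for b in basis):
        return None
    for i in reversed(range(k)):              # back-substitute
        c, v = basis[i]
        for j in range(i + 1, k):
            if c[j]:
                cj, vj = basis[j]
                c = [a ^ b for a, b in zip(c, cj)]
                v ^= vj
        basis[i] = (c, v)
    return [v for _, v in basis]

# Hand-built combinations decode back to the original packets.
coded = [([1, 1, 0], 6 ^ 10), ([0, 1, 1], 10 ^ 12), ([1, 1, 1], 6 ^ 10 ^ 12)]
assert rlnc_decode(coded, 3) == [6, 10, 12]

# Random encoding: decoding succeeds whenever the random rows reach rank k.
rng = random.Random(3)
got = rlnc_decode(rlnc_encode([6, 10, 12], 6, rng), 3)
assert got is None or got == [6, 10, 12]
```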

  14. Tunable wavefront coded imaging system based on detachable phase mask: Mathematical analysis, optimization and underlying applications

    Science.gov (United States)

    Zhao, Hui; Wei, Jingxuan

    2014-09-01

    The key to the concept of tunable wavefront coding lies in detachable phase masks. Ojeda-Castaneda et al. (Progress in Electronics Research Symposium Proceedings, Cambridge, USA, July 5-8, 2010) described a typical design in which two components with cosinusoidal phase variation operate together to make the defocus sensitivity tunable. The present study proposes an improved design and makes three contributions: (1) a mathematical derivation based on the stationary phase method explains why the detachable phase mask of Ojeda-Castaneda et al. tunes the defocus sensitivity; (2) the derivations show that the effective bandwidth of the wavefront coded imaging system is also tunable by moving each component of the detachable phase mask asymmetrically, and an improved Fisher information-based optimization procedure is designed to ascertain the optimal mask parameters corresponding to a specific bandwidth; (3) possible applications of the tunable bandwidth are demonstrated by simulated imaging.

  15. ASM-Triggered Too Observations of Kilohertz Oscillations in Three Atoll Sources

    Science.gov (United States)

    Kaaret, P.; Swank, Jean (Technical Monitor)

    2000-01-01

    Three Rossi X-ray Timing Explorer (RXTE) observations were carried out for this proposal based on target-of-opportunity triggers derived from the All-Sky Monitor (ASM) on RXTE. We obtained short observations of 4U1636-536 (15 ks) and 4U1735-44 (23 ks) and a longer observation of 4U0614+091 (117 ks). Our analysis of the observations of the atoll neutron star X-ray binary 4U1735-44 led to the discovery of a second high-frequency quasiperiodic oscillation (QPO) in this source. These results were published in the Astrophysical Journal Letters. The data obtained on the source 4U0614+091 were used in a comprehensive study of this source, which will be published in the Astrophysical Journal. The data from this proposal were particularly critical for that study, as they led to the detection of the highest QPO frequency ever found in the X-ray emission from an X-ray binary, which will be important in placing limits on the equation of state of nuclear matter.

  16. Modelling waste stabilisation ponds with an extended version of ASM3.

    Science.gov (United States)

    Gehring, T; Silva, J D; Kehl, O; Castilhos, A B; Costa, R H R; Uhlenhut, F; Alex, J; Horn, H; Wichern, M

    2010-01-01

    In this paper an extended version of IWA's Activated Sludge Model No 3 (ASM3) was developed to simulate processes in waste stabilisation ponds (WSP). The model modifications included the integration of algae biomass and gas transfer processes for oxygen, carbon dioxide and ammonia depending on wind velocity and a simple ionic equilibrium. The model was applied to a pilot-scale WSP system operated in the city of Florianópolis (Brazil). The system was used to treat leachate from a municipal waste landfill. Mean influent concentrations to the facultative pond of 1,456 g(COD)/m(3) and 505 g(NH4-N)/m(3) were measured. Experimental results indicated an ammonia nitrogen removal of 89.5% with negligible rates of nitrification but intensive ammonia stripping to the atmosphere. Measured data was used in the simulations to consider the impact of wind velocity on oxygen input of 11.1 to 14.4 g(O2)/(m(2) d) and sun radiation on photosynthesis. Good results for pH and ammonia removal were achieved with mean stripping rates of 18.2 and 4.5 g(N)/(m(2) d) for the facultative and maturation pond respectively. Based on measured chlorophyll a concentrations and depending on light intensity and TSS concentration it was possible to model algae concentrations.

  17. Roles of the Outer Membrane Protein AsmA of Salmonella enterica in the Control of marRAB Expression and Invasion of Epithelial Cells▿

    OpenAIRE

    Ramos Morales, Francisco; Prieto Ortega, Ana Isabel; Hernández Piñero, Sara Belén; Cota García, Ignacio; Pucciarelli, María Graciela; Orlov, Yuri; García del Portillo, Francisco; Casadesús Pursals, José

    2009-01-01

    A genetic screen for suppressors of bile sensitivity in DNA adenine methylase (dam) mutants of Salmonella enterica serovar Typhimurium yielded insertions in an uncharacterized locus homologous to the Escherichia coli asmA gene. Disruption of asmA suppressed bile sensitivity also in phoP and wec mutants of S. enterica and increased the MIC of sodium deoxycholate for the parental strain ATCC 14028. Increased levels of marA mRNA were found in asmA, asmA dam, asmA phoP, and asmA wec strains of S....

  18. Wavelet-Based Mixed-Resolution Coding Approach Incorporating with SPT for the Stereo Image

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    With the advances of display technology, three-dimensional (3-D) imaging systems are becoming increasingly popular. One way of stimulating 3-D perception is to use stereo pairs, a pair of images of the same scene acquired from different perspectives. Since there is inherent redundancy between the images of a stereo pair, data compression algorithms should be employed to represent stereo pairs efficiently. Such techniques generally use block-based disparity compensation. To obtain a higher compression ratio, this paper employs wavelet-based mixed-resolution coding together with SPT-based disparity compensation to compress the stereo image data. Mixed-resolution coding is a perceptually justified technique in which one eye is presented with a low-resolution image and the other with a high-resolution image. Psychophysical experiments show that stereo image pairs with one high-resolution and one low-resolution image provide almost the same stereo depth as a stereo pair with two high-resolution images. By combining the mixed-resolution coding and SPT-based disparity-compensation techniques, the reference (left) high-resolution image can be compressed by a hierarchical wavelet transform followed by vector quantization and a Huffman encoder. After two levels of wavelet decomposition of the low-resolution right and left images, a subspace projection technique using fixed-block-size disparity compensation estimation is applied. At the decoder, the low-resolution right subimage is estimated using the disparity from the low-resolution left subimage. A full-size reconstruction is obtained by upsampling by a factor of 4 and reconstructing with the synthesis low-pass filter. Finally, experimental results are presented, showing that our scheme achieves a PSNR gain (about 0.92 dB) over current block-based disparity-compensation coding techniques.

  19. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    Science.gov (United States)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco

    2015-09-01

    In this work, we explore the feasibility of porting a particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a Samsung Exynos 5 system-on-chip with an integrated GPU. It is the first such prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.

  20. QR-SSO - Towards a QR-Code based Single Sign-On system

    OpenAIRE

    Mukhopadhyay, Syamantak; Argles, David

    2011-01-01

    Today internet users use a single identity to access multiple services. With single sign-on (SSO), users don't have to remember a separate username/password for each service provider, which helps them browse the web seamlessly. SSO is, however, susceptible to phishing attacks. This paper describes a new anti-phishing SSO model based on mobile QR codes. Apart from preventing phishing attacks, the new model is also safe against man-in-the-middle and replay attacks.

  1. Short range automotive radar based on UWB pseudo-random coding

    OpenAIRE

    2007-01-01

    In this paper, a radar system for short-range automotive applications based on ultra-wideband (UWB) technology is studied. UWB uses very short pulses, so that the spectrum of the transmitted signals may spread over several gigahertz. In order to increase the range resolution of this radar system on the one hand, and to avoid multi-user interference for optimal detectability on the other, we propose to improve the radar performance by using coding techniques. It consists of ...
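
Pseudo-random coding of this kind is typically built from maximal-length LFSR sequences, whose sharp cyclic autocorrelation peak is what gives the radar its range resolution. A minimal sketch follows; the register size and tap positions are illustrative assumptions, not the paper's code.

```python
def lfsr(seed, taps, nbits, length):
    """Fibonacci LFSR: output the LSB, feed the XOR of the tap bits back
    into the MSB. `taps` are 0-indexed bit positions."""
    state, out = seed, []
    for _ in range(length):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out

# 4-bit register with taps (0, 1): a maximal-length (m-)sequence, period 15.
bits = lfsr(seed=1, taps=(0, 1), nbits=4, length=15)
assert sum(bits) == 8                       # balance property: 2**(n-1) ones

# Mapped to +/-1 chips, the cyclic autocorrelation is 15 at zero lag and
# -1 at every other lag -- the sharp peak used for ranging.
chips = [1 if b else -1 for b in bits]
for lag in range(1, 15):
    assert sum(chips[i] * chips[(i + lag) % 15] for i in range(15)) == -1
```

Distinct codes with low cross-correlation are what let several such radars share the band, which is the multi-user aspect the abstract mentions.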

  2. Cyclin D1 in ASM Cells from Asthmatics Is Insensitive to Corticosteroid Inhibition.

    Science.gov (United States)

    Allen, Jodi C; Seidel, Petra; Schlosser, Tobias; Ramsay, Emma E; Ge, Qi; Ammit, Alaina J

    2012-01-01

    Hyperplasia of airway smooth muscle (ASM) is a feature of the remodelled airway in asthmatics. We examined the antiproliferative effectiveness of the corticosteroid dexamethasone on expression of the key regulator of G(1) cell cycle progression, cyclin D1, in ASM cells from nonasthmatics and asthmatics stimulated with the mitogen platelet-derived growth factor BB. While cyclin D1 mRNA and protein expression were repressed in cells from nonasthmatics, cyclin D1 expression in asthmatics was, in contrast, resistant to inhibition by dexamethasone. This was independent of a repressive effect on glucocorticoid receptor translocation. Our results corroborate evidence demonstrating that corticosteroids inhibit mitogen-induced proliferation only in ASM cells from subjects without asthma and suggest that there are corticosteroid-insensitive proliferative pathways in asthmatics.

  3. Corticosteroids reduce IL-6 in ASM cells via up-regulation of MKP-1.

    Science.gov (United States)

    Quante, Timo; Ng, Yee Ching; Ramsay, Emma E; Henness, Sheridan; Allen, Jodi C; Parmentier, Johannes; Ge, Qi; Ammit, Alaina J

    2008-08-01

    The mechanisms by which corticosteroids reduce airway inflammation are not completely understood. Traditionally, corticosteroids were thought to inhibit cytokines exclusively at the transcriptional level. Our recent evidence, obtained in airway smooth muscle (ASM), no longer supports this view. We have found that corticosteroids do not act at the transcriptional level to reduce TNF-alpha-induced IL-6 gene expression. Rather, corticosteroids inhibit TNF-alpha-induced IL-6 secretion by reducing the stability of the IL-6 mRNA transcript. TNF-alpha-induced IL-6 mRNA decays at a significantly faster rate in ASM cells pretreated with the corticosteroid dexamethasone (t(1/2) = 2.4 h), compared to vehicle (t(1/2) = 9.0 h; P < 0.05) in ASM cells.

  4. Cross-domain expression recognition based on sparse coding and transfer learning

    Science.gov (United States)

    Yang, Yong; Zhang, Weiyi; Huang, Yong

    2017-05-01

    Traditional facial expression recognition methods usually assume that the training set and the test set are independent and identically distributed. However, in actual expression recognition applications, this condition is hardly satisfied because of differences in lighting, shading, race, and so on. To solve this problem and improve the performance of expression recognition in practical applications, a novel method based on transfer learning and sparse coding is applied to facial expression recognition. First, a common primitive model, that is, a dictionary, is learnt. Then, based on the idea of transfer learning, the learnt primitive pattern is transferred to facial expressions and the corresponding feature representation is obtained by sparse coding. The experimental results on the CK+, JAFFE, and NVIE databases show that transfer learning based on sparse coding can effectively improve the recognition rate in cross-domain expression recognition tasks and is suitable for practical facial expression recognition applications.

  5. An Efficient Code-Based Threshold Ring Signature Scheme with a Leader-Participant Model

    Directory of Open Access Journals (Sweden)

    Guomin Zhou

    2017-01-01

    Full Text Available Digital signature schemes with additional properties have broad applications, such as protecting the identity of signers by allowing a signer to anonymously sign a message within a group of signers (also known as a ring). Such schemes are typically built on number-theoretic problems; while these problems are still secure at the time of this research, the situation could change with advances in quantum computing. There is a pressing need to design PKC schemes that are secure against quantum attacks. In this paper, we propose a novel code-based threshold ring signature scheme with a leader-participant model. A leader is appointed who chooses some shared parameters for the other signers to participate in the signing process. This leader-participant model enhances performance because every participant, including the leader, can execute the decoding algorithm (as part of the signing process) upon receiving the shared parameters from the leader. The time complexity of our scheme is close to that of Courtois et al.'s (2001) scheme, which is often used as a basis for constructing other code-based signature schemes. Moreover, as a threshold ring signature scheme, our scheme is as efficient as a normal code-based ring signature.

  6. Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach.

    Science.gov (United States)

    Danyali, Habibiollah; Mertins, Alfred

    2011-01-01

    In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS), and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3D-SPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm encodes only the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image archiving and transmission applications.

  7. Coding Model and Mapping Method of Spherical Diamond Discrete Grids Based on Icosahedron

    Directory of Open Access Journals (Sweden)

    LIN Bingxian

    2016-12-01

    Full Text Available A Discrete Global Grid (DGG) provides a fundamental environment for the organization and management of global-scale spatial data. A DGG's encoding scheme, which avoids coordinate transformation between different coordinate reference frames and reduces the complexity of spatial analysis, contributes greatly to the multi-scale expression and unified modeling of spatial data. Compared with other kinds of DGGs, the Diamond Discrete Global Grid (DDGG) based on the icosahedron is beneficial to the integration and expression of spherical spatial data thanks to its better geometric properties. However, its structure is more complicated than that of the DDGG on the octahedron, because the edges of its initial diamonds do not fit the meridians and parallels. New challenges therefore arise in constructing a hierarchical encoding system and a mapping relationship with geographic coordinates. On this issue, this paper presents a DDGG coding system based on the Hilbert curve and designs conversion methods between codes and geographical coordinates. The study results indicate that this Hilbert-curve-based encoding system implicitly expresses spatial scale and location information, carrying the similarity between the DDGG and a planar grid into practice, and balances the efficiency and accuracy of conversion between codes and geographical coordinates, in order to support the modeling, integrated management, and spatial analysis of global massive spatial data.
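
The Hilbert-curve encoding such grid systems rely on can be sketched with the classic iterative index/coordinate conversion for a planar grid. This is the textbook planar routine, shown only to illustrate how a single code embeds both location and locality, not the paper's spherical construction.

```python
def rotate(s, x, y, rx, ry):
    """Rotate/flip a quadrant so the four sub-curves join end to end."""
    if ry == 0:
        if rx == 1:
            x, y = s - 1 - x, s - 1 - y
        x, y = y, x
    return x, y

def xy2d(n, x, y):
    """Map grid cell (x, y) on an n-by-n grid (n a power of two)
    to its distance d along the Hilbert curve."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        x, y = rotate(s, x, y, rx, ry)
        s //= 2
    return d

def d2xy(n, d):
    """Inverse mapping: curve distance d back to cell coordinates."""
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        x, y = rotate(s, x, y, rx, ry)
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

n = 8
cells = [d2xy(n, d) for d in range(n * n)]
assert all(xy2d(n, *c) == d for d, c in enumerate(cells))    # invertible
assert all(abs(x1 - x2) + abs(y1 - y2) == 1                  # locality:
           for (x1, y1), (x2, y2) in zip(cells, cells[1:]))  # neighbours adjoin
```

The adjacency check is the locality property that makes Hilbert codes attractive for spatial indexing: cells close along the curve are close in space.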

  8. Evaluating the structural identifiability of the parameters of the EBPR sub-model in ASM2d by the differential algebra method.

    Science.gov (United States)

    Zhang, Tian; Zhang, Daijun; Li, Zhenliang; Cai, Qing

    2010-05-01

    The calibration of ASMs is a prerequisite for their application to the simulation of a wastewater treatment plant. This work should be based on an evaluation of the structural identifiability of the model parameters. An EBPR sub-model including denitrifying phosphorus removal has been incorporated in ASM2d, yet no report has been presented on the structural identifiability of the parameters in the EBPR sub-model. In this paper, the differential algebra approach was used to address this issue. The results showed that the structural identifiability of parameters in the EBPR sub-model could be improved by increasing the measured variables. The reduction factor eta(NO3) was identifiable when combined data of the aerobic and anoxic processes were assumed. For K(PP), X(PAO), and q(PHA) of the anaerobic process to be uniquely identifiable, one of them needs to be determined by other means. Likewise, if prior information on one of the parameters K(PHA), X(PAO), and q(PP) of the aerobic process is known, all the parameters are identifiable. These results are of interest for the parameter estimation of the EBPR sub-model. The algorithm proposed in the paper is also suitable for other sub-models of ASMs.

  9. Projection based image restoration, super-resolution and error correction codes

    Science.gov (United States)

    Bauer, Karl Gregory

    Super-resolution is the ability of a restoration algorithm to restore meaningful spatial frequency content beyond the diffraction limit of the imaging system. The Gerchberg-Papoulis (GP) algorithm is one of the most celebrated algorithms for super-resolution. The GP algorithm is conceptually simple and demonstrates the importance of using a priori information in the formation of the object estimate. In the first part of this dissertation the continuous GP algorithm is discussed in detail and shown to be a projection-onto-convex-sets algorithm. The discrete GP algorithm is shown to converge in the exactly-, over-, and under-determined cases. A direct formula for the computation of the estimate at the kth iteration and at convergence is given. This analysis of the discrete GP algorithm sets the stage to connect super-resolution to error-correction codes. Reed-Solomon codes are used for error correction in magnetic recording devices, compact disk players, and by NASA for space communications. Reed-Solomon codes have a very simple description when analyzed with the Fourier transform. This signal processing approach to error-correction codes allows the error-correction problem to be compared with the super-resolution problem. The GP algorithm for super-resolution is shown to be equivalent to the correction of errors with a Reed-Solomon code over an erasure channel. The Restoration from Magnitude (RFM) problem seeks to recover a signal from the magnitude of the spectrum. This problem has applications to imaging through a turbulent atmosphere. The turbulent atmosphere causes localized changes in the index of refraction and introduces different phase delays in the data collected. Synthetic aperture radar (SAR) and hyperspectral imaging systems are capable of simultaneously recording multiple images of different polarizations or wavelengths. Each of these images will experience the same turbulent atmosphere and have a common phase distortion. A projection based restoration
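
The GP iteration described here alternates between two constraint sets: the measured low-pass spectrum and the known finite support of the object. A small self-contained sketch (signal, support, and band are invented for illustration) shows the estimate's distance to the true object shrinking monotonically, as guaranteed for alternating projections onto sets that both contain the true signal.

```python
import cmath

N = 32
SUPPORT = set(range(8))                      # object known to live here
KNOWN = {0, 1, 2, 3, 4, 28, 29, 30, 31}      # measured low-pass DFT bins

def dft(x):
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / N) for m in range(N))
            for k in range(N)]

def idft(X):
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * m / N) for k in range(N)) / N
            for m in range(N)]

true = [0.0] * N
for i, v in enumerate([1, 3, -2, 4, 2, -1, 5, 1]):
    true[i] = float(v)
measured = dft(true)                         # only the KNOWN bins are "observed"

est = [0j] * N
errs = []
for _ in range(40):
    X = dft(est)
    for k in KNOWN:                          # Fourier-domain constraint
        X[k] = measured[k]
    est = idft(X)
    est = [est[m] if m in SUPPORT else 0j    # object-domain constraint
           for m in range(N)]
    errs.append(sum(abs(est[m] - true[m]) ** 2 for m in range(N)) ** 0.5)

# Both constraints are affine sets containing `true`, so each projection
# can only move the estimate closer to it (or leave the distance unchanged).
assert all(b <= a + 1e-9 for a, b in zip(errs, errs[1:]))
assert errs[-1] < sum(v * v for v in true) ** 0.5   # strictly better than est = 0
```

Extrapolating the unknown high-frequency bins from the support constraint is exactly the "restore content beyond the measured band" step that the dissertation relates to erasure decoding.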

  10. A lossless multichannel bio-signal compression based on low-complexity joint coding scheme for portable medical devices.

    Science.gov (United States)

    Kim, Dong-Sun; Kwon, Jin-San

    2014-09-18

    Research on real-time health systems has received great attention in recent years, and the need for high-quality multichannel medical-signal compression in personal medical products is increasing. The international MPEG-4 Audio Lossless Coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared-multiplier scheme for portable devices. A joint-coding decision method and a reference-channel selection scheme are modified for the low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross-correlation of residual signals and the compression ratio. The reference-channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to a single-channel-based biosignal lossless data compressor.
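
    The cross-correlation-driven joint-coding decision can be illustrated with a small sketch. The threshold value, the difference-coding rule, and all names below are our assumptions; the paper's actual criterion relates residual correlation to compression ratio inside a hardware encoder.

```python
import numpy as np

def joint_coding_decision(res_a, res_b, threshold=0.7):
    """Enable joint coding only when the two channels' residuals are strongly
    correlated, since only then does coding their difference shrink the
    entropy-coder input (threshold is an illustrative assumption)."""
    a = res_a - res_a.mean()
    b = res_b - res_b.mean()
    rho = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return abs(rho) >= threshold

rng = np.random.default_rng(1)
common = rng.integers(-50, 50, 256)
ch1 = common + rng.integers(-2, 3, 256)   # two strongly correlated channels
ch2 = common + rng.integers(-2, 3, 256)
ch3 = rng.integers(-50, 50, 256)          # an independent channel
use_joint_12 = joint_coding_decision(ch1, ch2)
use_joint_13 = joint_coding_decision(ch1, ch3)
```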

  11. An algorithm for the study of DNA sequence evolution based on the genetic code.

    Science.gov (United States)

    Sirakoulis, G Ch; Karafyllidis, I; Sandaltzopoulos, R; Tsalides, Ph; Thanailakis, A

    2004-11-01

    Recent studies of the quantum-mechanical processes in the DNA molecule have seriously challenged the principle that mutations occur randomly. The proton tunneling mechanism causes tautomeric transitions in base pairs resulting in mutations during DNA replication. The meticulous study of the quantum-mechanical phenomena in DNA may reveal that the process of mutagenesis is not completely random. We are still far away from a complete quantum-mechanical model of DNA sequence mutagenesis because of the complexity of the processes and the complex three-dimensional structure of the molecule. In this paper we have developed a quantum-mechanical description of DNA evolution and, following its outline, we have constructed a classical model for DNA evolution assuming that some aspects of the quantum-mechanical processes have influenced the determination of the genetic code. Conversely, our model assumes that the genetic code provides information about the quantum-mechanical mechanisms of mutagenesis, as the current code is the product of an evolutionary process that tries to minimize the spurious consequences of mutagenesis. Based on this model we develop an algorithm that can be used to study the accumulation of mutations in a DNA sequence. The algorithm has a user-friendly interface and the user can change key parameters in order to study relevant hypotheses.

  12. Development Of Sputtering Models For Fluids-Based Plasma Simulation Codes

    Science.gov (United States)

    Veitzer, Seth; Beckwith, Kristian; Stoltz, Peter

    2015-09-01

    Rf-driven plasma devices such as ion sources and plasma processing devices for many industrial and research applications benefit from detailed numerical modeling. Simulation of these devices using explicit PIC codes is difficult due to inherent separations of time and spatial scales. An alternative is fluid-based codes coupled with electromagnetics, which are applicable to modeling higher-density plasmas in the time domain and can relax time-step requirements. To accurately model plasma-surface processes, such as physical sputtering and secondary electron emission, kinetic particle models have been developed in which particles are emitted from a material surface due to plasma ion bombardment. In fluid models, plasma properties are defined on a cell-by-cell basis and distributions for individual particle properties are assumed. This adds complexity to surface-process modeling, which we describe here. We describe the implementation of sputtering models in the hydrodynamic plasma simulation code USim, as well as methods to improve the accuracy of fluid-based simulation of plasma-surface interactions through better modeling of heat fluxes. This work was performed under the auspices of the Department of Energy, Office of Basic Energy Sciences Award #DE-SC0009585.

  13. Optimal Rate Control in H.264 Video Coding Based on Video Quality Metric

    Directory of Open Access Journals (Sweden)

    R. Karthikeyan

    2014-05-01

    Full Text Available The aim of this research is to find a method for providing better visual quality across the complete video sequence in the H.264 video coding standard. The H.264 standard, with its significantly improved coding efficiency, finds important applications in digital video streaming, storage, and broadcast. To achieve comparable quality across the complete video sequence under constraints on bandwidth availability and buffer fullness, it is important to allocate more bits to frames with high complexity or a scene change and fewer bits to other, less complex frames. A frame-layer bit allocation scheme is proposed based on a perceptual quality metric as an indicator of frame complexity. The proposed model computes the Quality Index ratio (QIr) of the predicted quality index of the current frame to the average quality index of all previous frames in the group of pictures, which is used for bit allocation to the current frame along with bits computed based on buffer availability. The standard deviation of the perceptual quality indicator MOS computed for the proposed model is significantly lower, which means the quality is nearly uniform throughout the full video sequence. Thus the experimental results show that the proposed model effectively handles scene changes and scenes with high motion, yielding better visual quality.
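
    A hedged sketch of the QIr idea follows. The abstract does not give the allocation formula, so the inverse scaling and the buffer cap below are our assumptions, and all names are illustrative.

```python
def allocate_bits(pred_quality, prev_qualities, buffer_bits, base_bits):
    """A frame whose predicted quality index falls below the running average
    (QIr < 1) is treated as more complex and granted more bits, capped by
    buffer availability. The blend rule is an assumption, not the paper's."""
    avg_q = sum(prev_qualities) / len(prev_qualities)
    qir = pred_quality / avg_q            # Quality Index ratio (QIr)
    target = base_bits / qir              # assumed inverse scaling with QIr
    return min(target, 0.9 * buffer_bits) # never outrun the buffer

# a frame predicted below the GOP average gets a larger budget
bits = allocate_bits(0.8, [0.90, 0.92, 0.88],
                     buffer_bits=200_000, base_bits=48_000)
```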

  14. A PEG Construction of LDPC Codes Based on the Betweenness Centrality Metric

    Directory of Open Access Journals (Sweden)

    BHURTAH-SEEWOOSUNGKUR, I.

    2016-05-01

    Full Text Available Progressive Edge Growth (PEG) constructions are usually based on optimizing the distance metric by various methods. In this work, however, the distance metric is replaced by a different one, namely the betweenness centrality metric, which was shown to enhance routing performance in wireless mesh networks. A new type of PEG construction for Low-Density Parity-Check (LDPC) codes is introduced based on the betweenness centrality metric borrowed from social-network terminology, given that the bipartite graph describing the LDPC code is analogous to a network of nodes. The algorithm is very efficient in filling edges on the bipartite graph, adding its connections in an edge-by-edge manner. The smallest graph size the new construction can achieve surpasses that obtained from a modified PEG algorithm, the RandPEG algorithm. To the best of the authors' knowledge, this paper produces the best regular column-weight-two LDPC graphs. In addition, the technique proves competitive in terms of error-correcting performance. When compared to MacKay, PEG, and other recent modified-PEG codes, the algorithm gives better performance at high SNR due to its particular edge and local graph properties.
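
    The abstract's key ingredient is the betweenness centrality metric. Below is a compact Brandes-style computation one could plug into a PEG edge-placement loop; how the paper actually integrates the metric into PEG is not detailed in the abstract, so this sketch only computes the metric on a small graph.

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for betweenness centrality of an unweighted,
    undirected graph given as {node: iterable_of_neighbours}."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}; sigma[s] = 1    # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:                                  # BFS from s
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}                 # dependency accumulation
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: b / 2 for v, b in bc.items()}          # undirected: halve

# tiny example: node 'b' lies on every shortest path between 'a' and 'c'
bc = betweenness({'a': ['b'], 'b': ['a', 'c'], 'c': ['b']})
```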

  15. Development of OCDMA system based on Flexible Cross Correlation (FCC) code with OFDM modulation

    Science.gov (United States)

    Aldhaibani, A. O.; Aljunid, S. A.; Anuar, M. S.; Arief, A. R.; Rashidi, C. B. M.

    2015-03-01

    The performance of OCDMA systems is governed by numerous quantitative parameters such as the data rate, the number of simultaneous users, the transmitter and receiver powers, and the type of code. This paper analyzes the performance of an OCDMA system using the OFDM technique to enhance the channel data rate, save power, and increase the number of users of OSCDMA systems compared with a previous hybrid subcarrier multiplexing/optical spectrum code-division multiplexing (SCM/OSCDM) system. The average received signal-to-noise ratio (SNR) with the nonlinearity of subcarriers is derived. The theoretical results are evaluated in terms of BER, number of users, and amount of power saved. The proposed system gave better performance, saved around -6 dBm of power, and doubled the number of users compared to the SCM/OCDMA system. In addition, it is robust against interference and much more spectrally efficient than the SCM/OCDMA system. The system was designed based on the Flexible Cross Correlation (FCC) code, which offers simpler construction, lower encoder/decoder design complexity, and a flexible in-phase cross-correlation, making it uncomplicated to implement using Fiber Bragg Gratings (FBGs) in OCDMA systems for any number of users and weights. The OCDMA-FCC_OFDM system improves the number of users (cardinality) by 108% compared to the SCM/OCDMA-FCC system.
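
    The FCC construction itself is not given in the abstract, but the property it controls, the in-phase cross-correlation between 0/1 spreading codewords, is easy to state in code. The codewords below are hypothetical placeholders for illustration, not actual FCC codewords.

```python
import numpy as np

def in_phase_cross_correlation(x, y):
    """Number of chip positions where both 0/1 codewords carry a pulse --
    the quantity an OCDMA code design tries to keep small and fixed."""
    return int(np.dot(x, y))

# hypothetical weight-3 codewords; a real FCC construction fixes the
# pairwise cross-correlation for any number of users and weights
codes = np.array([
    [1, 1, 1, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 0],
])
lams = [in_phase_cross_correlation(codes[i], codes[j])
        for i in range(len(codes)) for j in range(i + 1, len(codes))]
```

    Keeping every pairwise value at or below a small constant bounds the multiple-access interference each extra user adds.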

  16. An Allele Real-Coded Quantum Evolutionary Algorithm Based on Hybrid Updating Strategy.

    Science.gov (United States)

    Zhang, Yu-Xian; Qian, Xiao-Yi; Peng, Hui-Deng; Wang, Jian-Hui

    2016-01-01

    For improving convergence rate and preventing prematurity in quantum evolutionary algorithm, an allele real-coded quantum evolutionary algorithm based on hybrid updating strategy is presented. The real variables are coded with probability superposition of allele. A hybrid updating strategy balancing the global search and local search is presented in which the superior allele is defined. On the basis of superior allele and inferior allele, a guided evolutionary process as well as updating allele with variable scale contraction is adopted. And Hε gate is introduced to prevent prematurity. Furthermore, the global convergence of proposed algorithm is proved by Markov chain. Finally, the proposed algorithm is compared with genetic algorithm, quantum evolutionary algorithm, and double chains quantum genetic algorithm in solving continuous optimization problem, and the experimental results verify the advantages on convergence rate and search accuracy.

  17. Low Complexity for Scalable Video Coding Extension of H.264 based on the Complexity of Video

    Directory of Open Access Journals (Sweden)

    Mayada Khairy

    2016-12-01

    Full Text Available Scalable Video Coding (SVC)/H.264 is one type of video compression technique, which brings greater flexibility to video compression and provides efficient video coding based on H.264/AVC, ensuring higher performance through a high compression ratio. SVC/H.264 is computationally complex, as it takes considerable time to compute the best macroblock mode and motion estimation using exhaustive search techniques. This work reduces the processing time by matching the complexity of the video to the method of macroblock mode selection and motion estimation. The goal of this approach is to reduce the encoding time and improve the quality of the video stream; the efficiency of the proposed approach makes it suitable for many applications, such as video conferencing and security applications.

  18. Underwater Acoustic Networks: Channel Models and Network Coding based Lower Bound to Transmission Power for Multicast

    CERN Document Server

    Lucani, Daniel E; Stojanovic, Milica

    2008-01-01

    The goal of this paper is two-fold. First, to establish a tractable model for the underwater acoustic channel useful for network optimization in terms of convexity. Second, to propose a network coding based lower bound for transmission power in underwater acoustic networks, and compare this bound to the performance of several network layer schemes. The underwater acoustic channel is characterized by a path loss that depends strongly on transmission distance and signal frequency. The exact relationship among power, transmission band, distance and capacity for the Gaussian noise scenario is a complicated one. We provide a closed-form approximate model for 1) transmission power and 2) optimal frequency band to use, as functions of distance and capacity. The model is obtained through numerical evaluation of analytical results that take into account physical models of acoustic propagation loss and ambient noise. Network coding is applied to determine a lower bound to transmission power for a multicast scenario, fo...
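
    The distance- and frequency-dependent path loss described above is commonly modeled with Thorp's empirical absorption formula plus a spreading term. The paper derives its own closed-form approximations; the standard textbook model below is only a hedged stand-in for them.

```python
import math

def thorp_absorption_db_per_km(f_khz):
    """Thorp's empirical absorption coefficient alpha(f) in dB/km, f in kHz."""
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2) + 44.0 * f2 / (4100 + f2)
            + 2.75e-4 * f2 + 0.003)

def path_loss_db(d_km, f_khz, k=1.5):
    """10*log10 A(d, f) = k*10*log10(d) + d*alpha(f): spreading plus
    absorption; k is the spreading factor (1 cylindrical, 2 spherical,
    1.5 the usual 'practical' value)."""
    return (k * 10.0 * math.log10(d_km * 1000.0)
            + d_km * thorp_absorption_db_per_km(f_khz))

loss_short = path_loss_db(1.0, 10.0)    # 1 km link at 10 kHz
loss_long = path_loss_db(10.0, 10.0)    # 10 km link: absorption term grows
```

    Because alpha(f) rises steeply with frequency, longer links force the usable band downward, which is exactly the distance-capacity-band coupling the paper turns into a convex optimization model.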

  19. An Allele Real-Coded Quantum Evolutionary Algorithm Based on Hybrid Updating Strategy

    Directory of Open Access Journals (Sweden)

    Yu-Xian Zhang

    2016-01-01

    Full Text Available For improving convergence rate and preventing prematurity in quantum evolutionary algorithm, an allele real-coded quantum evolutionary algorithm based on hybrid updating strategy is presented. The real variables are coded with probability superposition of allele. A hybrid updating strategy balancing the global search and local search is presented in which the superior allele is defined. On the basis of superior allele and inferior allele, a guided evolutionary process as well as updating allele with variable scale contraction is adopted. And Hε gate is introduced to prevent prematurity. Furthermore, the global convergence of proposed algorithm is proved by Markov chain. Finally, the proposed algorithm is compared with genetic algorithm, quantum evolutionary algorithm, and double chains quantum genetic algorithm in solving continuous optimization problem, and the experimental results verify the advantages on convergence rate and search accuracy.

  20. Novel UEP LT Coding Scheme with Feedback Based on Different Degree Distributions

    Directory of Open Access Journals (Sweden)

    Li Ya-Fang

    2016-01-01

    Full Text Available Traditional unequal error protection (UEP) schemes have some limitations, such as poor UEP performance for high-priority data and a serious sacrifice of low-priority data in decoding performance. Based on reasonable applications of different degree distributions in LT codes, this paper puts forward a novel UEP LT coding scheme with a simple feedback to encode these data packets separately. Simulation results show that the proposed scheme effectively protects high-priority data and improves the transmission efficiency of low-priority data from 2.9% to 22.3%. Furthermore, the scheme is well suited to multicast and broadcast environments, since only a simple feedback is introduced.
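
    The "different degree distributions" idea rests on LT-code degree sampling. As background, the standard robust soliton distribution (Luby's construction, not the paper's modified UEP distributions) can be built and sampled like this; the parameter values are conventional defaults, not the paper's.

```python
import math, random

def robust_soliton(k, c=0.05, delta=0.5):
    """Robust soliton degree distribution mu(1..k) for an LT code over k
    input symbols. A UEP scheme would assign different distributions (or
    different selection windows) to the priority classes."""
    R = c * math.log(k / delta) * math.sqrt(k)
    rho = [0.0] * (k + 1)
    rho[1] = 1.0 / k
    for i in range(2, k + 1):
        rho[i] = 1.0 / (i * (i - 1))          # ideal soliton part
    tau = [0.0] * (k + 1)
    pivot = int(round(k / R))
    for i in range(1, pivot):
        tau[i] = R / (i * k)                  # boost low degrees
    if 1 <= pivot <= k:
        tau[pivot] = R * math.log(R / delta) / k
    Z = sum(rho) + sum(tau)                   # normalization constant
    return [(r + t) / Z for r, t in zip(rho, tau)]

def sample_degree(mu, rng=random):
    """Draw an encoding-symbol degree by inverting the CDF of mu."""
    u, acc = rng.random(), 0.0
    for d in range(1, len(mu)):
        acc += mu[d]
        if u <= acc:
            return d
    return len(mu) - 1

mu = robust_soliton(1000)
```

    Each encoded packet then XORs `sample_degree(mu)` randomly chosen input symbols; skewing which symbols the low-degree packets may pick is one common way to obtain UEP behaviour.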