Turgeon, Jennifer L.; Minana, Molly A.; Hackney, Patricia; Pilch, Martin M.
The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in the US Department of Energy/National Nuclear Security Administration (DOE/NNSA) Quality Criteria, Revision 10 (QC-1) as 'conformance to customer requirements and expectations'. This quality plan defines the SNL ASC Program software quality engineering (SQE) practices and provides a mapping of these practices to the SNL Corporate Process Requirement (CPR) 001.3.6, 'Corporate Software Engineering Excellence'. This plan also identifies ASC management's and the software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals. This SNL ASC Software Quality Plan establishes the signatories' commitments to improving software products by applying cost-effective SQE practices. This plan enumerates the SQE practices that comprise the development of SNL ASC's software products and explains the project teams' opportunities for tailoring and implementing the practices.
Willenbring, James Michael; Heroux, Michael Allen
The Trilinos Project is an effort to develop algorithms and enabling technologies within an object-oriented software framework for the solution of large-scale, complex multi-physics engineering and scientific problems. A new software capability is introduced into Trilinos as a package. A Trilinos package is an integral unit and, although there are exceptions such as utility packages, each package is typically developed by a small team of experts in a particular algorithms area such as algebraic preconditioners, nonlinear solvers, etc. The Trilinos Developers SQE Guide is a resource for Trilinos package developers who are working under Advanced Simulation and Computing (ASC) and are therefore subject to the ASC Software Quality Engineering Practices as described in the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan: ASC Software Quality Engineering Practices Version 3.0 document. The Trilinos Developer Policies webpage contains detailed information that is essential for all Trilinos developers. The Trilinos Software Lifecycle Model defines the default lifecycle model for Trilinos packages and provides a context for many of the practices listed in this document.
Ellis, Molly A.; Heaphy, Robert; Sturtevant, Judith E.; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter
The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR 1.3.2 and 1.3.6 and to a Department of Energy document, 'ASCI Software Quality Engineering: Goals, Principles, and Guidelines'. This document also identifies ASC management's and software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals.
Sturtevant, Judith E.; Heaphy, Robert; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Minana, Molly A.; Hackney, Patricia; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter
The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR 1.3.2 and 1.3.6 and to a Department of Energy document, 'ASCI Software Quality Engineering: Goals, Principles, and Guidelines'. This document also identifies ASC management's and software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals.
Heaphy, Robert; Sturtevant, Judith E.; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Minana, Molly A.; Hackney, Patricia; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter
The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR001.3.2 and CPR001.3.6 and to a Department of Energy document, 'ASCI Software Quality Engineering: Goals, Principles, and Guidelines'. This document also identifies ASC management and software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals.
Minana, Molly A.; Sturtevant, Judith E.; Heaphy, Robert; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter
The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in DOE/AL Quality Criteria (QC-1) as conformance to customer requirements and expectations. This quality plan defines the ASC program software quality practices and provides mappings of these practices to the SNL Corporate Process Requirements (CPR 1.3.2 and CPR 1.3.6) and the Department of Energy (DOE) document, ASCI Software Quality Engineering: Goals, Principles, and Guidelines (GP&G). This quality plan identifies ASC management's and software project teams' responsibilities for cost-effective software engineering quality practices. The SNL ASC Software Quality Plan establishes the signatories' commitment to improving software products by applying cost-effective software engineering quality practices. This document explains the project teams' opportunities for tailoring and implementing the practices; enumerates the practices that compose the development of SNL ASC's software products; and includes a sample assessment checklist that was developed based upon the practices in this document.
Turgeon, Jennifer; Minana, Molly A.
This document provides a guide to the process of conducting software appraisals under the Sandia National Laboratories (SNL) ASC Program. The goal of this document is to describe a common methodology for planning, conducting, and reporting results of software appraisals, thereby enabling: development of an objective baseline on implementation of the software quality engineering (SQE) practices identified in the ASC Software Quality Plan across the ASC Program; feedback from project teams on SQE opportunities for improvement; identification of strengths and opportunities for improvement for individual project teams; and guidance to the ASC Program on the focus of future SQE activities. Document contents include process descriptions, templates to promote consistent conduct of appraisals, and an explanation of the relationship of this procedure to the SNL ASC software program.
Clough, A. J.
The paper describes a software inspection process that can be used to evaluate the quality of software components. Quality criteria, process application, independent testing of the process and proposed associated tool support are covered. Early results indicate that this technique is well suited for assessing software component quality in a standardized fashion. With automated machine assistance to facilitate both the evaluation and selection of software components, such a technique should promote effective reuse of software components.
The author defines some criteria for the evaluation of software quality assurance elements for applicability to the regulation of the nuclear industry. The author then analyses a number of software quality assurance (SQA) standards. The major extracted SQA elements are then discussed, and finally specific software quality assurance recommendations are made for the nuclear industry.
Jones, C. [Software Productivity Research, Inc., Burlington, MA (United States)]
For many years, software quality assurance lagged behind hardware quality assurance in terms of methods, metrics, and successful results. New approaches such as Quality Function Deployment (QFD), the ISO 9000-9004 standards, the SEI maturity levels, and Total Quality Management (TQM) are starting to attract wide attention, and in some cases to bring software quality levels up to a parity with manufacturing quality levels. Since software is on the critical path for many engineered products, and for internal business systems as well, the new approaches are starting to affect global competition and attract widespread international interest. It can be hypothesized that success in mastering software quality will be a key strategy for dominating global software markets in the 21st century.
Software quality assessment should monitor and guide the evolution of a system based on quality measurements. This continuous process should ideally involve multiple stakeholders and provide adequate information for each of them to use. We want to support an effective selection of quality measurements based on the type of software and the individual information needs of the involved stakeholders. We propose an approach that brings together quality measurements and individual information needs for ...
Quality is not a fixed or universal property of software; it depends on the context and goals of its stakeholders. Hence, when you want to develop a high-quality software system, the first step must be a clear and precise specification of quality. Yet even if you get it right and complete, you can be sure that it will become invalid over time. So the only solution is continuous quality control: the steady and explicit evaluation of a product's properties with respect to its updated quality goals. This book guides you in setting up and running continuous quality control in your environment. Star
There are two important reasons for Software Quality Assurance (SQA) at Allied-Signal Inc., Kansas City Division (KCD): First, the benefits from SQA make good business sense. Second, the Department of Energy has requested SQA. This handbook is one of the first steps in a plant-wide implementation of Software Quality Assurance at KCD. The handbook has two main purposes. The first is to provide information that you will need to perform software quality assurance activities. The second is to provide a common thread to unify the approach to SQA at KCD.
Huo, Ming; Verner, June; Zhu, Liming; Ali Babar, Muhammad
Agile methods may produce software faster but we also need to know how they meet our quality requirements. In this paper we compare the waterfall model with agile processes to show how agile methods achieve software quality under time pressure and in an unstable requirements environment, i.e. we analyze agile software quality assurance. We present a detailed waterfall model showing its software quality support processes. We then show the quality pra...
DINESH KUMAR; KAPIL KUMAR
Software quality models are a well-accepted means to support quality management of software systems. Over the last 30 years, a multitude of quality models have been proposed and applied with varying degrees of success. Despite successes and standardization efforts, quality models are still being criticized, as their application in practice exhibits various problems. To some extent, this criticism is caused by an unclear definition of what quality models with respect to their intended mode of ...
This thesis is about software development practices, including the project management aspects, in the context of global software outsourcing. It was focused on the issues of achieving quality product namely here: software. It is built on the premise that the global context, in which the stakeholders are geographically separated by national boundaries, poses unique and inherent challenges derived from separation of place, time and culture.
Jakimoski, Goce; Khajuria, Samant
Unfortunately, the use of a block cipher as a building block limits the performance of authenticated encryption schemes to at most one message block per block cipher evaluation. In this paper, we propose the authenticated encryption scheme ASC-1 (Authenticating Stream Cipher One). Similarly to LEX, ASC-1 uses leak extraction from different AES rounds to compute the key material that is XOR-ed with the message to compute the ciphertext. Unlike LEX, ASC-1 operates in a CFB fashion to compute an authentication tag over the encrypted message. We argue that ASC-1 is secure by reducing its (IND-CCA, INT...
The growth in cost and importance of software to NASA has caused NASA to address the improvement of software development across the agency. One of the products of this program is a series of guidebooks that define a NASA concept of the assurance processes that are used in software development. The Software Assurance Guidebook, NASA-GB-A201, issued in September 1989, provides an overall picture of the NASA concepts and practices in software assurance. Second-level guidebooks focus on specific activities that fall within the software assurance discipline, and provide more detailed information for the manager and/or practitioner. This is the second-level Software Quality Assurance Audits Guidebook that describes software quality assurance audits in a way that is compatible with practices at NASA Centers.
The paper concerns quality assurance in computing software as applied to the nuclear industry. The emergence of Software Quality Management in systems procurement over the last decade is discussed, as are some of the underlying reasons for its important role in modern procurement practice. Some of the typical aspects of control are highlighted and discussed. (author)
Software quality stems from two distinctive, but associated, topics in software engineering: software functional quality and software structural quality. Software Quality Engineering studies the tenets of both of these notions, which focus on the efficiency and value of a design, respectively. The text addresses engineering quality on both the application and system levels with attention to Information Systems and Embedded Systems as well as recent developments. Targeted at graduate engineering students and software quality specialists, the book analyzes the relationship between functionality
Lawler, R. W.
A conceptual framework for treating software quality from a total system perspective is developed. Examples are given to show how system quality objectives may be allocated to hardware and software; to illustrate trades among quality factors, both hardware and software, to achieve system performance objectives; and to illustrate the impact of certain design choices on software functionality.
Tariq, Muhammad Tahir and Aleem
Quality and security of software are key factors in software development. This thesis deals with the quality of open source software (OSS for short); various questions related to open source and closed source software are discussed in the thesis proposal. Open source software is a process by which we can produce cheap, high-quality software whose source code can be reused in further software development. Closed source software is more expensive than open source software ...
The key factors which have led to a growing trend of outsourcing are: lack of expert labor in some portions of the business process; availability of cheaper labor, whilst not compromising on the quality of output; and the ability and feasibility to concentrate on other crucial business processes. These factors have specifically contributed to most of the outsourced partners across different locations in the world. Expertise in communication capabilities, technical expertise and favorable financial pa...
Latika Kharb; Dr. Vijay Singh Rathore
For an effective test measurement, a software tester requires testing metrics that can measure the quality and productivity of the software development process along with increasing its reusability, correctness and maintainability. Until now, the understanding of measuring software quality is not yet sophisticated enough and is still far from being standardized; in order to assess software quality, an appropriate set of software metrics needs to be identified that could express th...
Taipale, T. (Taneli)
Today's agile software development can be a complicated process, especially when dealing with a large-scale project with demands for tight communication. The tools used in software development, while aiding the process itself, can also offer meaningful statistics. With the aid of machine learning, these statistics can be used for predicting the behavior patterns of the development process. The starting point of this thesis is a software project developed to be a part of a large telecommun...
Although software quality can't be quantified, the tools and techniques to achieve high quality are available. As management stresses the need for definable software quality programs from vendors and subcontractors and provides the incentives for these programs, the quality of software will improve. EPRI could provide the leadership in establishing guidelines for a balanced software quality program and, through workshops, provide training to utility staff and management on the methods for evaluating the characteristics of quality software. With the more complex systems discussed at this workshop, and particularly with the trend toward the use of artificial intelligence, the importance of quality software will grow dramatically.
For an effective test measurement, a software tester requires testing metrics that can measure the quality and productivity of the software development process along with increasing its reusability, correctness and maintainability. Until now, the understanding of measuring software quality is not yet sophisticated enough and is still far from being standardized; in order to assess software quality, an appropriate set of software metrics needs to be identified that can express these quality attributes. Our research objective in this paper is to construct and define a set of easy-to-measure software metrics for testing, to be used as early indicators of external measures of quality. So, we've emphasized the fact that reliable software development with respect to quality can be well achieved by using our set of testing metrics, and for that we've given the practical results of evaluation.
In modeling the cost and quality of software systems, the relationship between cost and quality must be considered. This explicit relationship is dictated by the criticality of the software being developed. The balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and the developers with respect to the processes being employed.
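The dependence of expected defect discovery on size and criticality described above can be sketched as a toy model. The linear form, the function name, and all coefficients below are illustrative assumptions, not the models the abstract refers to:

```python
# Toy sketch of a software quality model: expected defects as a function
# of size and criticality. All coefficients here are hypothetical.

def expected_errors(ksloc, criticality_factor, base_rate=5.0):
    """Estimate total expected defects for a project.

    ksloc: size in thousands of source lines of code.
    criticality_factor: multiplier below 1.0 for high-criticality work,
        reflecting the more rigorous processes such projects use.
    base_rate: assumed defects per KSLOC for a nominal project.
    """
    return ksloc * base_rate * criticality_factor

# A 100 KSLOC safety-critical system (factor 0.4) vs. a nominal one (1.0):
critical = expected_errors(100, 0.4)   # 200.0 expected defects
nominal = expected_errors(100, 1.0)    # 500.0 expected defects
print(critical, nominal)
```

Real quality models are calibrated from historical project data; the point of the sketch is only that criticality and size enter the estimate as separate factors.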
Development of software is a scientific and economic problem, particularly the design of complex systems, which require evolving methods and approaches. Agent technology is currently one of the most active and vibrant areas of IT research and development. Object-oriented Software Engineering (OOSE) has become an active area of research in recent years. In this paper, we review the framework of software quality management using object-oriented methodology concepts for software agents. The software specification acts as a bridge between customers, architects, software developers and testers. Using the object-oriented concept of a software agent and its standard may offer benefits even if the system is implemented without an object-based language or framework. We propose and discuss a software agent framework, specifically to support software quality management. Although still in its initial phases, research indicates some promise in enabling software developers to meet market expectations and produce projects on time, within budget and to users' satisfaction. However, the software quality management environment has also changed and is continuously evolving. Currently software projects are developed and deployed in distributed, pervasive and collaborative environments, and their quality should be managed by applying the best applicable standard. From the point of view of software engineering, this framework and its standards apply to developing software projects. We discuss the standard and benefits that can be gained by using object-oriented concepts, and where the concepts require further development.
Parnas, David Lorge
The Professor David Lorge Parnas Inaugural Lecture, discussing why software quality research is important, what topics he will be studying and how research at the Software Quality Research Laboratory (SQRL) will be conducted.
Berg, Jonathan Charles [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
This document describes the software development practice areas and processes which contribute to the ability of SWiFT software developers to provide quality software. These processes are designed to satisfy the requirements set forth by the Sandia Software Quality Assurance Program (SSQAP). APPROVALS: the SWiFT Software Quality Assurance Plan (SAND2016-0765) was approved by Department Manager Dave Minster (6121), SWiFT Site Lead Jonathan White (6121), and SWiFT Controls Engineer Jonathan Berg (6121). CHANGE HISTORY: Issue A, 2016/01/27, Jon Berg (06121), initial release of the SWiFT Software Quality Assurance Plan.
Prakriti Trivedi; Rajeev Kumar
Today most applications are developed using existing libraries, code, open sources, etc. Code that is accessed in a program as a ready-made unit is a software component; JavaBeans in Java and ActiveX controls in .NET are examples of software components. These components are ready-to-use programming code or controls that accelerate code development. A component-based software system embodies the concept of software reusability. While using these components the main question that arises is whether ...
Architecture processes are considerably new parts of organisations’ processes. These processes have the responsibility to aim at high quality and financially successful architectures. However, the activities which promote this aim are not clearly defined yet. This study reviews literature and practitioners’ experiences on quality management activities that could be suggested to promote the achievement of high quality software architectures and a good quality software a...
Software quality is an important issue in the development of successful software applications. Many methods have been applied to improve software quality. Refactoring is one of those methods. But the effect of refactoring in general on all the software quality attributes is ambiguous. The goal of this paper is to find out the effect of various refactoring methods on quality attributes and to classify them based on their measurable effect on a particular software quality attribute. The paper focuses on studying the Reusability, Complexity, Maintainability, Testability, Adaptability, Understandability, Fault Proneness, Stability and Completeness attributes of a software system. This, in turn, will assist the developer in determining whether to apply a certain refactoring method to improve a desirable quality attribute.
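As one concrete illustration of the kind of trade-off such a classification captures, consider an Extract Method refactoring (a hypothetical example, not taken from the paper): it improves the testability and reusability of the extracted piece while adding one more unit to understand.

```python
# Before: one function mixing parsing and validation (higher complexity,
# validation cannot be tested or reused on its own).
def process_before(record):
    parts = record.split(",")
    if len(parts) != 2 or not parts[1].isdigit():
        raise ValueError("bad record")
    return parts[0], int(parts[1])

# After Extract Method: validation is separately testable and reusable.
def validate(parts):
    if len(parts) != 2 or not parts[1].isdigit():
        raise ValueError("bad record")

def process_after(record):
    parts = record.split(",")
    validate(parts)
    return parts[0], int(parts[1])

# Behavior is preserved; only the structure changed.
assert process_before("a,1") == process_after("a,1") == ("a", 1)
```

Whether the extra indirection is worth it is exactly the attribute-by-attribute question the paper's classification is meant to answer.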
Application software is defined as safety-critical if a fault in the software could prevent the system components from performing their nuclear-safety functions. Therefore, for nuclear-safety systems, the AREVA TELEPERM XS (TXS) system is classified 1E, as defined in the Inst. of Electrical and Electronics Engineers (IEEE) Std 603-1998. The application software is classified as Software Integrity Level (SIL)-4, as defined in IEEE Std 7-4.3.2-2003. The AREVA NP Inc. Software Program Manual (SPM) describes the measures taken to ensure that the TELEPERM XS application software attains a level of quality commensurate with its importance to safety. The manual also describes how TELEPERM XS correctly performs the required safety functions and conforms to established technical and documentation requirements, conventions, rules, and standards. The program manual covers the requirements definition, detailed design, integration, and test phases for the TELEPERM XS application software, and supporting software created by AREVA NP Inc. The SPM is required for all safety-related TELEPERM XS system applications. The program comprises several basic plans and practices: 1. A Software Quality-Assurance Plan (SQAP) that describes the processes necessary to ensure that the software attains a level of quality commensurate with its importance to safety function. 2. A Software Safety Plan (SSP) that identifies the process to reasonably ensure that safety-critical software performs as intended during all abnormal conditions and events, and does not introduce any new hazards that could jeopardize the health and safety of the public. 3. A Software Verification and Validation (V and V) Plan that describes the method of ensuring the software is in accordance with the requirements. 4. A Software Configuration Management Plan (SCMP) that describes the method of maintaining the software in an identifiable state at all times. 5. A Software Operations and Maintenance Plan (SO and MP) that
The 7 Qualities of Highly Secure Software provides a framework for designing, developing, and deploying hacker-resilient software. It uses engaging anecdotes and analogies-ranging from Aesop's fables, athletics, architecture, biology, nursery rhymes, and video games-to illustrate the qualities that are essential for the development of highly secure software. Each chapter details one of the seven qualities that can make your software highly secure and less susceptible to hacker threats. Leveraging real-world experiences and examples, the book: Explains complex security concepts in language that
The US Department of Energy has undertaken an initiative to improve the quality of software used to design and operate its nuclear facilities across the United States. One aspect of this initiative is to revise or create new directives and guides associated with quality practices for the safety software in its nuclear facilities. Safety software includes the safety structures, systems, and components software and firmware, support software and design and analysis software used to ensure the safety of the facility. DOE nuclear facilities are unique when compared to commercial nuclear or other industrial activities in terms of the types and quantities of hazards that must be controlled to protect workers, the public and the environment. Because of these differences, DOE must develop an approach to software quality assurance that ensures appropriate risk mitigation by developing a framework of requirements that accomplishes the following goals: (1) ensures that the software processes developed to address nuclear safety in design, operation, construction and maintenance of its facilities are safe; (2) considers the larger system that uses the software and its impacts; (3) ensures that software failures do not create unsafe conditions. Software designers for nuclear systems and processes must reduce risks in software applications by incorporating processes that recognize, detect, and mitigate software failure in safety-related systems. They must also ensure that fail-safe modes and component testing are incorporated into software design. For nuclear facilities, the consideration of risk is not necessarily sufficient to ensure safety. Systematic evaluation, independent verification and system safety analysis must be considered for software design, implementation, and operation. The software industry primarily uses risk analysis to determine the appropriate level of rigor applied to software practices. This risk-based approach distinguishes safety
Scientific Software Engineering Group, CIC-12
The Software Quality Forum is a triennial conference held by the Software Quality Assurance Subcommittee for the Department of Energy's Quality Managers. The forum centers on key issues, information, and technology important in software development for the Nuclear Weapons Complex. This year it will be opened up to include local information technology companies and software vendors presenting their solutions, ideas, and lessons learned. The Software Quality Forum 2000 will take on a more hands-on, instructional tone than those previously held. There will be an emphasis on providing information, tools, and resources to assist developers in their goal of producing next generation software.
Today’s software development processes are complex. A lot of interaction occurs between developers, the tools they use, and even automatically between different tools. Examples of those interactions are entering a new requirement into the bug tracking system, committing new source code to the repository or automatic code style check during a check-in. To trace and understand the full process is hard. To get insight into these processes and to increase the quality of the resulting software re...
In this thesis we look at the problem of software quality assurance, especially in custom solutions projects. First we look at what quality assurance is, how we measure it and, in particular, why we do it. Afterwards we go through the main principles that apply when dealing with quality assurance in general. Since quality assurance is an extensive topic, we take a more detailed look at the part of quality assurance that is most used, that is, testing of software. Because we are talking about softwa...
Riis, Troels; Jørgensen, John Leif
This document describes a test of the implementation of the ASC orbit model for the Champ satellite.
Villalba de Benito, María Teresa; Fernández Sanz, Luis; Juan J. Cuadrado Gallego; José J. Martínez
Increasing demand for commercial security products requires improved methods for evaluating their software quality. Existing standards offer general frameworks, but more specific models are needed which reflect the perception of experts and customers as well as the particular characteristics of this type of product. This article presents a method for generating domain-oriented software quality models for specific types of applications. It is applied to the generation of a model for s...
This paper contains two sections relating to software quality issues. First, the various definitions of software quality are examined and an alternative suggested. It continues with a review of the quality model defined by McCall, Richards and Walters in 1977 and mentions the later model of Boehm published in 1978. Each of McCall's quality factors is reviewed, with comments on the extent to which they still apply in the late 1990s. The factors include integrity, reliability, usability, ...
S., Poornima U.; V., Suma
The object-oriented approach is one of the most popular software development approaches for managing complex systems with massive sets of requirements. Unlike the procedural approach, it captures the requirements as a set of data rather than services. Further, a class is considered the key unit of the solution domain, with data and services wrapped together, representing the architectural design of a basic module. Thus, system complexity is directly related to the number of modules and the degree of inter...
Rasool, Muhammad Ahsan; Jamal, Abdul
The war between malware and antimalware software started two decades ago and has adopted modern techniques with the evolution of technological development in the field of information technology. This thesis analyzes the performance of freeware antivirus programs available in the market. Several tests were performed to analyze performance with respect to the core responsibilities of this software: to scan for and detect viruses, and to prevent and eradicate them. Al...
In this paper, we present the Program Analysis Framework (PAF) to analyze the software architecture and software modularity of large software packages using techniques from Aspect Mining. The basic idea of PAF is to first record the call-relationship information among the important elements, and then use different analysis algorithms on this recorded information to find the crosscutting concerns which could destroy the modularity of the software. We evaluate our framework by analyzing DATE, the ALICE Data-Acquisition (DAQ) software, which handles the data flow from the detector electronics to the permanent storage archiving. The analysis results prove the effectiveness and efficiency of our framework. PAF has pinpointed a number of possible optimizations which could be applied and help maximize the software quality. PAF can also be used for the analysis of other projects written in the C language.
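The call-relationship analysis described above can be sketched with a simple fan-in heuristic, a standard aspect-mining technique: functions invoked from many distinct callers are candidates for crosscutting concerns (logging is the classic case). The function names and threshold below are illustrative assumptions, not part of PAF itself.

```python
from collections import defaultdict

# Hypothetical call-relationship records: (caller, callee) pairs, the kind
# of information a framework like PAF records while scanning a C code base.
calls = [
    ("readEvent", "logMessage"), ("writeEvent", "logMessage"),
    ("openStream", "logMessage"), ("closeStream", "logMessage"),
    ("readEvent", "checkCRC"), ("writeEvent", "flushBuffer"),
]

def fan_in(calls):
    """Count how many distinct callers each callee has."""
    callers = defaultdict(set)
    for caller, callee in calls:
        callers[callee].add(caller)
    return {callee: len(cs) for callee, cs in callers.items()}

def crosscutting_candidates(calls, threshold=3):
    """Functions called from many places are candidates for crosscutting
    concerns, i.e. behavior scattered across modules."""
    return [f for f, n in fan_in(calls).items() if n >= threshold]

print(crosscutting_candidates(calls))  # logMessage has 4 distinct callers
```

Real aspect-mining tools combine fan-in with filtering (e.g. excluding accessors), but the core signal is this high fan-in count.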
Kurt G. Vedros
The software quality assurance oversight consists of updating and maintaining revision control of the SAPHIRE 8 quality assurance program documentation and of monitoring revision control of the SAPHIRE 8 source code. This report summarizes the oversight efforts through description of the revision control system (RCS) setup, operation and contents. Documents maintained under revision control include the Acceptance Test Plan (ATP), Configuration Management Plan, Quality Assurance Plan, Software Project Plan, Requirements Traceability Matrix (RTM), System Test Plan, SDP Interface Training Manual, and the SAPHIRE 8, 'New Features and Capabilities Overview'.
Because of the large amount of complex data in environmental projects, particularly large decontamination and decommissioning projects, the quality of the data has a profound impact on the success and cost of the mission. In every phase of the life cycle of the project, including regulatory intervention and legal proceedings, maintaining the quality of data and presenting data in a timely and meaningful manner are critical. In this paper, a systemic view of data quality management from a software engineering perspective is presented. A method of evaluation evolves from this view. This method complements the principles of the data quality objective. When graded adequately, the method of evaluation establishes a paradigm for ensuring data quality for new and renewed projects. This paper also demonstrates that incorporating good practices of software engineering into the data management process leads to continuous improvement of data quality.
Smitha Chowdary Ch; Satya Prasad R; Sobhana K
The ability of software in satisfying its functional requirements successfully is measured as software reliability, making it one of the most important characteristics of software quality. Improving software processes employed during the software development life cycle is essential to produce reliable software systems of assured quality. Software Reliability Growth Models (SRGMs) aid software engineers and managers in tracking and measuring the growth in reliability as software is being devel...
Since the advent of electronic computers, people have struggled to develop effective software engineering processes. While these processes are similar to those used by hardware engineers, the software industry has earned a reputation for late delivery of inadequate products. Most software managers are looking for ways to deliver quality products faster, or with fewer resources. The development time and product outcome of any software project can be influenced by four variables: the product characteristics, the people involved, the processes they use, and the underlying technology. In order to have an impact on the productivity of a software development effort, the manager must focus on and balance these areas. This paper will discuss effective ways to improve productivity by using this approach.
One of the major problems with which the quality control program of an environmental measurements laboratory is confronted is the evaluation of the performance of software packages for the analysis of gamma-ray spectra. A program of tests for evaluating the performance of the software package (SPECTRAN-F, Canberra Inc.) used by our laboratory is being carried out. In this first paper, the results of a preliminary study concerning the evaluation of the performance of the doublet analysis routine are presented.
Burton, Stu; Swanson, Kent; Leonard, Lisa
Celite Corporation and Andersen Consulting have developed an advanced approach to traditional software development called the Application Software Factory (ASF). The approach is an integration of technology and total quality management techniques that includes the use of an expert system to guide module design and perform module programming. The expert system component is called the knowledge-based design assistant, and its inclusion in the ASF methodology has significantly reduced modul...
Identifying and specifying user requirements is an integral part of information systems design and is critical for project success. More than 50% of the reasons for project failure presented in the CHAOS report and the study of a US Air Force project by Sheldon et al. are related to requirements. The goal of this paper is to assess the relevant user and software requirements which are the basis for electronic quality management system selection in medical device companies. This paper describes the structured evaluation and selection process of different quality management software tools that shall support business processes. The purpose of this paper is to help small to medium size medical device companies to choose the right quality management software which meets the company's business needs.
Quality is an important factor in the software industry. Software quality depends upon customer satisfaction, which can be achieved through applying standards. In this era, achieving quality software is very important because of high customer demands. Developed countries are excelling in the software industry and improving day by day. Meanwhile, developing countries like Pakistan are struggling with software quality and cannot maintain their reputation in the international market. Software quality is lacking for many reasons. This paper addresses the problems behind the lack of interest in improving software quality among higher authorities and software assurance teams. We also provide solutions to the addressed problems.
Software applications play an increasingly relevant role in nuclear power plant systems. This is particularly true of software important to safety used in both: calculations for the design, testing and analysis of nuclear reactor systems (design, engineering and analysis software); and monitoring, control and safety functions as an integral part of the reactor systems (monitoring, control and safety system software). Computer technology is advancing at a fast pace, offering new possibilities in nuclear reactor design, construction, commissioning, operation, maintenance and decommissioning. These advances also present new issues which must be considered both by the utility and by the regulatory organization. Refurbishment of ageing instrumentation and control systems in nuclear power plants and new safety related application areas have emerged, with direct (e.g. interfaces with safety systems) and indirect (e.g. operator intervention) implications for safety. Currently, there exist several international standards and guides on quality assurance for software important to safety. However, none of the existing documents provides comprehensive guidance to the developer, manager and regulator during all phases of the software life-cycle. The present publication was developed taking into account the large amount of available documentation, the rapid development of software systems and the need for updated guidance on how to do it. It provides information and guidance for defining and implementing quality assurance programmes covering the entire life-cycle of software important to safety. Expected users are managers, performers and assessors from nuclear utilities, regulatory bodies, suppliers and technical support organizations involved with the development and use of software applied in nuclear power plants
The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality standard to identify a set of characteristics that n...
MCNP is a computer code that models the interaction of radiation with matter. MCNP is developed and maintained by the Transport Methods Group (XTM) of the Los Alamos National Laboratory (LANL). This plan describes the Software Quality Assurance (SQA) program applied to the code. The SQA program is consistent with the requirements of IEEE-730.1 and the guiding principles of ISO 900
Macdonald, John M [Los Alamos National Laboratory]; Lloyd, Jane A [Los Alamos National Laboratory]; Turner, Cameron J [Colorado School of Mines]
Ideally, quality is designed into software, just as quality is designed into hardware. However, when dealing with legacy systems, demonstrating that the software meets required quality standards may be difficult to achieve. As the need to demonstrate the quality of existing software was recognized at Los Alamos National Laboratory (LANL), an effort was initiated to uncover and demonstrate that legacy software met the required quality standards. This effort led to the development of a reverse engineering approach referred to as software archaeology. This paper documents the software archaeology approaches used at LANL to document legacy software systems. A case study for the Robotic Integrated Packaging System (RIPS) software is included.
Applications from the High Energy Physics scientific community are constantly growing and implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, so as to only deliver software with a risk lower than an agreed threshold. In this article we evaluate two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and finally we conclude by suggesting directions for further studies.
Physicists use self-written software as a tool to fulfill their tasks, and often the developed software is used for several years or even decades. If a software product lives for a long time, it has to be changed and adapted to external influences. This implies that the source code has to be read, understood and modified. The same applies to the software of the Optics Measurements and Corrections (OMC) team at CERN. Their task is to track, analyze and correct the beams in the LHC and other accelerators. To solve this task, they rely on a self-written software base with more than 150,000 physical lines of code. The base is subject to continuous changes as well. Their software does its job and is effective, but regrettably does not run efficiently, because some parts of the source code are in bad shape and have low quality. The implementation could be faster and more memory efficient. In addition it is difficult to read and understand the code. Source code files and functions are too big and identifiers do not rev...
Software quality is one of the primary determinants of project management success. Stakeholders involved in software development widely agree that quality is important (Barney and Wohlin 2009). However, they may differ on what constitutes software quality and on which of its attributes are more important than others. Although software quality…
Software quality is often considered in terms of the contractual requirements between the supplier and acquirer as described in ISO/IEC 12207 and focuses on software life cycle processes. However, beyond these processes acquirer organisations need to address other issues like complying with new legislation, securing return on investment, and achieving competitive support from their new software investments. Supplier organisations also have issues that they must manage. This paper addresses al...
Kurt G. Vedros
The purpose of this review of software metrics is to examine the quality of the metrics gathered in the 2010 IV&V and to set an outline for results of updated metrics runs to be performed. We find from the review that the maintenance of the accepted quality standards presented in the SAPHIRE 8 initial Independent Verification and Validation (IV&V) of April 2010 is most easily achieved by continuing to utilize the tools used in that effort while adding a metric of bug tracking and resolution. Recommendations from the final IV&V were to continue periodic measurable metrics, such as McCabe's complexity measure, to ensure quality is maintained. The five software tools used to measure quality in the IV&V were CodeHealer, Coverage Validator, Memory Validator, Performance Validator, and Thread Validator. These are evaluated based on their capabilities. We attempted to run their latest revisions with the newer Delphi 2010 based SAPHIRE 8 code that has been developed and were successful with all of the Validator series of tools on small tests. Another recommendation from the IV&V was to incorporate a bug tracking and resolution metric. To improve our capability of producing this metric, we integrated our current web reporting system with the SpiraTest test management software purchased earlier this year to track requirements traceability.
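As a rough illustration of the periodic metric recommended above, McCabe's cyclomatic complexity can be approximated as one plus the number of decision points in a routine. The sketch below does this for Python source via the standard `ast` module; it is only an analogous example, since the SAPHIRE 8 tooling cited here works on Delphi code.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe's measure as 1 + the number of decision
    points (branches, loops, boolean operators, exception handlers)."""
    tree = ast.parse(source)
    decisions = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                 ast.ExceptHandler, ast.IfExp)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

# Illustrative input: two ifs, one for-loop, one `and` -> complexity 5.
sample = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            return "found"
    return "none"
"""
print(cyclomatic_complexity(sample))
```

A threshold (often 10) on this number is the usual trigger for flagging routines that need refactoring or extra test coverage.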
Smitha Chowdary Ch
The ability of software to satisfy its functional requirements successfully is measured as software reliability, making it one of the most important characteristics of software quality. Improving software processes employed during the software development life cycle is essential to produce reliable software systems of assured quality. Software Reliability Growth Models (SRGMs) aid software engineers and managers in tracking and measuring the growth in reliability as software is being developed for quality assurance. Software quality is improved by continuously monitoring and controlling the software process. Statistical Process Control (SPC) is the application of statistical methods to software process data, presented graphically to quickly and easily identify anomalies that enable the developer to address software failures. In this paper we propose an SPC mechanism of control charts for time domain data using Burr Type III based on a Non-Homogeneous Poisson Process (NHPP), with parameters estimated by the Maximum Likelihood Estimation (MLE) method.
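A minimal sketch of the mechanism described above: a Burr Type III NHPP has mean value function m(t) = a(1 + t^(-c))^(-k), and successive differences of m(t) at observed failure times are compared against probability-based control limits (0.99865 and 0.00135, the 3-sigma analogues). The parameter values and failure times below are illustrative assumptions standing in for MLE estimates, not data from the paper.

```python
import math

def burr3_mean_value(t, a, c, k):
    """NHPP mean value function with a Burr Type III growth curve:
    m(t) = a * (1 + t**(-c)) ** (-k).  `a` is the expected total number
    of failures; c, k are shape parameters (assumed values here)."""
    return a * (1.0 + t ** (-c)) ** (-k)

def control_signals(times, a, c, k):
    """Successive differences of m(t) at failure times, flagged when
    they fall outside the 0.99865 / 0.00135 probability limits."""
    ucl, lcl = a * 0.99865, a * 0.00135
    out, prev = [], 0.0
    for t in times:
        m = burr3_mean_value(t, a, c, k)
        diff = m - prev
        out.append((t, diff, diff < lcl or diff > ucl))
        prev = m
    return out

failure_times = [3.0, 8.0, 15.0, 30.0, 80.0]  # invented, in test-hours
for t, d, flagged in control_signals(failure_times, a=40.0, c=1.2, k=0.8):
    print(f"t={t:6.1f}  diff={d:8.4f}  out_of_control={flagged}")
```

In the published SPC charts these differences are plotted over time; a point outside the limits signals an assignable cause in the development process worth investigating.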
The in situ viscometer is a portable instrument designed to raise and lower a sphere (rheometer ball) through layers of tank waste material while recording ball position, velocity, and cable tension. In the field, the viscometer attaches to a decontamination spool piece which in turn is designed to attach to any 4-inch, 150-pound flange (typical of many available tank risers). The motion of the ball and collection of data is controlled by instrumentation and control equipment housed in a separate remote control console. This document covers the product, Viscometer Data Acquisition Software. This document provides the software quality assurance plan, verification and validation plan, and configuration management plan for developing the software for the instrumentation that will be used to obtain rheology data from Tank SY-101.
Liviu ILIES; Catalin AFRASINEI - ZEVOIANU
Very often the IT domain, with its outcomes and through its multidisciplinary orientation, is an essential contributor to the quality assurance of economic bodies and beyond. It is difficult nowadays to find an activity sector, or even a sub-sector, where software applications, regardless of their nature, have not marked out their place and contribution to good economic and social development. In order to contribute as a tool toward economic and qualitative increases in performance, the tool itself ...
The software industry is plagued by cost overruns, delays, poor customer satisfaction and quality issues that are costing clients and customers world-wide billions of dollars each year. The phenomenon has been coined "the software crisis", and it poses a huge challenge for software project management. This thesis addresses one of the core issues of the software crisis, namely software quality. The challenges of software quality are central for understanding the other symptoms of the software crisis. Th...
The definition of software quality, and how one experiences quality, is a multifaceted matter, usually entirely dependent on the user group that observes the quality from different perspectives. A common way to analyse a software product’s quality is to measure its characteristics, like usability, reliability, efficiency, expandability, testability and maintainability. Many processes have been developed for analysing a software product’s quality. Using these processes and acting ac...
Atoum, Issa; Bong, Chih How; Kulathuramaiyer, Narayanan
Software quality-in-use comprehends quality from the user's perspective. It has gained importance in e-learning applications, mobile service based applications and project management tools. Users' decisions on software acquisitions are often ad hoc or based on preference, due to the difficulty of quantitatively measuring software quality-in-use. But why is quality-in-use measurement difficult? Although there are many software quality models, to our knowledge no work surveys the challenges...
Atoum, Issa; Otoom, Ahmed
Software review text fragments contain considerably valuable information about the user experience, covering a huge set of properties including software quality. Opinion mining, or sentiment analysis, is concerned with analyzing textual user judgments. Applying sentiment analysis to software reviews can yield a quantitative value that represents software quality. Although many software quality methods have been proposed, they are considered difficult to customize and many of them are limited...
This paper describes some of the findings, of an ongoing ethnographic study of a computer operations section in an Information Technology Centre. The study finds that after an initial period of staff acceptance of prescribed quality management procedures, certain features of organizational culture, structure and power, work against continued conformance. Procedures will be modified, firstly to resolve any inconsistencies between the prescribed procedures and strongly held beliefs and values about work practices and organization, and secondly to reduce or eliminate perceived threats. The paper argues that software quality management is based on a Unitarian approach to organization, which ignores the plurality of beliefs and work contexts which exist in an organization, and which assumes that organizational features can be managed and changed in predictable ways. This paper suggests that a pluralist approach to organizational analysis helps to reveal the nature and extent of changes required to the quality management system and the requirements for implementing changes.
Beaver, Justin M.; Schiavone, Guy A.
This paper provides an analysis of the effect of the skill/experience of the software development team on the quality of the final software product. A method for the assessment of software development team skill and experience is proposed, derived from a workforce management tool currently in use by the National Aeronautics and Space Administration. Using data from 26 small-scale software development projects, the team skill measures are correlated to 5 software product quality metrics from the ISO/IEC 9126 Software Engineering Product Quality standard. In the analysis of the results, development team skill is found to be a significant factor in the adequacy of the design and implementation. In addition, the results imply that inexperienced software developers are tasked with responsibilities ill-suited to their skill level, and thus have a significant adverse effect on the quality of the software product. Keywords: software quality, development skill, software metrics
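The correlation step described above can be sketched with a plain Pearson coefficient between a team-skill score and a quality metric. The project values below are invented for illustration; they are not the paper's 26-project data set, and defect density stands in for the ISO/IEC 9126 metrics.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: team skill scores vs. a defect-density style
# quality metric for a handful of projects (illustrative values only).
skill   = [2.1, 3.4, 3.9, 4.5, 5.0, 2.8]
defects = [9.5, 6.1, 5.0, 3.2, 2.9, 7.8]

r = pearson(skill, defects)
print(round(r, 3))  # strongly negative: higher skill, fewer defects
```

A strongly negative r is how "team skill is a significant factor" would show up quantitatively; a study like this one would also report significance tests alongside the raw coefficient.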
Software quality has always been described as a poorly developed construct. Several reports and much evidence show clear problems related to software quality outputs. Therefore, software quality problems constitute the phenomenon investigated in this research question: Why does quality management not achieve its anticipated outcomes in the software industry? This research empirically tests whether a possible existence of knowledge management gaps can be a reason behind a possible existence of qual...
This report contains viewgraphs on operational excellence philosophy of six sigma applied to software quality assurance. This report outlines the following: goal of six sigma; six sigma tools; manufacturing vs administrative processes; Software quality assurance document inspections; map software quality assurance requirements document; failure mode effects analysis for requirements document; measuring the right response variables; and questions.
Quality is the process of ensuring that software developed satisfies customer requirements. Among the many software quality attributes like functionality, usability, capability, maintainability, etc., reliability is a major factor to assure quality of the software. Reliability ensures that software is failure free. In this work, we propose to ensure quality through a behavioral model that evaluates business requirements and gives priority for quality attributes. The model consists of behavioural and human perspectives in assessment. It is used for assessment of software developed.
Quality has been one of the most important factors in judging any product. Quality means “a degree or grade of excellence or worth”. Quality is a term that is usually described using adjectives. Quality has several attributes, some of which can be quantified using metrics. These attributes, such as usability, portability, security, performance, reliability, etc., have different importance in different projects. Different software quality assurance methods and practices have been used in different software projects to attain their true value. Quality is a distinct attribute, and it differs with people’s perception. Achieving high software quality involves measurement of software metrics and optimization based on estimated values. As software systems grow larger, the complexity of design and implementation increases, and this in turn makes them more prone to defects and hence directly affects the quality of the systems. However, in any software project, high quality is always desirable, and many projects have specific quality requirements. Developing high quality software is governed by factors such as people, process, technology and cost. This paper attempts to present a novel approach towards achieving high software quality in various kinds of projects under given constraints.
Software architecture has been identified as an increasingly important part of software development. The software architecture helps the developer of a software system to define the internal structure of the system. Several methods for evaluating software architectures have been proposed in order to assist the developer in creating a software architecture that will have a potential to fulfil the requirements on the system. Many of the evaluation methods focus on evaluation of a single quality...
At the moment it is very difficult to find any process in industry where software is not involved. We trust software to minimize the possibility of process failures. In parallel, the quality and safety requirements of our processes have been improved to satisfactory levels. Let's look around us. Every day, thousands of calculations are carried out by our engineers using computer programs. Hundreds of processes are controlled automatically. Safety margins, limits, operation controls..., are derived from them. The tools begin to control our processes but, who controls the tool? Once they have been installed and once they are running, are they always reliable? No. If you think that your current systems are satisfactory, we propose you a game in this report. It is just a test. What is your score? Then we review the concept of Configuration Management and describe an ideal machine: the ''Perpetuum Mobile'' of Configuration. We describe some rules for implementation and improvement, and we comment on the operating experience at ENUSA. (Author)
This Quality Assurance (QA) Plan documents the QA activities that will be managed by the INL related to JCN N6423. The NRC developed the SAPHIRE computer code for performing probabilistic risk assessments (PRAs) using a personal computer (PC) at the Idaho National Laboratory (INL) under Job Code Number (JCN) L1429. SAPHIRE started out as a feasibility study for a PRA code to be run on a desktop PC and evolved through several phases into a state-of-the-art PRA code. The developmental activity of SAPHIRE was the result of two concurrent important events: the tremendous expansion of PC software and hardware capability of the 90s and the onset of a risk-informed regulation era.
Otto Preiss; Alain Wegmann
In this paper we present our rationale for proposing a conceptual model for the description of quality attributes of software artifacts, in particular suited to software components. The scientific foundations for our quality description model are derived from researching systems science for its value to software engineering. In this work we realized that software engineering is concerned with a number of interrelated conceptual as well as concrete systems. Each of them exhibits the basic syste...
Optimizing the quality of software is a function of the degree of reviews made during the early life of a software development process. Reviews detect errors and potential errors early in the software development process. The errors detected during the early life cycle of software are the least expensive to correct. Efficient involvement in software inspections and technical reviews helps developers improve their own skills, thereby mitigating the occurrence of errors in the later stages of softwa...
Prem Parashar; Arvind Kalia; Rajesh Bhatia
Software testing is performed to validate that the software under test meets all requirements. With the increase in software development platforms, developers may commit errors which, if not tested with appropriate test cases, may lead to false confidence in software testing. In this paper, we propose that building quality source code documentation can help in predicting such errors. To validate this proposal, we performed an initial study and found that if software is well documented, a t...
The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project to extract “code metrics” on the status of the software products and “process metrics” related to the quality of the development and support process such as reaction time to critical bugs, requirements tracking and delays in product releases.
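One of the process metrics named above, reaction time to critical bugs, can be computed straightforwardly from tracker timestamps. The records and timestamp format below are illustrative assumptions for a generic tracker export, not EMI data.

```python
from datetime import datetime

# Hypothetical bug records: (opened, first_response) timestamps for
# critical bugs, as a tracker export might provide them.
bugs = [
    ("2013-01-03 09:00", "2013-01-03 15:30"),
    ("2013-01-10 11:00", "2013-01-11 10:00"),
    ("2013-02-01 08:15", "2013-02-01 09:45"),
]

def mean_reaction_hours(bugs):
    """Average reaction time to critical bugs, in hours: the kind of
    process KPI a quality model tracks per release."""
    fmt = "%Y-%m-%d %H:%M"
    total_seconds = sum(
        (datetime.strptime(done, fmt) - datetime.strptime(opened, fmt))
        .total_seconds()
        for opened, done in bugs
    )
    return total_seconds / 3600.0 / len(bugs)

print(round(mean_reaction_hours(bugs), 2))
```

In a measurement plan such a KPI would be computed per release and compared against an agreed target (for example, first response to a critical bug within one working day).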
Duc, Anh Nguyen
Context: Early prediction of software cost and quality is important for better software planning and controlling. In early development phases, design complexity metrics are considered as useful indicators of software testing effort and some quality attributes. Although many studies investigate the relationship between design complexity and cost and quality, it is unclear what we have learned from these studies, because no systematic synthesis exists to date. Aim: The research presented in thi...
Due to the large number of product, project, and people parameters which impact large custom software development efforts, measurement of software product quality is a complex undertaking. Furthermore, the absolute perspective from which quality is measured (customer satisfaction) is intangible. While we probably can't say what the absolute quality of a software product is, we can determine the relative quality, the adequacy of this quality with respect to pragmatic considerations, and identify good and bad trends during development. While no two software engineers will ever agree on an optimum definition of software quality, they will agree that the most important perspective of software quality is its ease of change. We can call this flexibility, adaptability, or some other vague term, but the critical characteristic of software is that it is soft. The easier the product is to modify, the easier it is to achieve any other software quality perspective. This paper presents objective quality metrics derived from consistent lifecycle perspectives of rework which, when used in concert with an evolutionary development approach, can provide useful insight to produce better quality per unit cost/schedule or to achieve adequate quality more efficiently. The usefulness of these metrics is evaluated by applying them to a large, real world, Ada project.
Ichu, Emmanuel A.
Software quality is perhaps one of the most sought-after attributes in product development; however, this goal often goes unattained. Problem factors in software development, and how they have affected the maintainability of delivered software systems, require thorough investigation. It was, therefore, very important to understand software…
Alandes, M; Meneses, D; Pucciani, G
The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project to extract “code metrics” on the status of the software products and “process metrics” related to the quality of the development and support process such as reaction time to critical bugs, requirements tracking and delays in product releases.
Wieczorek, Martin; Bons, Heinz
Software and systems quality is playing an increasingly important role in the growth of almost all - profit and non-profit - organisations. Quality is vital to the success of enterprises in their markets. Most small trade and repair businesses use software systems in their administration and marketing processes. Every doctor's surgery is managing its patients using software. Banking is no longer conceivable without software. Aircraft, trucks and cars use more and more software to handle their increasingly complex technical systems. Innovation, competition and cost pressure are always present i
JT-SQE is a software quality evaluation and measurement system. Its design was based on the Chinese national standards of software product evaluation and quality characteristics. The JT-SQE system consists of two parts. One is the model for software quality measurement, which has a hierarchical structure. The other is the process of requirements definition, measurement and rating. The system is a feasible model for software quality evaluation and measurement, and it has the advantages of a friendly user interface, simple operation, ease of revision and maintenance, and expansible measurements.
De Guio, Federico
The Data Quality Monitoring (DQM) Software proved to be a central tool in the CMS experiment. Its flexibility allowed its integration in several environments: Online, for real-time detector monitoring; Offline, for the final, fine-grained Data Certification; Release Validation, to constantly validate the functionality and the performance of the reconstruction software; and in Monte Carlo productions. The central tool to deliver Data Quality information is a web site for browsing data quality histograms (DQM GUI). In this contribution the usage of the DQM Software in the different environments and its integration in the CMS Reconstruction Software Framework and in all production workflows are presented.
Ronchieri, Elisabetta; Giacomini, Francesco
In the context of critical applications, such as shielding and radiation protection, ensuring the quality of simulation software they depend on is of utmost importance. The assessment of simulation software quality is important not only to determine its adoption in experimental applications, but also to guarantee reproducibility of outcome over time. In this study, we present initial results from an ongoing analysis of Geant4 code based on established software metrics. The analysis evaluates the current status of the code to quantify its characteristics with respect to documented quality standards; further assessments concern evolutions over a series of release distributions. We describe the selected metrics that quantify software attributes ranging from code complexity to maintainability, and highlight what metrics are most effective at evaluating radiation transport software quality. The quantitative assessment of the software is initially focused on a set of Geant4 packages, which play a key role in a wide...
This article proposes the 7 C's for realizing quality-oriented software engineering practices. All the desired qualities of this approach are expressed in short by the term living software. The 7 C's are: Concern-oriented processes, Canonical models, Composable models, Certifiable models, Constructibl
Schofield, Joseph Richard, Jr.; Ellis, Molly A.; Williamson, Charles Michael; Bonano, Lora A.
This document describes the 2003 SNL ASCI Software Quality Engineering (SQE) assessment of twenty ASCI application code teams and the results of that assessment. The purpose of this assessment was to determine code team compliance with the Sandia National Laboratories ASCI Applications Software Quality Engineering Practices, Version 2.0 as part of an overall program assessment.
Folmer, Eelke; Boscht, Jan
The earliest design decisions often have a significant impact on software quality and are the most costly to revoke. One of the challenges in architecture design is to reduce the frequency of retrofit problems in software designs; not being able to improve the quality of a system cost effectively, a
The Biomed Control System is a hardware/software system used for the delivery, measurement and monitoring of heavy-ion beams in the patient treatment and biology experiment rooms of the Bevalac at the Lawrence Berkeley Laboratory (LBL). This paper describes some aspects of this system, including historical background and philosophy, configuration management, hardware features that facilitate software testing, software testing procedures, the release of new software, quality assurance, safety, and operator monitoring. 3 refs
Context: There is an overwhelming prevalence of companies developing software in global software development (GSD) contexts. The existing body of knowledge, however, falls short of providing comprehensive empirical evidence on the implication of GSD contexts on software quality for evolving software systems. Therefore there is limited evidence to support practitioners that need to make informed decisions about ongoing or future GSD projects. Objective: This thesis work seeks to explore change...
Hyatt, L.; Rosenberg, L.
A software quality model and its associated attributes are defined and used as the basis for a discussion on risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined by the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric and their usability and applicability are discussed.
Cleve, A.; Vinju, J.J.
The introduction of fast and cheap computer and networking hardware enables the spread of software. Software, in a nutshell, represents an unprecedented ability to channel creativity and innovation. The joyful act of simply writing computer programs for existing ICT infrastructure can change the wor
Lutsenko Y. V.
The quality of a system is seen as an emergent property of systems, due to their composition and structure, and it reflects their functionality, reliability and cost. Therefore, when we speak about quality management, the purpose of management is the formation of pre-defined system properties of the object of management. The more strongly the object of control expresses its system properties, the more strongly the object's nonlinearity manifests itself: both the dependence of the management factors on each other, and the dependence of the results of the action of some factors on the actions of others. Therefore, the problem of quality management is that in the management process the management object itself changes qualitatively, i.e. it changes its level of consistency, the degree of determinism and the transfer function itself. This problem can be viewed as several tasks: the first is the system identification of the condition of the object of management; the second is making decisions about a controlling influence that changes the composition of the control object in a way that maximally increases its quality at minimum cost. To solve the second problem we have proposed an application of the component selection of the object by functions, based on the resources allocated for the implementation of different functions, the costs associated with the choice of the components, and the degree of compliance of the various components with their functional purpose. In fact, we have proposed a formulation and a solution of a new generalization of a variant of the assignment problem, "multi backpack", which differs from the known one in that the selection is based not only on resources and costs, but also takes into account the degree of compliance of the components with their functional purpose. A mathematical model, which provides a solution to the first problem, and reflecting the degree of compliance of the components to their functionality, as well as the entire
YAO Lan; YANG Bo
Due to the rapid development of computers and their applications, early software quality prediction in the software industry becomes more and more crucial. A software quality prediction model is very helpful for decisions such as the allocation of resources in module verification and validation. Nevertheless, due to the complicated situations of the software development process in its early stages, the applicability and accuracy of these models are still under research. In this paper, a software quality prediction model based on a fuzzy neural network is presented, which takes into account both the internal and external factors of software. With a hybrid learning algorithm, the proposed model can deal with multiple forms of data as well as incomplete information, which helps identify design errors early and avoid expensive rework.
Previously, several research tasks have been conducted, some observations were obtained, and several possible suggestions have been contemplated involving software quality assurance engineering at NASA Johnson. These research tasks are briefly described. Also, a brief discussion is given on the role of software quality assurance in software engineering along with some observations and suggestions. A brief discussion on a training program for software quality assurance engineers is provided. A list of assurance factors as well as quality factors are also included. Finally, a process model which can be used for searching and collecting software quality assurance tools is presented.
Software quality assessment monitors and guides the evolution of a software system based on quality measurements. Continuous Integration (CI) environments can provide measurement data to feed such continuous assessments. However, in modern CI environments, data is scattered across multiple CI tools (e.g., build tool, version control system). Even small quality assessments can become extremely time-consuming, because each stakeholder has to search for the data she needs. In this thesis, we int...
Sahar Reda; Hany Ammar; Osman Hegazy
The most important measure that must be considered in any software product is its design quality. Measuring the design quality in the early stages of software development is the key to developing and enhancing quality software. Research on object-oriented design metrics has produced a large number of metrics that can be measured to identify design problems and assess design quality attributes. However, the use of these design metrics is limited in practice due to the difficulty of measuring and using a ...
Not all software used in the nuclear industry needs the rigorous formal verification, reliability testing and quality assessment that are being applied to safety critical software. Recently, however, there is increasing recognition that systematic and objective quality assessment of the scientific software used in design and safety analyses of nuclear facilities is necessary to support safety and licensing decisions. Because of the complexity and large size of these programs and the resource constraints faced by the AECB reviewer, it is desirable that appropriate automated tools are used wherever practical. To objectively assess the quality of software, a set of attributes of a software product by which its quality is described and evaluated must be established. These attributes must be relevant to the application domain of software under evaluation. To effectively assess the quality of software, metrics defining quantitative scale and method appropriate to determine the value of attributes need to be applied. To cost-effectively perform the evaluation, use of suitable automated tools is desirable. In this project, criteria for evaluating the quality of scientific software are presented; metrics for which those criteria can be evaluated are identified; a survey of automated tools to measure those metrics was conducted and the most appropriate tool (QA Fortran) was acquired; and the tool usage was demonstrated on three sample programs. (author) 5 refs
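Criteria of this kind are usually operationalized as simple, automatable code measurements. As a minimal illustrative sketch (not the QA Fortran tool mentioned above, whose interface is not described here), the following computes two common documentation metrics, non-blank lines of code and comment density, for a Fortran-style listing. The sample program and the comment-detection rule are simplified assumptions:

```python
def source_metrics(source: str, comment_chars=("!", "c", "C", "*")) -> dict:
    """Compute illustrative size/documentation metrics for a source listing.

    A line counts as a comment if its first non-blank character is one of
    `comment_chars` -- a rough approximation of fixed-form Fortran rules,
    chosen for illustration only.
    """
    lines = [ln for ln in source.splitlines() if ln.strip()]
    comments = [ln for ln in lines if ln.lstrip().startswith(comment_chars)]
    loc = len(lines)
    return {
        "loc": loc,                       # non-blank lines
        "comment_lines": len(comments),
        "comment_density": len(comments) / loc if loc else 0.0,
    }

# Hypothetical fixed-form Fortran fragment used as input.
sample = """\
c Compute the area of a circle
      PROGRAM AREA
      REAL R, A
c Radius is fixed for this demo
      R = 2.0
      A = 3.14159 * R * R
      PRINT *, A
      END
"""
print(source_metrics(sample))
```

Real assessment tools combine many such attribute measurements against agreed thresholds; this sketch only shows the mechanical core of one metric.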
Quality achievement for the National Ignition Facility (NIF) and the National Ignition Campaign (NIC) is the responsibility of the NIF Projects line organization as described in the NIF and Photon Science Directorate Quality Assurance Plan (NIF QA Plan). This Software Quality Assurance Plan (SQAP) is subordinate to the NIF QA Plan and establishes quality assurance (QA) activities for the software subsystems within Controls and Information Systems (CIS). This SQAP implements an activity level software quality assurance plan for NIF Projects as required by the LLNL Institutional Software Quality Assurance Program (ISQAP). Planned QA activities help achieve, assess, and maintain appropriate quality of software developed and/or acquired for control systems, shot data systems, laser performance modeling systems, business applications, industrial control and safety systems, and information technology systems. The objective of this SQAP is to ensure that appropriate controls are developed and implemented for management planning, work execution, and quality assessment of the CIS organization's software activities. The CIS line organization places special QA emphasis on rigorous configuration control, change management, testing, and issue tracking to help achieve its quality goals.
Tan, Y.; Yu, H; Leung, K; Sivaraman, J; Mok, Y
In the type III secretion system (T3SS) of Aeromonas hydrophila, the putative needle complex subunit AscF requires both putative chaperones AscE and AscG for formation of a ternary complex to avoid premature assembly. Here we report the crystal structure of AscE at 2.7 A resolution and the mapping of buried regions of AscE, AscG, and AscF in the AscEG and AscEFG complexes using limited protease digestion. The dimeric AscE is comprised of two helix-turn-helix monomers packed in an antiparallel fashion. The N-terminal 13 residues of AscE are buried only upon binding with AscG, but this region is found to be nonessential for the interaction. AscE functions as a monomer and can be coexpressed with AscG or with both AscG and AscF to form soluble complexes. The AscE binding region of AscG in the AscEG complex is identified to be within the N-terminal 61 residues of AscG. The exposed C-terminal substrate-binding region of AscG in the AscEG complex is induced to be buried only upon binding to AscF. However, the N-terminal 52 residues of AscF remain exposed even in the ternary AscEFG complex. On the other hand, the 35-residue C-terminal region of AscF in the complex is resistant to protease digestion in the AscEFG complex. Site-directed mutagenesis showed that two C-terminal hydrophobic residues, Ile83 and Leu84, of AscF are essential for chaperone binding.
Purpose: A growing number of open source software emerges in many segments of the software market. In addition, software products usually exhibit network externalities. The purpose of this paper is to study the impact of open source software on the quality choices of proprietary software vendors when the market presents positive network externalities. Design/methodology: To analyze how open source software affects the optimal quality of proprietary software, this paper constructs two vertical...
Shigeta, Denise; Port, Dan; Nikora, Allen P.; Wilf, Joel
As space mission software systems become larger and more complex, it is increasingly important for the software assurance effort to have the ability to effectively assess both the artifacts produced during software system development and the development process itself. Conceptually, assurance is a straightforward idea - it is the result of activities carried out by an organization independent of the software developers to better inform project management of potential technical and programmatic risks, and thus increase management's confidence in the decisions they ultimately make. In practice, effective assurance for large, complex systems often entails assessing large, complex software artifacts (e.g., requirements specifications, architectural descriptions) as well as substantial amounts of unstructured information (e.g., anomaly reports resulting from testing activities during development). In such an environment, assurance engineers can benefit greatly from appropriate tool support. In order to do so, an assurance organization will need accurate and timely information on the tool support available for various types of assurance activities. In this paper, we investigate the current use of tool support for assurance organizations within NASA, and describe on-going work at JPL for providing assurance organizations with the information about tools they need to use them effectively.
Quality assurance makes sure the project will be completed according to the previously approved specifications, standards and functionality, without defects and other possible problems. It monitors and tries to improve the development process from the start of the project. Software Quality Assurance (SQA) spans the entire software development process, which includes software design, coding, source code control, code review, change management, configuration management and release management. In this paper we describe solutions for the key problems of software testing in quality assurance. Existing software practices suffer from problems such as poor testing practices, the attitude of users, and the culture of organizations. These three problem areas give rise to combined problems such as shortcuts in testing, reduction in testing time, and poor documentation. We recommend strategies to address the problems mentioned above.
Sunil L. Bangare
We propose a system to measure the quality of modularization of object-oriented software. Our work comprises three modules: Module 1, defining metrics for object-oriented software and an algorithm; Module 2, a code parser; Module 3, a code analyzer. In this paper we focus on Module 1, defining metrics for object-oriented software modularization and providing an algorithm for it.
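The abstract does not reproduce the paper's metric definitions. As a purely hypothetical sketch of the kind of modularization metric such a system might compute, the code below measures, for a toy call graph, the fraction of call edges that cross module boundaries (lower suggests better modularization). The module names and call pairs are invented for illustration:

```python
def cross_module_call_ratio(calls, module_of):
    """Fraction of call edges whose caller and callee live in different modules.

    calls:      iterable of (caller_function, callee_function) pairs
    module_of:  dict mapping each function name to its module name
    """
    calls = list(calls)
    if not calls:
        return 0.0
    crossing = sum(1 for f, g in calls if module_of[f] != module_of[g])
    return crossing / len(calls)

# Toy call graph: three calls, exactly one of which crosses a module boundary.
module_of = {"parse": "parser", "lex": "parser", "report": "analyzer"}
calls = [("parse", "lex"), ("parse", "report"), ("report", "report")]
print(cross_module_call_ratio(calls, module_of))
```

In a real system the call pairs would come from the code parser and the ratio would be one of several coupling/cohesion indicators, not a verdict on its own.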
Martins, Pedro; Fernandes, João Paulo; Saraiva, João
Quality assessment of open source software is becoming an important and active research area. One of the reasons for this recent interest is the popularity of the Internet. Nowadays, programming also involves looking through the large set of open source libraries and tools that may be reused when developing our software applications. In order to reuse such open source software artifacts, programmers not only need the guarantee that the reused artifact is certified, but a...
Based on the results of an assessment of 241 teaching-learning programs in biology, chemistry, physics, general science (primary level), and informatics, this article discusses possible criteria for the assessment of educational software. Offers suggestions for potential users, for practitioners of teacher education, and for the developers of…
Lalli, Vincent R.; Packard, Michael H.; Ziemianski, Tom
The application of assurance engineering techniques improves the duration of failure-free performance of software. The totality of features and characteristics of a software product determines its ability to satisfy customer needs. Software in safety-critical systems is very important to NASA. We follow the System Safety Working Group's definition of system safety software: 'the optimization of system safety in the design, development, use and maintenance of software and its integration with safety-critical systems in an operational environment.' 'If it is not safe, say so' has become our motto. This paper covers methods that have been used by NASA to make software design improvements by focusing on software quality and the design and inspection process.
Mohammed Khalaf M Alshammri
This paper highlights the problems faced by project managers and companies regarding quality assurance. It has been seen that people do not pay much attention to quality assurance issues and thus eventually end up wasting their money as well as their time. That is why it is important to make sure that the project meets the quality requirements.
The IT industry tries to employ a number of models to identify the defects in the construction of software projects. In this paper, we present COQUALMO and its limitations and aim to increase the quality without increasing the cost and time. The computation time, cost, and effort to predict the residual defects are very high; this was overcome by developing an appropriate new quality model named the software testing defect corrective model (STDCM). The STDCM was used to estimate the number of...
Poofa Gopalan; S.S. Uma Sankari; D. Mohan Kumar; R. Vikraman Nair
Launch vehicle projects now depend on software, more than ever before, to ensure safety and efficiency. Such critical software systems, which can lead to injury, destruction or loss of vital equipment, human lives, and damage to the environment, must be developed and verified with a high level of quality and reliability. An overview of current quality practices pursued in launch vehicle projects is presented in this paper. These practices have played a vital role in the successful launch vehicle missions of the Indian Space Research Organisation. As the complexity of software increases, the activity that gets affected is nothing but software quality assurance (SQA). The SQA team is facing a lot of challenges in current practices. This paper brings out such challenges in different phases of the software life cycle. A set of key points to some techniques and tools that could contribute to meeting the software quality assurance challenges in launch vehicle projects are also discussed.
Singh, Manoranjan Kumar
Mathematics has many useful properties for the development of complex software systems. One is that it can exactly describe a physical situation of an object or the outcome of an action. Mathematics supports abstraction and is an excellent medium for modeling; since it is an exact medium, there is little possibility of ambiguity. This paper demonstrates that mathematics provides a high level of validation when it is used as a software medium. It also outlines distinguishing characteristics of structural testing, which is based on the source code of the program under test. Structural testing methods are very amenable to rigorous definition, mathematical analysis and precise measurement. Finally, it discusses the functional versus structural testing debate in the sense of complete testing. Any program can be considered a function, in the sense that program inputs form its domain and program outputs form its range. In general, discrete mathematics is more applicable to functional testing, while graph theory pertain...
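The view of a program as a function, and the functional versus structural testing distinction, can be illustrated with a small sketch. The triangle classifier below is a classic textbook testing example, not code from the paper:

```python
def classify_triangle(a: int, b: int, c: int) -> str:
    """A program viewed as a function: side lengths (domain) -> label (range)."""
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Functional (black-box) tests: inputs chosen from the specification's
# domain, without looking at the source code.
assert classify_triangle(3, 4, 5) == "scalene"
assert classify_triangle(1, 2, 9) == "not a triangle"

# Structural (white-box) tests: inputs chosen so that every branch in the
# source above is exercised at least once.
assert classify_triangle(2, 2, 2) == "equilateral"
assert classify_triangle(2, 2, 3) == "isosceles"
```

The two test sets overlap in practice; the point is only that functional tests are derived from the input/output relation while structural tests are derived from the code's control-flow graph.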
Tsatsaronis, George; Halkidi, Maria; Giakoumakis, Emmanouel A.
Open Source Software (OSS) often relies on large repositories, like SourceForge, for initial incubation. The OSS repositories offer a large variety of meta-data providing interesting information about projects and their success. In this paper we propose a data mining approach for training classifiers on the OSS meta-data provided by such data repositories. The classifiers learn to predict the successful continuation of an OSS project. The `successfulness' of projects is defined in terms of th...
Liisa von Hellens
Software houses are taking steps towards the implementation of quality management systems (QMS) and achieving certification to international quality standards. There is an increasing tendency to require quality certificates from system suppliers before business can be even considered. The QMS is seen as a way of avoiding personnel risk if product and market knowledge remains in the possession of individuals. It is also felt that quality procedures in place will improve the company's image, at...
ZEPPER, JOHN D.; ARAGON, KATHRYN MARY; ELLIS, MOLLY A.; BYLE, KATHLEEN A.; EATON, DONNA SUE
This document provides a guide to the deployment of the software verification activities, software engineering practices, and project management principles that guide the development of Accelerated Strategic Computing Initiative (ASCI) applications software at Sandia National Laboratories (Sandia). The goal of this document is to identify practices and activities that will foster the development of reliable and trusted products produced by the ASCI Applications program. Document contents include an explanation of the structure and purpose of the ASCI Quality Management Council, an overview of the software development lifecycle, an outline of the practices and activities that should be followed, and an assessment tool. These sections map practices and activities at Sandia to the ASCI Software Quality Engineering: Goals, Principles, and Guidelines, a Department of Energy document.
Rodríguez Martínez, Cecilia
Engineering companies currently devote a large amount of resources to detecting and correcting errors in their software code. These errors are generally due to mistakes made by developers when writing the code or its specifications. No tool is capable of detecting all of these errors, and some of them go unnoticed after the testing process. For this reason, numerous studies have tried to find indicators in the cod...
Abdelkareem M. Alashqar
Evaluating software quality is an important and essential issue in the development process because it helps to deliver a competitive software product. Selecting the best software based on quality attributes is a multi-criteria decision-making (MCDM) process in which interactions among criteria should be considered. This paper presents and develops quantitative evaluations that account for interactions among criteria in MCDM problems. Aggregation methods such as the Arithmetic Mean (AM) and Weighted Arithmetic Mean (WAM) are introduced, described, and compared to the Choquet Integral (CI) approach, a fuzzy-measure-based method for MCDM. The comparisons are illustrated by evaluating and ranking software alternatives on the six main quality attributes identified by the ISO 9126-1 standard. The evaluation experiments rely on real data collected from case studies.
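The contrast between the WAM and the Choquet integral described above can be illustrated with a minimal Python sketch. This is not code from the cited study; the fuzzy-measure values below are invented for illustration only.

```python
def weighted_arithmetic_mean(scores, weights):
    """Classical WAM: criteria are assumed independent."""
    return sum(s * w for s, w in zip(scores, weights))


def choquet_integral(scores, mu):
    """Discrete Choquet integral of `scores` with respect to fuzzy measure `mu`.

    `mu` maps frozensets of criterion indices to values in [0, 1], with
    mu(empty set) = 0 and mu(all criteria) = 1. It need not be additive,
    which is how interactions among criteria are expressed.
    """
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])  # ascending scores
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(order[k:])  # criteria scoring at least scores[i]
        total += (scores[i] - prev) * mu[coalition]
        prev = scores[i]
    return total


# Two criteria; an additive measure reproduces the WAM exactly.
mu_additive = {frozenset(): 0.0, frozenset({0}): 0.5,
               frozenset({1}): 0.5, frozenset({0, 1}): 1.0}
# A subadditive measure models redundant (overlapping) criteria.
mu_redundant = {frozenset(): 0.0, frozenset({0}): 0.3,
                frozenset({1}): 0.3, frozenset({0, 1}): 1.0}
```

With `mu_additive` the Choquet integral of `[0.6, 0.9]` equals the WAM (0.75); with `mu_redundant` the overlapping criteria pull the aggregate down to 0.69, which an additive aggregator cannot express.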
Farzaneh Hoseini Jabali
In order to produce and develop a software system, it is necessary to have a method of choosing a suitable software architecture that satisfies the required quality attributes and maintains a trade-off between sometimes conflicting ones. Each software architecture includes a set of design decisions, for each of which there are various alternatives that satisfy the quality attributes differently. At the same time, various stakeholders with different quality goals participate in decision-making. In this paper a numerical method is proposed that selects the suitable software architecture for a given system based on its quality attributes. In this method, for each design decision, the alternatives are compared with respect to a given quality attribute, and vice versa. Multi-criteria decision-making methods are used, and time and cost constraints are also considered in the decisions. The proposed method weighs stakeholders' opinions according to the degree of their importance and helps the architect select the best software architecture with more certainty.
Full Text Available Nowadays, software development has become more complex and dynamic; they are expected more flexible, scalable and reusable. Under the umbrella of aspect, Aspect-Oriented Software Development (AOSD is relatively a modern programming paradigm to improve modularity in software development. Using Aspect-Oriented Programming (AOP language to implements crosscutting concerns through the introduction of a new construct Aspect like Class is defined as a modular unit of crosscutting behavior that affect multiple classes into reusable modules. Several quality models to measure the quality of software are available in literature. However, keep on developing software, and acceptance of new environment (i.e. AOP under conditions that give rise to an issue of evolvability. After the evolution of system, we have to find out how the new system needs to be extensible? What is the configurable status? Is designed pattern stable for new environment and technology? How the new system is sustainable? The objective of this paper is to propose a new quality model for AOSD to integrating some new qualityattributes in AOSQUAMO Model based which is based on ISO/IEC 9126 Quality Model, is called AspectOriented Quality (AOSQ Model. Analytic Hierarchy Process (AHP is used to evaluate an improved hierarchical quality model for AOSD.
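The AHP evaluation mentioned above derives priority weights from pairwise-comparison judgements. A minimal sketch follows, using the common geometric-mean approximation of Saaty's principal eigenvector; the example judgement matrix is hypothetical and not taken from the cited work.

```python
import math


def ahp_weights(pairwise):
    """Priority weights from a Saaty pairwise-comparison matrix, using the
    geometric-mean approximation of the principal eigenvector."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]


def consistency_index(pairwise, weights):
    """Saaty consistency index (lambda_max - n) / (n - 1); 0 when the
    judgements are perfectly consistent."""
    n = len(pairwise)
    lam = sum(
        sum(pairwise[i][j] * weights[j] for j in range(n)) / weights[i]
        for i in range(n)
    ) / n
    return (lam - n) / (n - 1) if n > 1 else 0.0


# Hypothetical judgement: attribute 0 is 3x as important as attribute 1.
matrix = [[1.0, 3.0],
          [1.0 / 3.0, 1.0]]
weights = ahp_weights(matrix)  # approximately [0.75, 0.25]
```

A consistency index near zero indicates the expert judgements do not contradict each other; in practice a consistency ratio below 0.1 is usually accepted.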
Kurt G. Vedros
The Software Quality Assurance engineer position was created in fiscal year 2011 to better maintain and improve the quality of the SAPHIRE 8 development program. This year's Software Quality Assurance tasks concentrated on developing the framework of the SQA program. This report reviews the accomplishments and recommendations for each of the subtasks set forth for JCN V6059: (1) Reviews, Tests, and Code Walkthroughs; (2) Data Dictionary; (3) Metrics; (4) Requirements Traceability Matrix; (5) Provide Oversight on SAPHIRE QA Activities; and (6) Support NRC Presentations and Meetings.
The most important measure that must be considered in any software product is its design quality. Measuring design quality in the early stages of software development is the key to developing and enhancing quality software. Research on object-oriented design metrics has produced a large number of metrics that can be measured to identify design problems and assess design quality attributes. However, the use of these design metrics is limited in practice due to the difficulty of measuring and using a large number of metrics. This paper presents a methodology for software design quality assessment. The methodology helps the designer measure and assess the changes in a design due to design enhancements. The goal of this paper is to illustrate the methodology using practical software design examples and analyze its utility in industrial projects. Finally, we present a case study to illustrate the methodology.
Kaiya, Haruhiko; Umemura, Masahiro; Ogata, Shinpei; Kaijiri, Kenji
Software quality requirements defined in the requirements analysis stage should be implemented in the final products, such as source code and system deployment. To guarantee this meta-requirement, quality requirements should be considered in the intermediate stages, such as the design stage or the architectural definition stage. We propose a novel method for checking whether quality requirements are considered in the design stage. In this method, a technique called "spectrum analysi...
One of the foundations of the JUS ISO 9000 series of standards is quality system documentation. The architecture of the quality system documentation depends on the complexity of the business system. Establishing efficient management of quality system documentation is of great importance for the business system, both in the phase of introducing the quality system and in subsequent stages of its improvement. The study describes the architecture and capability of software solutions that support and manage quality system documentation in accordance with the requirements of standards such as ISO 9001:2001, ISO 14001:2005, HACCP, etc.
A novel methodology, the result of 10 years of in-field testing, which makes possible the convergence of different types of models and quality standards for Engineering and Computer Science Faculties, is presented. Since most software-developing companies are small and medium sized, the projects developed must focus on SCRUM and Extreme Programming (XP), as opposed to RUP, which is quite heavy, as well as on the Personal Software Process (PSP) and Team Software Process (TSP), which provide students with competences and a structured framework. The ISO 90003:2004 norm is employed to define the processes by means of a quality system without adding new requirements or changing existing ones. The model is also based on ISO/IEC 25000 (ISO/IEC 9126 and ISO/IEC 14598) to allow comparing software built using different metrics.
Component-based software development aims at building large software systems by combining existing software components. Before integrating different components, one first needs to identify whether the functional and non-functional properties of the components are suitable and should be integrated into the new system or software. Deriving a quality measure for reusable components has proven to be a challenging task. This paper proposes a quality metric that provides benefits at both the project and process level, namely Fault Clearance Effectiveness (FCE). The paper identifies the characteristics a component should have so that it can be reused repeatedly. Component qualification is a process of determining the fitness for use of existing components that will be used to develop a new system.
Edison Valencia; María S. Millán
The color image quality of presentation programs is evaluated and measured using the S-CIELAB and CIEDE2000 color difference formulae. A color digital image in its original format is compared with the same image after it has been imported by the program and placed in a slide. Two widely used presentation programs, Microsoft PowerPoint 2004 for Mac and Apple's Keynote 3.0.2, are evaluated in this work.
HUSSEY Matt; WU Bing
This paper presents a broad range of suggestions on the concept of quality-assured, industry-oriented higher education in software engineering, a central theme of the annual CEISIE (CEISEE this year) workshops since the first one held in Harbin, China, in 2005. It draws on the lessons of collaborative experiences involving academics and industrialists from Europe and China. These experiences make the case for a strong role for software industry-oriented higher education in the production of the software architects, developers, and engineers required for the future.
Introduction: Various types of software are used in health care organizations to manage information and care processes. Software quality has been an important concern for both health authorities and designers of health information technology. Thus, assessing the effect of software quality on the performance of healthcare institutions is essential. Method: The most important health care quality indicators in relation to software quality characteristics were identified via a previously performed literature review. The ISO 9126 standard model was used to define and integrate the various characteristics of software quality. The effects of software quality characteristics and sub-characteristics on the healthcare indicators were evaluated through expert opinion analyses. A questionnaire comprising 126 ten-point Likert-scale questions was used to gather the opinions of experts in the field of medical/health informatics. The data were analyzed using structural equation modeling. Results: Our findings showed that software maintainability was rated as the most effective factor on user satisfaction (R² = 0.89) and functionality as the most important independent variable affecting patient care quality (R² = 0.98). Efficiency was considered the most effective factor on workflow (R² = 0.97), and maintainability the most important factor affecting healthcare communication (R² = 0.95). Usability and efficiency were rated as the most effectual factors affecting patient satisfaction (R² = 0.80, 0.81). Reliability, maintainability, and efficiency were considered the main factors affecting care costs (R² = 0.87, 0.74, 0.87). Conclusion: We presented a new model based on ISO standards. The model demonstrates and weighs the relations between software quality characteristics and healthcare quality indicators. The clear relationships between variables and the type of metrics and measurement methods used make the model a reliable method to assess
It is conventional wisdom in defence systems that electronic brains are where much of the present and future weapons system capability is developed. Electronic hardware advances, particularly in microprocessors, allow highly complex and sophisticated software to provide a high degree of system autonomy and customisation to the mission at hand. Since modern military systems depend so much on the proper functioning of electronics, the quality and reliability of electronic hardware and software have a profound impact on defensive capability and readiness. At the hardware level, due to advances in microelectronics, the functional capabilities of today's systems have increased. The advances in hardware have an impact on software as well: it is now possible to incorporate more and more system functions through software rather than through pure hardware solutions. On the other hand, the complexity of systems is increasing, their working energy levels are decreasing, and the areas of reliability and quality assurance are becoming ever wider. This paper covers major failure modes in microelectronic devices, describes the various techniques used to improve component and system reliability, and discusses recent trends in expanding the scope of traditional quality assurance techniques, considering both hardware and software.
Waste Tank SY-101 has been the focus of extensive characterization work over the past few years. The waste continually generates gases, most notably hydrogen, which are periodically released from the waste. Gas can be trapped in tank waste in three forms: as void gas (bubbles), dissolved gas, or absorbed gas. Void fraction is the volume percentage of a given sample that consists of void gas. The void fraction instrument (VFI) acquires the data necessary to calculate void fraction. This document covers the product, Void Fraction Data Acquisition Software. The void fraction software being developed will have the ability to control the void fraction instrument hardware and acquire the data necessary to calculate the void fraction in samples. This document provides the software quality assurance plan, verification and validation plan, and configuration management plan for developing the software for the instrumentation that will be used to obtain void fraction data from Tank SY-101.
The growing popularity of highly iterative, agile processes creates an increasing need for automated monitoring of the quality of software artifacts that is focused on short terms (in the case of the eXtreme Programming process, an iteration can be limited to one week). This paper presents a framework that calculates software metrics and cooperates with development tools (e.g., the source version control system and issue tracking system) to describe the current state of a software project with regard to its quality. The framework is designed to support a high level of automation of data collection and to be useful for researchers as well as for industry. The framework is still under development; hence the paper reports already implemented features as well as future plans. The first release is scheduled for July.
Software Quality Attribute Analysis by Architecture Reconstruction (SQUA3RE) is a method that fosters a goal-driven process to evaluate the impact of what-if scenarios on existing systems. The method is partitioned into SQA2 and ARE. The SQA2 part provides the analysis models that can be used for q
... Federal Housing Finance Agency ... amended numerous provisions in Title XI. The ASC Rules of Operation serve as corporate bylaws outlining the ASC's...
Purpose: To evaluate whether a new software from the working group for interventional radiology (AGIR) is an appropriate tool for quality assurance in interventional radiology, and presentation of results acquired within the quality improvement process in 1999. Patients and methods: AGIR-defined parameters such as patient data, risk profile, given interventions as well as complications were registered by a recently developed software. Based on monthly data analyses, possible complications were identified and discussed in morbidity and mortality conferences. Results: 1014 interventions were performed in our institution in 1999. According to criteria established by AGIR, the complication rate was 2.7%. In addition and according to SCVIR criteria, complications were distinguished quantitatively in five classes and semiquantitatively in minor and major groups. The result was a minor complication rate of 1.8%, and a major rate of 0.9%. There were no cases of death associated with the intervention. Further strategies were developed in order to reduce the complication rate. Conclusion: Extensive quality assurance methods can be integrated in daily routine work. These methods lead to an intensive transparency of treatment results, and allow the implementation of continuous quality improvements. The development of the software is a first step in establishing a nation-wide quality assurance system. Nevertheless, modification and additional definition of the AGIR predefined parameters are required, for example, to avoid unnecessary procedures. (orig.)
This paper presents an approach for evaluating and confirming the quality of external software documentation using topic modeling. Typically, the quality of the external documentation has to mirror precisely the organization of the source code; therefore, the elements of such documentation should be strongly written, associated, and presented. We use Latent Dirichlet Allocation (LDA) and the Hellinger distance to compute the similarities between fragments of source code and the external documentation topics. These similarities are used to improve and advance the existing external documentation, and can also be used to evaluate the documenting process during the evolution phase of the software. The results show that the approach yields state-of-the-art performance in evaluating and confirming the quality of existing external documentation.
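The Hellinger distance used above compares two discrete probability distributions, such as the per-document topic mixtures LDA produces. A minimal sketch (the 3-topic example mixtures are invented for illustration):

```python
import math


def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions
    (e.g. per-document topic mixtures from LDA); 0 = identical, 1 = disjoint."""
    return math.sqrt(
        sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q))
    ) / math.sqrt(2.0)


# Hypothetical 3-topic mixtures for a code fragment and a documentation section.
code_topics = [0.7, 0.2, 0.1]
doc_topics = [0.6, 0.3, 0.1]
similarity = 1.0 - hellinger(code_topics, doc_topics)
```

Because the distance is bounded in [0, 1], `1 - hellinger(p, q)` is a convenient similarity score for ranking documentation sections against a given code fragment.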
Bush, Marilyn W.
The findings are reported of the Jet Propulsion Laboratory (JPL)/Software Product Assurance (SPA) Metrics Study, conducted as part of a larger JPL effort to improve software quality and productivity. Until recently, no comprehensive data had been assembled on how JPL manages and develops software-intensive systems. The first objective was to collect data on software development from as many projects and for as many years as possible. Results from five projects are discussed. These results reflect 15 years of JPL software development, representing over 100 data points (systems and subsystems), over a third of a billion dollars, over four million lines of code and 28,000 person months. Analysis of this data provides a benchmark for gauging the effectiveness of past, present and future software development work. In addition, the study is meant to encourage projects to record existing metrics data and to gather future data. The SPA long term goal is to integrate the collection of historical data and ongoing project data with future project estimations.
A pilot study was initiated to obtain and implement a similar set of clinical dynamic cardiac studies (software phantoms) on different computer systems for the purpose of quality control of analysis software. Normal and abnormal gated blood pool studies were collected and transferred between six computer systems using serial transmission. Major impediments in attempting to analyse the transferred data files were incomplete or missing data records required for the calculations. Only the left ventricular ejection fraction (LVEF) parameter could be analysed on all six computers. The LVEF results obtained for 10 software phantoms using the commercial software were similar in some phantoms but widely divergent in others. Development of software phantoms still requires improvement in data transfer between computers in order to ensure a complete file content in the transferred study, and a solution for the differences in acquisition protocols. In the meantime users can start to obtain their own set of standard studies illustrative of various clinical disorders, and share these with other users with the same computer type and analysis software. (author). 4 refs, 1 tab
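The LVEF parameter analysed above is conventionally computed from background-corrected counts at end-diastole and end-systole. A minimal sketch (the ROI count values are hypothetical, not data from the cited study):

```python
def lvef(ed_counts, es_counts, background=0.0):
    """Count-based left ventricular ejection fraction from a gated blood
    pool study, with optional background correction applied to both frames."""
    ed = ed_counts - background  # background-corrected end-diastolic counts
    es = es_counts - background  # background-corrected end-systolic counts
    return (ed - es) / ed


# Hypothetical ROI counts: end-diastole 10000, end-systole 4000,
# background 1000 -> (9000 - 3000) / 9000, i.e. roughly 0.67.
fraction = lvef(10000, 4000, background=1000)
```

Differences in how each vendor's software selects the background region and ROI are one plausible source of the divergent LVEF results reported across the six systems.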
Wong, Wayne A.; Wilson, Scott; Collins, Josh; Wilson, Kyle
The Advanced Stirling Convertor (ASC) development effort was initiated by NASA Glenn Research Center with contractor Sunpower, Inc., to develop high-efficiency thermal-to-electric power conversion technology for NASA Radioisotope Power Systems (RPSs). Early successful performance demonstrations led to the expansion of the project as well as adoption of the technology by the Department of Energy (DOE) and system integration contractor Lockheed Martin Space Systems Company as part of the Advanced Stirling Radioisotope Generator (ASRG) flight project. The ASRG integrates a pair of ASCs to convert the heat from a pair of General Purpose Heat Source (GPHS) modules into electrical power. The expanded NASA ASC effort included development of several generations of ASC prototypes or engineering units to help prepare the ASC technology and Sunpower for flight implementation. Sunpower later had two parallel contracts allowing the last of the NASA engineering units called ASC-E3 to serve as pathfinders for the ASC-F flight convertors being built for DOE. The ASC-E3 convertors utilized the ASC-F flight specifications and were built using the ASC-F design and process documentation. Shortly after the first ASC-F pair achieved initial operation, due to budget constraints, the DOE ASRG flight development contract was terminated. NASA continues to invest in the development of Stirling RPS technology including continued production of the ASC-E3 convertors, seven of which have been delivered with one additional unit in production. Starting in fiscal year 2015, Stirling Convertor Technology Maturation has been reorganized as an element of the RPS Stirling Cycle Technology Development (SCTD) Project and long-term plans for continued Stirling technology advancement are in reformulation. This paper provides a status on the ASC project, an overview of advancements made in the design and production of the ASC at Sunpower, and a summary of acceptance tests, reliability tests, and tactical
Many safety-related problems frequently occur because digital instrumentation and control systems are widely used and are expanding their range to many applications in nuclear power plants. There is, however, no generally accepted way to estimate software quality appropriately. Thus, the Quality Characteristic Value, a software quality factor spanning the software life cycle, is suggested in this paper. The Quality Characteristic Value is obtained by the following procedure: 1) scoring quality characteristic factors (especially correctness, traceability, completeness, and understandability) against software verification and validation results, 2) deriving a diamond-shaped graph by plotting the factor values on separate axes and connecting the points, and 3) measuring the area of the graph to obtain the Quality Characteristic Value. In this paper, this methodology is applied to a plant control system. In addition, the series of quantification frameworks exhibits some good characteristics from the viewpoint of software quality factors. Above all, it is believed that the introduced framework may be applicable to regulatory guides and software approval procedures due to its soundness and simplicity. (authors)
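The diamond-area step above can be sketched as the area of the polygon traced on a radar chart with equally spaced axes. This is an illustrative reconstruction, not the cited authors' code; the factor scores are assumed to be normalized to [0, 1].

```python
import math


def quality_characteristic_value(scores):
    """Area of the polygon traced when `scores` are plotted on equally
    spaced radar-chart axes and adjacent points are joined.

    Each triangle between adjacent axes has area
    0.5 * s_i * s_{i+1} * sin(2*pi/n); summing over all pairs gives the total.
    """
    n = len(scores)
    theta = 2.0 * math.pi / n  # angle between adjacent axes
    return 0.5 * math.sin(theta) * sum(
        scores[i] * scores[(i + 1) % n] for i in range(n)
    )


# Four factors (correctness, traceability, completeness, understandability),
# each scored on [0, 1]; perfect scores give the full diamond area of 2.0.
qcv = quality_characteristic_value([0.9, 0.8, 0.7, 0.6])
```

One property of this measure worth noting: because adjacent scores are multiplied, the resulting value depends on the ordering of the axes, so the axis assignment must be fixed before comparisons are made.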
Fault proneness data available in the early software life cycle, from previous releases or similar projects, can aid in improving software quality estimation. Various techniques have been proposed in the literature for predicting faulty and non-faulty modules in a project, including statistical methods, machine learning methods, neural network techniques, and clustering techniques. In this study, a hierarchical clustering algorithm is trained and tested with life cycle data collected from the NASA projects CM1, PC1, and JM1 as predictive models. These predictive models contain requirement metrics and static code metrics. We have combined the requirement metric model with the static code metric model to obtain a fusion metric model. We have further investigated which of the three prediction models is best on the basis of fault detection. The basic hypothesis of software quality estimation is that automatic quality prediction models enable verification experts to concentrate their attention and resources on problem areas of the system under development. The proposed approach has been implemented in MATLAB 7.4. The results show that when all the prediction techniques are evaluated, the best prediction model is found to be the fusion metric model. The proposed model is also compared with other quality models available in the literature and is found to be efficient for predicting faulty modules.
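Hierarchical clustering of module metrics, as used above, can be sketched with a naive single-linkage implementation. This is an illustrative sketch only: the cited study used MATLAB, and the two-dimensional metric values below are invented, not NASA data.

```python
def single_linkage_clusters(points, k):
    """Naive agglomerative clustering: repeatedly merge the two closest
    clusters (single linkage) until only `k` clusters remain."""
    clusters = [[p] for p in points]

    def dist(a, b):
        # Euclidean distance between two metric vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    while len(clusters) > k:
        best = None  # (distance, i, j) for the closest pair of clusters
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters.pop(j))
    return clusters


# Hypothetical 2-D module metrics (e.g. a requirement metric and a code metric):
modules = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9)]
groups = single_linkage_clusters(modules, 2)  # two well-separated groups
```

In a fault-prediction setting, each resulting cluster would then be labeled faulty or non-faulty, for instance by the proportion of known-faulty modules it contains in the training data.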
There are over 1600 superconducting magnet circuits in the LHC machine. Many of them consist of a large number of components electrically connected in series. This enhances the sensitivity of the whole circuit to electrical faults of individual components. Furthermore, circuits are equipped with a large number of instrumentation wires, which are exposed to accidental damage or swapping. In order to ensure safe operation, an Electrical Quality Assurance (ELQA) campaign is needed after each thermal cycle. Due to the complexity of the circuits, as well as their distant geographical distribution (a tunnel of 27 km circumference divided into 8 sectors), suitable software and hardware platforms had to be developed. The software combines an Oracle database, LabVIEW data acquisition applications, and PHP-based web follow-up tools. This paper describes the software used for the ELQA of the LHC.
Ronchieri, Elisabetta; Grazia Pia, Maria; Giacomini, Francesco
Geant4 is a simulation system of particle transport through matter, widely used in several experimental areas from high energy physics and nuclear experiments to medical studies. Some of its applications may involve critical use cases; therefore they would benefit from an objective assessment of the software quality of Geant4. In this paper, we provide a first statistical evaluation of software metrics data related to a set of Geant4 physics packages. The analysis aims at identifying risks for Geant4 maintainability, which would benefit from being addressed at an early stage. The findings of this pilot study set the grounds for further extensions of the analysis to the whole of Geant4 and to other high energy physics software systems.
Schumann, Johann; Swanson, Keith (Technical Monitor)
The amount and complexity of software developed during the last few years has increased tremendously. In particular, programs are being used more and more in embedded systems (from car brakes to plant control). Many of these applications are safety-relevant, i.e. a malfunction of hardware or software can cause severe damage or loss. Tremendous risks are typically present in the areas of aviation, (nuclear) power plants, and (chemical) plant control. Here, even small problems can lead to thousands of casualties and huge financial losses. Large financial risks also exist when computer systems are used in the areas of telecommunication (telephone, electronic commerce) or space exploration. Computer applications in these areas are not only subject to safety considerations; security issues are also important. All these systems must be designed and developed to guarantee high quality with respect to safety and security. Even in an industrial setting which is (or at least should be) aware of the high requirements in software engineering, many incidents occur. For example, the Warsaw Airbus crash was caused by an incomplete requirements specification. Uncontrolled reuse of an Ariane 4 software module was the reason for the Ariane 5 disaster. Some recent incidents in the telecommunication area, like the illegal "cloning" of smart-cards of D2 GSM mobile phones, or the extraction of (secret) passwords from German T-Online users, show that serious flaws can happen in this area as well. Due to the inherent complexity of computer systems, most authors claim that only a rigorous application of formal methods in all stages of the software life cycle can ensure high quality of the software and lead to really safe and secure systems. In this paper, we will look at how far automated theorem proving can contribute to a more widespread application of formal methods and their tools, and what automated theorem provers (ATPs) must provide in order to be useful.
Considering the fact that quality control of the technological process of a mammographic system involves testing a large number of parameters, there is a clear need to use information technology for gathering, processing, and storing all the parameters that result from this process. The main goal of this software application is to facilitate and automate the gathering, processing, storing, and presentation of the data related to the qualification of the physical and technical parameters during quality control of the technological process of the mammographic system. The software application, along with its user interface and database, has been made with Microsoft Access 2003, part of the Microsoft Office 2003 suite, which was chosen as a development platform because it is the office application most commonly used by computer users in the country. This is important because it provides the end users a familiar environment to work in, without the need for additional training or upgrading of their computer skills. Most importantly, the software application is easy to use, fast in calculating the needed parameters, and an excellent way to store and display the results. There is a possibility of scaling up this software solution so it can be used by many different users at the same time over the Internet. It is highly recommended that this system be implemented as soon as possible in the quality control process of mammographic systems due to its many advantages. (Author)
Image viewing and processing software in computed radiography manipulates image contrast in such a way that all relevant image features are rendered to an appropriate degree of visibility, and improves image quality using enhancement algorithms. The purpose of this study was to investigate procedures for the quality assessment of image processing software for computed radiography with the use of existing test objects and to assess the influence that processing introduces on physical image quality characteristics. Measurements of high-contrast resolution, low-contrast resolution, spatial resolution, grey scale (characteristic curve) and geometric distortion were performed 'subjectively' by three independent observers and 'objectively' by the use of criteria based on pixel intensity values. Results show quality assessment is possible without the need for human evaluators, using digital images. It was discovered that the processing software evaluated in this study was able to improve some aspects of image quality, without introducing geometric distortion. (authors)
Shepperd, M; Song, Q.; Sun, Z.; Mair, C.
Background: Self-evidently, empirical analyses rely upon the quality of their data. Likewise, replications rely upon accurate reporting and upon using the same, rather than merely similar, versions of datasets. In recent years, there has been much interest in using machine learners to classify software modules into defect-prone and not defect-prone categories. The publicly available NASA datasets have been extensively used as part of this research. Objective: This short note investigates the extent to which p...
Miguel Cabello, Miguel Angel de; Massonet, Philippe; Silva Gallino, Juan Pedro; Fernández Briones, Javier
Modelling languages and development frameworks give support for functional and structural description of software architectures. But quality-aware applications require languages which allow expressing QoS as a first-class concept during architecture design and service composition, and to extend existing tools and infrastructures adding support for modelling, evaluating, managing and monitoring QoS aspects. In addition to its functional behaviour and internal structure, the developer of each s...
Muhammad Abdullah Awais
Every organization is aware of the importance of requirements, and of the consequences of mishandling them, for the development of a quality software product, whether local or global. The requirements engineering phase of development, with a focus on the prioritization of requirements, is under intense research every day, because in any development methodology all requirements cannot be implemented at the same time; requirements are therefore prioritized for implementation so as to deliver a solution as early as possible, in phases, as scheduled in increm...
Software quality assurance (SQA) for robotic systems used in nuclear waste applications is vital to ensure that the systems operate safely and reliably and pose a minimum risk to humans and the environment. This paper describes the SQA approach for the control and data acquisition system for a robotic system being developed for remote surveillance and inspection of underground storage tanks (UST) at the Hanford Site.
Jie Xu; Danny Ho; Luiz Fernando Capretz
Software quality assurance has been a heated topic for several decades. If factors that influence software quality can be identified, they may provide more insight for better software development management. More precise quality assurance can be achieved by employing resources according to accurate quality estimation at the early stages of a project. In this paper, a general procedure is proposed to derive software quality estimation models and various techniques are presented to accomplish t...
Jones, C. [Software Productivity Research, Burlington, MA (United States)]
This presentation provides a view of software quality for 1997 - what works and what doesn't. For many years, software quality assurance lagged behind hardware quality assurance in terms of methods, metrics, and successful results. New approaches such as Quality Function Deployment (QFD), the ISO 9000-9004 standards, the SEI maturity levels, and Total Quality Management (TQM) are starting to attract wide attention, and in some cases to bring software quality levels up to parity with manufacturing quality levels.
Quality management is a much discussed and disputed management domain nowadays, which is the first sign that it is a very modern, needed and present concept in theory and practice. Some see it as a way to set things up as they need to be, and as the instrument that can guarantee an environment that keeps them in a specified and constant form. The application of quality management is a quality management system that has to be designed, developed and implemented to achieve the aim of quality. The purpose of this article is to briefly present what a quality management system should mean in a software company, why it should be periodically evaluated and how this might be done. The second part points out the characteristics of the audit as a general evaluation instrument, and the main contribution consists of the author's endeavour to mark out the particularities of an audit process carried out in a software company, since such particularization increases the chances of succeeding earlier and more easily with this kind of activity in practice.
U.S. Department of Health & Human Services — This file contains a summary of service utilization by ASC supplier and is derived from 2011 ASC line item level data, updated through June 2012, that is, line...
Etemadi Idgahi (Etemaadi), Ramin
Software architecting is a non-trivial and demanding task for software engineers to perform. The architecture is a key enabler for software systems. Besides being crucial for user functionality, the software architecture has deep impact on software qualities such as performance, safety, and cost. In this dissertation, an automated approach for software architecture design is proposed that supports analysis and optimization of multiple quality attributes: First of all, we demonstrate an optimi...
The inclusion of software in safety-related systems for nuclear power plants makes it necessary to include the software quality assurance concept. Software quality can be defined as the degree of conformance between the software and the specified requirements and user expectations. To guarantee a certain software quality level, it is necessary to carry out a systematic and planned set of tasks that constitute a software quality assurance plan. The application of such a plan involves activities that should be performed all along the software life cycle and that can be evaluated through the so-called quality factors, since quality itself cannot be measured directly, but only indirectly through some of its manifestations. In this work, a software life cycle model is proposed for nuclear power plant safety-related systems. A set of software quality factors is also proposed, with its corresponding classification according to the proposed model. (author)
Walker, H; Nasstrom, J S; Homann, S G
The National Atmospheric Release Advisory Center (NARAC) provides tools and services that predict and map the spread of hazardous material accidentally or intentionally released into the atmosphere. NARAC is a full function system that can meet a wide range of needs with a particular focus on emergency response. The NARAC system relies on computer software in the form of models of the atmosphere and related physical processes supported by a framework for data acquisition and management, user interface, visualization, communications and security. All aspects of the program's operations and research efforts are predicated to varying degrees on the reliable and correct performance of this software. Consequently, software quality assurance (SQA) is an essential component of the NARAC program. The NARAC models and system span different levels of sophistication, fidelity and complexity. These different levels require related but different approaches to SQA. To illustrate this, two different levels of software complexity are considered in this paper. As a relatively simple example, the SQA procedures that are being used for HotSpot, a straight-line Gaussian model focused on radiological releases, are described. At the other extreme, the SQA issues that must be considered and balanced for the more complex NARAC system are reviewed.
K. Nageswara Rao
Maintaining the quality of the software is the major challenge in the process of software development. Software inspections, which use methods like structured walkthroughs and formal code reviews, involve careful examination of each and every aspect/stage of software development. In Agile software development, refactoring helps to improve software quality. Refactoring is a technique to improve the software's internal structure without changing its behaviour. After much study regarding the ways to improve software quality, our research proposes an object-oriented software metric tool called "MetricAnalyzer". The tool is tested on different codebases and is shown to be very useful.
Background: Second-generation technologies have advantages over Sanger sequencing; however, they have introduced new challenges for the genome construction process, especially because of the small size of the reads, despite the high degree of coverage. Independent of the program chosen for the construction process, DNA sequences are superimposed, based on identity, to extend the reads, generating contigs; mismatches indicate a lack of homology and are not included. This process improves our confidence in the sequences that are generated. Findings: We developed Quality Assessment Software, with which one can review graphs showing the distribution of quality values from the sequencing reads. This software allows us to adopt more stringent quality standards for sequence data, based on quality-graph analysis and the coverage estimated after applying the quality filter, providing acceptable sequence coverage for genome construction from short reads. Conclusions: Quality filtering is a fundamental step in the process of constructing genomes, as it reduces the frequency of incorrect alignments caused by measuring errors, which can occur during the construction process due to the size of the reads, provoking misassemblies. Application of quality filters to sequence data, using the Quality Assessment software along with graphing analyses, provided greater precision in the definition of cutoff parameters, which increased the accuracy of genome construction. PMID:21501521
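The quality-filtering step this abstract describes can be illustrated with a short sketch: discard reads whose mean quality falls below a cutoff, then estimate the coverage that survives the filter. The Q20 cutoff, the toy reads and the function names below are illustrative assumptions, not taken from the Quality Assessment software.

```python
def mean_quality(phred_scores):
    """Mean Phred quality of one read."""
    return sum(phred_scores) / len(phred_scores)

def filter_reads(reads, cutoff=20):
    """Keep only reads whose mean Phred quality meets the cutoff."""
    return [(seq, quals) for seq, quals in reads if mean_quality(quals) >= cutoff]

def estimated_coverage(reads, genome_length):
    """Coverage = total filtered bases divided by genome length."""
    total_bases = sum(len(seq) for seq, _ in reads)
    return total_bases / genome_length

reads = [
    ("ACGTACGT", [30, 32, 28, 31, 29, 33, 30, 27]),  # mean Q30: kept
    ("TTGGCCAA", [12, 10, 15, 11, 9, 14, 13, 12]),   # mean Q12: discarded
]
kept = filter_reads(reads, cutoff=20)
print(len(kept))                    # one read survives the filter
print(estimated_coverage(kept, 4))  # 8 surviving bases over a 4-base "genome"
```

Plotting the distribution of `mean_quality` over all reads, as the abstract suggests, is what makes the cutoff choice principled rather than arbitrary.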
Keegstra, P. B.; Smoot, G. F.; Bennett, C. L.; Aymon, J.; Backus, C.; Deamici, G.; Hinshaw, G.; Jackson, P. D.; Kogut, A.; Lineweaver, C.
Six Differential Microwave Radiometers (DMR) on COBE (Cosmic Background Explorer) measure the large-angular-scale isotropy of the cosmic microwave background (CMB) at 31.5, 53, and 90 GHz. Quality assurance software analyzes the daily telemetry from the spacecraft to ensure that the instrument is operating correctly and that the data are not corrupted. Quality assurance for DMR poses challenging requirements. The data are differential, so a single bad point can affect a large region of the sky, yet the CMB isotropy requires lengthy integration times (greater than 1 year) to limit potential CMB anisotropies. Celestial sources (with the exception of the moon) are not, in general, visible in the raw differential data. A 'quicklook' software system was developed that, in addition to basic plotting and limit-checking, implements a collection of data tests as well as long-term trending. Some of the key capabilities include the following: (1) stability analysis showing how well the data RMS averages down with increased data; (2) a Fourier analysis and autocorrelation routine to plot the power spectrum and confirm the presence of the 3 mK 'cosmic' dipole signal; (3) binning of the data against basic spacecraft quantities such as orbit angle; (4) long-term trending; and (5) dipole fits to confirm the spacecraft attitude azimuth angle.
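The autocorrelation check used to confirm a periodic signal (capability 2 above) can be sketched on synthetic data. This is a hedged illustration only: the amplitude, period and noise level are invented, not DMR values.

```python
import math
import random

def autocorrelation(x, lag):
    """Normalized autocorrelation of series x at the given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag)) / (n - lag)
    return cov / var

# Simulated differential signal: a dipole-like sinusoid over "orbit angle"
# plus instrument noise (all magnitudes here are illustrative).
random.seed(0)
period = 100
signal = [3.0 * math.sin(2 * math.pi * i / period) + random.gauss(0, 1.0)
          for i in range(2000)]

# A strong autocorrelation at one full period confirms the periodic component...
print(round(autocorrelation(signal, period), 2))
# ...while pure noise shows essentially none at the same lag.
noise = [random.gauss(0, 1.0) for _ in range(2000)]
print(round(autocorrelation(noise, period), 2))
```

The same routine, binned against spacecraft orbit angle as in capability (3), would reveal whether the periodicity tracks an instrumental rather than celestial cause.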
Taghi M. Khoshgoftaar; Pierre Rebours
Accuracy of machine learners is affected by the quality of the data the learners are induced on. In this paper, the quality of the training dataset is improved by removing instances detected as noisy by the Partitioning Filter. The fit dataset is first split into subsets, and different base learners are induced on each of these splits. The predictions are combined in such a way that an instance is identified as noisy if it is misclassified by a certain number of base learners. Two versions of the Partitioning Filter are used: the Multiple-Partitioning Filter and the Iterative-Partitioning Filter. The number of instances removed by the filters is tuned by the voting scheme of the filter and the number of iterations. The primary aim of this study is to compare the predictive performances of the final models built on the filtered and the un-filtered training datasets. A case study of software measurement data from a high-assurance software project is performed. It is shown that the predictive performances of models built on the filtered fit datasets and evaluated on a noisy test dataset are generally better than those built on the noisy (un-filtered) fit dataset. However, predictive performance based on certain aggressive filters is affected by the presence of noise in the evaluation dataset.
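The scheme just described — split the fit data, induce a base learner per split, then vote on misclassifications — can be sketched with a decision-stump learner on one-dimensional toy data. The stump, the dataset and the round-robin split are invented for this example; the paper uses stronger learners on software measurement data.

```python
def train_stump(data):
    """Choose the 1-D threshold/polarity pair with the fewest training errors."""
    best = (0, 1, float("inf"))              # (threshold, polarity, errors)
    for t, _ in data:
        for pol in (1, 0):                   # pol=1: predict 1 when x >= t
            errs = sum(1 for x, y in data
                       if (pol if x >= t else 1 - pol) != y)
            if errs < best[2]:
                best = (t, pol, errs)
    return best[:2]

def predict(stump, x):
    t, pol = stump
    return pol if x >= t else 1 - pol

def partitioning_filter(data, k=3, min_votes=3):
    """Flag an instance as noisy when at least min_votes of the k base
    learners, each induced on a different split, misclassify it.
    min_votes=k is the consensus scheme; lowering min_votes makes the
    filter more aggressive (the tuning the abstract mentions)."""
    splits = [data[i::k] for i in range(k)]  # round-robin split of the fit data
    stumps = [train_stump(s) for s in splits]
    return [inst for inst in data
            if sum(predict(s, inst[0]) != inst[1] for s in stumps) >= min_votes]

# Clean concept: label 1 iff x >= 20; one instance is deliberately mislabeled.
data = [(x, 0) for x in range(10)] + [(x, 1) for x in range(20, 30)] + [(2, 1)]
print(partitioning_filter(data))             # only the mislabeled (2, 1) is flagged
```

With majority voting (`min_votes=2`) the filter also removes a clean boundary instance, illustrating why aggressive filters can discard good data.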
Montagud Gregori, Sonia; Abrahao Gonzales, Silvia Mara; Insfrán Pelozo, César Emilio
It is widely accepted that software measures provide an appropriate mechanism for understanding, monitoring, controlling, and predicting the quality of software development projects. In software product lines (SPL), quality is even more important than in a single software product since, owing to systematic reuse, a fault or an inadequate design decision could be propagated to several products in the family. Over the last few years, a great number of quality attributes and measures for assessi...
Henry, Sallie M.; Goff, Roger
For many years the software engineering community has been attacking the software reliability problem on two fronts. First via design methodologies, languages and tools as a precheck on quality and second by measuring the quality of produced software as a postcheck. This research attempts to unify the approach to creating reliable software by providing the ability to measure the quality of a design prior to its implementation. A comparison of a graphical and a textual design language is pres...
In October 2006, the IAEA published TECDOC 1517, Quality Control in Mammography, whose main purpose was to give Latin American countries a protocol in Spanish with all the necessary QC. This protocol harmonizes the tests and evaluation criteria for mammography equipment and its complementary equipment; it states the responsibilities of all personnel and gives guidance on radiographic techniques. It was the joint effort of two ARCAL projects: RLA/6/043 and RLA/9/035. QC programs are needed to assure the final quality of mammography images and to optimize the radiation dose to patients. Countries where national campaigns are used to improve the early detection of breast cancer among asymptomatic women require the establishment of QC programs. Specific software has been developed, based on the TECDOC 1517, to facilitate its implementation by the technologist, medical physicist and physician. It has a main menu bar and icons for rapid access to the different tests. The help option in each test opens a window with the same procedure written in the TECDOC, for the user's convenience. The tests are divided into the same sections as in the document: visual inspection, storage of films, dark room, image system, radiological equipment, automatic exposure control, geometry, collimation, image visualization, film rejection analysis, image quality and dosimetry. For each test, data are entered in specific colored cells and, when the user activates the calculation button, the results are compared against their tolerance levels and an indication of pass/fail is displayed. This software, available to all Member States, adds extra value to the TECDOC 1517, since errors in calculations will be reduced by its use. It will harmonize the way results are presented, facilitate comparisons, reduce the time needed to evaluate the test results and, finally, serve as a teaching tool for the TECDOC. (author)
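The pass/fail evaluation the software performs can be sketched as a comparison of each measured value against a tolerance interval. The test names and limits below are placeholders for illustration, not the actual TECDOC 1517 values.

```python
# Placeholder tolerance table: (min, max) per test; None means unbounded.
TOLERANCES = {
    "mean_glandular_dose_mGy": (None, 3.0),
    "optical_density":         (1.4, 1.9),
    "kvp_deviation_percent":   (-5.0, 5.0),
}

def evaluate(measurements):
    """Compare each measured value with its tolerance interval."""
    results = {}
    for test, value in measurements.items():
        lo, hi = TOLERANCES[test]
        within = (lo is None or value >= lo) and (hi is None or value <= hi)
        results[test] = "pass" if within else "fail"
    return results

results = evaluate({
    "mean_glandular_dose_mGy": 2.1,   # below the illustrative 3.0 mGy limit
    "optical_density": 2.2,           # above the 1.9 upper bound
    "kvp_deviation_percent": -1.3,    # inside the +/-5% band
})
print(results)   # optical_density fails, the other two pass
```

Encoding the limits in one table, as here, is what lets the software both harmonize evaluation criteria across sites and eliminate manual calculation errors.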
Donald E. Harter; Slaughter, Sandra A.
This study draws upon theories of task interdependence and organizational inertia to analyze the effect of quality improvement on infrastructure activity costs in software development. Although increasing evidence indicates that quality improvement reduces software development costs, the impact on infrastructure activities is not known. Infrastructure activities include services like computer operations, data integration, and configuration management that support software development. Because...
The Yucca Mountain Site Characterization Project (YMP) has been involved over the years in the continuing struggle to establish acceptable Software Quality Assurance (SQA) requirements for the development, modification, and acquisition of computer programs used to support the Mined Geologic Disposal System. These computer programs will be used to produce or manipulate data used directly in site characterization, design, analysis, performance assessment, and operation of repository structures, systems, and components. Scientists and engineers working on the project have claimed that the SQA requirements adopted by the project are too restrictive to allow them to perform their work. This paper will identify the source of the original SQA requirements adopted by the project. It will delineate the approach used by the project to identify concerns voiced by project engineers and scientists regarding the original SQA requirements. It will conclude with a discussion of methods used to address these problems in the rewrite of the original SQA requirements.
Adams, K.; Matthews, S. D.; McQueen, M. A.
The Nuclear Materials Technology (NMT) organizations 1 and 3 within the Chemical and Metallurgical Research (CMR) facility at the Los Alamos National Laboratory are working to achieve Waste Isolation Pilot Plant (WIPP) certification to enable them to transport their TRU waste to WIPP. This document is intended to provide not only recommendations to address the necessary software quality assurance activities to enable the NMT-1 and NMT-3 organizations to be WIPP compliant but is also meant to provide a template for the final Software Quality Assurance Plan (SQAP). This document specifically addresses software quality assurance for all software used in support of waste characterization and analysis. Since NMT-1 and NMT-3 currently have several operational software products that are used for waste characterization and analysis, these software quality assurance recommendations apply to the operations, maintenance and retirement of the software and the creation and development of any new software required for waste characterization and analyses.
M. Sangeetha; K.M.SenthilKumar; Dr.C.Arumugam; K. Akila
Our lives are governed by large, complex systems with increasingly complex software, and the safety, security, and reliability of these systems has become a major concern. As the software in today's systems grows larger, it has more defects, and these defects adversely affect the safety, security, and reliability of the systems. Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software. Software divi...
The apoptosis-associated speck-like protein containing a caspase recruitment domain (ASC) is involved in apoptosis and innate immunity and is a major adaptor molecule responsible for procaspase-1 activation. ASC mRNA is encoded by three exons: exons 1 and 3 encode a pyrin domain (PYD) and a caspase recruitment domain (CARD), respectively, and exon 2 encodes a proline- and glycine-rich (PGR) domain. Here, we identified a variant ASC protein (vASC) lacking the PGR domain that was smaller than the full-length ASC (fASC) derived from fully transcribed mRNA, and searched for differences in their biochemical and biological nature. Both fASC and vASC were found to activate procaspase-1 to a similar degree, but the efficiency of IL-1β excretion was significantly higher for vASC. There was also a marked structural difference observed between the fibrous aggregates formed by fASC and vASC. These results suggest that although the PGR domain is dispensable for procaspase-1 activation, it plays an important role in the regulation of the molecular structure and activity of ASC.
This support study for the evaluation of the safety of geological disposal systems is aimed at identifying the requirements for software quality assurance procedures for radioactive waste risk assessment codes, and at recommending appropriate procedures. The research covers: (i) the analysis of existing procedures and definition of requirements; (ii) a case study of the use of some existing procedures; (iii) the definition and the implementation of procedures. The report is supported by appendices that give more detail on the procedures recommended. It is intended to provide ideas on the steps that should be taken to ensure the quality of the programs used for assessment of the safety case for radioactive waste repositories, and does not represent the introduction of wholly new ideas or techniques. The emphasis throughout is on procedures that will be easily implemented, rather than on the fully rigorous procedures that are required for some application areas. The study has concentrated on measures that will increase the confidence in repository performance assessments among the wider scientific/engineering community and the lay public.
The Applications section of the CERN accelerator Controls group has decided to apply a systematic approach to quality assurance (QA), the “Software Improvement Process”, SIP. This process focuses on three areas: the development process itself, suitable QA tools, and how to practically encourage developers to do QA. For each stage of the development process we have agreed on the recommended activities and deliverables, and identified tools to automate and support the task. For example we do more code reviews. As peer reviews are resource-intensive, we only do them for complex parts of a product. As a complement, we are using static code checking tools, like FindBugs and Checkstyle. We also encourage unit testing and have agreed on a minimum level of test coverage recommended for all products, measured using Clover. Each of these tools is well integrated with our IDE (Eclipse) and give instant feedback to the developer about the quality of their code. The major challenges of SIP have been to 1) agree on com...
Bjorlo, T.J.; Berg, O.; Pehrsen, M.; Dahll, G.; Sivertsen, T. [OECD Halden Reactor Project (Norway)]
The OECD Halden Reactor Project has developed a number of software systems within the research programmes. These programmes have comprised a wide range of topics, like studies of software for safety-critical applications, development of different operator support systems, and software systems for building and implementing graphical user interfaces. The systems have ranged from simple prototypes to installations in process plants. In the development of these software systems, Halden has gained much experience in quality assurance of different types of software. This paper summarises the accumulated experience at the Halden Project in quality assurance of software systems. The different software systems being developed at the Halden Project may be grouped into three categories. These are plant-specific software systems (one-of-a-kind deliveries), generic software products, and safety-critical software systems. This classification has been found convenient as the categories have different requirements to the quality assurance process. In addition, the experience from use of software development tools and proprietary software systems at Halden, is addressed. The paper also focuses on the experience gained from the complete software life cycle, starting with the software planning phase and ending with software operation and maintenance.
Bielow, Chris; Mastrobuoni, Guido; Kempa, Stefan
Mass spectrometry-based proteomics coupled to liquid chromatography has matured into an automated, high-throughput technology, producing data on the scale of multiple gigabytes per instrument per day. Consequently, automated quality control (QC) and quality analysis (QA) capable of detecting measurement bias, verifying consistency, and avoiding propagation of error is paramount for instrument operators and scientists in charge of downstream analysis. We have developed an R-based QC pipeline called Proteomics Quality Control (PTXQC) for bottom-up LC-MS data generated by the MaxQuant software pipeline. PTXQC creates a QC report containing a comprehensive and powerful set of QC metrics, augmented with automated scoring functions. The automated scores are collated to create an overview heatmap at the beginning of the report, giving valuable guidance also to nonspecialists. Our software supports a wide range of experimental designs, including stable isotope labeling by amino acids in cell culture (SILAC), tandem mass tags (TMT), and label-free data. Furthermore, we introduce new metrics to score MaxQuant's Match-between-runs (MBR) functionality, by which peptide identifications can be transferred across Raw files based on accurate retention time and m/z. Last but not least, PTXQC is easy to install and use and represents the first QC software capable of processing MaxQuant result tables. PTXQC is freely available at https://github.com/cbielow/PTXQC . PMID:26653327
Software engineering tools such as bug tracking databases and version control systems store large amounts of data about the history and evolution of software projects. In the last few years, empirical software engineering researchers have paid attention to these data to provide promising research results, for example, to predict the number of future bugs, recommend bugs to fix next, and visualize the evolution of software systems. Unfortunately, such data is not well-prepared for research pur...
This paper introduces a new mathematical model for understanding the state of quality of software by calculating parameters such as the time gap and the quality gap relative to some predefined standard of software quality, or relative to a chalked-out software quality plan. The paper also suggests methods to calculate the difference in quality between the software being developed and the model software chosen as the criterion for comparison. These methods can be employed to better understand the state of quality as compared to other standards. To obtain a graphical representation of the data, we have used Microsoft Office 2007 charts, which facilitate easy visualization of the time and quality gaps.
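The two gap parameters named in the abstract can be given a concrete, if hypothetical, reading: the quality gap as planned-minus-actual quality at a given time, and the time gap as the delay in reaching a given quality level. The linear trajectories and the interpolation below are assumptions for illustration, not the paper's model.

```python
def quality_at(trajectory, t):
    """Linearly interpolate quality at time t on a list of (time, quality) points."""
    for (t0, q0), (t1, q1) in zip(trajectory, trajectory[1:]):
        if t0 <= t <= t1:
            return q0 + (q1 - q0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside trajectory")

def time_to_reach(trajectory, q):
    """Linearly interpolate the time at which quality level q is reached."""
    for (t0, q0), (t1, q1) in zip(trajectory, trajectory[1:]):
        if q0 <= q <= q1:
            return t0 + (t1 - t0) * (q - q0) / (q1 - q0)
    raise ValueError("q outside trajectory")

planned = [(0, 0.0), (10, 0.8)]   # quality index expected over 10 weeks
actual  = [(0, 0.0), (10, 0.6)]   # quality index actually measured

quality_gap = quality_at(planned, 5) - quality_at(actual, 5)
time_gap = time_to_reach(actual, 0.6) - time_to_reach(planned, 0.6)
print(round(quality_gap, 2))   # planned 0.4 vs measured 0.3 at week 5
print(round(time_gap, 2))      # quality 0.6 reached 2.5 weeks late
```

Plotting both trajectories on one chart, as the abstract does with Office 2007 charts, makes the two gaps visible as a vertical and a horizontal distance between the curves.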
The Applications section of the CERN accelerator controls group has decided to apply a systematic approach to quality assurance (QA), the 'Software Improvement Process' - SIP. This process focuses on three areas: the development process itself, suitable QA tools, and how to practically encourage developers to do QA. For each stage of the development process we have agreed on the recommended activities and deliverables, and identified tools to automate and support the task. For example we do more code reviews. As peer reviews are resource intensive, we only do them for complex parts of a product. As a complement, we are using static code checking tools, like FindBugs and Checkstyle. We also encourage unit testing and have agreed on a minimum level of test coverage recommended for all products, measured using Clover. Each of these tools is well integrated with our IDE (Eclipse) and give instant feedback to the developer about the quality of their code. The major challenges of SIP have been to 1) agree on common standards and configurations, for example common code formatting and Javadoc documentation guidelines, and 2) how to encourage the developers to do QA. To address the second point, we have successfully implemented 'SIP days', i.e. one day dedicated to QA work to which the whole group of developers participates, and 'Top/Flop' lists, clearly indicating the best and worst products with regards to SIP guidelines and standards, for example test coverage. This paper presents the SIP initiative in more detail, summarizing our experience since two years and our future plans. (authors)
Manoel Soares Soares Júnior; Marcio Caliari; Rosângela Vera; Camila Silveira Melo
The objective of this work was to evaluate the effects of ascorbic acid and of the type of plastic film used as packaging on the quality of minimally processed araticum kept under refrigeration. Ascorbic acid did not prevent browning of the minimally processed araticum. Regardless of the type of packaging, titratable acidity increased with time. Packaging of polyvinyl chloride or low-density polyethylene promoted significant mass loss when compared with the laminated vacuum...
Quality achievement is the responsibility of the line organizations of the National Ignition Facility (NIF) Project. This Software Quality Assurance Plan (SQAP) applies to the activities of the Integrated Computer Control System (ICCS) organization and its subcontractors. The Plan describes the activities implemented by the ICCS section to achieve quality in the NIF Project's controls software and implements the NIF Quality Assurance Program Plan (QAPP, NIF-95-499, L-15958-2) and the Department of Energy's (DOE's) Order 5700.6C. This SQAP governs the quality-affecting activities associated with developing and deploying all control system software during the life cycle of the NIF Project.
Since nuclear plants and facilities have made increasing use of digital technology, the safety and reliability of software is a primary concern. Software errors are more difficult to detect and handle than hardware-related failures. It is crucial to consider both the process and the product of the software life cycle to increase the quality of software. This paper discusses quality assurance of the process and the product of a software life cycle based on two prominent standards, ISO 9001:2000 and CMMI.
The requirements on software systems usually go beyond correct functionality; the presence of certain quality attributes is also essential for a system's acceptance by its stakeholders. Quality control and management must therefore be carried out throughout the whole development process to ensure the implementation of the required quality characteristics. This thesis focuses on the quality control of the software architecture. Several approaches for evaluating the architecture ...
Nuclear safety depends on the correct behavior of 1E software, which is an important part of a 1E DCS system. Nowadays, users focus on the correct functioning of the 1E system but pay little attention to quality control of the 1E software. In fact, IEC 61513 and IEC 60880 declare that 1E software should be under strict quality control during all stages of development. This article relates the practice of 1E DCS system quality control and explores QC surveillance of 1E software from the user's point of view. (authors)
This paper describes an application of software quality assurance to a specific scientific code development program. The software quality assurance program consists of three major components: administrative control, configuration management, and user documentation. The program attempts to be consistent with existing local traditions of scientific code development while at the same time providing a controlled process of development.
García Gutiérrez, Boni
Abstract The Web has become one of the most influential instruments in the history of mankind. Therefore, web applications development is a hot topic in the Software Engineering domain. In this context, the software quality is a key concept since it determines the degree in which a system meets its requirements and meets the expectations of its customers and/or users. Quality control (also known as verification and validation) is the set of activities designed to assess a software system in o...
Betto, Maurizio; Jørgensen, John Leif; Jørgensen, Finn E.
This document describes the Advanced Stellar Compass (ASC) and defines the interfaces between the instrument and the PROBA satellite. The ASC is a highly advanced and autonomous Stellar Reference Unit designed, developed and produced by the Space Instrumentation Group of the Department of Automation of the Technical University of Denmark. The document is structured as follows. First we present the ASC - heritage, system description, performance - then we address more specifically the environmental properties, like the EMC compatibility and thermal characteristics, and the design and ...
An effective Software Quality Assurance (SQA) program provides an overall approach to software engineering and the establishment of proven methods for the production of reliable software. In the authors' experience, the overall costs over the software's life are diminished by the application of quality methods, yet the issues in implementing quality standards and practices are many. This paper addresses those issues as well as the lessons learned from developing and implementing a number of software quality assurance programs. The authors' experience includes the development and implementation of their own NRC-accepted SQA program and an SQA program for an engineering software developer, as well as developing SQA procedures, standards, and methods for utility, medical, and commercial clients. Some of the issues addressed in this paper are: setting goals and defining quality; applying the software life cycle; addressing organizational issues; providing flexibility and increasing productivity; producing effective documentation; maintaining quality records; imposing software configuration management; conducting reviews, audits, and controls; verification and validation; and controlling software procurement.
Edwards, Harold C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Computing Research Center; Trott, Christian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Computing Research Center
This report outlines the research, development, and support requirements for the Advanced Simulation and Computing (ASC) Advanced Technology, Development, and Mitigation (ATDM) Performance Portability (a.k.a. Kokkos) project for 2015-2019. The research and development (R&D) goal for Kokkos (v2) has been to create and demonstrate a thread-parallel programming model and standard C++ library-based implementation that enables performance portability across diverse manycore architectures such as multicore CPU, Intel Xeon Phi, and NVIDIA Kepler GPU. This R&D goal has been achieved for algorithms that use data parallel patterns including parallel-for, parallel-reduce, and parallel-scan. Current R&D is focusing on hierarchical parallel patterns such as a directed acyclic graph (DAG) of asynchronous tasks where each task contains nested data parallel algorithms. This five year plan includes R&D required to fully and performance portably exploit thread parallelism across current and anticipated next generation platforms (NGP). The Kokkos library is being evaluated by many projects exploring algorithms and code design for NGP. Some production libraries and applications such as Trilinos and LAMMPS have already committed to Kokkos as their foundation for manycore parallelism and performance portability. These five year requirements include support required for current and anticipated ASC projects to be effective and productive in their use of Kokkos on NGP. The greatest risk to the success of Kokkos and the ASC projects relying upon it is a lack of staffing resources to support Kokkos to the degree needed by these ASC projects. This support includes up-to-date tutorials, documentation, multi-platform (hardware and software stack) testing, minor feature enhancements, thread-scalable algorithm consulting, and managing collaborative R&D.
As part of its Reactor Operations Improvement Program at the Savannah River Site (SRS), Westinghouse Savannah River Company (WSRC), in cooperation with the Westinghouse Hanford Company, has developed and implemented quality assurance for safety-related software for technical programs essential to the safety and reliability of reactor operations. More specifically, the quality assurance process involved the development and implementation of quality standards and attendant procedures based on industry software quality standards. These procedures were then applied to computer codes in reactor safety and probabilistic risk assessment analyses. This paper provides a review of the major aspects of the WSRC safety-related software quality assurance. In particular, quality assurance procedures are described for the different life cycle phases of the software that include the Requirements, Software Design and Implementation, Testing and Installation, Operation and Maintenance, and Retirement Phases. For each phase, specific provisions are made to categorize the range of activities, the level of responsibilities, and the documentation needed to assure the control of the software. The software quality assurance procedures developed and implemented are evolutionary in nature, and thus, prone to further refinements. These procedures, nevertheless, represent an effective controlling tool for the development, production, and operation of safety-related software applicable to reactor safety and probabilistic risk assessment analyses
Hayhurst, Kelly J. (Editor)
The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes configuration management and quality assurance documents from the GCS project. Volume 4 contains six appendices: A. Software Accomplishment Summary for the Guidance and Control Software Project; B. Software Configuration Index for the Guidance and Control Software Project; C. Configuration Management Records for the Guidance and Control Software Project; D. Software Quality Assurance Records for the Guidance and Control Software Project; E. Problem Report for the Pluto Implementation of the Guidance and Control Software Project; and F. Support Documentation Change Reports for the Guidance and Control Software Project.
Full Text Available Problem statement: A main part of building any system is achieving a high level of quality. Many organizations do not treat quality as a necessary consideration when building their systems; they think mainly of budget and reducing time to market. Approach: One important approach to achieving quality is to use components when building products, selecting the most appropriate components to put into the product line according to system requirements. Results: The main result of adopting a component-based approach to a software product line was the promise of high quality, in addition to reuse and reduced time to market. Conclusion: The ultimate goal of using components through a software product line is to increase software quality, such as flexibility and reliability, in addition to making software reusable in other types of business, especially in electronic commerce applications.
Defect prevention is the most vital but habitually neglected facet of software quality assurance in any project. If practiced at all stages of software development, it can reduce the time, overhead, and resources entailed in engineering a high quality product. The key challenge of an IT industry is to engineer a software product with minimum post-deployment defects. This effort is an analysis based on data obtained for five selected projects from leading software companies of varying software production competence. The main aim of this paper is to provide information on various methods and practices supporting defect detection and prevention, leading to thriving software generation. The defect prevention technique unearths 99% of defects. Inspection is found to be an essential technique for generating ideal software in factories through enhanced methodologies of aided and unaided inspection schedules. On average, 13% to 15% of inspection and 25% to 30% of testing out of the whole project effort tim...
Hansen, Klaus Marius; Jonasson, Kristjan; Neukirchen, Helmut
Software architectures shift the focus of developers from lines-of-code to coarser-grained components and their interconnection structure. Unlike fine-grained objects, these components typically encompass business functionality and need to be aware of the underlying business processes. Hence, the interface of a component should reflect relevant parts of the business process, and the software architecture should emphasize the coordination among components. To shed light on these issues, we provide a framework for component-based software architectures focusing on the process perspective. The interface ...
Lutsenko Y. V.
Full Text Available The article proposes using automated system-cognitive analysis (ASC-analysis) and its software tool, the "Eidos" system, for the synthesis and application of adaptive intelligent measuring systems that measure values of object parameters, and for system state identification of complex multivariable nonlinear dynamic systems. The article briefly describes the mathematical method of ASC-analysis, implemented in the software tool, the universal cognitive analytical system "Eidos-X++". The mathematical method of ASC-analysis is based on the system theory of information (STI), which was created while implementing the program of generalizing all the concepts of mathematics, in particular information theory based on set theory, through a total replacement of the concept of "set" with the more general concept of "system" and detailed tracking of all the consequences of this replacement. Owing to its underlying mathematical method, ASC-analysis is nonparametric and allows comparable processing of tens and hundreds of thousands of gradations of factors and future states of the control object (classes) from incomplete (fragmented), noisy data of numeric and non-numeric nature measured in different units. We provide a detailed numerical example of the application of ASC-analysis and the "Eidos-X++" system for the synthesis of a system-cognitive model, providing multiparameter typization of the states of complex systems and system identification of their states, as well as for making decisions about control actions that change the composition of the control object so that its quality (level of consistency) is maximally increased at minimum cost. As a numerical example of a complex system we have selected the staff of a company and its components: employees and applicants. However, it must be noted that this example should be considered even more widely
Leidy, Frank H.
Quality assurance is a crucial function to the successful development and maintenance of a software system. Because this activity has a significant impact on the cost of software development, the cost-effectiveness of quality assurance is a major concern to the software quality manager. There are tradeoffs between the economic benefits and costs of quality assurance. Using the Dynamo model of software project management, an optimal quality assurance level and its distribution throughout a pro...
Hare, J.; Rodin, L.
This report contains viewgraphs on the licensing and certifying of software professionals. Discussed in this report are: certification programs; licensing programs; why become certified; certification as a condition of employment; certification requirements; and examination structures.
Шеховцов, Владимир Анатольевич; Годлевский, Михаил Дмитриевич; Брагинский, Игорь Львович
We propose to formulate the problem of achieving the necessary level of software process maturity as a selection among alternative implementations of the process steps under the resource constraints of the organization.
quality control activities, and it is desirable to create a quality process to integrally represent the overall level of quality control activities performed while developing the software deliverables. With the quality process, it is possible to formally evaluate whether enough quality control activities have been performed for the project and to secure the quality of the software deliverables before they are delivered to the customers.
Software Quality Assurance (QA) is defined as the methodology and good practices for ensuring the quality of software in development. It involves handling bug reports, bug tracking, error investigation, verification of fixed bugs, test management, test case planning and design, as well as test case execution and records. Standards such as ISO 9001 are commonly followed for software QA, which recommends using a wide range of tools to improve the existing software engineering processes (SEP) for...
Layman, Lucas; Basili, Victor R.; Zelkowitz, Marvin V.
In this study, we examine software safety risk in the early design phase of the NASA Constellation spaceflight program. Obtaining an accurate, program-wide picture of software safety risk is difficult across multiple, independently-developing systems. We leverage one source of safety information, hazard analysis, to provide NASA quality assurance managers with information regarding the ongoing state of software safety across the program. The goal of this research is two-fold: 1) to quantify the relative importance of software with respect to system safety; and 2) to quantify the level of risk presented by software in the hazard analysis. We examined 154 hazard reports created during the preliminary design phase of three major flight hardware systems within the Constellation program. To quantify the importance of software, we collected metrics based on the number of software-related causes and controls of hazardous conditions. To quantify the level of risk presented by software, we created a metric scheme to measure the specificity of these software causes. We found that 49-70% of hazardous conditions in the three systems could be caused by software, or software was involved in the prevention of the hazardous condition. We also found that 12-17% of the 2013 hazard causes involved software, and that 23-29% of all causes had a software control. Furthermore, 10-12% of all controls were software-based. There is potential for inaccuracy in these counts, however, as software causes are not consistently scoped, and the presence of software in a cause or control is not always clear. The application of our software specificity metrics also identified risks in the hazard reporting process. In particular, we found that a number of traceability risks in the hazard reports may impede verification of software and system safety.
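The cause/control tallies described in this abstract reduce to simple counting over flagged hazard records. A minimal sketch of that kind of metric collection, using toy data rather than the actual Constellation figures, might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class HazardReport:
    # Each cause/control is flagged True when software is involved.
    causes: list = field(default_factory=list)
    controls: list = field(default_factory=list)

def software_involvement(reports):
    """Fraction of hazard reports where software appears in at least
    one cause or one control of the hazardous condition."""
    involved = sum(1 for r in reports if any(r.causes) or any(r.controls))
    return involved / len(reports)

def fraction_software_causes(reports):
    """Fraction of all individual causes that are software-related."""
    all_causes = [c for r in reports for c in r.causes]
    return sum(all_causes) / len(all_causes)

# Toy data (hypothetical, not the study's hazard reports):
reports = [
    HazardReport(causes=[True, False], controls=[False]),
    HazardReport(causes=[False], controls=[True, False]),
    HazardReport(causes=[False], controls=[False]),
]
print(software_involvement(reports))      # 2 of 3 reports involve software
print(fraction_software_causes(reports))  # 1 of 4 causes is software-related
```

The real study additionally scored each software cause for specificity, which this sketch omits.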
A quality assurance plan is needed to guarantee software quality. The use of such a plan involves activities that should take place all along the life cycle, and which can be evaluated using the so-called quality factors. This is because quality itself cannot be measured, but some of its manifestations can be used for this purpose. In the present work, a methodology to quantify a set of quality factors is proposed for software-based systems to be used in safety-related areas of nuclear power plants. (author)
Full Text Available Quality is the customer's perception of the value of the supplier's work output. Quality represents the properties of products and/or services that are valued by the consumer. Quality is a momentary perception that occurs when something in the environment interacts with the human factor, in the pre-intellectual awareness that comes before rational thought takes over and begins establishing order. The judgment of the resulting order is then reported as good or bad quality value. A product or process that is reliable and that performs its intended function is said to be a quality product.
Calikli, Gul; Bener, Ayse Basar
The thought processes of people have a significant impact on software quality, as software is designed, developed and tested by people. Cognitive biases, which are defined as patterned deviations of human thought from the laws of logic and mathematics, are a likely cause of software defects. However, there is little empirical evidence to date to substantiate this assertion. In this research, we focus on a specific cognitive bias, confirmation bias, which is defined as the tendency of people t...
Idri, Ali; Bachiri, Mariam; Fernández-Alemán, José Luis
Stakeholders' needs and expectations are identified by means of software quality requirements, which have an impact on software product quality. In this paper, we present a set of requirements for mobile personal health records (mPHRs) for pregnancy monitoring, which have been extracted from literature and existing mobile apps on the market. We also use the ISO/IEC 25030 standard to suggest the requirements that should be considered during the quality evaluation of these mPHRs. We then go on to design a checklist in which we contrast the mPHRs for pregnancy monitoring requirements with software product quality characteristics and sub-characteristics in order to calculate the impact of these requirements on software product quality, using the ISO/IEC 25010 software product quality standard. The results obtained show that the requirements related to the user's actions and the app's features have the most impact on the external sub-characteristics of the software product quality model. The only sub-characteristic affected by all the requirements is Appropriateness of Functional suitability. The characteristic Operability is affected by 95% of the requirements while the lowest degrees of impact were identified for Compatibility (15%) and Transferability (6%). Lastly, the degrees of the impact of the mPHRs for pregnancy monitoring requirements are discussed in order to provide appropriate recommendations for the developers and stakeholders of mPHRs for pregnancy monitoring. PMID:26643080
Munson, John B.
A summary of Quality Improvement techniques, implementation, and results in the maintenance, management, and modification of large software systems for the Space Shuttle Program's ground-based systems is provided.
In order to improve the quality control of Iridium-192 wires produced by IPEN, an automatic system prototype for the measurement of Iridium-192 activated wires was developed. This work describes the development of the Iridium software for such a system.
Bryant, J.L.; Wilburn, N.P.
Pacific Northwest Laboratory is conducting a research project to recommend good engineering practices in the application of 10 CFR 50, Appendix B requirements to assure quality in the development and use of computer software for the design and operation of nuclear power plants for NRC and industry. This handbook defines the content of a software quality assurance program by enumerating the techniques applicable. Definitions, descriptions, and references where further information may be obtained are provided for each topic.
In this paper we focus on the implementation and launch of software that allows us to quantitatively compare the calculated and experimentally measured two-dimensional dose distributions in IMRT treatment. The tool we use to make this comparison is the free software DoseLab, a program written in MATLAB and open source, which in some cases allows the program to be adapted to the needs of each user. This program calculates the gamma function of these distributions, a parameter that simultaneously evaluates the difference in dose between two pixels of the images and the distance between them, giving us an objective and quantitative result and allowing us to decide whether the two distributions are compatible or not.
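The gamma evaluation described above combines a dose-difference term with a distance-to-agreement term and takes the minimum over the compared distribution. A simplified one-dimensional sketch follows; DoseLab itself operates on 2D distributions, and the 3%/3 mm criteria used here are illustrative assumptions, not fixed defaults:

```python
import math

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose,
             dose_tol=0.03, dist_tol=3.0):
    """Simplified 1D gamma index. For each reference point, search all
    evaluated points for the minimum combined dose-difference /
    distance-to-agreement metric. dose_tol is fractional (3% of the
    reference maximum); dist_tol is in mm. A point passes when gamma <= 1."""
    dose_norm = dose_tol * max(ref_dose)
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        g = min(
            math.sqrt(((ep - rp) / dist_tol) ** 2
                      + ((ed - rd) / dose_norm) ** 2)
            for ep, ed in zip(eval_pos, eval_dose)
        )
        gammas.append(g)
    return gammas

# Hypothetical measured vs. calculated dose profiles (positions in mm):
pos = [0.0, 1.0, 2.0, 3.0]
measured = [1.00, 0.95, 0.60, 0.30]
calculated = [1.00, 0.94, 0.62, 0.29]
g = gamma_1d(pos, measured, pos, calculated)
pass_rate = sum(1 for x in g if x <= 1.0) / len(g)
```

With these small discrepancies every point passes the criterion, so the two profiles would be judged compatible.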
Horré, Wouter; Hughes, Danny; Michiels, Sam; Joosen, Wouter
If we are to deploy sensor applications in a realistic business context, we must provide innovative middleware services to control and enforce required system behavior; in order to correctly interpret collected temperature data, for example, sensor applications require guarantees about minimal coverage and the number of available sensors. The extreme dynamism, scale and unreliability of wireless sensor networks represent major challenges in contemporary software management. This paper pres...
Guihua Zheng; Quanlong Guan
Existing teaching evaluation models are studied, and the evaluation criteria are designed from the perspectives of both experts and students, yielding a scientific and reasonable set of criteria. By combining weighted entropy with fuzzy comprehensive evaluation, this paper proposes a comprehensive teaching evaluation model. This software model solves some problems in conducting quantitative analysis of teaching quality. And at the same t...
Signore, Oreste; Loffredo, Mario; Chericoni, Susanna
The maintenance of applications constitutes a relevant issue, as a lot of effort goes into this task. CASE tools claim to be effective in producing efficient and error-free software, but in many cases the aim is not to produce new application systems, but just to modify the existing ones. Re-engineering appears to be a suitable way of getting the advantages of automated CASE tools without incurring the costs involved in a complete redevelopment of the existing systems, whose specifications...
Quality achievement is the responsibility of the line organizations of the National Ignition Facility (NIF) Project. This Software Quality Assurance Plan (SQAP) applies to the activities of the Integrated Computer Control System (ICCS) organization and its subcontractors. The Plan describes the activities implemented by the ICCS section to achieve quality in the NIF Project's controls software and implements the NIF Quality Assurance Program Plan (QAPP, NIF-95-499, L-15958-2) and the Department of Energy's (DOE's) Order 5700.6C. This SQAP governs the quality-affecting activities associated with developing and deploying all control system software during the life cycle of the NIF Project.
Pydi, Manikanta Kumar; Nakka, Annie Sushma
Context: This thesis verifies a method developed for alignment issues across different data points and validates the method on those data points. Alignment/misalignment problems among the stakeholders in a company are identified through surveys using Hierarchical Cumulative Voting (HCV). This paper presents a case study to explain the importance of alignment between the stakeholders in achieving quality. Time, scope and cost are given higher priority, leaving quality as it ...
Shahbaz Ahmed Khan Ghayyur
Full Text Available Mobile healthcare systems are currently considered a key research area in the domain of software engineering. The adoption of modern technologies for mobile healthcare systems is a quick option for industry professionals. Software architecture is a key feature that contributes towards a software product, solution, or service. Software architecture helps in better communication, documentation of design decisions, and risk identification; provides a basis for reusability, scalability, and scheduling; reduces maintenance cost; and, lastly, helps to avoid software failures. Hence, in order to solve the abovementioned issues in mobile healthcare, software architecture is integrated with the personal software process. The personal software process has been applied successfully, but it is unable to address issues related to architectural design and evaluation capabilities. Hence, a new technique, the architecture-augmented personal process, is presented in order to enhance the quality of mobile healthcare systems through the use of architectural design integrated with the personal software process. The proposed process was validated by case studies. It was found that the proposed process helped in reducing overall costs and effort. Moreover, an improved architectural design helped in the development of a high quality mobile healthcare system.
Peck, T; Sparkman, D; Storch, N
The "LLNL Site-Specific Advanced Simulation and Computing (ASCI) Software Quality Engineering Recommended Practices V1.1" document describes a set of recommended software quality engineering (SQE) practices for ASCI code projects at Lawrence Livermore National Laboratory (LLNL). In this context, SQE is defined as the process of building quality into software products by applying the appropriate guiding principles and management practices. Continual code improvement and ongoing process improvement are expected benefits. Certain practices are recommended, although projects may select the specific activities they wish to improve, and the appropriate timelines for such actions. Additionally, projects can rely on the guidance of this document when generating ASCI Verification and Validation (V&V) deliverables. ASCI program managers will gather information about their software engineering practices and improvement. This information can be shared to leverage the best SQE practices among development organizations. It will further be used to ensure the currency and vitality of the recommended practices. This Overview is intended to provide basic information to the LLNL ASCI software management and development staff from the "LLNL Site-Specific ASCI Software Quality Engineering Recommended Practices V1.1" document. Additionally, the Overview provides steps to using that document. For definitions of terminology and acronyms, refer to its Glossary and Acronyms sections.
Kluth, Stefan; The ATLAS collaboration; Obreshkov, Emil; Roe, Shaun; Seuster, Rolf; Snyder, Scott; Stewart, Graeme
The ATLAS experiment at CERN uses about six million lines of code and currently has about 420 developers whose background is largely from physics. In this paper we explain how the C++ code quality is managed using a range of tools from compile-time through to run time testing and reflect on the great progress made in the last year largely through the use of static analysis tools such as Coverity®, an industry-standard tool which enables quality comparison with general open source C++ code. Other tools including cppcheck, Include-What-You-Use and run-time 'sanitizers' are also discussed.
One reason for systematically applying quality assurance to computer software is the extensive use of results from computer programs to characterize potential sites for nuclear waste repositories, leading ultimately to important policy-making decisions. Because data from these programs characterize the likely radioactivity profile for many hundreds of years, experimental validation is not feasible. The Sensitivity and Uncertainty Analysis Methods Development Project (SUAMDP) was developed to formulate and utilize efficient and comprehensive methods for determining sensitivities of calculated results with respect to changes in all input parameters. The computerized methodology was embodied in the Gradient Enhanced Software System (GRESS). Because GRESS was to be used in site characterization for waste storage, stringent NQA-1 requirements were imposed by the sponsor. A working relationship between the Oak Ridge National Laboratory (ORNL) Quality Department and the research scientists developing GRESS was essential in achieving understanding and acceptance of the quality assurance requirements as applied to the SUAMDP. The relationship resulted in the SUAMDP becoming the first software project at ORNL to develop a comprehensive NQA-1 Quality Assurance Plan; this plan now serves as a model for software quality assurance at ORNL. This paper describes the evolution of this plan and its impact on the application of quality assurance procedures to software. 2 refs
This is a high-quality algorithm for hierarchical clustering of large software source code. It effectively allows breaking down the complexity of tens of millions of lines of source code, so that a human software engineer can comprehend a software system at a high level by looking at its architectural diagram, which is reconstructed automatically from the source code of the software system. The architectural diagram shows a tree of subsystems having OOP classes in its leaves (in other words, a nested software decomposition). The tool reconstructs the missing (inconsistent/incomplete/nonexistent) architectural documentation for a software system from its source code. This facilitates software maintenance: change requests can be performed substantially faster. Simply speaking, this unique tool allows lifting the comprehensible grain of object-oriented software systems from OOP class level to subsystem level. It is estimated that a commercial tool, developed on the basis of this work, will reduce software mainte...
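A nested decomposition of this kind can be obtained by agglomerative clustering over per-class dependency sets. The following is only a hypothetical sketch of that idea, not the tool's actual algorithm; the Jaccard similarity measure, the threshold, and the class names are all illustrative assumptions:

```python
def jaccard(a, b):
    """Similarity of two clusters based on overlap of their dependency sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def agglomerate(deps, threshold=0.5):
    """Greedy agglomerative clustering: repeatedly merge the two most
    similar clusters until no pair exceeds the similarity threshold."""
    clusters = [({name}, set(d)) for name, d in deps.items()]
    while len(clusters) > 1:
        i, j, sim = max(
            ((i, j, jaccard(clusters[i][1], clusters[j][1]))
             for i in range(len(clusters))
             for j in range(i + 1, len(clusters))),
            key=lambda t: t[2],
        )
        if sim < threshold:
            break
        names = clusters[i][0] | clusters[j][0]
        merged_deps = clusters[i][1] | clusters[j][1]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append((names, merged_deps))
    return [sorted(c[0]) for c in clusters]

# Hypothetical class -> dependency mapping extracted from source code:
deps = {
    "OrderView": {"Order", "Money"},
    "OrderCtrl": {"Order", "Money", "Db"},
    "UserView":  {"User", "Session"},
    "UserCtrl":  {"User", "Session", "Db"},
}
clusters = agglomerate(deps)
```

On this toy input the order-handling and user-handling classes merge into two subsystems, which would form the leaves' parent nodes in the reconstructed architectural tree.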
Ambrosiano, J.; Webster, R. [Los Alamos National Lab., NM (United States)
This report contains viewgraphs on the quest to develop better simulation code quality through process modeling and improvement. The study is based on the experience of the authors and interviews with ten subjects chosen from simulation code development teams at LANL. The study is descriptive rather than scientific.
Nemesio Freitas Duarte Filho
Full Text Available The market for software products offered as a service (SaaS) is growing steadily and has attracted suppliers from different segments of the global IT market. However, the use of SaaS products brings a range of challenges, both in the organizational, cultural and technological areas. A difficulty that exists today is the lack of methods and models for assessing the quality of these products. This document presents a method to assess the quality of a software product offered as a service, named SaaSQuality. The proposed method has a quality model appropriate to the SaaS context, based on standards and models of software quality (ISO 9126) and models for IT management (ITIL and COBIT). The experimental results obtained through a case study show that the method offers suitable assessment practices for Software as a Service.
Various computer codes employed at Israel Electricity Company for preliminary reactor design analysis and fuel cycle scoping calculations have often been subject to program source modifications. Although most changes were due to computer or operating system compatibility problems, a number of significant modifications were due to model improvements and enhancements of algorithm efficiency and accuracy. With growing acceptance of software quality assurance requirements and methods, a program of extensive testing of modified software has been adopted within the regular maintenance activities. In this work, a survey has been performed of various software quality assurance methods of software testing, which belong mainly to the two major categories of implementation-based ('white box') and specification-based ('black box') testing. The results of this survey exhibit a clear preference for specification-based testing. In particular, the equivalence class partitioning method and the boundary value method have been selected as especially suitable functional methods for testing reactor analysis codes. A separate study of software quality assurance methods and techniques has been performed in this work with the objective of establishing appropriate pre-test software specification methods. Two methods of software analysis and specification have been selected as the most suitable for this purpose: the method of data flow diagrams has been shown to be particularly valuable for performing functional/procedural software specification, while entity-relationship diagrams have proved efficient for specifying the software data/information domain. The feasibility of these two methods has been analyzed, in particular for software uncertainty analysis and overall code accuracy estimation. (author). 14 refs
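As a hedged illustration of the boundary value method mentioned above (not taken from the cited work; the validator and its valid range are invented), test inputs are placed at, just inside, and just outside each boundary of the input domain:

```python
def valid_power_fraction(p):
    """Hypothetical validator: an input fraction must lie in [0.0, 1.0]."""
    return 0.0 <= p <= 1.0

def boundary_value_cases(lo, hi, eps=1e-9):
    """Classic boundary value analysis: values just outside, exactly at,
    and just inside each boundary, plus one nominal mid-range value."""
    return [lo - eps, lo, lo + eps, (lo + hi) / 2, hi - eps, hi, hi + eps]

cases = boundary_value_cases(0.0, 1.0)
results = [valid_power_fraction(c) for c in cases]
# Only the two just-outside values should be rejected.
```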
A test case in software engineering is a set of conditions or variables under which a tester will determine whether an application or software system is working correctly or not. The mechanism for determining whether a software program or system has passed or failed such a test is known as a test oracle. Defect prevention is the most vital but habitually neglected facet of software quality assurance in any project. If practiced at all stages of software development, it can reduce the time, overheads and resources entailed to engineer a high quality product. The key challenge of an IT industry is to engineer a software product with minimum post-deployment defects. This paper focuses on finding the total number of defects that have occurred in the software development process when test cases show that the software is not working properly. For three similar projects, it aims at classifying various defects using the first level of Orthogonal Defect Classification (ODC), finding the base causes of the defects, and using the lessons learned from the projects as preventive ideas. The paper also showcases how the preventive ideas are implemented in a new set of projects, resulting in the reduction of the number of similar
Various factors affect the impact of agile practices on the continuous delivery of software projects. This is a major reason why projects perform differently, some failing and some succeeding, when they implement agile practices in various environments. This is not helped by the fact that many projects work within a limited budget while project plans also change, putting them under pressure to meet deadlines when they fall behind in their planned work. This study investigates the impact of pair programming (PP), customer involvement, QA ability, pair testing (PT) and test-driven development (TDD) on the pre-release and post-release quality of software projects, using system dynamics within a schedule-pressure-blighted environment. The model is validated using results from a completed medium-sized software project. Statistical results suggest that the impact of PP on the pre-release quality of the software is insignificant, while TDD and customer involvement both have significant effects on pre-release quality. Results also showed that both PT and QA ability had a significant impact on the post-release quality of the software.
Jørgensen, John Leif
Complex instruments like the ASC may be quite difficult to test in closed loops. This problem is augmented by the fact that no direct stimulation of the CHU is possible that will render the full performance, noise spectrum and real-timeliness with high fidelity. In order to circumvent this impasse...
Montero, Juan Manuel; San Segundo, Ruben; de Cordoba, Ricardo; de La Barcena, Amparo Marin; Zlotnik, Alexander
The Web is changing the way people access and exchange information. Specifically, in the teaching and learning environment, we are witnessing the traditional model of presence-based magisterial classes shifting towards Web-Based Learning. This new model draws on remote access systems, knowledge sharing, and student mobility. In this context, pedagogical strategies are also changing; for instance, Project-Based Learning (PBL) is seen as a potential driver for growth and development in this arena. This study is focused on a PBL-oriented course with a Distributed Remote ACcess (DRAC) system. The objective is to analyze how quantitative methods can be leveraged to design and evaluate automatic diagnosis and feedback tools to assist students with quality-related pedagogical issues in DRAC-enabled PBL courses. The main conclusions derived from this study are correlation-based and reveal that the development of automatic quality assessment and feedback requires further research.
The paper is aimed at presenting a model of transition from quality management systems to knowledge management systems in software developing organizations. The methodology focuses on presenting the components of the model of transition from quality management systems to knowledge management systems. The paper defines the model of transition from quality management systems conforming to the ISO 9000 series of international standards, supplemented with ISO/IEC 90003:2004, to knowledge management sys...
A Quality Assurance programme in diagnostic radiology is being implemented by the Ministry of Health (MoH) in Malaysia. Under this programme the performance of an x-ray machine used for diagnostic purposes is tested using an approved procedure, commonly known as quality control in diagnostic radiology. The quality control or performance tests are carried out by a class H licence holder issued under the Atomic Energy Licensing Act 1984. There are a few computer applications (software) available in the market which can be used for this purpose. A computer application using Visual Basic 6 and Microsoft Access is being developed to expedite data handling, analysis and storage as well as report writing for the quality control tests. In this paper important features of the software for quality control tests are explained in brief. A simple database has been established for this purpose, which is linked to the software. Problems encountered in the preparation of the database are discussed in this paper. A few examples of practical usage of the software and database applications are presented in brief. (Author)
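A minimal sketch of how such a QC results database might be structured, assuming a simple one-table layout with invented field names (the actual MoH application uses Visual Basic 6 and Microsoft Access; SQLite is used here purely for illustration):

```python
import sqlite3

# Illustrative one-table QC store; table and column names are invented,
# not those of the application described in the abstract.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE qc_test (
    machine_id TEXT, test_name TEXT, measured REAL,
    tolerance_low REAL, tolerance_high REAL, test_date TEXT)""")
conn.execute("INSERT INTO qc_test VALUES (?,?,?,?,?,?)",
             ("XR-01", "kVp accuracy", 81.2, 76.0, 84.0, "2024-01-15"))

# Report writing then reduces to queries, e.g. flag any measurement
# outside its tolerance band.
failures = conn.execute(
    "SELECT machine_id, test_name FROM qc_test "
    "WHERE measured < tolerance_low OR measured > tolerance_high").fetchall()
```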
Saifi Bawahir, Mohsin Sheikh
Analysis of data quality is an important issue which has been addressed in data warehousing, data mining and information systems. It has been agreed that poor data quality will impact the quality of the results of analyses, and that it will therefore impact decisions made on the basis of these results. An attempt to improve classification accuracy by pre-clustering did not succeed. However, error rates within clusters from training sets were strongly correlated with error rates within the same clusters on the test sets. This phenomenon could perhaps be used to develop confidence levels for predictions. The main and most common problem that the software industry has to face is the maintenance cost of industrial software systems. One of the main reasons for the high cost of maintenance is the inherent difficulty of understanding software systems that are large, complex, inconsistent and integrated, owing to their differing sizes and levels of arrangement. Decomposing a software system into smaller, more manageable subsystems can aid the process of understanding it significantly. Different algorithms construct different decompositions. Therefore, it is important to have methods that evaluate the quality of such automatic decompositions. In our paper we present a brief survey on software quality prediction through clustering.
Improvements to the JT-60 control system are constantly required as the experiments go on. In order to maintain the integrity of the whole system when modifying control functions, the idea of quality control has been introduced into the software development. The objective of quality control in the JT-60 control system is to accelerate software development. The QC activities place emphasis on clarifying the present status of the software and establishing a standard procedure for software development. Support tools for grasping the present status of the control system have been developed on a general-purpose mainframe computer, where a database on the structure of programs and the relations among programs and tables is installed. Document control is also very important. This paper reports these QC activities and their problems for the JT-60 control system. (author)
Vickery, A [Department of Clinical Physiology and Nuclear Medicine, Glostrup Hospital (Denmark); Joergensen, T [Department of Clinical Physiology and Nuclear Medicine, Naestved Hospital (Denmark); De Nijs, R, E-mail: firstname.lastname@example.org [Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, Copenhagen University Hospital (Denmark)
Thorough quality assurance of gamma and SPECT cameras requires careful handling of the measured quality control (QC) data. Most gamma camera manufacturers provide users with camera-specific QC software. This QC software is indeed a useful tool for following the day-to-day performance of a single camera. However, when it comes to objective performance comparison of different gamma cameras and a deeper understanding of the calculated numbers, the use of camera-specific QC software without access to the source code is best avoided. Calculations and definitions might differ, and manufacturer-independent, standardized results are preferred. Based upon the NEMA Standards Publication NU 1-2007, we have developed a suite of easy-to-use data handling software for processing acquired QC data, providing the user with instructive images and text files with the results.
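For example, NEMA NU 1-2007 defines the integral uniformity of a flood-field image as 100 × (max − min)/(max + min) over the analysed field of view. A minimal sketch of that single calculation (a standard-conforming implementation would also apply the NEMA smoothing filter and UFOV/CFOV masking, both omitted here; the test image is invented):

```python
import numpy as np

def integral_uniformity(flood):
    """Integral uniformity per NEMA NU 1-2007:
    100 * (max - min) / (max + min) over the analysed field of view.
    Real implementations first smooth the image and restrict it to the
    UFOV/CFOV; this sketch skips both steps for brevity."""
    mx, mn = float(flood.max()), float(flood.min())
    return 100.0 * (mx - mn) / (mx + mn)

flood = np.full((8, 8), 1000.0)  # perfectly flat flood field...
flood[4, 4] = 1100.0             # ...with one hot pixel
iu = integral_uniformity(flood)  # 100 * 100 / 2100, about 4.76
```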
The paper describes a PC-based program with a database for quality control (QC). It keeps information about all surveyed equipment and measured parameters. The first function of the program is to extract information from old (existing) MS Excel spreadsheets with QC surveys. The second function is used for input of measurements, which are automatically organized in MS Excel spreadsheets and built into the database. The spreadsheets are based on the protocols described in the EMERALD Training Scheme. In addition, the program can produce statistics of all measured parameters, both in absolute terms and over time.
Arciniegas Herrera, Jose Luis
The quality of software is a key element for the successful of a system. Currently, with the advance of the technology, consumers demand more and better services. Models for the development process have also to be adapted to new requirements. This is particular true in the case of service oriented systems (domain of this thesis), where an unpredictable number of users can access to one or several services. This work proposes an improvement in the models for the software development proces...
Orr, James K.; Peltier, Daryl
This slide presentation reviews the avionics software system on board the space shuttle, with particular emphasis on quality and reliability. The Primary Avionics Software System (PASS) provides automatic and fly-by-wire control of critical shuttle systems and executes in redundant computers. Charts show the number of space shuttle flights versus time, PASS's development history, and other data pointing to the reliability of the system's development. The reliability of the system is also compared to predicted reliability.
Rova, Andrew; Celler, Anna; Hamarneh, Ghassan
We have developed a cross-platform software application that implements all of the basic standardized nuclear medicine scintillation camera quality control analyses, thus serving as an independent complement to camera manufacturers’ software. Our application allows direct comparison of data and statistics from different cameras through its ability to uniformly analyze a range of file types. The program has been tested using multiple gamma cameras, and its results agree with comparable analysi...
Mcgarry, Frank; Valett, Jon; Hall, Dana
The availability and quality of computer resources during the software development process were speculated to have a measurable, significant impact on the efficiency of the development process and the quality of the resulting product. Environment components such as the types of tools, machine responsiveness, and quantity of direct access storage may play a major role in the effort to produce the product and in its subsequent quality as measured by factors such as reliability and ease of maintenance. During the past six years, the NASA Goddard Space Flight Center has conducted experiments with software projects in an attempt to better understand the impact of software development methodologies, environments, and general technologies on the software process and product. Data was extracted and examined from nearly 50 software development projects, all related to support of satellite flight dynamics ground-based computations. The relationship between computer resources and the software development process and product, as exemplified by the subject NASA data, was examined. Based upon the results, a number of computer resource-related implications are provided.
With the purpose of creating the basis for quality control and quality assurance of the acquisition and processing programs for gated cardiac blood-pool (MUGA) studies, we used the VENSTRA cardiac function phantom on 7 cameras (4 SOPHA DSX-1000, 2 GE IMAGAMMA-2001 and 1 SIEMENS HERMES) and made 3 acquisitions for each global left ventricular ejection fraction (LVEF: 30%, 60% and 80%) and for each heart rate (HR: 40, 80 and 160 beats/min). The planar resolution and planar uniformity were proper in all the equipment. Differences of less than 5% were found between the acquisition and processing programs. To evaluate the processing program without the influence of acquisition parameters, we used one group of these images as a software phantom and tested the semi-automatic software on all cameras. The semi-automatic protocol showed differences of less than 3% between software packages. The automatic processing software for gated cardiac studies was checked with the COST-B2 software phantom; the difference between the left ventricle ejection fractions calculated by these software packages was less than 5%, and the regional wall motion analysis was completely coincident in 93% of the cases. The use of the VENSTRA and COST-B2 phantoms confirms the correct functioning of the acquisition and LVEF calculation software for MUGA studies in 83% of Cuban nuclear medicine centers.
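The count-based LVEF referred to above is conventionally computed from background-corrected end-diastolic (ED) and end-systolic (ES) region counts; a minimal sketch of that formula, with invented example numbers:

```python
def lvef_percent(ed_counts, es_counts, bkg_counts):
    """Count-based left-ventricular ejection fraction from a gated
    blood-pool study: LVEF = (ED - ES) / (ED - BKG) * 100, where BKG
    is the background estimate scaled to the ventricular ROI area.
    (Equivalent to (ED' - ES') / ED' with background-corrected counts.)"""
    return 100.0 * (ed_counts - es_counts) / (ed_counts - bkg_counts)

# Invented example: ED = 12000, ES = 7200, background = 4000 -> LVEF = 60%
ef = lvef_percent(12000, 7200, 4000)
```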
Holme, Oliver; Dissertori, Günther; Djambazov, Lubomir; Lustermann, Werner; Zelepoukine, Serguei
The Detector Control System (DCS) software of the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment at CERN is designed primarily to enable safe and efficient operation of the detector during Large Hadron Collider (LHC) data-taking periods. Through a manual analysis of the code and the adoption of ConQAT, a software quality assessment toolkit, the CMS ECAL DCS team has made significant progress in reducing complexity and improving code quality, with observable results in terms of a reduction in the effort dedicated to software maintenance. This paper explains the methodology followed, including the motivation to adopt ConQAT, the specific details of how this toolkit was used and the outcomes that have been achieved. ConQAT, Continuous Quality Assessment Toolkit; https://www.conqat.org/
Developing web applications using geographically distributed team members has seen increased popularity during the last years, mainly because of the rise of Open Source technologies, the fast penetration of the Internet in emerging economies, the continuous quest for reduced costs, as well as the fast adoption of online platforms and services which successfully address project planning, coordination and other development tasks. This paper identifies general software process stages for both collocated and distributed development and analyses the impact the use of planning, management and testing online services has on the duration, cost and quality of each stage. Given that Quality Assurance is one of the most important concerns in Geographically Distributed Software Development (GDSD), the focus is on Software Quality Validation.