WorldWideScience

Sample records for video quality database

  1. SIRSALE: integrated video database management tools

    Science.gov (United States)

    Brunie, Lionel; Favory, Loic; Gelas, J. P.; Lefevre, Laurent; Mostefaoui, Ahmed; Nait-Abdesselam, F.

    2002-07-01

    Video databases became an active field of research during the last decade. The main objective of such systems is to provide users with capabilities to search, access and play back distributed stored video data in the same friendly way as they do for traditional distributed databases. Hence, such systems need to deal with hard issues: (a) video documents generate huge volumes of data and are time sensitive (streams must be delivered at a specific bitrate), and (b) the content of video data is very hard to extract automatically and needs to be annotated by humans. To cope with these issues, many approaches have been proposed in the literature, including data models, query languages, video indexing, etc. In this paper, we present SIRSALE: a set of video database management tools that allow users to manipulate video documents and streams stored in large distributed repositories. All the proposed tools are based on generic models that can be customized for specific applications using ad-hoc adaptation modules. More precisely, SIRSALE allows users to: (a) browse video documents by structure (sequences, scenes, shots) and (b) query the video database content by using a graphical tool adapted to the nature of the target video documents. This paper also presents an annotation interface that allows archivists to describe the content of video documents. All these tools are coupled to a video player integrating remote VCR functionalities and are based on active network technology. We show how dedicated active services allow optimized transport of video streams (with Tamanoir active nodes). We then describe experiments using SIRSALE on an archive of news videos and soccer matches. The system has been demonstrated to professionals with positive feedback. Finally, we discuss open issues and present some perspectives.

  2. Handbook of video databases design and applications

    CERN Document Server

    Furht, Borko

    2003-01-01

    INTRODUCTION
      - Introduction to Video Databases (Oge Marques and Borko Furht)
    VIDEO MODELING AND REPRESENTATION
      - Modeling Video Using Input/Output Markov Models with Application to Multi-Modal Event Detection (Ashutosh Garg, Milind R. Naphade, and Thomas S. Huang)
      - Statistical Models of Video Structure and Semantics (Nuno Vasconcelos)
      - Flavor: A Language for Media Representation (Alexandros Eleftheriadis and Danny Hong)
      - Integrating Domain Knowledge and Visual Evidence to Support Highlight Detection in Sports Videos (Juergen Assfalg, Marco Bertini, Carlo Colombo, and Alberto Del Bimbo)
      - A Generic Event Model and Sports Vid…

  3. Blind prediction of natural video quality.

    Science.gov (United States)

    Saad, Michele A; Bovik, Alan C; Charrier, Christophe

    2014-03-01

    We propose a blind (no reference or NR) video quality evaluation model that is nondistortion specific. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. We use the models to define video statistics and perceptual features that are the basis of a video quality assessment (VQA) algorithm that does not require the presence of a pristine video to compare against in order to predict a perceptual quality score. The contributions of this paper are threefold. 1) We propose a spatio-temporal natural scene statistics (NSS) model for videos. 2) We propose a motion model that quantifies motion coherency in video scenes. 3) We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality. The proposed algorithm, called video BLIINDS, is tested on the LIVE VQA database and on the EPFL-PoliMi video database and shown to perform close to the level of top performing reduced and full reference VQA algorithms.
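
    As a rough illustration of DCT-domain spatiotemporal statistics of the kind this abstract describes (not the paper's actual BLIINDS feature set), one can summarize blockwise DCT coefficients of frame differences; the block size and the AC-magnitude summary below are illustrative choices.

```python
import numpy as np

def dct_mat(n):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def frame_diff_dct_stats(prev, curr, block=8):
    # Blockwise 2D-DCT of a frame difference; summarize AC-coefficient
    # magnitudes per block (a stand-in for the paper's NSS features).
    d = curr.astype(float) - prev.astype(float)
    C = dct_mat(block)
    h, w = d.shape
    energies = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coef = C @ d[y:y + block, x:x + block] @ C.T
            energies.append(np.abs(coef.ravel()[1:]).mean())  # skip DC term
    energies = np.array(energies)
    return energies.mean(), energies.std()
```

    Summaries like these, computed per frame pair, would then feed a quality regressor.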

  4. Video Quality Prediction over Wireless 4G

    KAUST Repository

    Lau, Chun Pong

    2013-04-14

    In this paper, we study the problem of video quality prediction over the wireless 4G network. Video transmission data are collected from a real 4G SCM testbed to investigate the factors that affect video quality. After feature transformation and selection on video and network parameters, video quality is predicted by solving a regression problem. Experimental results show that the dominant factor affecting video quality is channel attenuation, and that video quality can be estimated well by our models with small errors.

  5. Video Quality Prediction over Wireless 4G

    KAUST Repository

    Lau, Chun Pong; Zhang, Xiangliang; Shihada, Basem

    2013-01-01

    In this paper, we study the problem of video quality prediction over the wireless 4G network. Video transmission data are collected from a real 4G SCM testbed to investigate the factors that affect video quality. After feature transformation and selection on video and network parameters, video quality is predicted by solving a regression problem. Experimental results show that the dominant factor affecting video quality is channel attenuation, and that video quality can be estimated well by our models with small errors.
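
    The regression step described above can be sketched as an ordinary least-squares fit over network and video features; the feature set (channel attenuation, bitrate, packet loss) and the MOS labels below are invented for illustration and are not data from the paper.

```python
import numpy as np

# Hypothetical per-clip measurements: channel attenuation (dB),
# video bitrate (Mbps), packet loss (%); labels are made-up MOS values.
X = np.array([[5.0, 4.0, 0.1],
              [10.0, 3.5, 0.3],
              [15.0, 2.5, 0.5],
              [20.0, 2.0, 1.0],
              [30.0, 1.0, 3.0]])
y = np.array([4.5, 4.1, 3.6, 3.2, 1.9])

A = np.hstack([X, np.ones((len(X), 1))])   # append an intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit of the weights

def predict_mos(features):
    # Predict a MOS value for a new [attenuation, bitrate, loss] triple.
    return float(np.append(features, 1.0) @ w)
```

    With data like these, the fitted weight on attenuation dominates, mirroring the paper's finding that channel attenuation is the main quality factor.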

  6. Video quality pooling adaptive to perceptual distortion severity.

    Science.gov (United States)

    Park, Jincheol; Seshadrinathan, Kalpana; Lee, Sanghoon; Bovik, Alan Conrad

    2013-02-01

    It is generally recognized that severe video distortions that are transient in space and/or time have a large effect on overall perceived video quality. In order to understand this phenomenon, we study the distribution of spatio-temporally local quality scores obtained from several video quality assessment (VQA) algorithms on videos suffering from compression and lossy transmission over communication channels. We propose a content adaptive spatial and temporal pooling strategy based on the observed distribution. Our method adaptively emphasizes "worst" scores along both the spatial and temporal dimensions of a video sequence and also considers the perceptual effect of large-area cohesive motion flow such as egomotion. We demonstrate the efficacy of the method by testing it using three different VQA algorithms on the LIVE Video Quality database and the EPFL-PoliMI video quality database.
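
    A minimal sketch of worst-case spatiotemporal pooling in the spirit of this abstract (the fixed worst fraction and the two-stage scheme are illustrative simplifications, not the paper's content-adaptive strategy):

```python
import numpy as np

def pooled_score(local_scores, worst_frac=0.1):
    # local_scores: array of shape (frames, blocks) of local quality scores.
    # Spatially, average the worst `worst_frac` of block scores per frame;
    # temporally, average the worst `worst_frac` of the frame scores.
    local_scores = np.asarray(local_scores, float)
    k = max(1, int(worst_frac * local_scores.shape[1]))
    frame_scores = np.sort(local_scores, axis=1)[:, :k].mean(axis=1)
    t = max(1, int(worst_frac * len(frame_scores)))
    return np.sort(frame_scores)[:t].mean()
```

    Because only the lowest scores survive each stage, a short severe distortion pulls the pooled score down far more than plain averaging would.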

  7. A Database Design and Development Case: Home Theater Video

    Science.gov (United States)

    Ballenger, Robert; Pratt, Renee

    2012-01-01

    This case consists of a business scenario of a small video rental store, Home Theater Video, which provides background information, a description of the functional business requirements, and sample data. The case provides sufficient information to design and develop a moderately complex database to assist Home Theater Video in solving their…

  8. Attention modeling for video quality assessment

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Perkis, Andrew

    2010-01-01

    … averaged spatiotemporal pooling. The local quality is derived from visual attention modeling and quality variations over frames. Saliency, motion, and contrast information are taken into account in modeling visual attention, which is then integrated into IQMs to calculate the local quality of a video frame … average between the global quality and the local quality. Experimental results demonstrate that the combination of global and local quality outperforms either the global or the local quality alone, as well as other quality models, in video quality assessment. In addition, the proposed video quality modeling algorithm can improve the performance of image quality metrics on video quality assessment compared to the normal averaged spatiotemporal pooling scheme. …

  9. Study of Temporal Effects on Subjective Video Quality of Experience.

    Science.gov (United States)

    Bampis, Christos George; Zhi Li; Moorthy, Anush Krishna; Katsavounidis, Ioannis; Aaron, Anne; Bovik, Alan Conrad

    2017-11-01

    HTTP adaptive streaming is being increasingly deployed by network content providers, such as Netflix and YouTube. By dividing video content into data chunks encoded at different bitrates, a client is able to request the appropriate bitrate for the segment to be played next based on the estimated network conditions. However, this can introduce a number of impairments, including compression artifacts and rebuffering events, which can severely impact an end-user's quality of experience (QoE). We have recently created a new video quality database, which simulates a typical video streaming application, using long video sequences and interesting Netflix content. Going beyond previous efforts, the new database contains highly diverse and contemporary content, and it includes the subjective opinions of a sizable number of human subjects regarding the effects on QoE of both rebuffering and compression distortions. We observed that rebuffering is always obvious and unpleasant to subjects, while bitrate changes may be less obvious due to content-related dependencies. Transient bitrate drops were preferable over rebuffering only on low complexity video content, while consistently low bitrates were poorly tolerated. We evaluated different objective video quality assessment algorithms on our database and found that objective video quality models are unreliable for QoE prediction on videos suffering from both rebuffering events and bitrate changes. This implies the need for more general QoE models that take into account objective quality models, rebuffering-aware information, and memory. The publicly available video content as well as metadata for all of the videos in the new database can be found at http://live.ece.utexas.edu/research/LIVE_NFLXStudy/nflx_index.html.

  10. No Reference Video-Quality-Assessment Model for Monitoring Video Quality of IPTV Services

    Science.gov (United States)

    Yamagishi, Kazuhisa; Okamoto, Jun; Hayashi, Takanori; Takahashi, Akira

    Service providers should monitor the quality of experience of a communication service in real time to confirm its status. To do this, we previously proposed a packet-layer model that can be used for monitoring the average video quality of typical Internet protocol television content using parameters derived from transmitted packet headers. However, it is difficult to monitor the video quality per user using the average video quality, because video quality depends on the video content. To accurately monitor the video quality per user, a model that estimates the video quality per video content, rather than the average video quality, should be developed. Therefore, to take into account the impact of video content on video quality, we propose a model that calculates the difference between the video quality of the estimation-target video and the average video quality estimated using a packet-layer model. We first conducted extensive subjective quality assessments for different codecs and video sequences. We then modeled their characteristics based on parameters related to compression and packet loss. Finally, we verified the performance of the proposed model by applying it to unknown data sets different from the training data sets used for developing the model.
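
    A packet-layer model of the kind referred to above can be sketched as a parametric mapping from header-derived parameters (bitrate, packet-loss rate) to quality. The functional form and coefficients below are illustrative assumptions, not the published model: coding quality saturates with bitrate and packet loss degrades it exponentially.

```python
import math

def packet_layer_mos(bitrate_kbps, loss_pct, v1=3.8, v2=600.0, v3=0.7):
    # Hypothetical parametric packet-layer model (coefficients v1..v3 are
    # placeholders): quality rises toward 1 + v1 as bitrate grows, and
    # decays exponentially with the packet-loss percentage.
    coding = v1 * (1.0 - math.exp(-bitrate_kbps / v2))
    return 1.0 + coding * math.exp(-v3 * loss_pct)
```

    A per-content model as proposed in the paper would then add a content-dependent correction to this average estimate.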

  11. Video Databases: An Emerging Tool in Business Education

    Science.gov (United States)

    MacKinnon, Gregory; Vibert, Conor

    2014-01-01

    A video database of business-leader interviews has been implemented in the assignment work of students in a Bachelor of Business Administration program at a primarily-undergraduate liberal arts university. This action research study was designed to determine the most suitable assignment work to associate with the database in a Business Strategy…

  12. Quality of Experience Assessment of Video Quality in Social Clouds

    Directory of Open Access Journals (Sweden)

    Asif Ali Laghari

    2017-01-01

    Video sharing on social clouds is popular among users around the world. High-Definition (HD) videos have large file sizes, so storing them in cloud storage and streaming them at high quality from the cloud to the client are significant problems for service providers. Social clouds compress the videos to save storage and to stream over slow networks while providing quality of service (QoS). Compressing a video decreases its quality compared to the original, and its parameters change both during online playback and after download. Degradation of video quality due to compression decreases the quality of experience (QoE) of end users. To assess the QoE of video compression, we conducted subjective (QoE) experiments by uploading, sharing, and playing videos from social clouds. Three popular social clouds, Facebook, Tumblr, and Twitter, were selected to upload videos and play them online for users. The QoE was recorded using a questionnaire in which users reported their experience of the video quality they perceived. Results show that Facebook and Twitter compressed HD videos more heavily than Tumblr. However, Facebook delivered better quality of compressed videos than Twitter. Accordingly, users assigned lower ratings to Twitter for online video quality than to Tumblr, which played videos online at high quality with less compression.

  13. Subjective video quality comparison of HDTV monitors

    Science.gov (United States)

    Seo, G.; Lim, C.; Lee, S.; Lee, C.

    2009-01-01

    HDTV broadcasting services have become widely available. Furthermore, in the upcoming IPTV services, HDTV services are important, and quality monitoring becomes an issue, particularly in IPTV services. Consequently, there have been great efforts to develop video quality measurement methods for HDTV. On the other hand, most HDTV programs will be watched on digital TV monitors, which include LCD and PDP TV monitors. In general, LCD and PDP TV monitors have different color characteristics and response times. Furthermore, most commercial TV monitors include post-processing to improve video quality. In this paper, we compare the subjective video quality of some commercial HDTV monitors to investigate the impact of monitor type on perceptual video quality. We used the ACR method as the subjective testing method. Experimental results show that the correlation coefficients among the HDTV monitors are reasonably high. However, for some video sequences and impairments, differences in subjective scores were observed.
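
    The outcome of an ACR test such as this is typically reported as a mean opinion score per condition, often with a confidence interval; a minimal sketch using a normal approximation:

```python
import math
import statistics

def mos_with_ci(scores):
    # Mean opinion score from ACR ratings (1..5) with a
    # normal-approximation 95% confidence interval.
    m = statistics.mean(scores)
    s = statistics.stdev(scores)
    half = 1.96 * s / math.sqrt(len(scores))
    return m, (m - half, m + half)
```

    Correlating per-monitor MOS vectors across sequences then yields the cross-monitor correlation coefficients the abstract mentions.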

  14. Research on Construction of Road Network Database Based on Video Retrieval Technology

    Directory of Open Access Journals (Sweden)

    Wang Fengling

    2017-01-01

    Based on the characteristics and basic structure of video databases and on several typical video data models, a segmentation-based multi-level data model is used to describe the landscape-information video database, the road-network database model, and the road network management database system. The detailed design and implementation of the landscape information management system are then presented.

  15. Real-time video quality monitoring

    Science.gov (United States)

    Liu, Tao; Narvekar, Niranjan; Wang, Beibei; Ding, Ran; Zou, Dekun; Cash, Glenn; Bhagavathy, Sitaram; Bloom, Jeffrey

    2011-12-01

    The ITU-T Recommendation G.1070 is a standardized opinion model for video telephony applications that uses video bitrate, frame rate, and packet-loss rate to measure video quality. However, this model was originally designed as an offline quality planning tool. It cannot be directly used for quality monitoring, since the three input parameters above are not readily available within a network or at the decoder, and there is considerable room to improve the performance of this quality metric. In this article, we present a real-time video quality monitoring solution based on this Recommendation. We first propose a scheme to efficiently estimate the three parameters from video bitstreams, so that the model can be used as a real-time video quality monitoring tool. Furthermore, an enhanced algorithm based on the G.1070 model that provides more accurate quality prediction is proposed. Finally, to use this metric in real-world applications, we present an emerging example application of real-time quality measurement to the management of transmitted videos, especially those delivered to mobile devices.
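
    Schematically, a G.1070-style video quality function combines a coding-quality term, peaking at a bitrate-dependent optimal frame rate, with an exponential packet-loss term. The sketch below mimics that structure only; the coefficients and the optimal-frame-rate rule are placeholders, not the Recommendation's actual model.

```python
import math

def g1070_style_vq(bitrate_kbps, frame_rate, loss_pct,
                   v_max=4.0, opt_fr_scale=0.05, spread=1.0, loss_decay=0.4):
    # Schematic G.1070-style video quality: coding quality is highest at an
    # assumed optimal frame rate that grows with bitrate, falls off
    # log-normally around it, and decays exponentially with packet loss.
    # All coefficients here are illustrative placeholders.
    opt_fr = min(30.0, opt_fr_scale * bitrate_kbps)
    i_coding = v_max * math.exp(-(math.log(frame_rate) - math.log(opt_fr)) ** 2
                                / (2.0 * spread ** 2))
    return 1.0 + i_coding * math.exp(-loss_decay * loss_pct)
```

    A monitoring tool in the spirit of the article would estimate the three inputs from the bitstream and feed them into a calibrated function of this shape.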

  16. Video Measurements: Quantity or Quality

    Science.gov (United States)

    Zajkov, Oliver; Mitrevski, Boce

    2012-01-01

    Students have problems with understanding, using and interpreting graphs. In order to improve the students' skills for working with graphs, we propose Manual Video Measurement (MVM). In this paper, the MVM method is explained and its accuracy is tested. The comparison with the standardized video data software shows that its accuracy is comparable…

  17. Perceived Quality of Full HD Video - Subjective Quality Assessment

    Directory of Open Access Journals (Sweden)

    Juraj Bienik

    2016-01-01

    In recent years, interest in multimedia services has become a global trend, and this trend is still rising. Video quality is a very significant part of the bundle of multimedia services, which leads to a requirement for quality assessment in the video domain. The quality of video streamed across IP networks is generally influenced by two factors: transmission link imperfections and the efficiency of compression standards. This paper deals with subjective video quality assessment and the impact of the compression standards H.264, H.265 and VP9 on perceived video quality. The evaluation is done for four full HD sequences that differ in content; the distinction is based on the Spatial (SI) and Temporal (TI) Index of the test sequences. Finally, the experimental results show up to 30% bitrate reduction for H.265 and VP9 compared with the reference H.264.
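
    The SI and TI indices used to characterize test sequences follow ITU-T P.910: SI is the maximum over time of the standard deviation of a Sobel-filtered frame, and TI the maximum standard deviation of successive frame differences. A small self-contained sketch (valid-region convolution, grayscale frames assumed):

```python
import numpy as np

def conv3x3(frame, kernel):
    # 3x3 convolution over the valid region, implemented by slicing.
    h, w = frame.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * frame[i:i + h - 2, j:j + w - 2]
    return out

def si_ti(frames):
    # frames: list of 2D float arrays (one grayscale frame each).
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    si = max(np.hypot(conv3x3(f, kx), conv3x3(f, ky)).std() for f in frames)
    ti = max((b - a).std() for a, b in zip(frames, frames[1:]))
    return si, ti
```

    High-SI/high-TI sequences are the ones where codec differences typically show most clearly.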

  18. Video Quality Prediction Models Based on Video Content Dynamics for H.264 Video over UMTS Networks

    Directory of Open Access Journals (Sweden)

    Asiya Khan

    2010-01-01

    The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264 encoded video for all content types, combining parameters in both the physical and application layers, over Universal Mobile Telecommunication System (UMTS) networks. In order to characterize the Quality of Service (QoS) level, a learning model based on the Adaptive Neural Fuzzy Inference System (ANFIS) and a second model based on non-linear regression analysis are proposed to predict the video quality in terms of the Mean Opinion Score (MOS). The objective of the paper is two-fold: first, to find the impact of QoS parameters on end-to-end video quality for H.264 encoded video; second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks by considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both models are trained with a combination of physical and application layer parameters and validated with an unseen dataset. Preliminary results show that good prediction accuracy was obtained from both models. The work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.
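
    The 2-state Markov radio link loss model mentioned above can be simulated directly; a minimal Gilbert-model sketch (the transition probabilities are illustrative, not values from the paper):

```python
import random

def gilbert_losses(n_packets, p_gb=0.01, p_bg=0.3, seed=None):
    # 2-state Markov (Gilbert) packet-loss process: in the Good state
    # packets are delivered, in the Bad state they are dropped.
    # p_gb / p_bg are the Good->Bad / Bad->Good transition probabilities,
    # so the long-run loss rate is about p_gb / (p_gb + p_bg).
    rng = random.Random(seed)
    bad = False
    losses = []
    for _ in range(n_packets):
        bad = (rng.random() >= p_bg) if bad else (rng.random() < p_gb)
        losses.append(bad)
    return losses
```

    Because losses arrive in bursts rather than independently, such traces stress a video decoder quite differently from uniform random loss at the same average rate.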

  19. Adaptive testing for video quality assessment

    NARCIS (Netherlands)

    Menkovski, V.; Exarchakos, G.; Liotta, A.; Damásio, M.J.; Cardoso, G.; Quico, C.; Geerts, D.

    2011-01-01

    Optimizing the Quality of Experience and avoiding under- or over-provisioning in video delivery services requires understanding of how different resources affect the perceived quality. The utility of resources, such as bit-rate, is directly calculated by proportioning the improvement in quality over …

  20. BDVC (Bimodal Database of Violent Content): A database of violent audio and video

    Science.gov (United States)

    Rivera Martínez, Jose Luis; Mijes Cruz, Mario Humberto; Rodríguez Vázqu, Manuel Antonio; Rodríguez Espejo, Luis; Montoya Obeso, Abraham; García Vázquez, Mireya Saraí; Ramírez Acosta, Alejandro Álvaro

    2017-09-01

    Nowadays there is a trend towards the use of unimodal databases for multimedia content description, organization and retrieval applications dealing with a single type of content, such as text, voice or images; bimodal databases, by contrast, allow two different types of content, such as audio-video or image-text, to be associated semantically. The generation of a bimodal audio-video database implies the creation of a connection between the multimedia content through the semantic relation that associates the actions of both types of information. This paper describes in detail the characteristics and methodology used for the creation of the bimodal database of violent content; the semantic relationship is established by the proposed concepts that describe the audiovisual information. The use of bimodal databases in applications related to audiovisual content processing increases semantic performance if and only if these applications process both types of content. This bimodal database contains 580 annotated audiovisual segments, with a duration of 28 minutes, divided into 41 classes. Bimodal databases are a tool in the generation of applications for the semantic web.

  1. Educational quality of YouTube videos on knee arthrocentesis.

    Science.gov (United States)

    Fischer, Jonas; Geurts, Jeroen; Valderrabano, Victor; Hügle, Thomas

    2013-10-01

    Knee arthrocentesis is a commonly performed diagnostic and therapeutic procedure in rheumatology and orthopedic surgery. Classic teaching of arthrocentesis skills relies on hands-on practice under supervision. Video-based online teaching is an increasingly utilized educational tool in higher and clinical education. YouTube is a popular video-sharing Web site that can be accessed as a teaching source. The objective of this study was to assess the educational value of YouTube videos on knee arthrocentesis posted by health professionals and institutions during the period from 2008 to 2012. The YouTube video database was systematically searched using 5 search terms related to knee arthrocentesis. Two independent clinical reviewers assessed videos for procedural technique and educational value using a 5-point global score, ranging from 1 = poor quality to 5 = excellent educational quality. As validated international guidelines are lacking, we used the guidelines of the Swiss Society of Rheumatology as the criterion standard for the procedure. Of more than a thousand results, 13 videos met the inclusion criteria. Of those, 2 contained additional animated video material: one was purely animated, and one was a checklist. The average length was 3.31 ± 2.28 minutes. The most popular video had 1388 hits per month. Our mean global score for educational value was 3.1 ± 1.0. Eight videos (62%) were considered useful for teaching purposes. Use of a "no-touch" procedure, meaning that once the skin is disinfected it remains untouched before needle penetration, was present in all videos. Six videos (46%) demonstrated full sterile conditions. There was no clear preference for a medial (n = 8) versus lateral (n = 5) approach. A modest number of YouTube videos on knee arthrocentesis appeared to be suitable for application in a Web-based format for medical students, fellows, and residents. The low mean global score for overall educational value suggests an improvement of future video …

  2. Self-aligning and compressed autosophy video databases

    Science.gov (United States)

    Holtz, Klaus E.

    1993-04-01

    Autosophy, an emerging new science, explains 'self-assembling structures,' such as crystals or living trees, in mathematical terms. This research provides a new mathematical theory of 'learning' and a new 'information theory' that permits the growing of self-assembling data networks in a computer memory, similar to the growing of 'data crystals' or 'data trees,' without data processing or programming. Autosophy databases are educated much like a human child to organize their own internal data storage. Input patterns, such as written questions or images, are converted to points in a mathematical omnidimensional hyperspace. The input patterns are then associated with output patterns, such as written answers or images. Omnidimensional information storage results in enormous data compression because each pattern fragment is stored only once. Pattern recognition in text or image files is greatly simplified by the peculiar omnidimensional storage method. Video databases absorb input images from a TV camera and associate them with textual information. The 'black box' operations are totally self-aligning, where the input data determine their own hyperspace storage locations. Self-aligning autosophy databases may lead to a new generation of brain-like devices.

  3. Predicting personal preferences in subjective video quality assessment

    DEFF Research Database (Denmark)

    Korhonen, Jari

    2017-01-01

    In this paper, we study the problem of predicting the visual quality of a specific test sample (e.g. a video clip) experienced by a specific user, based on the ratings by other users for the same sample and by the same user for other samples. A simple linear model and algorithm are presented, where … the characteristics of each test sample are represented by a set of parameters, and the individual preferences are represented by weights for the parameters. According to the validation experiment performed on public visual quality databases annotated with raw individual scores, the proposed model can predict …

  4. National Water Quality Standards Database (NWQSD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The National Water Quality Standards Database (WQSDB) provides access to EPA and state water quality standards (WQS) information in text, tables, and maps. This data...

  5. An automatic analyzer for sports video databases using visual cues and real-world modeling

    NARCIS (Netherlands)

    Han, Jungong; Farin, D.S.; With, de P.H.N.; Lao, Weilun

    2006-01-01

    With the advent of hard-disk video recording, video databases are gradually emerging for consumer applications. The large capacity of disks creates the need for fast storage and retrieval functions. We propose a semantic analyzer for sports video, which is able to automatically extract and analyze key …

  6. Development of a dementia assessment quality database

    DEFF Research Database (Denmark)

    Johannsen, P.; Jørgensen, Kasper; Korner, A.

    2011-01-01

    OBJECTIVE: Increased focus on the quality of health care requires tools and information to address and improve quality. One tool to evaluate and report the quality of clinical health services is quality indicators based on a clinical database. METHOD: The Capital Region of Denmark runs a quality … database for dementia evaluation in the secondary health system. One volume and seven process quality indicators on dementia evaluations are monitored. Indicators include frequency of demented patients, percentage of patients evaluated within three months, whether the work-up included blood tests, Mini … for the data analyses. RESULTS: The database was constructed in 2005 and covers 30% of the Danish population. Data from all consecutive cases evaluated for dementia in the secondary health system in the Capital Region of Denmark are entered. The database has shown that the basic diagnostic work-up programme …

  7. The Danish national quality database for births

    DEFF Research Database (Denmark)

    Andersson, Charlotte Brix; Flems, Christina; Kesmodel, Ulrik Schiøler

    2016-01-01

    Aim of the database: The aim of the Danish National Quality Database for Births (DNQDB) is to measure the quality of the care provided during birth through specific indicators. Study population: The database includes all hospital births in Denmark. Main variables: Anesthesia/pain relief, continuous … Medical Birth Registry. Registration to the Danish Medical Birth Registry is mandatory for all maternity units in Denmark. During the 5 years, performance has improved in the areas covered by the process indicators and for some of the outcome indicators. Conclusion: Measuring quality of care during …

  8. No-Reference Video Quality Assessment Based on Statistical Analysis in 3D-DCT Domain.

    Science.gov (United States)

    Li, Xuelong; Guo, Qun; Lu, Xiaoqiang

    2016-05-13

    It is an important task to design models for universal no-reference video quality assessment (NR-VQA) in multiple video processing and computer vision applications. However, most existing NR-VQA metrics are designed for specific distortion types, which are often not known in practical applications. A further deficiency is that the spatial and temporal information of videos is rarely considered simultaneously. In this paper, we propose a new NR-VQA metric based on spatiotemporal natural video statistics (NVS) in the 3D discrete cosine transform (3D-DCT) domain. In the proposed method, a set of features is first extracted based on the statistical analysis of 3D-DCT coefficients to characterize the spatiotemporal statistics of videos in different views. These features are then used to predict the perceived video quality via an efficient linear support vector regression (SVR) model. The contributions of this paper are: 1) we explore the spatiotemporal statistics of videos in the 3D-DCT domain, which has an inherent spatiotemporal encoding advantage over other widely used 2D transformations; 2) we extract a small set of simple but effective statistical features for video visual quality prediction; 3) the proposed method is universal for multiple types of distortions and robust across different databases. The proposed method is tested on four widely used video databases. Extensive experimental results demonstrate that the proposed method is competitive with state-of-the-art NR-VQA metrics and the top-performing FR-VQA and RR-VQA metrics.
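
    A separable 3D-DCT over spatiotemporal (t, y, x) blocks, of the kind the feature extraction above is built on, can be sketched as follows; the AC-energy summary is an illustrative stand-in for the paper's statistical feature set, not its actual features.

```python
import numpy as np

def dct_mat(n):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def dct3(volume):
    # Separable 3D DCT-II: apply the 1D DCT matrix along each axis in turn.
    out = volume.astype(float)
    for ax in range(3):
        C = dct_mat(out.shape[ax])
        out = np.moveaxis(np.tensordot(C, np.moveaxis(out, ax, 0),
                                       axes=(1, 0)), 0, ax)
    return out

def ac_energy_feature(volume):
    # Total coefficient magnitude outside the DC term: one crude
    # spatiotemporal "activity" feature per block.
    coef = dct3(volume)
    return np.abs(coef).sum() - np.abs(coef[0, 0, 0])
```

    Features computed this way over many blocks would then be regressed against subjective scores, e.g. with a linear SVR as in the paper.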

  9. Quality metric for spherical panoramic video

    Science.gov (United States)

    Zakharchenko, Vladyslav; Choi, Kwang Pyo; Park, Jeong Hoon

    2016-09-01

    Virtual reality (VR) / augmented reality (AR) applications allow users to view artificial content of a surrounding space, simulating a presence effect with the help of special applications or devices. Synthetic content production is a well-known process from the computer graphics domain, and its pipeline has already been fixed in the industry. However, emerging multimedia formats for immersive entertainment applications, such as free-viewpoint television (FTV) or spherical panoramic video, require different approaches to content management and quality assessment. The international standardization of FTV has been promoted by MPEG. This paper is dedicated to a discussion of the immersive media distribution format and the quality estimation process. The accuracy and reliability of the proposed objective quality estimation method were verified with spherical panoramic images, demonstrating good correlation with subjective quality estimation performed by a group of experts.
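
    The abstract does not spell out the proposed metric, but a common objective approach for equirectangular spherical video is sphere-weighted PSNR, where each pixel row is weighted by the cosine of its latitude so that over-sampled polar regions contribute less. A minimal sketch of that general idea (not necessarily the paper's method):

```python
import numpy as np

def ws_psnr(ref, dist, peak=255.0):
    # Sphere-weighted PSNR for a single-channel equirectangular frame:
    # rows are weighted by cos(latitude) to approximate error on the sphere.
    ref = ref.astype(float)
    dist = dist.astype(float)
    h, w = ref.shape
    lat = ((np.arange(h) + 0.5) / h - 0.5) * np.pi
    wgt = np.repeat(np.cos(lat)[:, None], w, axis=1)
    mse = (wgt * (ref - dist) ** 2).sum() / wgt.sum()
    return 10.0 * np.log10(peak ** 2 / mse)
```

    The same distortion magnitude therefore penalizes quality less near the poles than at the equator, matching how the projection stretches polar content.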

  10. Perceptual tools for quality-aware video networks

    Science.gov (United States)

    Bovik, A. C.

    2014-01-01

    Monitoring and controlling the quality of the viewing experience of videos transmitted over increasingly congested networks (especially wireless networks) is a pressing problem, owing to rapid advances in video-centric mobile communication and display devices that are straining the capacity of the network infrastructure. New developments in automatic perceptual video quality models offer tools that have the potential to be used to perceptually optimize wireless video, leading to more efficient video data delivery and better received quality. In this talk I will review key perceptual principles that are, or could be, used to create effective video quality prediction models, and leading quality prediction models that utilize these principles. The goal is to be able to monitor and perceptually optimize video networks by making them "quality-aware."

  11. Expert database system for quality control

    Science.gov (United States)

    Wang, Anne J.; Li, Zhi-Cheng

    1993-09-01

    There are more competitors today. Markets are not homogeneous; they are fragmented into increasingly focused niches requiring greater flexibility in the product mix, shorter manufacturing production runs, and, above all, higher quality. In this paper the authors identify a real-time expert system as a way to improve plantwide quality management. The quality control expert database system (QCEDS), by integrating the knowledge of experts in operations, quality management, and computer systems, uses all information relevant to quality management, facts as well as rules, to determine whether a product meets quality standards. Keywords: expert system, quality control, database

  12. Video processing for human perceptual visual quality-oriented video coding.

    Science.gov (United States)

    Oh, Hyungsuk; Kim, Wonha

    2013-04-01

    We have developed a video processing method that achieves human perceptual visual quality-oriented video coding. The patterns of moving objects are modeled by considering the limited human capacity for spatial-temporal resolution and the visual sensory memory together, and an online moving pattern classifier is devised by using the Hedge algorithm. The moving pattern classifier is embedded in the existing visual saliency with the purpose of providing a human perceptual video quality saliency model. In order to apply the developed saliency model to video coding, the conventional foveation filtering method is extended. The proposed foveation filter can smooth and enhance the video signals locally, in conformance with the developed saliency model, without causing any artifacts. The performance evaluation results confirm that the proposed video processing method shows reliable improvements in the perceptual quality for various sequences and at various bandwidths, compared to existing saliency-based video coding methods.

  13. Video quality measure for mobile IPTV service

    Science.gov (United States)

    Kim, Wonjun; Kim, Changick

    2008-08-01

    Mobile IPTV is a multimedia service based on wireless networks with interactivity and mobility. Under mobile IPTV scenarios, people can watch various contents whenever they want and can even deliver requests to service providers through the network. However, frequent changes in the wireless channel bandwidth may hinder the quality of service. In this paper, we propose an objective video quality measure (VQM) for mobile IPTV services, focused on jitter measurement. Jitter results from frame repetition during delays and is one of the most severe impairments in video transmission over mobile channels. We first employ the YUV color space to compute the duration and occurrences of jitter and the motion activity. The VQM is then modeled as a combination of these three factors and the result of a subjective assessment. Since the proposed VQM is based on a no-reference (NR) model, it can be applied to real-time applications. Experimental results show that the proposed VQM correlates highly with subjective evaluation.
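
    The jitter factor described above can be illustrated with a minimal sketch (a hypothetical helper, not the authors' code, and the threshold `eps` is an assumption): frame repetitions are detected when the mean luma (Y) difference between consecutive frames falls below a threshold, and the occurrences and durations of the resulting runs are accumulated.

    ```python
    import numpy as np

    def jitter_events(y_frames, eps=0.5):
        """Detect frame-repetition (jitter) events from a list of luma (Y) frames.

        A frame is treated as a repetition of its predecessor when the mean
        absolute luma difference falls below `eps`.  Returns the number of
        jitter events and the duration (in frames) of each event.
        """
        durations = []
        run = 0
        for prev, cur in zip(y_frames, y_frames[1:]):
            if np.abs(cur.astype(float) - prev.astype(float)).mean() < eps:
                run += 1                      # still inside a repetition run
            elif run:
                durations.append(run)         # run ended: record its length
                run = 0
        if run:
            durations.append(run)
        return len(durations), durations

    # Toy sequence: frames 1 and 2 repeat frame 0, frame 4 repeats frame 3.
    f = [np.full((4, 4), v, dtype=np.uint8) for v in (10, 10, 10, 80, 80, 200)]
    occurrences, durations = jitter_events(f)
    print(occurrences, durations)  # → 2 [2, 1]
    ```

    A full VQM along the lines of the abstract would combine these occurrence and duration statistics with a motion-activity term fitted against subjective scores.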

  14. Deep learning for quality assessment in live video streaming

    NARCIS (Netherlands)

    Torres Vega, M.; Mocanu, D.C.; Famaey, J.; Stavrou, S.; Liotta, A.

    Video content providers put stringent requirements on the quality assessment methods realized on their services. They need to be accurate, real-time, adaptable to new content, and scalable as the video set grows. In this letter, we introduce a novel automated and computationally efficient video

  15. Rate control scheme for consistent video quality in scalable video codec.

    Science.gov (United States)

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since scalable video codec provides various scalabilities to adapt the bitstream for the channel conditions and terminal types, scalable codec is one of the useful codecs for wired or wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation degrades the visual perception significantly. It is important to efficiently use the target bits in order to maintain a consistent video quality or achieve a small distortion variation throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes have been proposed to control video quality in the H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame.

  16. No-Reference Video Quality Assessment using MPEG Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2013-01-01

    We present a method for No-Reference (NR) Video Quality Assessment (VQA) for decoded video without access to the bitstream. This is achieved by extracting and pooling features from a NR image quality assessment method used frame by frame. We also present methods to identify the video coding...... and estimate the video coding parameters for MPEG-2 and H.264/AVC which can be used to improve the VQA. The analysis differs from most other video coding analysis methods since it is without access to the bitstream. The results show that our proposed method is competitive with other recent NR VQA methods...

  17. Quality-Aware Estimation of Facial Landmarks in Video Sequences

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2015-01-01

    Face alignment in video is a primitive step for facial image analysis. The accuracy of the alignment greatly depends on the quality of the face image in the video frames and low quality faces are proven to cause erroneous alignment. Thus, this paper proposes a system for quality aware face...... for facial landmark detection. If the face quality is low the proposed system corrects the facial landmarks that are detected by SDM. Depending upon the face velocity in consecutive video frames and face quality measure, two algorithms are proposed for correction of landmarks in low quality faces by using...

  18. Effective Quality-of-Service Renegotiating Schemes for Streaming Video

    Directory of Open Access Journals (Sweden)

    Song Hwangjun

    2004-01-01

    Full Text Available This paper presents effective quality-of-service renegotiating schemes for streaming video. The conventional network supporting quality of service generally allows a negotiation at a call setup. However, it is not efficient for the video application since the compressed video traffic is statistically nonstationary. Thus, we consider the network supporting quality-of-service renegotiations during the data transmission and study effective quality-of-service renegotiating schemes for streaming video. The token bucket model, whose parameters are token filling rate and token bucket size, is adopted for the video traffic model. The renegotiating time instants and the parameters are determined by analyzing the statistical information of compressed video traffic. In this paper, two renegotiating approaches, that is, fixed renegotiating interval case and variable renegotiating interval case, are examined. Finally, the experimental results are provided to show the performance of the proposed schemes.
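
    The token bucket traffic model adopted above can be sketched in a few lines (illustrative parameter names, not the paper's implementation): tokens accumulate at the filling rate `r` up to the bucket size `b`, and a frame conforms only if enough tokens are available when it is emitted.

    ```python
    def token_bucket_conformant(frame_bytes, r, b, fps):
        """Check whether a sequence of frame sizes conforms to a token bucket
        with filling rate `r` (bytes/s) and bucket size `b` (bytes), assuming
        one frame is emitted every 1/fps seconds."""
        tokens = b                      # bucket starts full
        for size in frame_bytes:
            if size > tokens:
                return False            # not enough tokens: non-conformant
            tokens = min(b, tokens - size + r / fps)
        return True

    frames = [1200, 800, 1500, 700]
    print(token_bucket_conformant(frames, r=30000, b=2000, fps=25))  # → True
    print(token_bucket_conformant(frames, r=10000, b=1300, fps=25))  # → False
    ```

    Renegotiation then amounts to picking new (r, b) pairs at selected time instants so that the upcoming segment of the (nonstationary) video trace remains conformant at minimal reserved rate.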

  19. Content-Based Video Retrieval: A Database Perspective

    NARCIS (Netherlands)

    Petkovic, M.; Jonker, Willem

    2003-01-01

    Recent advances in computing, communication, and data storage have led to an increasing number of large digital libraries publicly available on the Internet. In addition to alphanumeric data, other modalities, including video play an important role in these libraries. Ordinary techniques will not

  20. No-Reference Video Quality Assessment using Codec Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2015-01-01

    types of videos, estimating the level of quantization used in the I-frames, and exploiting this information to assess the video quality. In order to do this for H.264/AVC, the distribution of the DCT-coefficients after intra-prediction and deblocking are modeled. To obtain VQA features for H.264/AVC, we......A no-reference video quality assessment (VQA) method is presented for videos distorted by H.264/AVC and MPEG-2. The assessment is performed without access to the bit-stream. Instead we analyze and estimate coefficients based on decoded pixels. The approach involves distinguishing between the two...... propose a novel estimation method of the quantization in H.264/AVC videos without bitstream access, which can also be used for Peak Signalto-Noise Ratio (PSNR) estimation. The results from the MPEG-2 and H.264/AVC analysis are mapped to a perceptual measure of video quality by Support Vector Regression...

  1. Offset Trace-Based Video Quality Evaluation After Network Transport

    DEFF Research Database (Denmark)

    Seeling, P.; Reisslein, M.; Fitzek, Frank

    2006-01-01

    Video traces contain information about encoded video frames, such as frame sizes and qualities, and provide a convenient method to conduct multimedia networking research. Although widely used in networking research, these traces do not allow one to determine the video quality in an accurate manner...... after network transport that includes losses and delays. In this work, we provide (i) an overview of frame dependencies that have to be taken into consideration when working with video traces, (ii) an algorithmic approach to combine traditional video traces and offset distortion traces to determine...... the video quality or distortion after lossy network transport, (iii) offset distortion and quality characteristics and (iv) the offset distortion trace format and tools to create offset distortion traces....

  2. Perceptual video quality assessment in H.264 video coding standard using objective modeling.

    Science.gov (United States)

    Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu

    2014-01-01

    Since the usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra- and inter-prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective modeling of these artifacts into a subjective quality estimate is proposed. The proposed model calculates the objective quality metric using subjective impairments (blockiness, blur, and jerkiness), in contrast to the existing bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metric is compared against popular full-reference objective methods as defined by VQEG.

  3. Delivering Diagnostic Quality Video over Mobile Wireless Networks for Telemedicine

    Directory of Open Access Journals (Sweden)

    Sira P. Rao

    2009-01-01

    Full Text Available In real-time remote diagnosis of emergency medical events, mobility can be enabled by wireless video communications. However, clinical use of this potential advance will depend on definitive and compelling demonstrations of the reliability of diagnostic quality video. Because the medical domain has its own fidelity criteria, it is important to incorporate diagnostic video quality criteria into any video compression system design. To this end, we used flexible algorithms for region-of-interest (ROI) video compression and obtained feedback from medical experts to develop criteria for diagnostically lossless (DL) quality. The design of the system occurred in three steps: measurement of the bit rate at which DL quality is achieved through evaluation of videos by medical experts, incorporation of that information into a flexible video encoder through the notion of encoder states, and an encoder state update option based on a built-in quality criterion. Medical experts then evaluated our system for the diagnostic quality of the video, allowing us to verify that it is possible to realize DL quality in the ROI at practical communication data transfer rates, enabling mobile medical assessment over bit-rate-limited wireless channels. This work lays the scientific foundation for additional validation through prototyped technology, field testing, and clinical trials.

  4. Research on quality metrics of wireless adaptive video streaming

    Science.gov (United States)

    Li, Xuefei

    2018-04-01

    With the development of wireless networks and intelligent terminals, video traffic has increased dramatically, and adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, a good QoS (Quality of Service) of the wireless network does not always guarantee that all customers have a good experience. Thus, new quality metrics have been widely studied recently. Taking this into account, the objective of this paper is to investigate quality metrics for wireless adaptive video streaming. In this paper, a wireless video streaming simulation platform with a DASH mechanism and a multi-rate video generator is established. Based on this platform, a PSNR model, an SSIM model and a Quality Level model are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling and switching frequency, while the PSNR and SSIM models mainly consider the quality of the video. To evaluate the performance of these QoE models, three performance metrics (SROCC, PLCC and RMSE), which compare subjective and predicted MOS (Mean Opinion Score), are calculated. From these performance metrics, the monotonicity, linearity and accuracy of the quality metrics can be observed.
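
    The three performance metrics named above are standard and can be computed directly; a minimal NumPy sketch on hypothetical subjective vs. predicted MOS values (the data points are invented for illustration):

    ```python
    import numpy as np

    def plcc(x, y):
        """Pearson linear correlation coefficient (linearity)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        xc, yc = x - x.mean(), y - y.mean()
        return float((xc * yc).sum() / np.sqrt((xc**2).sum() * (yc**2).sum()))

    def srocc(x, y):
        """Spearman rank-order correlation (monotonicity): PLCC of the ranks."""
        rank = lambda v: np.argsort(np.argsort(v)).astype(float)
        return plcc(rank(np.asarray(x)), rank(np.asarray(y)))

    def rmse(x, y):
        """Root-mean-square error (accuracy)."""
        d = np.asarray(x, float) - np.asarray(y, float)
        return float(np.sqrt((d**2).mean()))

    subjective = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical subjective MOS
    predicted  = [1.2, 1.9, 3.3, 3.8, 4.9]   # hypothetical model output
    print(srocc(subjective, predicted))          # → 1.0 (perfectly monotonic)
    print(round(plcc(subjective, predicted), 3)) # → 0.992
    print(round(rmse(subjective, predicted), 3)) # → 0.195
    ```

    (The rank helper assumes untied scores; in practice a library routine that handles ties would be used.)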

  5. Crowdsourcing based subjective quality assessment of adaptive video streaming

    DEFF Research Database (Denmark)

    Shahid, M.; Søgaard, Jacob; Pokhrel, J.

    2014-01-01

    In order to cater for user’s quality of experience (QoE) re- quirements, HTTP adaptive streaming (HAS) based solutions of video services have become popular recently. User QoE feedback can be instrumental in improving the capabilities of such services. Perceptual quality experiments that involve...... humans are considered to be the most valid method of the as- sessment of QoE. Besides lab-based subjective experiments, crowdsourcing based subjective assessment of video quality is gaining popularity as an alternative method. This paper presents insights into a study that investigates perceptual pref......- erences of various adaptive video streaming scenarios through crowdsourcing based subjective quality assessment....

  6. Subjective Video Quality Assessment in H.264/AVC Video Coding Standard

    Directory of Open Access Journals (Sweden)

    Z. Miličević

    2012-11-01

    Full Text Available This paper seeks to provide an approach for subjective video quality assessment in the H.264/AVC standard. For this purpose, a special software program for the subjective assessment of the quality of all tested video sequences was developed. It was developed in accordance with Recommendation ITU-T P.910, since this recommendation is suitable for the testing of multimedia applications. The obtained results show that with the proposed selective intra-prediction and optimized inter-prediction algorithm there is only a small difference in picture quality (signal-to-noise ratio) between decoded original and modified video sequences.

  7. No-Reference Video Quality Assessment by HEVC Codec Analysis

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2015-01-01

    This paper proposes a No-Reference (NR) Video Quality Assessment (VQA) method for videos subject to the distortion given by High Efficiency Video Coding (HEVC). The proposed assessment can be performed either as a BitstreamBased (BB) method or as a Pixel-Based (PB). It extracts or estimates...... the transform coefficients, estimates the distortion, and assesses the video quality. The proposed scheme generates VQA features based on Intra coded frames, and then maps features using an Elastic Net to predict subjective video quality. A set of HEVC coded 4K UHD sequences are tested. Results show...... that the quality scores computed by the proposed method are highly correlated with the subjective assessment....

  8. No-reference pixel based video quality assessment for HEVC decoded video

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2017-01-01

    the quantization step used in the Intra coding is estimated. We map the obtained HEVC features using an Elastic Net to predict subjective video quality scores, Mean Opinion Scores (MOS). The performance is verified on a dataset consisting of HEVC coded 4 K UHD (resolution equal to 3840 x 2160) video sequences...

  9. Quality and noise measurements in mobile phone video capture

    Science.gov (United States)

    Petrescu, Doina; Pincenti, John

    2011-02-01

    The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
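
    The full-reference measurement between the encoder input and the reconstructed sequence described above amounts to a per-frame fidelity metric such as PSNR; a minimal sketch (not the authors' metric suite):

    ```python
    import numpy as np

    def psnr(reference, reconstructed, peak=255.0):
        """Peak signal-to-noise ratio (dB) between an encoder-input frame
        and the decoded/reconstructed frame."""
        ref = np.asarray(reference, dtype=np.float64)
        rec = np.asarray(reconstructed, dtype=np.float64)
        mse = np.mean((ref - rec) ** 2)
        if mse == 0:
            return float("inf")        # identical frames
        return 10.0 * np.log10(peak**2 / mse)

    ref = np.full((8, 8), 100, dtype=np.uint8)   # toy encoder-input frame
    rec = ref.copy(); rec[0, 0] = 110            # one pixel off by 10
    print(round(psnr(ref, rec), 2))  # → 46.19
    ```

    In a study like the one above, the reference itself changes with lighting and ISP color processing, so such per-frame scores are averaged per capture condition before comparing encoder settings.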

  10. Subjective Quality Assessment of H.264/AVC Video Streaming with Packet Losses

    Directory of Open Access Journals (Sweden)

    Naccari Matteo

    2011-01-01

    Full Text Available Research in the field of video quality assessment relies on the availability of subjective scores, collected by means of experiments in which groups of people are asked to rate the quality of video sequences. The availability of subjective scores is fundamental to enable validation and comparative benchmarking of the objective algorithms that try to predict human perception of video quality by automatically analyzing the video sequences, in a way to support reproducible and reliable research results. In this paper, a publicly available database of subjective quality scores and corrupted video sequences is described. The scores refer to 156 sequences at CIF and 4CIF spatial resolutions, encoded with H.264/AVC and corrupted by simulating the transmission over an error-prone network. The subjective evaluation has been performed by 40 subjects at the premises of two academic institutions, in standard-compliant controlled environments. In order to support reproducible research in the field of full-reference, reduced-reference, and no-reference video quality assessment algorithms, both the uncompressed files and the H.264/AVC bitstreams, as well as the packet loss patterns, have been made available to the research community.

  11. A time-varying subjective quality model for mobile streaming videos with stalling events

    Science.gov (United States)

    Ghadiyaram, Deepti; Pan, Janice; Bovik, Alan C.

    2015-09-01

    Over-the-top mobile video streaming is invariably influenced by volatile network conditions which cause playback interruptions (stalling events), thereby impairing users' quality of experience (QoE). Developing models that can accurately predict users' QoE could enable the more efficient design of quality-control protocols for video streaming networks that reduce network operational costs while still delivering high-quality video content to the customers. Existing objective models that predict QoE are based on global video features, such as the number of stall events and their lengths, and are trained and validated on a small pool of ad hoc video datasets, most of which are not publicly available. The model we propose in this work goes beyond previous models as it also accounts for the fundamental effect that a viewer's recent level of satisfaction or dissatisfaction has on their overall viewing experience. In other words, the proposed model accounts for and adapts to the recency, or hysteresis effect caused by a stall event in addition to accounting for the lengths, frequency of occurrence, and the positions of stall events - factors that interact in a complex way to affect a user's QoE. On the recently introduced LIVE-Avvasi Mobile Video Database, which consists of 180 distorted videos of varied content that are afflicted solely with over 25 unique realistic stalling events, we trained and validated our model to accurately predict the QoE, attaining standout QoE prediction performance.

  12. Modeling the time-varying subjective quality of HTTP video streams with rate adaptations.

    Science.gov (United States)

    Chen, Chao; Choi, Lark Kwon; de Veciana, Gustavo; Caramanis, Constantine; Heath, Robert W; Bovik, Alan C

    2014-05-01

    Newly developed hypertext transfer protocol (HTTP)-based video streaming technologies enable flexible rate-adaptation under varying channel conditions. Accurately predicting the users' quality of experience (QoE) for rate-adaptive HTTP video streams is thus critical to achieve efficiency. An important aspect of understanding and modeling QoE is predicting the up-to-the-moment subjective quality of a video as it is played, which is difficult due to hysteresis effects and nonlinearities in human behavioral responses. This paper presents a Hammerstein-Wiener model for predicting the time-varying subjective quality (TVSQ) of rate-adaptive videos. To collect data for model parameterization and validation, a database of longer duration videos with time-varying distortions was built and the TVSQs of the videos were measured in a large-scale subjective study. The proposed method is able to reliably predict the TVSQ of rate adaptive videos. Since the Hammerstein-Wiener model has a very simple structure, the proposed method is suitable for online TVSQ prediction in HTTP-based streaming.
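
    The Hammerstein-Wiener structure referred to above is a cascade of a static input nonlinearity, a linear dynamic block, and a static output nonlinearity. A toy sketch with hypothetical nonlinearities and filter coefficients (chosen here only to illustrate the structure, not the paper's fitted model):

    ```python
    import numpy as np

    def hammerstein_wiener(u, f_in, b, a, g_out):
        """Cascade: static nonlinearity f_in -> linear IIR filter (b, a) ->
        static nonlinearity g_out.  `u` is the input sequence (e.g. a
        per-segment quality signal); the output models time-varying
        subjective quality (TVSQ)."""
        w = np.array([f_in(x) for x in u], dtype=float)   # input nonlinearity
        y = np.zeros_like(w)
        for n in range(len(w)):                           # direct-form IIR
            y[n] = sum(b[k] * w[n - k] for k in range(len(b)) if n - k >= 0)
            y[n] -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        return np.array([g_out(v) for v in y])            # output nonlinearity

    # Hypothetical blocks: logistic input mapping, a one-pole smoothing
    # filter (models hysteresis/memory), and a linear scaling to a MOS range.
    f_in = lambda x: 1.0 / (1.0 + np.exp(-x))
    g_out = lambda v: 1.0 + 4.0 * v
    u = [2.0, 2.0, -2.0, -2.0, 2.0]          # quality drops, then recovers
    print(hammerstein_wiener(u, f_in, [0.3], [1.0, -0.7], g_out).round(2))
    ```

    The one-pole filter makes the predicted TVSQ lag behind quality changes, which is the kind of behavioral hysteresis the model is designed to capture.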

  13. Quality of YouTube™ videos on dental implants.

    Science.gov (United States)

    Abukaraky, A; Hamdan, A-A; Ameera, M-N; Nasief, M; Hassona, Y

    2018-07-01

    Patients search YouTube for health-care information. Our aim was to examine what YouTube offers patients seeking information on dental implants and to evaluate the quality of the information provided. A systematic search of YouTube for videos containing information on dental implants was performed using the key words "dental implant" and "tooth replacement". Videos were examined by two senior Oral and Maxillofacial Surgery residents who were trained and calibrated to perform the search. An initial assessment was performed to exclude non-English videos, duplicate videos, conference lectures, and irrelevant videos. Included videos were analyzed with regard to demographics and content usefulness. Patient information from the American Academy of Implant Dentistry, the European Association for Osseointegration, and the British Society of Restorative Dentistry was used for benchmarking. A total of 117 videos were analyzed. The most commonly discussed topics related to the procedures involved in dental implantology (76.1%, n=89) and to the indications for dental implants (58.1%, n=78). The mean usefulness score of the videos was poor (6.02 ±4.7 [range 0-21]), and misleading content was common (30.1% of videos), mainly in topics related to the prognosis and maintenance of dental implants. Most videos (83.1%, n=97) failed to mention the source of the information presented or where to find more about dental implants. Information about dental implants on YouTube is limited in quality and quantity. YouTube videos can play a potentially important role in modulating patients' attitudes and treatment decisions regarding dental implants.

  14. The impact of database quality on keystroke dynamics authentication

    KAUST Repository

    Panasiuk, Piotr; Rybnik, Mariusz; Saeed, Khalid; Rogowski, Marcin

    2016-01-01

    This paper concerns keystroke dynamics, also partially in the context of touchscreen devices. The authors concentrate on the impact of database quality and propose their algorithm to test database quality issues. The algorithm is used on their own

  15. Bandwidth allocation for video under quality of service constraints

    CERN Document Server

    Anjum, Bushra

    2014-01-01

    We present queueing-based algorithms to calculate the bandwidth required for a video stream so that the three main Quality of Service constraints, i.e., end-to-end delay, jitter and packet loss, are ensured. Conversational and streaming video-based applications are becoming a major part of the everyday Internet usage. The quality of these applications (QoS), as experienced by the user, depends on three main metrics of the underlying network, namely, end-to-end delay, jitter and packet loss. These metrics are, in turn, directly related to the capacity of the links that the video traffic trave

  16. Objective video quality measure for application to tele-echocardiography.

    Science.gov (United States)

    Moore, Peter Thomas; O'Hare, Neil; Walsh, Kevin P; Ward, Neil; Conlon, Niamh

    2008-08-01

    Real-time tele-echocardiography is widely used to remotely diagnose or exclude congenital heart defects. Cost-effective technical implementation is realised using low-bandwidth transmission systems and lossy compression (videoconferencing) schemes. In our study, DICOM video sequences were converted to common multimedia formats, which were then compressed using three lossy compression algorithms. We then applied a digital (multimedia) video quality metric (VQM) to objectively determine a value for the degradation due to compression. Three levels of compression were simulated by varying the system bandwidth and compared to a subjective assessment of video clip quality by three paediatric cardiologists, each with more than 5 years of experience.

  17. EXFOR: Improving the quality of international databases

    International Nuclear Information System (INIS)

    Dupont, Emmeric

    2014-01-01

    The NEA Data Bank is an international centre of reference for basic nuclear tools used for the analysis and prediction of phenomena in nuclear energy applications. The Data Bank collects, compiles, disseminates and contributes to improving computer codes and associated data. In the area of nuclear data, the Data Bank works in close co-operation with other data centres that contribute to the worldwide compilation of experimental nuclear reaction data in the EXFOR database. EXFOR contains basic nuclear data on low- to medium-energy experiments for incident neutron, photon and various charged particle induced reactions on a wide range of nuclei and compounds. Today, with more than 150 000 data sets from more than 20 000 experiments performed since 1935, EXFOR is by far the most important and complete experimental nuclear reaction database. It is widely used to further improve nuclear reaction models and evaluated nuclear data libraries. The Data Bank supervises the development of the Joint Evaluated Fission and Fusion (JEFF) file, which is one of the major evaluated nuclear data libraries used in the field of nuclear science and technology. As part of its mission, the Data Bank works to maintain the highest level of quality in its databases. One method that was proposed to check the mutual consistency of experimental data in EXFOR is to test for outlier measurements more than a few standard deviations from the mean value as, in principle, several measurements of the same reaction quantity should form a continuous distribution. More recently, another method was developed to cross-check evaluated and experimental data in databases in order to detect aberrant values. It was noted that there is no evidence, on the basis of numerical comparisons only, that outliers represent 'bad' data. The fact that such data deviate significantly from other data of the same reaction may, however, be helpful to nuclear data evaluators who focus on one or a few isotopes and may wish to

  18. Algorithms for the automatic identification of MARFEs and UFOs in JET database of visible camera videos

    International Nuclear Information System (INIS)

    Murari, A.; Camplani, M.; Cannas, B.; Usai, P.; Mazon, D.; Delaunay, F.

    2010-01-01

    MARFE instabilities and UFOs leave clear signatures in JET fast visible camera videos. Given the potentially harmful consequences of these events, particularly as triggers of disruptions, it is important to have the means to detect them automatically. In this paper, the results of various algorithms for automatically identifying MARFEs and UFOs in JET visible videos are reported. The objective is to retrieve the videos that have captured these events by exploring the whole JET database of images, as a preliminary step towards the development of real-time identifiers. For the detection of MARFEs, a complete identifier has been finalized, using morphological operators and Hu moments. The final algorithm identifies videos with MARFEs with a success rate exceeding 80%. Due to the lack of a complete statistical set of examples, the UFO identifier is less developed, but a preliminary code can detect UFOs quite reliably. (authors)
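
    The Hu moments used by the MARFE identifier are translation-, scale- and rotation-invariant combinations of normalized central image moments. A self-contained NumPy sketch of the first two invariants (illustrative only, not the JET code):

    ```python
    import numpy as np

    def hu_moments_first_two(img):
        """First two Hu invariant moments of a 2-D intensity image."""
        img = np.asarray(img, dtype=np.float64)
        y, x = np.mgrid[: img.shape[0], : img.shape[1]]
        m00 = img.sum()
        xc, yc = (x * img).sum() / m00, (y * img).sum() / m00   # centroid

        def mu(p, q):                  # central moment mu_pq
            return ((x - xc) ** p * (y - yc) ** q * img).sum()

        def eta(p, q):                 # scale-normalized central moment
            return mu(p, q) / m00 ** (1 + (p + q) / 2.0)

        h1 = eta(2, 0) + eta(0, 2)
        h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4.0 * eta(1, 1) ** 2
        return h1, h2

    # A bright elongated blob: h1 measures spread, h2 its anisotropy,
    # both independent of where the blob sits in the frame.
    img = np.zeros((9, 9)); img[4, 2:7] = 1.0
    h1, h2 = hu_moments_first_two(img)
    print(round(h1, 3), round(h2, 3))  # → 0.4 0.16
    ```

    An identifier along the lines described above would threshold such shape descriptors on segmented camera frames (after the morphological operators) to flag the characteristic MARFE ring shape.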

  19. Frame Rate versus Spatial Quality: Which Video Characteristics Do Matter?

    DEFF Research Database (Denmark)

    Korhonen, Jari; Reiter, Ulrich; Ukhanova, Ann

    2013-01-01

    and temporal quality levels. We also propose simple yet powerful metrics for characterizing spatial and temporal properties of a video sequence, and demonstrate how these metrics can be applied for evaluating the relative impact of spatial and temporal quality on the perceived overall quality....

  20. Predictive no-reference assessment of video quality

    NARCIS (Netherlands)

    Torres Vega, M.; Mocanu, D.C.; Stavrou, S.; Liotta, A.

    2017-01-01

    Among the various means to evaluate the quality of video streams, light-weight No-Reference (NR) methods have low computation and may be executed on thin clients. Thus, these methods would be perfect candidates in cases of real-time quality assessment, automated quality control and in adaptive

  1. A database of whole-body action videos for the study of action, emotion, and untrustworthiness.

    Science.gov (United States)

    Keefe, Bruce D; Villing, Matthias; Racey, Chris; Strong, Samantha L; Wincenciak, Joanna; Barraclough, Nick E

    2014-12-01

    We present a database of high-definition (HD) videos for the study of traits inferred from whole-body actions. Twenty-nine actors (19 female) were filmed performing different actions (walking, picking up a box, putting down a box, jumping, sitting down, and standing and acting) while conveying different traits, including four emotions (anger, fear, happiness, sadness), untrustworthiness, and neutral, where no specific trait was conveyed. For the actions conveying the four emotions and untrustworthiness, the actions were filmed multiple times, with the actor conveying the traits at different levels of intensity. In total, we made 2,783 action videos (in both two-dimensional and three-dimensional formats), each lasting 7 s with a frame rate of 50 fps. All videos were filmed in a green-screen studio in order to isolate the action information from all contextual detail and to provide a flexible stimulus set for future use. In order to validate the traits conveyed by each action, we asked participants to rate each of the actions in the two-dimensional videos according to the trait that the actor portrayed. To provide a useful database of stimuli of multiple actions conveying multiple traits, each video name contains information on the gender of the actor, the action executed, the trait conveyed, and the rating of its perceived intensity. All videos can be downloaded free at the following address: http://www-users.york.ac.uk/~neb506/databases.html. We discuss potential uses for the database in the analysis of the perception of whole-body actions.

  2. Quality Assessment of Compressed Video for Automatic License Plate Recognition

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Støttrup-Andersen, Jesper; Forchhammer, Søren

    2014-01-01

    Definition of video quality requirements for video surveillance poses new questions in the area of quality assessment. This paper presents a quality assessment experiment for an automatic license plate recognition scenario. We explore the influence of compression by the H.264/AVC and H.265/HEVC standards on the recognition performance. We compare logarithmic and logistic functions for quality modeling. Our results show that a logistic function can better describe the dependence of recognition performance on the quality for both compression standards. We observe that automatic license plate recognition in our study has a behavior similar to human recognition, allowing the use of the same mathematical models. We furthermore propose an application of one of the models for video surveillance systems.

  3. Home Video Telemetry vs inpatient telemetry: A comparative study looking at video quality

    Directory of Open Access Journals (Sweden)

    Sutapa Biswas

    Full Text Available Objective: To compare the quality of home video recording with inpatient telemetry (IPT) to evaluate our current Home Video Telemetry (HVT) practice. Method: To assess our HVT practice, a retrospective comparison of the video quality against IPT was conducted, with the latter as the gold standard. A pilot study had been conducted in 2008 on 5 patients. Patients (n = 28) were included in each group over a period of one year. The data was collected from referral spreadsheets, King's EPR and the telemetry archive. Scoring of the events captured was by consensus using two scorers. The variables compared included: visibility of the body part of interest, visibility of eyes, time of event, illumination, contrast, sound quality and picture clarity when amplified to 200%. Statistical evaluation was carried out using Shapiro–Wilk and Chi-square tests. A P-value of ⩽0.05 was considered statistically significant. Results: Significant differences were demonstrated in lighting and contrast between the two groups (HVT performed better in both). Amplified picture quality was slightly better in the HVT group. Conclusion: Video quality of HVT is comparable to IPT, even surpassing IPT in certain aspects such as the level of illumination and contrast. Results were reconfirmed in a larger sample of patients with more variables. Significance: Despite the user and environmental variability in HVT, it looks promising and can be seriously considered as a preferable alternative for patients who may require investigation at locations remote from an EEG laboratory. Keywords: Home Video Telemetry, EEG, Home video monitoring, Video quality

  4. Educational Quality of YouTube Videos in Thumb Exercises for Carpometacarpal Osteoarthritis: A Search on Current Practice.

    Science.gov (United States)

    Villafañe, Jorge Hugo; Cantero-Tellez, Raquel; Valdes, Kristin; Usuelli, Federico Giuseppe; Berjano, Pedro

    2017-09-01

    Conservative treatments are commonly performed therapeutic interventions for the management of carpometacarpal (CMC) joint osteoarthritis (OA). Physical and occupational therapies are starting to use video-based online content as both a patient teaching tool and a source for treatment techniques. YouTube is a popular video-sharing website that can be accessed easily. The purpose of this study was to analyze the quality of content and potential sources of bias in videos available on YouTube pertaining to thumb exercises for CMC OA. The YouTube video database was systematically searched using the search term "thumb osteoarthritis and exercises" from its inception to March 10, 2017. The authors independently selected videos, conducted quality assessment, and extracted results. A total of 832 videos were found using the keywords. Of these, 10 videos clearly demonstrated therapeutic exercise for the management of CMC OA. In addition, the top-ranked video found by performing a search of "views" was a video with more than 121 863 views uploaded in 2015 that lasted 12.33 minutes and scored only 2 points on the Global Score for Educational Value rating scale. Most of the videos viewed that described conservative interventions for CMC OA management have a low level of evidence to support their use. Although patients and novice hand therapists are using YouTube and other online resources, videos that are produced by expert hand therapists are scarce.

  5. Balancing Attended and Global Stimuli in Perceived Video Quality Assessment

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Perkis, Andrew

    2011-01-01

    The visual attention mechanism plays a key role in the human perception system and it has a significant impact on our assessment of perceived video quality. In spite of receiving less attention from the viewers, unattended stimuli can still contribute to the understanding of the visual content. This paper proposes a quality model based on the late attention selection theory, assuming that the video quality is perceived via two mechanisms: global and local quality assessment. First we model several visual features influencing the visual attention in quality assessment scenarios to derive an attention map using appropriate fusion techniques. The global quality assessment, based on the assumption that viewers allocate their attention equally to the entire visual scene, is modeled by four carefully designed quality features. By employing these same quality features, the local quality model...

  6. Portuguese food composition database quality management system.

    Science.gov (United States)

    Oliveira, L M; Castanheira, I P; Dantas, M A; Porto, A A; Calhau, M A

    2010-11-01

    The harmonisation of food composition databases (FCDB) has been a recognised need among users, producers and stakeholders of food composition data (FCD). To reach harmonisation of FCDBs among the national compiler partners, the European Food Information Resource (EuroFIR) Network of Excellence set up a series of guidelines and quality requirements, together with recommendations to implement quality management systems (QMS) in FCDBs. The Portuguese National Institute of Health (INSA) is the national FCDB compiler in Portugal and is also a EuroFIR partner. INSA's QMS complies with ISO/IEC (International Organization for Standardisation/International Electrotechnical Commission) 17025 requirements. The purpose of this work is to report on the strategy used and progress made for extending INSA's QMS to the Portuguese FCDB in alignment with EuroFIR guidelines. A stepwise approach was used to extend INSA's QMS to the Portuguese FCDB. The approach included selection of reference standards and guides and the collection of relevant quality documents directly or indirectly related to the compilation process; selection of the adequate quality requirements; assessment of adequacy and level of requirement implementation in the current INSA's QMS; implementation of the selected requirements; and EuroFIR's preassessment 'pilot' auditing. The strategy used to design and implement the extension of INSA's QMS to the Portuguese FCDB is reported in this paper. The QMS elements have been established by consensus. ISO/IEC 17025 management requirements (except 4.5) and 5.2 technical requirements, as well as all EuroFIR requirements (including technical guidelines, FCD compilation flowchart and standard operating procedures), have been selected for implementation. The results indicate that the quality management requirements of ISO/IEC 17025 in place in INSA fit the needs for document control, audits, contract review, non-conformity work and corrective actions, and users' (customers

  7. Recurrent and Dynamic Models for Predicting Streaming Video Quality of Experience.

    Science.gov (United States)

    Bampis, Christos G; Li, Zhi; Katsavounidis, Ioannis; Bovik, Alan C

    2018-07-01

    Streaming video services represent a very large fraction of global bandwidth consumption. Due to the exploding demands of mobile video streaming services, coupled with limited bandwidth availability, video streams are often transmitted through unreliable, low-bandwidth networks. This unavoidably leads to two types of major streaming-related impairments: compression artifacts and/or rebuffering events. In streaming video applications, the end-user is a human observer; hence being able to predict the subjective Quality of Experience (QoE) associated with streamed videos could lead to the creation of perceptually optimized resource allocation strategies driving higher quality video streaming services. We propose a variety of recurrent dynamic neural networks that conduct continuous-time subjective QoE prediction. By formulating the problem as one of time-series forecasting, we train a variety of recurrent neural networks and non-linear autoregressive models to predict QoE using several recently developed subjective QoE databases. These models combine multiple, diverse neural network inputs, such as predicted video quality scores, rebuffering measurements, and data related to memory and its effects on human behavioral responses, using them to predict QoE on video streams impaired by both compression artifacts and rebuffering events. Instead of finding a single time-series prediction model, we propose and evaluate ways of aggregating different models into a forecasting ensemble that delivers improved results with reduced forecasting variance. We also deploy appropriate new evaluation metrics for comparing time-series predictions in streaming applications. Our experimental results demonstrate improved prediction performance that approaches human performance. An implementation of this work can be found at https://github.com/christosbampis/NARX_QoE_release.

  8. The art of assessing quality for images and video

    International Nuclear Information System (INIS)

    Deriche, M.

    2011-01-01

    The early years of this century have witnessed a tremendous growth in the use of digital multimedia data for different communication applications. Researchers from around the world are spending substantial research efforts in developing techniques for improving the appearance of images/video. However, as we know, preserving high quality is a challenging task. Images are subject to distortions during acquisition, compression, transmission, analysis, and reconstruction. For this reason, the research area focusing on image and video quality assessment has attracted a lot of attention in recent years. In particular, compression applications and other multimedia applications need powerful techniques for evaluating quality objectively without human interference. This tutorial will cover the different faces of image quality assessment. We will motivate the need for robust image quality assessment techniques, then discuss the main algorithms found in the literature with a critical perspective. We will present the different metrics used for full reference, reduced reference and no reference applications. We will then discuss the difference between image and video quality assessment. In all of the above, we will take a critical approach to explain which metric can be used for which application. Finally we will discuss the different approaches to analyze the performance of image/video quality metrics, and end the tutorial with some perspectives on newly introduced metrics and their potential applications.

  9. Towards Video Quality Metrics Based on Colour Fractal Geometry

    Directory of Open Access Journals (Sweden)

    Richard Noël

    2010-01-01

    Full Text Available Vision is a complex process that integrates multiple aspects of an image: spatial frequencies, topology and colour. Unfortunately, so far, all these elements have been taken into consideration independently in the development of image and video quality metrics; we therefore propose an approach that blends them all together. Our approach allows for the analysis of the complexity of colour images in the RGB colour space, based on the probabilistic algorithm for calculating the fractal dimension and lacunarity. Given that all the existing fractal approaches are defined only for gray-scale images, we extend them to the colour domain. We show how these two colour fractal features capture the multiple aspects that characterize the degradation of the video signal, based on the hypothesis that the quality degradation perceived by the user is directly proportional to the modification of the fractal complexity. We claim that the two colour fractal measures can objectively assess the quality of the video signal and that they can be used as metrics for the user-perceived video quality degradation. We validated them through experimental results obtained for an MPEG-4 video streaming application; finally, the results are compared against those given by unanimously accepted metrics and by subjective tests.

  10. The impact of database quality on keystroke dynamics authentication

    KAUST Repository

    Panasiuk, Piotr

    2016-06-11

    This paper concerns keystroke dynamics, also partially in the context of touchscreen devices. The authors concentrate on the impact of database quality and propose their own algorithm to test database quality issues. The algorithm is applied to their own database as well as a well-known public database. The following specific problems were researched: classification accuracy, development of user typing proficiency, time precision during sample acquisition, representativeness of the training set, and sample length.

  11. Quality assurance testing on video games : The importance and impact of a misunderstood industry

    OpenAIRE

    Ruuska, Essi

    2015-01-01

    The aim of this research was to provide a more holistic insight into the video game quality assurance industry for video game industry professionals and prospective employees, in order to promote the importance and impact of quality assurance testing in video games. The motive for this thesis came from the author's work experience in video game quality assurance testing, and from realizing how little is known about the industry. The research question was defined as 'what is video game quality ass...

  12. Computed Quality Assessment of MPEG4-compressed DICOM Video Data.

    Science.gov (United States)

    Frankewitsch, Thomas; Söhnlein, Sven; Müller, Marcel; Prokosch, Hans-Ulrich

    2005-01-01

    Digital Imaging and Communication in Medicine (DICOM) has become one of the most popular standards in medicine. This standard specifies the exact procedures by which digital images are exchanged between devices, either using a network or a storage medium. Sources for images vary; therefore definitions exist for the exchange of CR, CT, NMR, angiography, sonography and so on. As the standard spreads and the number of sources included increases, data volume increases too. This affects storage and traffic. While data compression is currently not generally accepted for long-term storage, there are many situations where it is useful: telemedicine for educational purposes (e.g. students at home using low-speed internet connections), presentations with standard-resolution video projectors, or even supply on wards combined with the receipt of written findings. DICOM includes compression: JPEG for still images, and MPEG-2 for video. Within the last years MPEG-2 has evolved into MPEG-4, which squeezes data even better, but the risk of significant errors increases, too. Within the last years the effects of compression have been analyzed for entertainment movies, but these are not comparable to videos of physical examinations (e.g. echocardiography). In medical videos an individual image plays a more important role. Erroneous single images affect total quality even more. Additionally, the effect of compression cannot be generalized from one test series to all videos: the result depends strongly on the source. Some investigations have been presented in which videos compressed with different MPEG-4 algorithms were compared and rated manually, but they describe only the results in a selected testbed. In this paper some methods derived from video rating are presented and discussed for an automated quality control for the compression of medical videos, primarily stored in DICOM containers.

  13. Human Variome Project Quality Assessment Criteria for Variation Databases.

    Science.gov (United States)

    Vihinen, Mauno; Hancock, John M; Maglott, Donna R; Landrum, Melissa J; Schaafsma, Gerard C P; Taschner, Peter

    2016-06-01

    Numerous databases containing information about DNA, RNA, and protein variations are available. Gene-specific variant databases (locus-specific variation databases, LSDBs) are typically curated and maintained for single genes or groups of genes for a certain disease(s). These databases are widely considered as the most reliable information source for a particular gene/protein/disease, but it should also be made clear they may have widely varying contents, infrastructure, and quality. Quality is very important to evaluate because these databases may affect health decision-making, research, and clinical practice. The Human Variome Project (HVP) established a Working Group for Variant Database Quality Assessment. The basic principle was to develop a simple system that nevertheless provides a good overview of the quality of a database. The HVP quality evaluation criteria that resulted are divided into four main components: data quality, technical quality, accessibility, and timeliness. This report elaborates on the developed quality criteria and how implementation of the quality scheme can be achieved. Examples are provided for the current status of the quality items in two different databases, BTKbase, an LSDB, and ClinVar, a central archive of submissions about variants and their clinical significance. © 2016 WILEY PERIODICALS, INC.

  14. Weighted-MSE based on saliency map for assessing video quality of H.264 video streams

    Science.gov (United States)

    Boujut, H.; Benois-Pineau, J.; Hadar, O.; Ahmed, T.; Bonnet, P.

    2011-01-01

    The human visual system is very complex and has been studied for many years, specifically for the purposes of efficient encoding of visual content, e.g. video from digital TV. There is physiological and psychological evidence indicating that viewers do not pay equal attention to all exposed visual information, but only focus on certain areas known as the focus of attention (FOA) or saliency regions. In this work, we propose a novel objective quality assessment metric for assessing the perceptual quality of decoded video sequences affected by transmission errors and packet losses. The proposed method weights the Mean Square Error (MSE) at each pixel according to the calculated saliency map, yielding a Weighted-MSE (WMSE). Our method was validated through subjective quality experiments.
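A minimal sketch of the saliency-weighted MSE idea described in this record; the arrays and weights below are toy values, not the authors' attention model:

```python
import numpy as np

def weighted_mse(reference, distorted, saliency):
    """Saliency-weighted MSE: each pixel's squared error is scaled by
    its saliency weight, so errors in attended regions count more."""
    err = (reference.astype(float) - distorted.astype(float)) ** 2
    return float((saliency * err).sum() / saliency.sum())

# Toy 2x2 example: the only error falls on the salient pixel
# (weight 1.0), so it dominates the weighted score.
ref = np.array([[100, 100], [100, 100]])
dist = np.array([[110, 100], [100, 100]])
sal = np.array([[1.0, 0.1], [0.1, 0.1]])
print(weighted_mse(ref, dist, sal))
```

The plain MSE for this example would be 25; the saliency weighting raises the score because the error sits in the attended region.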

  15. Compensating for Type-I Errors in Video Quality Assessment

    DEFF Research Database (Denmark)

    Brunnström, Kjell; Tavakoli, Samira; Søgaard, Jacob

    2015-01-01

    This paper analyzes the impact on compensating for Type-I errors in video quality assessment. A Type-I error is to incorrectly conclude that there is an effect. The risk increases with the number of comparisons that are performed in statistical tests. Type-I errors are an issue often neglected...

  16. A regression method for real-time video quality evaluation

    NARCIS (Netherlands)

    Torres Vega, M.; Mocanu, D.C.; Liotta, A.; Abdulrazak, B.; Pardede, E.; Steinbauer, M.; Khalil, I.; Anderst-Kotsis, G.

    2016-01-01

    No-Reference (NR) metrics provide a mechanism to assess video quality in an ever-growing wireless network. Their low computational complexity and functional characteristics make them the primary choice when it comes to realtime content management and mobile streaming control. Unfortunately, common

  17. Video Quality Assessment Using Spatio-Velocity Contrast Sensitivity Function

    Science.gov (United States)

    Hirai, Keita; Tumurtogoo, Jambal; Kikuchi, Ayano; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    Due to the development and popularization of high-definition televisions, digital video cameras, Blu-ray discs, digital broadcasting, IP television and so on, it is important to identify and quantify video quality degradations. In this paper, we propose SV-CIELAB, an objective video quality assessment (VQA) method using a spatio-velocity contrast sensitivity function (SV-CSF). In SV-CIELAB, motion information in videos is effectively utilized for filtering out unnecessary information in the spatial frequency domain. As the filter applied to videos, we used the SV-CSF. It is a modulation transfer function of the human visual system, and consists of the relationship among contrast sensitivities, spatial frequencies and velocities of perceived stimuli. In the filtering process, the SV-CSF cannot be directly applied in the spatial frequency domain because spatial coordinate information is required when using velocity information. For filtering by the SV-CSF, we obtain video frames separated in the spatial frequency domain. By using velocity information, the separated frames with limited spatial frequencies are weighted by the contrast sensitivities in the SV-CSF model. In SV-CIELAB, the criteria are obtained by calculating image differences between the filtered original and distorted videos. To validate SV-CIELAB, subjective evaluation experiments were conducted. The subjective experimental results were compared against SV-CIELAB and conventional VQA methods such as the CIELAB color difference, Spatial-CIELAB, signal-to-noise ratio and so on. The experimental results showed that SV-CIELAB is a more efficient VQA method than the conventional methods.

  18. Impact of Constant Rate Factor on Objective Video Quality Assessment

    Directory of Open Access Journals (Sweden)

    Juraj Bienik

    2017-01-01

    Full Text Available This paper deals with the impact of the constant rate factor value on objective video quality assessment using the PSNR and SSIM metrics. The compression efficiency of the H.264 and H.265 codecs at different Constant Rate Factor (CRF) values was tested. The assessment was done for eight types of video sequences, varying in content, at High Definition (HD), Full HD (FHD) and Ultra HD (UHD) resolutions. Finally, the performance of the two codecs was compared, with emphasis on compression ratio and coding efficiency.
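For reference, PSNR, one of the two objective metrics used in this assessment, can be computed as in the following sketch (SSIM additionally requires a structural similarity model with luminance, contrast, and structure terms, and is omitted here):

```python
import math
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two 8-bit frames."""
    mse = np.mean((reference.astype(float) - distorted.astype(float)) ** 2)
    if mse == 0:
        return math.inf  # identical frames
    return 10.0 * math.log10(peak ** 2 / mse)

# Tiny 2x2 "frames" for illustration:
ref = np.array([[50, 60], [70, 80]])
dist = np.array([[52, 58], [70, 81]])
print(round(psnr(ref, dist), 2))  # → 44.61
```

Lower CRF values produce less quantization error, hence a smaller MSE and a higher PSNR, which is the relationship the paper examines per codec.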

  19. Video Quality Assessment and Machine Learning: Performance and Interpretability

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2015-01-01

    In this work we compare a simple and a complex Machine Learning (ML) method used for the purpose of Video Quality Assessment (VQA). The simple ML method chosen is the Elastic Net (EN), which is a regularized linear regression model and easier to interpret. The more complex method chosen is Support Vector Regression (SVR), which has gained popularity in VQA research. Additionally, we present an ML-based feature selection method. Also, it is investigated how well the methods perform when tested on videos from other datasets. Our results show that content-independent cross-validation performance on a single dataset can be misleading and that in the case of very limited training and test data, especially in regards to different content as is the case for many video datasets, a simple ML approach is the better choice.

  20. Operation quality assessment model for video conference system

    Science.gov (United States)

    Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian

    2018-01-01

    Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business and operation-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed model offers fast convergence and high prediction accuracy in contrast with a plain regularized BP neural network, and that its generalization ability is superior to LM-BP and Bayesian BP neural networks.

  1. Video quality of 3G videophones for telephone cardiopulmonary resuscitation.

    Science.gov (United States)

    Tränkler, Uwe; Hagen, Oddvar; Horsch, Alexander

    2008-01-01

    We simulated a cardiopulmonary resuscitation (CPR) scene with a manikin and used two 3G videophones on the caller's side to transmit video to a laptop PC. Five observers (two doctors with experience in emergency medicine and three paramedics) evaluated the video. They judged whether the manikin was breathing and whether they would give advice for CPR; they also graded the confidence of their decision-making. Breathing was only visible from certain orientations of the videophones, at distances below 150 cm with good illumination and a still background. Since the phones produced a degradation in colours and shadows, detection of breathing mainly depended on moving contours. Low camera positioning produced better results than having the camera high up. Darkness, shaking of the camera and a moving background made detection of breathing almost impossible. The video from the two 3G videophones that were tested was of sufficient quality for telephone CPR provided that camera orientation, distance, illumination and background were carefully chosen. Thus it seems possible to use 3G videophones for emergency calls involving CPR. However, further studies on the required video quality in different scenarios are necessary.

  2. GestuRe and ACtion Exemplar (GRACE) video database: stimuli for research on manners of human locomotion and iconic gestures.

    Science.gov (United States)

    Aussems, Suzanne; Kwok, Natasha; Kita, Sotaro

    2018-06-01

    Human locomotion is a fundamental class of events, and manners of locomotion (e.g., how the limbs are used to achieve a change of location) are commonly encoded in language and gesture. To our knowledge, there is no openly accessible database containing normed human locomotion stimuli. Therefore, we introduce the GestuRe and ACtion Exemplar (GRACE) video database, which contains 676 videos of actors performing novel manners of human locomotion (i.e., moving from one location to another in an unusual manner) and videos of a female actor producing iconic gestures that represent these actions. The usefulness of the database was demonstrated across four norming experiments. First, our database contains clear matches and mismatches between iconic gesture videos and action videos. Second, the male actors and female actors whose action videos matched the gestures in the best possible way perform the same actions in very similar manners and different actions in highly distinct manners. Third, all the actions in the database are distinct from each other. Fourth, adult native English speakers were unable to describe the 26 different actions concisely, indicating that the actions are unusual. This normed stimuli set is useful for experimental psychologists working in the language, gesture, visual perception, categorization, memory, and other related domains.

  3. Danish Quality Database for Mammography Screening

    DEFF Research Database (Denmark)

    Mikkelsen, Ellen Margrethe; Njor, Sisse Helle; Vejborg, Ilse Merete Munk

    2016-01-01

    diagnosed with breast cancer between screening rounds, 7) invasive breast tumors, 8) node-negative cancers, 9) invasive tumors ≤10 mm, 10) ratio of surgery for benign vs malignant lesions, and 11) breast-conserving therapy. DESCRIPTIVE DATA: As of August 10, 2015, the database included data from 888...

  4. Extracting a Good Quality Frontal Face Image from a Low-Resolution Video Sequence

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2011-01-01

    Feeding low-resolution and low-quality images, from inexpensive surveillance cameras, to systems like, e.g., face recognition, produces erroneous and unstable results. Therefore, there is a need for a mechanism to bridge the gap between on one hand low-resolution and low-quality images......, we use a learning-based super-resolution algorithm applied to the result of the reconstruction-based part to improve the quality by another factor of two. This results in an improvement factor of four for the entire system. The proposed system has been tested on 122 low-resolution sequences from two...... different databases. The experimental results show that the proposed system can indeed produce a high-resolution and good quality frontal face image from low-resolution video sequences....

  5. Objective video quality assessment method for freeze distortion based on freeze aggregation

    Science.gov (United States)

    Watanabe, Keishiro; Okamoto, Jun; Kurita, Takaaki

    2006-01-01

    With the development of broadband networks, video communications such as videophone, video distribution, and IPTV services are becoming common. To provide these services appropriately, we must manage them on the basis of subjective video quality, in addition to designing the network around it. Currently, subjective quality assessment is the main method used to quantify video quality; however, it is time-consuming and expensive. We therefore need an objective quality assessment technology that can estimate video quality from video characteristics efficiently. Video degradation can be categorized into two types: spatial and temporal. Objective quality assessment methods for spatial degradation have been studied extensively, but methods for temporal degradation have hardly been examined, even though it occurs frequently due to network impairment and has a large impact on subjective quality. In this paper, we propose an objective quality assessment method for temporal degradation. Our approach is to aggregate multiple freeze distortions into an equivalent freeze distortion and then derive the objective video quality from that equivalent distortion. Specifically, our method treats the total length of all freeze distortions in a video sequence as the length of a single equivalent freeze distortion. In addition, we propose a method that uses the perceptual characteristics of short freeze distortions. We verified that our method can estimate objective video quality well within the deviation of subjective video quality.
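
    The aggregation step this abstract describes can be sketched in a few lines. This is a hedged illustration, not the authors' fitted model: the mapping from equivalent freeze length to a MOS-like score, and its constants, are assumptions made for the example.

```python
# Sketch of the freeze-aggregation idea: multiple freeze events in a
# sequence are collapsed into one "equivalent" freeze whose length is the
# total frozen time, and a quality score is derived from that length.
# The linear score mapping below is an illustrative assumption.

def equivalent_freeze_length(freeze_durations_s):
    """Aggregate individual freeze distortions into one equivalent freeze."""
    return sum(freeze_durations_s)

def estimate_quality(freeze_durations_s, clip_length_s, worst=1.0, best=5.0):
    """Map the equivalent freeze length to a MOS-like score in [worst, best].

    Longer total freezing -> lower quality, saturating at `worst`.
    """
    frozen_fraction = min(
        equivalent_freeze_length(freeze_durations_s) / clip_length_s, 1.0)
    return best - (best - worst) * frozen_fraction

# Example: three short freezes totalling 2 s in a 10 s clip.
print(round(estimate_quality([0.5, 0.5, 1.0], clip_length_s=10.0), 2))  # 4.2
```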

  6. Quality Assurance Source Requirements Traceability Database

    International Nuclear Information System (INIS)

    MURTHY, R.; NAYDENOVA, A.; DEKLEVER, R.; BOONE, A.

    2006-01-01

    At the Yucca Mountain Project, the Project Requirements Processing System assists in managing the relationships between regulatory and national/industry standards source criteria and Quality Assurance Requirements and Description document (DOE/RW-0333P) requirements, creating compliance matrices that represent the respective relationships. The matrices are submitted to the U.S. Nuclear Regulatory Commission to assist in the commission's review, interpretation, and concurrence with the Yucca Mountain Project QA program document. The tool is highly customized to meet the needs of the Office of Civilian Radioactive Waste Management Office of Quality Assurance.

  7. dBBQs: dataBase of Bacterial Quality scores

    OpenAIRE

    Wanchai, Visanu; Patumcharoenpol, Preecha; Nookaew, Intawat; Ussery, David

    2017-01-01

    Background: It is well-known that genome sequencing technologies are becoming significantly cheaper and faster. As a result of this, the exponential growth in sequencing data in public databases allows us to explore ever growing large collections of genome sequences. However, it is less known that the majority of available sequenced genome sequences in public databases are not complete, drafts of varying qualities. We have calculated quality scores for around 100,000 bacterial genomes from al...

  8. Aspects of quality assurance in a thermodynamic Mg alloy database

    Energy Technology Data Exchange (ETDEWEB)

    Schmid-Fetzer, R.; Janz, A.; Groebner, J.; Ohno, M. [Clausthal University of Technology, Institute of Metallurgy, Robert-Koch-Str. 42, D-38678 Clausthal-Zellerfeld (Germany)

    2005-12-01

    Quality assurance is a major concern for large thermodynamic databases. Examples of standard tests on phase diagrams, thermodynamic functions, and parameters are shown that are of practical use in checking consistency and plausibility. The typical end user, applying the database to a real multicomponent material or process, will generally not have sufficient time, resources, or experience to perform such quality checks personally. (Abstract Copyright [2005], Wiley Periodicals, Inc.)

  9. PSQM-based RR and NR video quality metrics

    Science.gov (United States)

    Lu, Zhongkang; Lin, Weisi; Ong, Eeping; Yang, Xiaokang; Yao, Susu

    2003-06-01

    This paper presents a new and general concept, PQSM (Perceptual Quality Significance Map), to be used in measuring the visual distortion. It makes use of the selectivity characteristic of HVS (Human Visual System) that it pays more attention to certain area/regions of visual signal due to one or more of the following factors: salient features in image/video, cues from domain knowledge, and association of other media (e.g., speech or audio). PQSM is an array whose elements represent the relative perceptual-quality significance levels for the corresponding area/regions for images or video. Due to its generality, PQSM can be incorporated into any visual distortion metrics: to improve effectiveness or/and efficiency of perceptual metrics; or even to enhance a PSNR-based metric. A three-stage PQSM estimation method is also proposed in this paper, with an implementation of motion, texture, luminance, skin-color and face mapping. Experimental results show the scheme can improve the performance of current image/video distortion metrics.
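
    The core PQSM idea above, reweighting a distortion map by perceptual significance before pooling, can be sketched briefly. This is a minimal illustration, not the paper's three-stage estimation method: the per-pixel distortion map, the significance map, and the weighted-sum pooling are assumptions for the example.

```python
import numpy as np

# A distortion map (e.g. squared error per pixel) is reweighted by a
# perceptual-quality significance map (PQSM) before pooling, so that
# errors in salient regions contribute more to the final score.

def pqsm_weighted_distortion(distortion_map, pqsm):
    """Pool a distortion map weighted by relative perceptual significance."""
    weights = pqsm / pqsm.sum()          # normalize significance to sum to 1
    return float((weights * distortion_map).sum())

# With a uniform PQSM, the pooled value reduces to the plain mean distortion.
distortion = np.array([[1.0, 3.0], [5.0, 7.0]])
uniform = np.ones((2, 2))
print(pqsm_weighted_distortion(distortion, uniform))  # 4.0
```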

  10. Quality of Experience management for video streams : the case of Skype

    NARCIS (Netherlands)

    Liotta, A.; Druda, L.; Exarchakos, G.; Menkovski, V.; Khalil, I.

    2012-01-01

    With the widespread adoption of mobile Internet, the process of streaming video has become varied and complex. A diversity of factors affect the way we perceive quality in video streaming (also known as 'quality of experience', or QoE), involving far more than the individual video and network

  11. Tools for quality control of fingerprint databases

    Science.gov (United States)

    Swann, B. Scott; Libert, John M.; Lepley, Margaret A.

    2010-04-01

    Integrity of fingerprint data is essential to biometric and forensic applications. Accordingly, the FBI's Criminal Justice Information Services (CJIS) Division has sponsored development of software tools to facilitate quality control functions relative to maintaining its fingerprint data assets inherent to the Integrated Automated Fingerprint Identification System (IAFIS) and Next Generation Identification (NGI). This paper introduces two such tools. The first FBI-sponsored tool was developed by the National Institute of Standards and Technology (NIST) and examines and detects the spectral signature of the ridge-flow structure characteristic of friction ridge skin. The Spectral Image Validation/Verification (SIVV) utility differentiates fingerprints from non-fingerprints, including blank frames or segmentation failures erroneously included in data; provides a "first look" at image quality; and can identify anomalies in sample rates of scanned images. The SIVV utility might detect errors in individual 10-print fingerprints inaccurately segmented from the flat, multi-finger image acquired by one of the automated collection systems increasing in availability and usage. In such cases, the lost fingerprint can be recovered by re-segmentation from the now compressed multi-finger image record. The second FBI-sponsored tool, CropCoeff, was developed by MITRE and thoroughly tested by NIST. CropCoeff enables cropping of the replacement single print directly from the compressed data file, thus avoiding decompression and recompression of images that might degrade fingerprint features necessary for matching.

  12. dBBQs: dataBase of Bacterial Quality scores.

    Science.gov (United States)

    Wanchai, Visanu; Patumcharoenpol, Preecha; Nookaew, Intawat; Ussery, David

    2017-12-28

    It is well-known that genome sequencing technologies are becoming significantly cheaper and faster. As a result, the exponential growth in sequencing data in public databases allows us to explore ever-growing collections of genome sequences. However, it is less known that the majority of sequenced genome sequences available in public databases are not complete, but rather drafts of varying quality. We have calculated quality scores for around 100,000 bacterial genomes from all major genome repositories and put them in a fast and easy-to-use database. Prokaryotic genomic data from all sources were collected and combined to make a non-redundant set of bacterial genomes. The genome quality score for each was calculated from four different measurements: assembly quality, number of rRNA genes, number of tRNA genes, and the occurrence of conserved functional domains. The dataBase of Bacterial Quality scores (dBBQs) was designed to store and retrieve quality scores. It offers fast searching and download features, and the results can be used for further analysis. In addition, the search results are shown in an interactive JavaScript chart framework using DC.js. The analysis of quality scores across major public genome databases finds that around 68% of the genomes are of acceptable quality for many uses. dBBQs (available at http://arc-gem.uams.edu/dbbqs ) provides genome quality scores for all available prokaryotic genome sequences with a user-friendly Web interface. These scores can be used as cut-offs to obtain a high-quality set of genomes for testing bioinformatics tools or improving analyses. Moreover, the data from the four measurements combined into each genome's quality score are stored and can potentially be used for further analysis. dBBQs will be updated regularly and is free to use for non-commercial purposes.
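
    Combining several per-genome measurements into one score, as dBBQs does with assembly quality, rRNA/tRNA counts, and conserved-domain occurrence, can be sketched as a weighted sum. The normalization to [0, 1] and the equal weighting below are illustrative assumptions, not the published scoring formula.

```python
# Combine normalized per-genome quality measurements into a single score.
# Each measurement is assumed to already be scaled to [0, 1].

def combined_quality_score(measurements, weights=None):
    """measurements: dict of measurement name -> value in [0, 1]."""
    names = sorted(measurements)
    if weights is None:
        # Default: equal weight for every measurement.
        weights = {n: 1.0 / len(names) for n in names}
    return sum(weights[n] * measurements[n] for n in names)

# Hypothetical genome with four normalized measurements.
genome = {"assembly": 0.9, "rrna": 1.0, "trna": 0.8, "domains": 0.7}
print(round(combined_quality_score(genome), 2))  # 0.85
```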

  13. Video-Quality Estimation Based on Reduced-Reference Model Employing Activity-Difference

    Science.gov (United States)

    Yamada, Toru; Miyamoto, Yoshihiro; Senda, Yuzo; Serizawa, Masahiro

    This paper presents a reduced-reference video-quality estimation method suitable for individual end-user quality monitoring of IPTV services. With the proposed method, the activity values for individual fixed-size pixel blocks of an original video are transmitted to end-user terminals. At the end-user terminals, the video quality of a received video is estimated on the basis of the activity difference between the original video and the received video. Psychovisual weightings and video-quality score adjustments for fatal degradations are applied to improve estimation accuracy. In addition, low-bit-rate transmission is achieved by using temporal sub-sampling and by transmitting only the lower six bits of each activity value. The proposed method achieves accurate video-quality estimation using only low-bit-rate original video information (15 kbps for SDTV). The correlation coefficient between actual subjective video quality and estimated quality is 0.901 with 15 kbps side information. The proposed method does not need computationally demanding spatial and gain-and-offset registrations. Therefore, it is suitable for real-time video-quality monitoring in IPTV services.
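
    The reduced-reference scheme above can be sketched end to end: per-block activity is computed on the sender side, only the lower six bits of each quantized value are kept as side information, and the receiver compares them against the same features of the received video. The block size, the choice of standard deviation as the "activity" measure, and the mean-absolute-difference comparison are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

BLOCK = 16  # assumed block size in pixels

def block_activity(frame):
    """Per-block activity: std-dev of each BLOCK x BLOCK tile of a 2-D frame."""
    h, w = frame.shape
    tiles = frame[:h - h % BLOCK, :w - w % BLOCK].reshape(
        h // BLOCK, BLOCK, w // BLOCK, BLOCK)
    return tiles.std(axis=(1, 3))

def side_info(frame):
    """Quantize activity and keep only the lower six bits (values 0..63)."""
    return block_activity(frame).astype(np.uint8) & 0x3F

def activity_difference(original, received):
    """Mean absolute difference between transmitted and received activity."""
    return float(np.abs(side_info(original).astype(int)
                        - side_info(received).astype(int)).mean())
```

A full system would map this activity difference to a quality score via the psychovisual weightings the abstract mentions; here the difference itself is the stand-in.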

  14. The use of databases and registries to enhance colonoscopy quality.

    Science.gov (United States)

    Logan, Judith R; Lieberman, David A

    2010-10-01

    Administrative databases, registries, and clinical databases are designed for different purposes and therefore have different advantages and disadvantages in providing data for enhancing quality. Administrative databases provide the advantages of size, availability, and generalizability, but are subject to constraints inherent in the coding systems used and from data collection methods optimized for billing. Registries are designed for research and quality reporting but require significant investment from participants for secondary data collection and quality control. Electronic health records contain all of the data needed for quality research and measurement, but that data is too often locked in narrative text and unavailable for analysis. National mandates for electronic health record implementation and functionality will likely change this landscape in the near future. Copyright © 2010 Elsevier Inc. All rights reserved.

  15. Applicability of Existing Objective Metrics of Perceptual Quality for Adaptive Video Streaming

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Krasula, Lukás; Shahid, Muhammad

    2016-01-01

    Objective video quality metrics are designed to estimate the quality of experience of the end user. However, these objective metrics are usually validated with video streams degraded under common distortion types. In the presented work, we analyze the performance of published and known full......-reference and no-reference quality metrics in estimating the perceived quality of adaptive bit-rate video streams knowingly out of scope. Experimental results indicate, not surprisingly, that state-of-the-art objective quality metrics overlook the perceived degradations in the adaptive video streams and perform poorly...

  16. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yasaman Samei

    2008-08-01

    Full Text Available Technological progress in the fields of Micro-Electro-Mechanical Systems (MEMS) and wireless communications, together with the availability of CMOS cameras, microphones, and small-scale array sensors that can ubiquitously capture multimedia content from the field, has fostered the development of low-cost, limited-resource Wireless Video-based Sensor Networks (WVSN). Given the constraints of video-based sensor nodes and wireless sensor networks, supporting video streams is not easy with present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture influences three layers of the communication protocol stack and accounts for wireless video sensor node constraints, such as limited processing and energy resources, while video quality is preserved on the receiver side. The compression, transport, and routing protocols are proposed in the application, transport, and network layers respectively, and a dropping scheme is also presented in the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving video quality.

  17. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks.

    Science.gov (United States)

    Aghdasi, Hadi S; Abbaspour, Maghsoud; Moghadam, Mohsen Ebrahimi; Samei, Yasaman

    2008-08-04

    Technological progress in the fields of Micro-Electro-Mechanical Systems (MEMS) and wireless communications, together with the availability of CMOS cameras, microphones, and small-scale array sensors that can ubiquitously capture multimedia content from the field, has fostered the development of low-cost, limited-resource Wireless Video-based Sensor Networks (WVSN). Given the constraints of video-based sensor nodes and wireless sensor networks, supporting video streams is not easy with present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture influences three layers of the communication protocol stack and accounts for wireless video sensor node constraints, such as limited processing and energy resources, while video quality is preserved on the receiver side. The compression, transport, and routing protocols are proposed in the application, transport, and network layers respectively, and a dropping scheme is also presented in the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving video quality.

  18. Is perception of quality more important than technical quality in patient video cases?

    Science.gov (United States)

    Roland, Damian; Matheson, David; Taub, Nick; Coats, Tim; Lakhanpaul, Monica

    2015-08-13

    The use of video cases to demonstrate key signs and symptoms in patients (patient video cases, or PVCs) is a rapidly expanding field. The aims of this study were to evaluate whether the technical quality, or judged quality, of a video clip influences a paediatrician's judgment of the acuity of the case, and to assess the relationship between perceived quality and the technical quality of a selection of video clips. Participants (12 senior consultant paediatricians attending an examination workshop) individually categorised 28 PVCs into one of 3 possible acuities and then described the quality of the image seen. The PVCs had been converted into four different technical qualities (differing bit rates ranging from excellent to low quality). Participants' assessment of quality and the actual industry standard of the PVC were independent (333 distinct observations, Spearman's rho = 0.0410, p = 0.4564). Agreement between actual acuity and participants' judgement was generally good at higher acuities but moderate at medium/low acuities of illness (overall correlation 0.664). Perception of the quality of the clip was related to correct assignment of acuity regardless of the technical quality of the clip (number of obs = 330, z = 2.07, p = 0.038). It is important to benchmark PVCs prior to use in learning resources, as experts may not agree on the information within, or quality of, a clip. It appears that, although PVCs may be beneficial in a pedagogical context, the perceived quality of a clip may be an important determinant of an expert's decision making.
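
    The independence check above rests on Spearman's rank correlation between perceived quality and the actual encoding quality. As a reminder of what that statistic computes, here is a minimal pure-Python version using the classic rank-difference formula; it ignores tie correction, an assumption made for brevity (the study's data almost certainly contained ties).

```python
# Spearman's rho via the rank-difference formula: rho = 1 - 6*sum(d^2)/(n(n^2-1)),
# where d is the difference between the ranks of paired observations.

def ranks(values):
    """1-based ranks of values (no tie handling)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Perfectly concordant rankings give rho = 1.0.
print(spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0
```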

  19. Quality Assurance Procedures for ModCat Database Code Files

    Energy Technology Data Exchange (ETDEWEB)

    Siciliano, Edward R.; Devanathan, Ram; Guillen, Zoe C.; Kouzes, Richard T.; Schweppe, John E.

    2014-04-01

    The Quality Assurance procedures used for the initial phase of the Model Catalog Project were developed to attain two objectives, referred to as “basic functionality” and “visualization.” To ensure that the Monte Carlo N-Particle model input files posted to the ModCat database meet those goals, all models considered as candidates for the database are tested, revised, and re-tested.

  20. Diagnostic image quality of video-digitized chest images

    International Nuclear Information System (INIS)

    Winter, L.H.; Butler, R.B.; Becking, W.B.; Warnars, G.A.O.; Haar Romeny, B. ter; Ottes, F.P.; Valk, J.-P.J. de

    1989-01-01

    The diagnostic accuracy obtained with the Philips picture archiving and communications subsystem was investigated by means of an observer performance study using receiver operating characteristic (ROC) analysis. The image quality of conventional films and video-digitized images was compared. The scanner had a 1024 x 1024 x 8 bit memory, and the digitized images were displayed on a 60 Hz interlaced display monitor with 1024 lines. Posteroanterior (PA) roentgenograms of a chest phantom with superimposed simulated interstitial pattern disease (IPD) were produced; there were 28 normal and 40 abnormal films. Normal films were produced by the chest phantom alone. Abnormal films were taken of the chest phantom with varying degrees of superimposed simulated interstitial disease, because the results of a simulated interstitial pattern disease study are less likely to be influenced by perceptual capabilities. The conventional films and the video-digitized images were viewed by five experienced observers during four separate sessions. Conventional films were presented on a viewing box; the digital images were displayed on the monitor described above. The presence of simulated interstitial disease was indicated on a 5-point ROC certainty scale by each observer. The differences between ROC curves derived from correlated data were analyzed statistically. The mean time required to evaluate 68 digitized images was approximately four times the mean time needed to read the conventional films. The diagnostic quality of the video-digitized images was significantly lower (at the 5% level) than that of the conventional films (median area under the curve (AUC) of 0.71 and 0.94, respectively). (author). 25 refs.; 2 figs.; 4 tabs
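
    The comparison above hinges on the area under the ROC curve (AUC). A compact way to compute AUC from certainty-scale ratings is the rank-sum (Mann-Whitney) statistic: the probability that a randomly chosen abnormal case receives a higher rating than a randomly chosen normal one, counting ties as half. The example scores below are made up for illustration.

```python
# AUC as the Mann-Whitney probability of correct pairwise ordering.

def roc_auc(normal_scores, abnormal_scores):
    """Probability a random abnormal case outranks a random normal case."""
    wins = 0.0
    for a in abnormal_scores:
        for n in normal_scores:
            if a > n:
                wins += 1.0
            elif a == n:
                wins += 0.5   # ties count as half a win
    return wins / (len(abnormal_scores) * len(normal_scores))

# Hypothetical 5-point certainty ratings for normal and abnormal cases.
print(roc_auc([1, 2, 2, 3], [3, 4, 4, 5]))  # 0.96875
```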

  1. Degraded visual environment image/video quality metrics

    Science.gov (United States)

    Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.

    2014-06-01

    A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery; others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost-effective solution to improve operating conditions in degraded visual environments.

  2. Quality of pharmaceutical care at the pharmacy counter : Patients’ experiences versus video observation

    NARCIS (Netherlands)

    Koster, Ellen S.; Blom, Lyda; Overbeeke, Marloes R.; Philbert, Daphne; Vervloet, Marcia; Koopman, Laura; van Dijk, Liset

    2016-01-01

    Introduction: Consumer Quality Index questionnaires are used to assess quality of care from patients’ experiences. Objective: To provide insight into the agreement about quality of pharmaceutical care, measured both by a patient questionnaire and video observations. Methods: Pharmaceutical

  3. Quality of pharmaceutical care at the pharmacy counter: patients’ experiences versus video observation.

    NARCIS (Netherlands)

    Koster, E.S.; Blom, L.; Overbeeke, M.R.; Philbert, D.; Vervloet, M.; Koopman, L.; Dijk, L. van

    2016-01-01

    Introduction: Consumer Quality Index questionnaires are used to assess quality of care from patients’ experiences. Objective: To provide insight into the agreement about quality of pharmaceutical care, measured both by a patient questionnaire and video observations. Methods: Pharmaceutical

  4. On the definition of adapted audio/video profiles for high-quality video calling services over LTE/4G

    Science.gov (United States)

    Ndiaye, Maty; Quinquis, Catherine; Larabi, Mohamed Chaker; Le Lay, Gwenael; Saadane, Hakim; Perrine, Clency

    2014-01-01

    During the last decade, important advances in and widespread availability of mobile technology (operating systems, GPUs, terminal resolution, and so on) have encouraged fast development of voice and video services such as video calling. While multimedia services have grown substantially on mobile devices, the resulting increase in data consumption is leading to the saturation of mobile networks. In order to provide data at high bit rates and maintain performance as close as possible to traditional networks, the 3GPP (3rd Generation Partnership Project) worked on a high-performance standard for mobile called Long Term Evolution (LTE). In this paper, we aim at expressing recommendations related to audio and video media profiles (selection of audio and video codecs, bit rates, frame rates, audio and video formats) for a typical video-calling service held over LTE/4G mobile networks. These profiles are defined according to the targeted devices (smartphones, tablets), so as to ensure the best possible quality of experience (QoE). The obtained results indicate that for the CIF format (352 x 288 pixels), which is usually used for smartphones, the VP8 codec provides better image quality than the H.264 codec at low bit rates (from 128 to 384 kbps); however, for sequences with high motion, H.264 in slow mode is preferred. Regarding audio, better results are globally achieved using wideband codecs offering good quality, except for the Opus codec (at 12.2 kbps).

  5. Quality assurance database for the CBM silicon tracking system

    Energy Technology Data Exchange (ETDEWEB)

    Lymanets, Anton [Physikalisches Institut, Universitaet Tuebingen (Germany); Collaboration: CBM-Collaboration

    2015-07-01

    The Silicon Tracking System is the main tracking device of the CBM Experiment at FAIR. Its construction includes the production, quality assurance, and assembly of a large number of components, e.g., 106 carbon fiber support structures, 1300 silicon microstrip sensors, 16.6k readout chips, analog microcables, etc. Detector construction is distributed over several production and assembly sites and calls for a database that is extensible and allows tracing components, integrating test data, and monitoring component statuses and data flow. A possible implementation of the above-mentioned requirements is being developed at GSI (Darmstadt) based on the FAIR DB Virtual Database Library, which provides connectivity to common SQL database engines (PostgreSQL, Oracle, etc.). The data structure, database architecture, and status of the implementation are discussed.

  6. A no-reference image and video visual quality metric based on machine learning

    Science.gov (United States)

    Frantc, Vladimir; Voronin, Viacheslav; Semenishchev, Evgenii; Minkin, Maxim; Delov, Aliy

    2018-04-01

    The paper presents a novel visual quality metric for assessing the quality of lossy compressed video. A high degree of correlation with subjective quality estimates is achieved by using a convolutional neural network trained on a large number of video-sequence/subjective-quality-score pairs. We demonstrate how our predicted no-reference quality metric correlates with qualitative opinion in a human observer study. Results are shown on the EVVQ dataset, with comparison to existing approaches.

  7. A randomized controlled trial of an educational video to improve quality of bowel preparation for colonoscopy.

    Science.gov (United States)

    Park, Jin-Seok; Kim, Min Su; Kim, HyungKil; Kim, Shin Il; Shin, Chun Ho; Lee, Hyun Jung; Lee, Won Seop; Moon, Soyoung

    2016-06-17

    High-quality bowel preparation is necessary for colonoscopy. A few studies have investigated improving bowel preparation quality through patient education, but the effect of patient education on bowel preparation has not been well studied. A randomized, prospective study was conducted. All patients received regular instruction for bowel preparation during a pre-colonoscopy visit. Those scheduled for colonoscopy were randomly assigned to view an educational video instruction (video group) on the day before the colonoscopy, or to a non-video (control) group. Quality of bowel preparation, measured with the Ottawa Bowel Preparation Quality scale (Ottawa score), was compared between the video and non-video groups. In addition, factors associated with poor bowel preparation were investigated. A total of 502 patients were randomized, 250 to the video group and 252 to the non-video group. The video group exhibited better bowel preparation (mean Ottawa total score: 3.03 ± 1.9) than the non-video group (4.21 ± 1.9). An educational video could improve the quality of bowel preparation in comparison with the standard preparation method. Clinical Research Information Service KCT0001836. Date of registration: March 8, 2016; retrospectively registered.

  8. Image quality assessment for video stream recognition systems

    Science.gov (United States)

    Chernov, Timofey S.; Razumnuy, Nikita P.; Kozharinov, Alexander S.; Nikolaev, Dmitry P.; Arlazarov, Vladimir V.

    2018-04-01

    Recognition and machine vision systems have long been widely used in many disciplines to automate various processes of life and industry. Input images of optical recognition systems can be subjected to a large number of different distortions, especially in uncontrolled or natural shooting conditions, which leads to unpredictable results of recognition systems, making it impossible to assess their reliability. For this reason, it is necessary to perform quality control of the input data of recognition systems, which is facilitated by modern progress in the field of image quality evaluation. In this paper, we investigate the approach to designing optical recognition systems with built-in input image quality estimation modules and feedback, for which the necessary definitions are introduced and a model for describing such systems is constructed. The efficiency of this approach is illustrated by the example of solving the problem of selecting the best frames for recognition in a video stream for a system with limited resources. Experimental results are presented for the system for identity documents recognition, showing a significant increase in the accuracy and speed of the system under simulated conditions of automatic camera focusing, leading to blurring of frames.

  9. Analysis of quality data based on national clinical databases

    DEFF Research Database (Denmark)

    Utzon, Jan; Petri, A.L.; Christophersen, S.

    2009-01-01

    There is little agreement on the philosophy of measuring clinical quality in health care. How data should be analyzed and transformed to healthcare information is an ongoing discussion. To accept a difference in quality between health departments as a real difference, one should consider to which...... extent the selection of patients, random variation, confounding and inconsistency may have influenced results. The aim of this article is to summarize aspects of clinical healthcare data analyses provided from the national clinical quality databases and to show how data may be presented in a way which...... is understandable to readers without specialised knowledge of statistics Udgivelsesdato: 2009/9/14...


  11. Teaching Surgical Procedures with Movies: Tips for High-quality Video Clips

    OpenAIRE

    Jacquemart, Mathieu; Bouletreau, Pierre; Breton, Pierre; Mojallal, Ali; Sigaux, Nicolas

    2016-01-01

    Summary: Video must now be considered a valuable tool for learning surgery. However, the medium does present production challenges, and currently, quality movies are not always accessible. We developed a series of 7 surgical videos and made them available on a publicly accessible internet website. Our videos have been viewed by thousands of people worldwide. High-quality educational movies must respect strategic and technical points to be reliable.

  12. Teaching Surgical Procedures with Movies: Tips for High-quality Video Clips.

    Science.gov (United States)

    Jacquemart, Mathieu; Bouletreau, Pierre; Breton, Pierre; Mojallal, Ali; Sigaux, Nicolas

    2016-09-01

    Video must now be considered a precious tool for learning surgery. However, the medium does present production challenges, and currently, quality movies are not always accessible. We developed a series of 7 surgical videos and made them available on a publicly accessible internet website. Our videos have been viewed by thousands of people worldwide. High-quality educational movies must respect strategic and technical points to be reliable.

  13. Software for creating quality control database in diagnostic radiology

    International Nuclear Information System (INIS)

    Stoeva, M.; Spassov, G.; Tabakov, S.

    2000-01-01

    The paper describes a PC-based program with a database for quality control (QC). It keeps information about all surveyed equipment and measured parameters. The first function of the program is to extract information from old (existing) MS Excel spreadsheets with QC surveys. The second function is used for the input of measurements, which are automatically organized in MS Excel spreadsheets and built into the database. The spreadsheets are based on the protocols described in the EMERALD Training Scheme. In addition, the program can produce statistics for all measured parameters, both in absolute terms and over time.
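
The program described here stores QC measurements and derives per-parameter statistics; the same idea can be sketched with a small relational store. The schema, table and parameter names below are illustrative assumptions, not those of the authors' MS Access/Excel-based tool:

```python
import sqlite3
import statistics

def make_db(path=":memory:"):
    """Create a minimal QC survey store (the schema is a hypothetical sketch)."""
    con = sqlite3.connect(path)
    con.execute("""CREATE TABLE qc_survey (
        equipment TEXT, parameter TEXT, value REAL, surveyed_on TEXT)""")
    return con

def add_measurement(con, equipment, parameter, value, date):
    con.execute("INSERT INTO qc_survey VALUES (?, ?, ?, ?)",
                (equipment, parameter, value, date))

def parameter_stats(con, equipment, parameter):
    """Absolute statistics for one parameter; ordering by date supports the
    'evolution over time' view the paper mentions."""
    rows = [r[0] for r in con.execute(
        "SELECT value FROM qc_survey WHERE equipment=? AND parameter=? "
        "ORDER BY surveyed_on", (equipment, parameter))]
    return {"n": len(rows),
            "mean": statistics.fmean(rows),
            "stdev": statistics.stdev(rows) if len(rows) > 1 else 0.0}

con = make_db()
for date, kvp in [("2000-01-10", 81.0), ("2000-02-10", 79.5), ("2000-03-10", 80.5)]:
    add_measurement(con, "X-ray room 1", "kVp", kvp, date)
print(parameter_stats(con, "X-ray room 1", "kVp"))
```

A real tool would add the Excel import/export described above; the store-and-summarize core is the part sketched here.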

  14. The role of optical flow in automated quality assessment of full-motion video

    Science.gov (United States)

    Harguess, Josh; Shafer, Scott; Marez, Diego

    2017-09-01

    In real-world video data, such as full-motion video (FMV) taken from unmanned vehicles, surveillance systems, and other sources, various corruptions to the raw data are inevitable. These can be due to the image acquisition process, noise, distortion, and compression artifacts, among other sources of error. However, we desire methods to analyze the quality of the video to determine whether the underlying content of the corrupted video can be analyzed by humans or machines, and to what extent. Previous approaches have shown that motion estimation, or optical flow, can be an important cue in automating this video quality assessment. However, there are many different optical flow algorithms in the literature, each with its own advantages and disadvantages. We examine the effect of the choice of optical flow algorithm (including baseline and state-of-the-art) on motion-based automated video quality assessment algorithms.
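
Motion estimation need not be exotic to serve as a quality cue: even exhaustive block matching yields per-block motion vectors whose statistics can feed a quality model. The sketch below is a generic baseline for illustration, not one of the specific algorithms compared in the paper:

```python
def block_motion(prev, curr, block=4, search=2):
    """Exhaustive block matching: for each block of `prev`, find the
    displacement (dy, dx) within +/-search that best matches `curr`
    by sum of absolute differences (SAD). Frames are 2-D lists of
    grey levels. Ties prefer the smaller displacement."""
    h, w = len(prev), len(prev[0])
    flows = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            best, best_sad = (0, 0), float("inf")
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = by + dy, bx + dx
                    if y0 < 0 or x0 < 0 or y0 + block > h or x0 + block > w:
                        continue
                    sad = sum(abs(prev[by + i][bx + j] - curr[y0 + i][x0 + j])
                              for i in range(block) for j in range(block))
                    if sad < best_sad or (sad == best_sad and
                            dy * dy + dx * dx < best[0] ** 2 + best[1] ** 2):
                        best_sad, best = sad, (dy, dx)
            flows.append(best)
    return flows

def mean_motion_magnitude(flows):
    """A simple per-frame motion feature a quality model could pool over time."""
    return sum((dy * dy + dx * dx) ** 0.5 for dy, dx in flows) / len(flows)

# A single bright pixel moving one step down and right between frames.
prev = [[0] * 12 for _ in range(12)]
curr = [[0] * 12 for _ in range(12)]
prev[5][5] = 100
curr[6][6] = 100
flows = block_motion(prev, curr)
print(flows[4])  # the block containing the moving pixel reports (1, 1)
print(mean_motion_magnitude(flows))
```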

  15. Non-intrusive Packet-Layer Model for Monitoring Video Quality of IPTV Services

    Science.gov (United States)

    Yamagishi, Kazuhisa; Hayashi, Takanori

    A non-intrusive packet-layer model is required to passively monitor the quality of experience (QoE) during service. We propose a packet-layer model that can be used to estimate the video quality of IPTV using quality parameters derived from transmitted packet headers. The computational load of the model is lighter than that of models that take video signals and/or video-related bitstream information, such as motion vectors, as input. The model is applicable even if the transmitted bitstream is encrypted, because it uses transmitted packet headers rather than bitstream information. To develop the model, we conducted three extensive subjective quality assessments for different encoders and decoders (codecs) and video content. Then, we modeled the subjective video quality assessment characteristics based on objective features affected by coding and packet loss. Finally, we verified the model's validity by applying it to unknown data sets different from the training data sets used above.
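
The appeal of a packet-layer model is that everything it needs can be read from headers, even on encrypted streams. A toy illustration: infer loss from RTP-style sequence numbers, then map loss and bitrate to a quality score. The mapping and its coefficients below are invented placeholders, not the fitted model from the paper:

```python
import math

def packet_loss_rate(seq_numbers):
    """Loss rate inferred from sequence numbers alone: no payload
    inspection, so it works even when the bitstream is encrypted."""
    expected = max(seq_numbers) - min(seq_numbers) + 1
    return 1.0 - len(set(seq_numbers)) / expected

def estimate_mos(loss_rate, bitrate_kbps, a=4.3, b=60.0, c=500.0):
    """Toy packet-layer quality model: coding quality saturates with
    bitrate, and packet loss degrades it exponentially. The coefficients
    a, b, c are illustrative placeholders only."""
    coding_quality = 1.0 + (a - 1.0) * (1.0 - math.exp(-bitrate_kbps / c))
    return 1.0 + (coding_quality - 1.0) * math.exp(-b * loss_rate)

seqs = [1, 2, 3, 5, 6, 7, 8, 10]  # packets 4 and 9 were lost
print(round(packet_loss_rate(seqs), 2))  # 0.2
print(round(estimate_mos(packet_loss_rate(seqs), 2000), 2))
```

A production model would also weigh burstiness and which frame types the lost packets carried; the header-only principle is what this sketch shows.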

  16. Summarization of Surveillance Video Sequences Using Face Quality Assessment

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.; Rahmati, Mohammad

    2011-01-01

    Constantly operating surveillance cameras in public places, such as airports and banks, produce huge amounts of video data. Faces in such videos can be extracted in real time. However, most of these detected faces are either redundant or useless. Redundant information adds computational costs to facial...

  17. Enabling 'togetherness' in high-quality domestic video conferencing

    NARCIS (Netherlands)

    Kegel, I.; Cesar, P.; Jansen, J.; Bulterman, D.C.A.; Stevens, T.; Kort, J.; Färber, N.

    2012-01-01

    Low-cost video conferencing systems have provided an existence proof for the value of video communication in a home setting. At the same time, current systems have a number of fundamental limitations that inhibit more general social interactions among multiple groups of participants. In our work, we

  18. Enabling 'Togetherness' in High-Quality Domestic Video Conferencing

    NARCIS (Netherlands)

    I. Kegel; P.S. Cesar Garcia (Pablo Santiago); A.J. Jansen (Jack); D.C.A. Bulterman (Dick); J. Kort; T. Stevens; N. Farber

    2012-01-01

    Low-cost video conferencing systems have provided an existence proof for the value of video communication in a home setting. At the same time, current systems have a number of fundamental limitations that inhibit more general social interactions among multiple groups of participants. In

  19. An extensible database architecture for nationwide power quality monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Kuecuek, Dilek; Inan, Tolga; Salor, Oezguel; Demirci, Turan; Buhan, Serkan; Boyrazoglu, Burak [TUBITAK Uzay, Power Electronics Group, TR 06531 Ankara (Turkey); Akkaya, Yener; Uensar, Oezguer; Altintas, Erinc; Haliloglu, Burhan [Turkish Electricity Transmission Co. Inc., TR 06490 Ankara (Turkey); Cadirci, Isik [TUBITAK Uzay, Power Electronics Group, TR 06531 Ankara (Turkey); Hacettepe University, Electrical and Electronics Eng. Dept., TR 06532 Ankara (Turkey); Ermis, Muammer [METU, Electrical and Electronics Eng. Dept., TR 06531 Ankara (Turkey)

    2010-07-15

    Electrical power quality (PQ) data is one of the prevalent types of engineering data. Its measurement at relevant sampling rates leads to large volumes of PQ data to be managed and analyzed. In this paper, an extensible database architecture is presented, based on a novel generic data model for PQ data. The proposed architecture is operated on the nationwide PQ data of the Turkish Electricity Transmission System, measured in the field by mobile PQ monitoring systems. The architecture is extensible in the sense that it can be used to store and manage PQ data collected by any means, with little or no customization. The architecture has three modules: a PQ database corresponding to the implementation of the generic data model, a visual user query interface that enables users to specify queries to the PQ database, and a query processor acting as a bridge between the query interface and the database. The operation of the architecture is illustrated on the field PQ data with several query examples through the visual query interface. The execution of the architecture on this data of considerable volume supports its applicability and convenience for PQ data. (author)
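
A "generic data model" of the kind described typically stores parameter definitions as rows rather than columns, so new PQ quantities require no schema change. A minimal sketch in SQLite; all table, site, and parameter names here are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE parameter (id INTEGER PRIMARY KEY, name TEXT UNIQUE, unit TEXT);
CREATE TABLE site      (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE measurement (
    site_id  INTEGER REFERENCES site(id),
    param_id INTEGER REFERENCES parameter(id),
    measured_at TEXT,
    value REAL);
""")

def record(site, param, unit, ts, value):
    """Adding a brand-new PQ parameter is just another row, not a new column."""
    con.execute("INSERT OR IGNORE INTO site(name) VALUES (?)", (site,))
    con.execute("INSERT OR IGNORE INTO parameter(name, unit) VALUES (?, ?)",
                (param, unit))
    con.execute("""INSERT INTO measurement
        SELECT s.id, p.id, ?, ? FROM site s, parameter p
        WHERE s.name=? AND p.name=?""", (ts, value, site, param))

record("Ankara-TS1", "voltage_thd", "%", "2009-05-01T10:00", 2.4)
record("Ankara-TS1", "voltage_thd", "%", "2009-05-01T10:10", 2.9)
record("Izmir-TS4", "flicker_pst", "-", "2009-05-01T10:00", 0.8)

# The kind of aggregate a visual query interface would generate.
rows = con.execute("""SELECT s.name, p.name, AVG(m.value)
    FROM measurement m JOIN site s ON s.id = m.site_id
    JOIN parameter p ON p.id = m.param_id
    GROUP BY s.name, p.name ORDER BY s.name""").fetchall()
print(rows)
```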

  20. Evaluating the Accuracy and Quality of the Information in Kyphosis Videos Shared on YouTube.

    Science.gov (United States)

    Erdem, Mehmet Nuri; Karaca, Sinan

    2018-04-16

    A quality-control YouTube-based study using recognized quality scoring systems. Our aim was to assess the accuracy and quality of the information in kyphosis videos shared on YouTube. The Internet is a widely and increasingly used source of medical information for both patients and clinicians. YouTube, in particular, has become a leading source owing to its ease of access and visual advantages for Internet users. The first 50 videos returned by the YouTube search engine for the keyword query 'kyphosis' were included in the study and categorized into seven groups by source and six groups by content. The popularity of the videos was evaluated with a new index called the video power index (VPI). The quality, educational value and accuracy of the source of information were measured using the JAMA score, the Global Quality Score (GQS) and the Kyphosis Specific Score (KSS). Videos had a mean duration of 397 seconds and a mean number of views of 131,644, for a total of 6,582,221 views. The source (uploader) of 36% of the videos was a trainer, and the content of 46% of the videos was exercise training; 72% of the videos concerned postural kyphosis. Videos had a mean JAMA score of 1.36 (range: 1 to 4), mean GQS of 1.68 (range: 1 to 5) and mean KSS of 3.02 (range: 0 to 32). The academic group had the highest scores and the lowest VPIs. Online information on kyphosis is of low quality, and its content is of unknown source and accuracy. To keep the balance in sharing accurate information with patients, clinicians should be aware of the online information related to their field and should contribute to the development of optimal medical videos. Level of evidence: 3.

  1. No-Reference Video Quality Assessment Model for Distortion Caused by Packet Loss in the Real-Time Mobile Video Services

    Directory of Open Access Journals (Sweden)

    Jiarun Song

    2014-01-01

    Full Text Available Packet loss causes severe errors by corrupting the related video data. Because most video streams employ predictive coding structures, transmission errors in one frame not only cause decoding failure of that frame at the receiver side, but also propagate to its subsequent frames along the motion prediction path, bringing significant degradation of end-to-end video quality. To quantify the effects of packet loss on video quality, a no-reference objective quality assessment model is presented in this paper. Considering the fact that the degradation of video quality depends significantly on the video content, the temporal complexity is estimated to reflect the varying characteristics of video content, using the macroblocks with different motion activities in each frame. Then, the quality of each frame affected by reference frame loss, by error propagation, or by both is evaluated, respectively. Utilizing a two-level temporal pooling scheme, the video quality is finally obtained. Extensive experimental results show that the video quality estimated by the proposed method matches well with the subjective quality.
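
The abstract does not spell out its pooling scheme, but a two-level temporal pooling can be sketched as: average frame scores within short segments, then emphasize the worst segments, since viewers judge a video largely by its worst moments. The grouping and weighting below are a hypothetical illustration, not the paper's exact scheme:

```python
def pool_two_level(frame_scores, group=10, worst_frac=0.3):
    """Two-level temporal pooling sketch.
    Level 1: mean score within each group of `group` frames.
    Level 2: mean over the worst `worst_frac` of group scores, reflecting
    the disproportionate impact of the worst moments on perceived quality."""
    groups = [frame_scores[i:i + group]
              for i in range(0, len(frame_scores), group)]
    group_scores = [sum(g) / len(g) for g in groups]
    k = max(1, int(len(group_scores) * worst_frac))
    worst = sorted(group_scores)[:k]
    return sum(worst) / k

steady  = [4.0] * 100
glitchy = [4.0] * 90 + [1.0] * 10  # one badly degraded stretch of frames
print(pool_two_level(steady))   # 4.0
print(pool_two_level(glitchy))  # 3.0 -- a plain mean would report 3.7
```

Note how the brief glitch pulls the pooled score well below the sequence mean, which is the behaviour such pooling is designed to capture.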

  2. 3D Scan-Based Wavelet Transform and Quality Control for Video Coding

    Directory of Open Access Journals (Sweden)

    Parisot Christophe

    2003-01-01

    Full Text Available Wavelet coding has been shown to achieve better compression than DCT coding and moreover allows scalability. The 2D DWT can easily be extended to 3D and thus applied to video coding. However, 3D subband coding of video suffers from two drawbacks. The first is the amount of memory required for coding large 3D blocks; the second is the lack of temporal quality due to the temporal splitting of the sequence. In fact, 3D block-based video coders produce jerks, which appear at the temporal borders of blocks during video playback. In this paper, we propose a new temporal scan-based wavelet transform method for video coding that combines the advantages of wavelet coding (performance, scalability) with acceptably reduced memory requirements, no additional CPU complexity, and no jerks. We also propose an efficient quality allocation procedure to ensure a constant quality over time.
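
The temporal half of such a 3D transform is just a 1D wavelet applied along time at every pixel position. A one-level Haar step, the simplest case, is sketched below for illustration; a scan-based coder would apply this over a sliding window of frames to bound memory rather than over large 3D blocks:

```python
def temporal_haar(frames):
    """Temporal step of a 3-D wavelet video transform: a one-level Haar
    transform along the time axis at every pixel position. `frames` is an
    even-length list of equally sized 2-D arrays (lists of rows)."""
    t, h, w = len(frames), len(frames[0]), len(frames[0][0])
    s2 = 2 ** 0.5
    low  = [[[0.0] * w for _ in range(h)] for _ in range(t // 2)]
    high = [[[0.0] * w for _ in range(h)] for _ in range(t // 2)]
    for k in range(t // 2):
        for i in range(h):
            for j in range(w):
                a, b = frames[2 * k][i][j], frames[2 * k + 1][i][j]
                low[k][i][j]  = (a + b) / s2   # temporal average (approximation)
                high[k][i][j] = (a - b) / s2   # temporal difference (detail)
    return low, high

# A static scene: every temporal detail coefficient vanishes, which is
# where the coding gain of temporal filtering comes from.
frames = [[[10.0] * 2 for _ in range(2)] for _ in range(4)]
low, high = temporal_haar(frames)
print(high[0][0][0])  # 0.0
print(low[0][0][0])
```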

  3. Quality Assessment of Adaptive Bitrate Videos using Image Metrics and Machine Learning

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Brunnström, Kjell

    2015-01-01

    Adaptive bitrate (ABR) streaming is widely used for distribution of videos over the internet. In this work, we investigate how well we can predict the quality of such videos using well-known image metrics, information about the bitrate levels, and a relatively simple machine learning method...

  4. Interactional Quality Depicted in Infant and Toddler Videos: Where Are the Interactions?

    Science.gov (United States)

    Fenstermacher, Susan K.; Barr, Rachel; Brey, Elizabeth; Pempek, Tiffany A.; Ryan, Maureen; Calvert, Sandra L.; Shwery, Clay E.; Linebarger, Deborah

    2010-01-01

    This study examined the social-emotional content and the quality of social interactions depicted in a sample of 58 DVDs marketed towards infants and toddlers. Infant-directed videos rarely used social interactions between caregiver and child or between peers to present content. Even when videos explicitly targeted social-emotional content,…

  5. Video Monitoring a Simulation-Based Quality Improvement Program in Bihar, India.

    Science.gov (United States)

    Dyer, Jessica; Spindler, Hilary; Christmas, Amelia; Shah, Malay Bharat; Morgan, Melissa; Cohen, Susanna R; Sterne, Jason; Mahapatra, Tanmay; Walker, Dilys

    2018-04-01

    Simulation-based training has become an accepted clinical training andragogy in high-resource settings, and its use is increasing in low-resource settings. Video recordings of simulated scenarios are commonly used by facilitators. Beyond using the videos during debrief sessions, researchers can also analyze the simulation videos to quantify technical and nontechnical skills during simulated scenarios over time. Little is known about the feasibility and use of large-scale systems to video record and analyze simulation and debriefing data for monitoring and evaluation in low-resource settings. This manuscript describes the process of designing and implementing a large-scale video monitoring system. Mentees and mentors gave consent, and all simulations and debriefs conducted at 320 Primary Health Centers (PHCs) were video recorded. The system design, number of video recordings, and inter-rater reliability of the coded videos were assessed. The final dataset included a total of 11,278 videos. Overall, a total of 2,124 simulation videos were coded and 183 (12%) were blindly double-coded. For the double-coded sample, the average inter-rater reliability (IRR) scores were 80% for nontechnical skills and 94% for clinical technical skills. Among 4,450 long debrief videos received, 216 were selected for coding and all were double-coded. Data quality of the simulation videos was found to be very good in terms of recorded instances of "unable to see" and "unable to hear" in Phases 1 and 2. This study demonstrates that video monitoring systems can be effectively implemented at scale in resource-limited settings. Further, video monitoring systems can play several vital roles within program implementation, including monitoring and evaluation, provision of actionable feedback to program implementers, and assurance of program fidelity.

  6. Educational Colonoscopy Video Enhances Bowel Preparation Quality and Comprehension in an Inner City Population.

    Science.gov (United States)

    Pillai, Ajish; Menon, Radha; Oustecky, David; Ahmad, Asyia

    2017-07-24

    Quality of bowel preparation and patient knowledge remain major barriers to completing colorectal cancer screening. Few studies have tested unique ways to improve patient understanding, centering on interactive computer programs, pictures, and brochures. Two studies explored instructional videos but focused on patient compliance and anxiety as endpoints. Furthermore, excessive video length and content may limit their impact on a broad patient population. No study so far has examined a video's impact on preparation quality and patient understanding of the colonoscopy procedure. We conducted a single-blinded prospective study of inner-city patients presenting for a first-time screening colonoscopy. During their initial visit, patients were randomized to watch an instructional colonoscopy video or a video discussing gastroesophageal reflux disease (GERD). All patients watched a 6-minute video with the same spokesperson, completed a demographic questionnaire (Supplemental Digital Content 1, http://links.lww.com/JCG/A352) and were enrolled only if screened within 30 days of their visit. On the day of the colonoscopy, patients completed a 14-question knowledge quiz. A blinded endoscopist graded patient preparations based on the Ottawa scale. All authors had access to the study data and reviewed and approved the final manuscript. Among the 104 subjects enrolled in the study, 56 were in the colonoscopy video group, 48 were in the GERD video group, and 12 were excluded. Overall, 48% were male and 52% female; 90% of patients had less than a high school education, 76% were African American, and 67% used a 4 L split-dose preparation. There were no differences between the video groups in any of the above categories. Comparisons between the 2 groups revealed that the colonoscopy video group had a significantly better Ottawa bowel preparation score (4.77 vs. 6.85; P=0.01) than the GERD video group. The colonoscopy video group also had less-inadequate repeat

  7. Quality of Service: a study in databases bibliometric international

    Directory of Open Access Journals (Sweden)

    Deosir Flávio Lobo de Castro Junior

    2013-08-01

    Full Text Available The purpose of this article is to serve as a source of references on Quality of Service for future research. After surveying the international databases EBSCO and ProQuest, the results on the state of the art on this issue are presented. The method used was bibliometrics, and 132 items from a universe of 13,427 were investigated. The analyzed works cover the period from 1985 to 2011. Among the contributions, results and conclusions for future research are presented: (i) most cited authors; (ii) most used methodology, dimensions and questionnaire; (iii) most referenced publications; (iv) international journals with most publications on the subject; (v) distribution of the number of publications per year; (vi) author networks; (vii) educational institutions network; (viii) terms used in the search in international databases; (ix) the relationships studied in the 132 articles; (x) criteria for the choice of methodology in research on quality of services; (xi) most often used paradigm; and (xii) 160 high-impact references.

  8. On subjective quality assessment of adaptive video streaming via crowdsourcing and laboratory based experiments

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Shahid, Muhammad; Pokhrel, Jeevan

    2017-01-01

    Video streaming services are offered over the Internet, and since the service providers do not have full control over the network conditions all the way to the end user, streaming technologies have been developed to maintain the quality of service under these varying network conditions, i.e., so-called adaptive video streaming. In order to cater for users' Quality of Experience (QoE) requirements, HTTP-based adaptive streaming solutions for video services have become popular. However, the keys to ensuring users a good QoE with this technology are still not completely understood. User QoE feedback......

  9. The influence of motion quality on responses towards video playback stimuli

    Directory of Open Access Journals (Sweden)

    Emma Ware

    2015-07-01

    Full Text Available Visual motion, a critical cue in communication, can be manipulated and studied using video playback methods. A primary concern for the video playback researcher is the degree to which objects presented on video appear natural to the non-human subject. Here we argue that the quality of motion cues on video, as determined by the video's image presentation rate (IPR), is of particular importance in determining a subject's social response behaviour. We present an experiment testing the effect of variations in IPR on pigeon (Columba livia) response behaviour towards video images of courting opposite-sex partners. Male and female pigeons were presented with three video playback stimuli, each containing a different social partner. Each stimulus was then modified to appear at one of three IPRs: 15, 30 or 60 progressive (p) frames per second. The results showed that courtship behaviour became significantly longer in duration as IPR increased. This finding implies that IPR significantly affects the perceived quality of motion cues, impacting social behaviour. In males we found that the duration of courtship also depended on the social partner viewed, and that this effect interacted with the effects of IPR on behaviour. Specifically, the effect of social partner reached statistical significance only when the stimuli were displayed at 60p, demonstrating the potential for erroneous results when insufficient IPRs are used. In addition to demonstrating the importance of IPR in video playback experiments, these findings help to highlight and describe the role of visual motion processing in communication behaviour.

  10. Data Quality Assessment and Recommendations to Improve the Quality of Hemodialysis Database

    Directory of Open Access Journals (Sweden)

    Neda Firouraghi

    2018-01-01

    Full Text Available Introduction: Since clinical data contain abnormalities, quality assessment and reporting of data errors are necessary. Data quality analysis consists of developing strategies, making recommendations to avoid future errors, and improving the quality of data entry by identifying error types and their causes. This approach can therefore be extremely useful for improving the quality of databases. The aim of this study was to analyze hemodialysis (HD) patients’ data in order to improve the quality of data entry and avoid future errors. Method: The study was done on the Shiraz University of Medical Sciences HD database in 2015. The database consists of 2,367 patients who had at least 12 months of follow-up (22.34±11.52 months) in 2012-2014. Duplicated data were removed; outliers were detected based on statistical methods, expert opinion and the relationships between variables; then, the missing values were handled in 72 variables by using IBM SPSS Statistics 22 in order to improve the quality of the database. According to the results, some recommendations were given to improve the data entry process. Results: The variables had outliers in the range of 0-9.28 percent. Seven variables had missing values over 20 percent; in the others, missing values were between 0 and 19.73 percent. The majority of missing values belonged to serum alkaline phosphatase, uric acid, high- and low-density lipoprotein, total iron binding capacity, hepatitis B surface antibody titer, and parathyroid hormone. The variables with displacement (the values of two or more variables recorded in the wrong attribute) were weight, serum creatinine, blood urea nitrogen, and systolic and diastolic blood pressure. These variables may lead to decreased data quality. Conclusion: According to the results and expert opinion, applying some data entry principles, such as defining ranges of values, using the relationships between hemodialysis features, developing alert systems about empty or duplicated data and
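
The statistical screens mentioned above (outlier detection, missing-value rates) are simple to express. Below is a minimal sketch using the common interquartile-range rule; the study's actual criteria combined statistical methods with expert opinion, and the variable and values shown are hypothetical:

```python
def iqr_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR], a standard statistical
    screen applied before expert review. `None` marks a missing value."""
    xs = sorted(v for v in values if v is not None)
    def quantile(p):
        idx = p * (len(xs) - 1)
        lo, hi = int(idx), min(int(idx) + 1, len(xs) - 1)
        return xs[lo] + (xs[hi] - xs[lo]) * (idx - lo)
    q1, q3 = quantile(0.25), quantile(0.75)
    span = k * (q3 - q1)
    return [v for v in values
            if v is not None and not (q1 - span <= v <= q3 + span)]

def missing_rate(values):
    """Fraction of missing entries, the per-variable rate the study reports."""
    return sum(v is None for v in values) / len(values)

# Hypothetical serum creatinine entries; 58.0 is a plausible unit/entry error.
creatinine = [0.9, 1.1, 1.4, None, 1.2, 58.0, 1.0, None]
print(iqr_outliers(creatinine))   # [58.0]
print(missing_rate(creatinine))   # 0.25
```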

  11. A database for the storage of quality control parameters

    International Nuclear Information System (INIS)

    Alves, J.G.; Abrantes, J.N.; Rangel, S.; Santos, L.

    2005-01-01

    Full text: The Individual Monitoring Service at ITN-DPRSN is based on a TLD dosimetry system that consists of two Harshaw 6600 TLD readers and the Harshaw 8814 TL card and holder containing two LiF:Mg,Ti (TLD-100) detectors for the evaluation of Hp(10) and Hp(0.07). A database for the storage of quality control parameters was created using MS Access and is presented in this work. At the moment, the database has a passive role and is used for the storage of data and for the retrospective statistical evaluation of important parameters and their evolution with time. It is regularly fed with the files generated by the NETREMS and/or WINREMS software from Harshaw (presently Thermo Electron Corporation), and allows a quick and user-friendly visualization of the data. At present, the information stored therein is: the individual efficiency correction coefficients (ecc) for the card population, determined for every TLD card prior to first use, and their identification as quality control, zero, field and bad cards; the results of the start-up daily tests, automatically performed before readouts, e.g. average and relative standard deviation of ten measurements of the temperature, high voltage, ±15 V, D/A reference, ground, internal reference light (RL) source intensity and the photomultiplier tube (PMT) noise; the daily list of readings of the pre-irradiated and unirradiated cards, interspersed with field cards at regular intervals, as well as the readings of the PMT noise and the RL intensity, performed at regular intervals during readouts (the average daily readings and their respective standard deviation are also stored); the reader calibration factors (RCF), determined every month at the beginning of a monitoring period; the calibration factor for the 90Sr-90Y internal irradiator, determined on a monthly basis; and the linearity parameters derived from the linear regression curves, performed every month. The insertion of data is determined by each parameter

  12. Perceptual quality estimation of H.264/AVC videos using reduced-reference and no-reference models

    Science.gov (United States)

    Shahid, Muhammad; Pandremmenou, Katerina; Kondi, Lisimachos P.; Rossholm, Andreas; Lövström, Benny

    2016-09-01

    Reduced-reference (RR) and no-reference (NR) models for video quality estimation, using features that account for the impact of coding artifacts, spatio-temporal complexity, and packet losses, are proposed. The purpose of this study is to analyze a number of potentially quality-relevant features in order to select the most suitable set of features for building the desired models. The proposed sets of features have not been used in the literature and some of the features are used for the first time in this study. The features are employed by the least absolute shrinkage and selection operator (LASSO), which selects only the most influential of them toward perceptual quality. For comparison, we apply feature selection in the complete feature sets and ridge regression on the reduced sets. The models are validated using a database of H.264/AVC encoded videos that were subjectively assessed for quality in an ITU-T compliant laboratory. We infer that just two features selected by RR LASSO and two bitstream-based features selected by NR LASSO are able to estimate perceptual quality with high accuracy, higher than that of ridge, which uses more features. The comparisons with competing works and two full-reference metrics also verify the superiority of our models.
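
LASSO performs selection because its L1 penalty drives uninfluential coefficients exactly to zero, which is how a small, quality-relevant feature subset falls out of a larger candidate set. A compact coordinate-descent sketch of the generic algorithm (not the authors' code or fitted coefficients):

```python
def lasso(X, y, lam, iters=200):
    """LASSO via cyclic coordinate descent. Minimises
    (1/2n)||y - Xw||^2 + lam * ||w||_1. Features whose weight is driven
    exactly to zero are deselected."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Partial residual excluding feature j's current contribution.
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            # Soft-thresholding: small correlations yield a weight of exactly 0.
            if rho > lam:
                w[j] = (rho - lam) / z
            elif rho < -lam:
                w[j] = (rho + lam) / z
            else:
                w[j] = 0.0
    return w

# Hypothetical data: the target depends only on feature 0; feature 1 is
# orthogonal noise, so LASSO zeroes its weight out.
X = [[1, 1], [-1, 1], [2, -1], [-2, -1]]
y = [2, -2, 4, -4]
w = lasso(X, y, lam=0.1)
print([round(v, 2) for v in w])  # [1.96, 0.0]
```

The slight shrinkage of the first weight below the true value of 2 is the price of the L1 penalty; ridge regression, by contrast, shrinks all weights but zeroes none, which is why the paper's LASSO models use fewer features.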

  13. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  14. The many faces of a face: Comparing stills and videos of facial expressions in eight dimensions (SAVE database).

    Science.gov (United States)

    Garrido, Margarida V; Lopes, Diniz; Prada, Marília; Rodrigues, David; Jerónimo, Rita; Mourão, Rui P

    2017-08-01

    This article presents subjective rating norms for a new set of Stills And Videos of facial Expressions: the SAVE database. Twenty nonprofessional models were filmed while posing three different facial expressions (smile, neutral, and frown). After each pose, the models completed the PANAS questionnaire, reporting more positive affect after smiling and more negative affect after frowning. From the footage, stills and 5-s and 10-s videos were edited (total stimulus set = 180). A different sample of 120 participants evaluated the stimuli for attractiveness, arousal, clarity, genuineness, familiarity, intensity, valence, and similarity. Overall, facial expression had a main effect on all of the evaluated dimensions, with smiling models obtaining the highest ratings. Frowning expressions were perceived as more arousing, clearer, and more intense, but also as more negative, than neutral expressions. Stimulus presentation format only influenced the ratings of attractiveness, familiarity, genuineness, and intensity. Attractiveness and familiarity ratings increased with longer exposure times, whereas genuineness ratings decreased. Ratings on the several dimensions were correlated. The subjective norms for the facial stimuli presented in this article have potential applications for researchers in several domains. From our database, researchers may choose the most adequate stimulus presentation format for a particular experiment, select and manipulate the dimensions of interest, and control for the remaining dimensions. The full stimulus set and descriptive results (means, standard deviations, and confidence intervals) for each stimulus per dimension are provided as supplementary material.

  15. Intelligent Packet Shaper to Avoid Network Congestion for Improved Streaming Video Quality at Clients

    DEFF Research Database (Denmark)

    Kaul, Manohar; Khosla, Rajiv; Mitsukura, Y

    2003-01-01

    This paper proposes a traffic-shaping algorithm based on neural networks, which adapts to a network over which streaming video is being transmitted. The purpose of this intelligent shaper is to eradicate all traffic congestion and improve the end-user's video quality. It possesses the capability...... of this intelligent traffic-shaping algorithm on the underlying network's real-time packet traffic and the eradication of unwanted abruptions in the streaming video quality. The paper concludes from the end results of the simulation that neural networks are a very superior means of modeling real-time traffic......

  16. Investigating the quality of video consultations performed using fourth generation (4G) mobile telecommunications.

    Science.gov (United States)

    Caffery, Liam J; Smith, Anthony C

    2015-09-01

    The use of fourth-generation (4G) mobile telecommunications to provide real-time video consultations was investigated in this study, with two aims: to determine whether 4G is a suitable telecommunications technology, and to identify whether variations in perceived audio and video quality were due to underlying network performance. Three patient end-points that used 4G Internet connections were evaluated. Consulting clinicians recorded their perception of audio and video quality using the International Telecommunication Union scales during clinics with these patient end-points. These scores were used to calculate a mean opinion score (MOS). The network performance metrics were obtained for each session, and the relationships between these metrics and the session's quality scores were tested. Clinicians scored the quality of 50 hours of video consultations, involving 36 clinic sessions. The MOS for audio was 4.1 ± 0.62 and the MOS for video was 4.4 ± 0.22. Image impairment and effort to listen were also rated favourably. There was no correlation between audio or video quality and the network metrics of packet loss or jitter. These findings suggest that 4G networks are an appropriate telecommunications technology for delivering real-time video consultations. Variations in quality scores observed during this study were not explained by packet loss and jitter in the underlying network. Before establishing a telemedicine service, the performance of the 4G network should be assessed at the location of the proposed service, owing to the known variability in performance of 4G networks. © The Author(s) 2015.
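
The two quantities at the heart of such a study, the mean opinion score (reported above as, e.g., 4.1 ± 0.62) and the correlation between quality scores and network metrics, are straightforward to compute. A sketch with hypothetical session data:

```python
def mos(scores):
    """Mean opinion score with sample standard deviation (the 'm ± sd' form)."""
    m = sum(scores) / len(scores)
    sd = (sum((s - m) ** 2 for s in scores) / (len(scores) - 1)) ** 0.5
    return m, sd

def pearson(xs, ys):
    """Pearson correlation, used to test whether quality tracks loss/jitter."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Hypothetical per-session audio scores (1-5) and packet loss percentages.
audio = [4, 4, 5, 4, 3, 5, 4, 4]
loss  = [0.0, 0.1, 0.0, 0.2, 0.5, 0.0, 0.1, 0.3]
m, sd = mos(audio)
print(f"MOS = {m:.2f} +/- {sd:.2f}")
print(f"r(quality, loss) = {pearson(audio, loss):.2f}")
```

A correlation near zero, as the study found, means the observed score variation is not explained by the measured network metrics.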

  17. A model linking video gaming, sleep quality, sweet drinks consumption and obesity among children and youth.

    Science.gov (United States)

    Turel, O; Romashkin, A; Morrison, K M

    2017-08-01

    There is a growing need to curb paediatric obesity. The aim of this study is to untangle associations between video-game-use attributes and obesity as a first step towards identifying and examining possible interventions. Cross-sectional time-lagged cohort study was employed using parent-child surveys (t1) and objective physical activity and physiological measures (t2) from 125 children/adolescents (mean age = 13.06, 9-17-year-olds) who play video games, recruited from two clinics at a Canadian academic children's hospital. Structural equation modelling and analysis of covariance were employed for inference. The results of the study are as follows: (i) self-reported video-game play duration in the 4-h window before bedtime is related to greater abdominal adiposity (waist-to-height ratio) and this association may be mediated through reduced sleep quality (measured with the Pittsburgh Sleep Quality Index); and (ii) self-reported average video-game session duration is associated with greater abdominal adiposity and this association may be mediated through higher self-reported sweet drinks consumption while playing video games and reduced sleep quality. Video-game play duration in the 4-h window before bedtime, typical video-game session duration, sweet drinks consumption while playing video games and poor sleep quality have aversive associations with abdominal adiposity. Paediatricians and researchers should further explore how these factors can be altered through behavioural or pharmacological interventions as a means to reduce paediatric obesity. © 2017 World Obesity Federation.

  18. 2008 Niday Perinatal Database quality audit: report of a quality assurance project.

    Science.gov (United States)

    Dunn, S; Bottomley, J; Ali, A; Walker, M

    2011-12-01

    This quality assurance project was designed to determine the reliability, completeness and comprehensiveness of the data entered into the Niday Perinatal Database. Quality of the data was measured by comparing data re-abstracted from the patient record to the original data entered into the Niday Perinatal Database. A representative sample of hospitals in Ontario was selected and a random sample of 100 linked mother and newborn charts was audited for each site. A subset of 33 variables (representing 96 data fields) from the Niday dataset was chosen for re-abstraction. Of the data fields for which Cohen's kappa statistic or intraclass correlation coefficient (ICC) was calculated, 44% showed substantial or almost perfect agreement (beyond chance). However, about 17% showed less than 95% agreement and a kappa or ICC value of less than 60%, indicating only slight, fair or moderate agreement (beyond chance). Recommendations to improve the quality of these data fields are presented.
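
    The two agreement figures the audit reports, raw percent agreement and chance-corrected agreement (Cohen's kappa), can be sketched for a single categorical field as below; the chart values are invented for illustration:

```python
# Sketch of the audit's agreement statistic: Cohen's kappa comparing the
# originally entered value with the re-abstracted value for one field.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical codes."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical yes/no field audited on 10 charts
original   = ["y", "y", "n", "y", "n", "y", "y", "n", "y", "y"]
reabstract = ["y", "y", "n", "y", "y", "y", "y", "n", "y", "n"]
print(f"kappa = {cohens_kappa(original, reabstract):.2f}")
```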

  19. Video quality-of-service for consumer terminals : a novel system for programmable components

    NARCIS (Netherlands)

    Hentschel, C.; Bril, R.J.; Chen, Y.; Braspenning, R.A.C.; Lan, T-H.

    2003-01-01

    Future consumer terminals will be more and more based on programmable platforms instead of only dedicated hardware. Novel scalable video algorithm (SVA) software modules trade off resource usage against quality of the output signal. SVAs together with a strategy manager and a quality-of-service…

  20. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  1. Impairment-Factor-Based Audiovisual Quality Model for IPTV: Influence of Video Resolution, Degradation Type, and Content Type

    Directory of Open Access Journals (Sweden)

    Garcia MN

    2011-01-01

    Full Text Available This paper presents an audiovisual quality model for IPTV services. The model estimates the audiovisual quality of standard and high definition video as perceived by the user. The model is developed for applications such as network planning and packet-layer quality monitoring. It mainly covers audio and video compression artifacts and impairments due to packet loss. The quality tests conducted for model development demonstrate a mutual influence of the perceived audio and video quality, and the predominance of the video quality for the overall audiovisual quality. The balance between audio quality and video quality, however, depends on the content, the video format, and the audio degradation type. The proposed model is based on impairment factors which quantify the quality-impact of the different degradations. The impairment factors are computed from parameters extracted from the bitstream or packet headers. For high definition video, the model predictions show a correlation with unknown subjective ratings of 95%. For comparison, we have developed a more classical audiovisual quality model which is based on the audio and video qualities and their interaction. Both quality- and impairment-factor-based models are further refined by taking the content-type into account. At last, the different model variants are compared with modeling approaches described in the literature.

  2. Encryption for confidentiality of the network and influence of this to the quality of streaming video through network

    Science.gov (United States)

    Sevcik, L.; Uhrin, D.; Frnda, J.; Voznak, M.; Toral-Cruz, Homer; Mikulec, M.; Jakovlev, Sergej

    2015-05-01

    Nowadays, the interest in real-time services, like audio and video, is growing. These services are mostly transmitted over packet networks, which are based on the IP protocol, and analyses of these services and their behavior in such networks are becoming more frequent. Video has become a significant part of all data traffic sent via IP networks. In general, a video service is a one-way service (except, e.g., video calls) and network delay is not as important a factor as in a voice service. The dominant network factors that influence final video quality are packet loss, delay variation and the capacity of the transmission links. Analysis of video quality concentrates on the resistance of video codecs to packet loss in the network, which causes artefacts in the video. IPsec provides confidentiality, integrity and non-repudiation (using 3DES or AES in CBC mode for confidentiality and HMAC-SHA1 for authentication) with an authentication header and ESP (Encapsulating Security Payload). The paper brings a detailed view of the performance of video streaming over an IP-based network. We compared the quality of video under packet loss and under encryption as well. The measured results demonstrate the relation between the video codec type and bitrate and the final video quality.

  3. Design and Establishment of Quality Model of Fundamental Geographic Information Database

    Science.gov (United States)

    Ma, W.; Zhang, J.; Zhao, Y.; Zhang, P.; Dang, Y.; Zhao, T.

    2018-04-01

    In order to make the quality evaluation of Fundamental Geographic Information Databases (FGIDB) more comprehensive, objective and accurate, this paper studies and establishes a quality model of FGIDB, formed by the standardization of database construction and quality control, the conformity of data-set quality, and the functionality of the database management system. It also designs the overall principles, contents and methods of quality evaluation for FGIDB, providing a basis and reference for carrying out quality control and quality evaluation. The quality elements, evaluation items and properties of the FGIDB are designed step by step based on the quality model framework. Organically connected, these quality elements and evaluation items constitute the quality model of the FGIDB. This model is the foundation for stipulating quality requirements and for evaluating the quality of the FGIDB, and is of great significance for quality assurance in the design and development stage, requirement formulation in the testing and evaluation stage, and the construction of a standard system for FGIDB quality evaluation technology.

  4. Objective analysis of image quality of video image capture systems

    Science.gov (United States)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images have been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using it. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give…

  5. Concerns of Quality and Safety in Public Domain Surgical Education Videos: An Assessment of the Critical View of Safety in Frequently Used Laparoscopic Cholecystectomy Videos.

    Science.gov (United States)

    Deal, Shanley B; Alseidi, Adnan A

    2017-12-01

    Online videos are among the most common resources for case preparation. Using crowdsourcing, we evaluated the relationship between operative quality and viewing characteristics of online laparoscopic cholecystectomy videos. We edited 160 online videos of laparoscopic cholecystectomy to 60 seconds or less. Crowd workers (CW) rated videos using the Global Objective Assessment of Laparoscopic Skills (GOALS) and the critical view of safety (CVS) criteria, and assigned overall pass/fail ratings if CVS was achieved; linear mixed-effects models derived average ratings. Views, likes, dislikes, subscribers, and country were recorded for subset analysis of YouTube videos. The Spearman correlation coefficient (SCC) assessed correlation between performance measures. One video (0.06%) achieved a passing CVS score of ≥5; 23%, ≥4; 44%, ≥3; 79%, ≥2; and 100%, ≥1. Pass/fail ratings correlated to CVS, SCC 0.95. The average CVS and GOALS scores were no different for videos with >20,000 views (22%) compared with those with fewer views, highlighting concerns over the quality of online surgical videos of LC. Favorable characteristics, such as number of views or likes, do not translate to higher quality. Copyright © 2017 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
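
    The SCC reported here is a rank correlation: it asks whether videos that score higher on one measure also rank higher on the other, regardless of scale. A self-contained sketch with invented per-video scores (not the study's data):

```python
# Spearman rank correlation between two per-video quality measures,
# e.g. mean CVS score vs crowd pass rate. Values below are hypothetical.

def ranks(values):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the rank-transformed data."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

cvs_score = [1.0, 2.5, 3.0, 4.2, 4.8]   # hypothetical mean CVS per video
pass_rate = [0.0, 0.1, 0.3, 0.7, 0.9]   # hypothetical pass proportion per video
print(f"SCC = {spearman(cvs_score, pass_rate):.2f}")
```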

  6. Network analysis on skype end-to-end video quality

    NARCIS (Netherlands)

    Exarchakos, Georgios; Druda, Luca; Menkovski, Vlado; Liotta, Antonio

    2015-01-01

    Purpose – This paper aims to argue on the efficiency of Quality of Service (QoS)-based adaptive streaming with regards to perceived quality, i.e., Quality of Experience (QoE). Although QoS parameters are extensively used even by high-end adaptive streaming algorithms, the achieved QoE fails to justify their use…

  7. VQone MATLAB toolbox: A graphical experiment builder for image and video quality evaluations: VQone MATLAB toolbox.

    Science.gov (United States)

    Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka

    2016-03-01

    This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  8. Pathway — Using a State-of-the-Art Digital Video Database for Research and Development in Teacher Education

    Science.gov (United States)

    Adrian, Brian; Zollman, Dean; Stevens, Scott

    2006-02-01

    To demonstrate how state-of-the-art video databases can address issues related to the lack of preparation of many physics teachers, we have created the prototype Physics Teaching Web Advisory (Pathway). Pathway's Synthetic Interviews and related video materials are beginning to provide pre-service and out-of-field in-service teachers with much-needed professional development and well-prepared teachers with new perspectives on teaching physics. The prototype was limited to a demonstration of the systems. Now, with an additional grant we will extend the system and conduct research and evaluation on its effectiveness. This project will provide virtual expert help on issues of pedagogy and content. In particular, the system will convey, by example and explanation, contemporary ideas about the teaching of physics and applications of physics education research. The research effort will focus on the value of contemporary technology to address the continuing education of teachers who are teaching in a field in which they have not been trained.

  9. Management of speech and video telephony quality in heterogeneous wireless networks

    CERN Document Server

    Lewcio, Błażej

    2014-01-01

    This book shows how networking research and quality engineering can be combined to successfully manage the transmission quality when speech and video telephony is delivered in heterogeneous wireless networks. Nomadic use of services requires intelligent management of ongoing transmission, and to make the best of available resources many fundamental trade-offs must be considered. Network coverage versus throughput and reliability of a connection is one key aspect, efficiency versus robustness of signal compression is another. However, to successfully manage services, user-perceived Quality of Experience (QoE) in heterogeneous networks must be known, and the perception of quality changes must be understood.  These issues are addressed in this book, in particular focusing on the perception of quality changes due to switching between diverse networks, speech and video codecs, and encoding bit rates during active calls.

  10. Nonintrusive Method Based on Neural Networks for Video Quality of Experience Assessment

    Directory of Open Access Journals (Sweden)

    Diego José Luis Botia Valderrama

    2016-01-01

    Full Text Available The measurement and evaluation of QoE (Quality of Experience) have become one of the main focuses in telecommunications, in order to provide services with the quality their users expect. However, factors like the network parameters and codification can affect the quality of video, limiting the correlation between the objective and subjective metrics. This increases the complexity of evaluating the real quality of video perceived by users. In this paper, a model based on artificial neural networks such as BPNNs (Backpropagation Neural Networks) and RNNs (Random Neural Networks) is applied to evaluate the subjective quality metric MOS (Mean Opinion Score) and the PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural Similarity Index Metric), VQM (Video Quality Metric), and QIBF (Quality Index Based Frame). The proposed model allows establishing the QoS (Quality of Service) based on the Diffserv strategy. The metrics were analyzed through Pearson's and Spearman's correlation coefficients, RMSE (Root Mean Square Error), and outliers rate. Correlation values greater than 90% were obtained for all the evaluated metrics.
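
    Validation of a quality predictor against subjective scores, as in this record, typically boils down to a correlation coefficient plus an error term. A minimal sketch of the Pearson correlation and RMSE computations, on invented MOS data:

```python
# Pearson correlation and RMSE between a model's predicted quality and
# subjective MOS. The five video scores below are invented for illustration.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def rmse(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

subjective_mos = [2.1, 3.0, 3.8, 4.2, 4.6]
predicted_mos  = [2.3, 2.9, 3.9, 4.0, 4.7]
print(f"PCC  = {pearson(subjective_mos, predicted_mos):.3f}")
print(f"RMSE = {rmse(subjective_mos, predicted_mos):.3f}")
```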

  11. Quality Variation Control for Three-Dimensional Wavelet-Based Video Coders

    Directory of Open Access Journals (Sweden)

    Vidhya Seran

    2007-02-01

    Full Text Available The fluctuation of quality in time is a problem in motion-compensated temporal filtering (MCTF)-based video coding. The goal of this paper is to design a solution for overcoming the distortion fluctuation challenges faced by wavelet-based video coders. We propose a new technique for determining the number of bits to be allocated to each temporal subband in order to minimize the fluctuation in the quality of the reconstructed video. Also, the wavelet filter properties are explored to design suitable scaling coefficients with the objective of smoothing the temporal PSNR. The biorthogonal 5/3 wavelet filter is considered in this paper and experimental results are presented for 2D+t and t+2D MCTF wavelet coders.

  12. Quality Variation Control for Three-Dimensional Wavelet-Based Video Coders

    Directory of Open Access Journals (Sweden)

    Seran Vidhya

    2007-01-01

    Full Text Available The fluctuation of quality in time is a problem in motion-compensated temporal filtering (MCTF)-based video coding. The goal of this paper is to design a solution for overcoming the distortion fluctuation challenges faced by wavelet-based video coders. We propose a new technique for determining the number of bits to be allocated to each temporal subband in order to minimize the fluctuation in the quality of the reconstructed video. Also, the wavelet filter properties are explored to design suitable scaling coefficients with the objective of smoothing the temporal PSNR. The biorthogonal 5/3 wavelet filter is considered in this paper and experimental results are presented for 2D+t and t+2D MCTF wavelet coders.

  13. Modeling the Color Image and Video Quality on Liquid Crystal Displays with Backlight Dimming

    DEFF Research Database (Denmark)

    Korhonen, Jari; Mantel, Claire; Burini, Nino

    2013-01-01

    Objective image and video quality metrics focus mostly on the digital representation of the signal. However, the display characteristics are also essential for the overall Quality of Experience (QoE). In this paper, we use a model of a backlight dimming system for Liquid Crystal Displays (LCD) and show how the modeled image can be used as an input to quality assessment algorithms. For quality assessment, we propose an image quality metric based on Peak Signal-to-Noise Ratio (PSNR) computation in the CIE L*a*b* color space. The metric takes luminance reduction, color distortion and loss…
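
    A PSNR-style metric in CIE L*a*b* can be sketched as below. This is a simplification under stated assumptions: the images are assumed to be already converted to L*a*b* float triples, the peak is taken as 100 (the nominal L* range), and the pixel values are invented; it is not the paper's exact metric:

```python
# PSNR computed over per-pixel (L*, a*, b*) triples rather than RGB values.
# Inputs are assumed to be images already converted to CIE L*a*b*.
import math

def psnr_lab(ref, test, peak=100.0):
    """PSNR over flattened L*a*b* channels; higher means closer to the reference."""
    diffs = [(r - t) ** 2 for pr, pt in zip(ref, test) for r, t in zip(pr, pt)]
    mse = sum(diffs) / len(diffs)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

# Two tiny "images" as lists of (L*, a*, b*) pixels; values are invented.
reference = [(52.0, 10.0, -4.0), (70.0, -3.0, 12.0), (30.0, 0.0, 0.0)]
dimmed    = [(49.5, 10.0, -4.0), (66.5, -3.0, 12.0), (28.5, 0.0, 0.0)]  # L* reduced ~5%
print(f"PSNR_Lab = {psnr_lab(reference, dimmed):.1f} dB")
```

    Computing the error in L*a*b* rather than RGB is the design point here: distances in L*a*b* track perceived color differences more closely, so luminance reduction from dimming is penalized in a perceptually meaningful way.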

  14. EQUIP: A European Survey of Quality Criteria for the Evaluation of Databases.

    Science.gov (United States)

    Wilson, T. D.

    1998-01-01

    Reports on two stages of an investigation into the perceived quality of online databases. Presents data from 989 questionnaires from 600 database users in 12 European and Scandinavian countries and results of a test of the SERVQUAL methodology for identifying user expectations about database services. Lists statements used in the SERVQUAL survey.…

  15. Feasibility study and methodology to create a quality-evaluated database of primary care data

    Directory of Open Access Journals (Sweden)

    Alison Bourke

    2004-11-01

    Conclusions In the group of practices studied, levels of recording were generally assessed to be of sufficient quality to enable a database of quality-evaluated, anonymised primary care records to be created.

  16. The CIRDO Corpus: Comprehensive Audio/Video Database of Domestic Falls of Elderly People

    OpenAIRE

    Vacher, Michel; Bouakaz, Saida; Bobillier-Chaumon, Marc-Eric; Aman, F; Khan, Rizwan Ahmed; Bekkadja, S; Portet, François; Guillou, Erwan; Rossato, S; Lecouteux, Benjamin

    2016-01-01

    International audience; Ambient Assisted Living aims at enhancing the quality of life of older and disabled people at home through Smart Homes. In particular, for elderly people living alone at home, the detection of distress situations after a fall is very important to reassure this population. However, many studies do not include tests in real settings, because data collection in this domain is very expensive and challenging and because few data sets are available. The CIRDO corpus…

  17. ATM Quality of Service Tests for Digitized Video Using ATM Over Satellite: Laboratory Tests

    Science.gov (United States)

    Ivancic, William D.; Brooks, David E.; Frantz, Brian D.

    1997-01-01

    A digitized video application was used to help determine minimum quality of service parameters for asynchronous transfer mode (ATM) over satellite. For these tests, binomially distributed and other errors were digitally inserted in an intermediate frequency link via a satellite modem and a commercial gaussian noise generator. In this paper, the relationship between the ATM cell error and cell loss parameter specifications is discussed with regard to this application. In addition, the video-encoding algorithms, test configurations, and results are presented in detail.

  18. Expert system for quality control in the INIS database

    International Nuclear Information System (INIS)

    Todeschini, C.; Tolstenkov, A.

    1990-05-01

    An expert system developed to identify input items to INIS database with a high probability of containing errors is described. The system employs a Knowledge Base constructed by the interpretation of a large number of intellectual choices or expert decisions made by human indexers and incorporated in the INIS database. On the basis of the descriptor indexing, the system checks the correctness of the categorization. A notable feature of the system is its capability of self improvement by the continuous updating of the Knowledge Base. The expert system has also been found to be extremely useful in identifying documents with poor indexing. 3 refs, 9 figs

  19. Expert system for quality control in the INIS database

    Energy Technology Data Exchange (ETDEWEB)

    Todeschini, C; Tolstenkov, A [International Atomic Energy Agency, Vienna (Austria)]

    1990-05-01

    An expert system developed to identify input items to INIS database with a high probability of containing errors is described. The system employs a Knowledge Base constructed by the interpretation of a large number of intellectual choices or expert decisions made by human indexers and incorporated in the INIS database. On the basis of the descriptor indexing, the system checks the correctness of the categorization. A notable feature of the system is its capability of self improvement by the continuous updating of the Knowledge Base. The expert system has also been found to be extremely useful in identifying documents with poor indexing. 3 refs, 9 figs.

  20. Architectures for radio over fiber transmission of high-quality video and data signals

    DEFF Research Database (Denmark)

    Lebedev, Alexander

    …with a constraint on complexity. For wireless personal area network distribution, we explore the notion of joint optimization of the physical-layer parameters of a fiber-wireless link (optical power levels, wireless transmission distance) and the codec parameters (quantization, error-resilience tools), based on the peak signal-to-noise ratio as an objective video quality metric for compressed video transmission. Furthermore, we experimentally demonstrate uncompressed 1080i high-definition video distribution in V-band (50–75 GHz) and W-band (75–110 GHz) fiber-wireless links, achieving 3 m of wireless transmission… efficient wired/wireless backhaul of picocell networks. Gigabit signal transmission is realized in a combined fiber-wireless-fiber link, enabling simultaneous backhaul of dense metropolitan and suburban areas. In this Thesis, we propose a technique to combat periodic chromatic dispersion-induced radio…

  1. Subjective quality of video sequences rendered on LCD with local backlight dimming at different lighting conditions

    Science.gov (United States)

    Mantel, Claire; Korhonen, Jari; Pedersen, Jesper M.; Bech, Søren; Andersen, Jakob Dahl; Forchhammer, Søren

    2015-01-01

    This paper focuses on the influence of ambient light on the perceived quality of videos displayed on Liquid Crystal Display (LCD) with local backlight dimming. A subjective test assessing the quality of videos with two backlight dimming methods and three lighting conditions, i.e. no light, low light level (5 lux) and higher light level (60 lux) was organized to collect subjective data. Results show that participants prefer the method exploiting local dimming possibilities to the conventional full backlight but that this preference varies depending on the ambient light level. The clear preference for one method at the low light conditions decreases at the high ambient light, confirming that the ambient light significantly attenuates the perception of the leakage defect (light leaking through dark pixels). Results are also highly dependent on the content of the sequence, which can modulate the effect of the ambient light from having an important influence on the quality grades to no influence at all.

  2. A novel no-reference objective stereoscopic video quality assessment method based on visual saliency analysis

    Science.gov (United States)

    Yang, Xinyan; Zhao, Wei; Ye, Long; Zhang, Qin

    2017-07-01

    This paper proposes a no-reference objective stereoscopic video quality assessment method, with the motivation of bringing the results of objective experiments closer to those of subjective evaluation. We believe that image regions with different degrees of visual salience should not receive the same weights when designing an assessment metric. Therefore, we first apply the GBVS algorithm to each frame pair and separate both the left and right viewing images into regions with strong, general and weak saliency. In addition, local feature information such as blockiness, zero-crossing and depth is extracted and combined with a mathematical model to calculate a quality assessment score. Regions with different salient degrees are assigned different weights in the mathematical model. Experiment results demonstrate the superiority of our method compared with existing state-of-the-art no-reference objective stereoscopic video quality assessment methods.
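
    The weighting idea can be illustrated with a toy saliency-weighted pooling step: per-region quality scores are combined with larger weights for strongly salient regions. The weights and scores below are assumptions for illustration, not the paper's trained model:

```python
# Toy saliency-weighted pooling: regions classified by salient degree
# contribute to the frame score with different (hypothetical) weights.

def weighted_quality(region_scores, weights=None):
    """region_scores maps saliency class -> mean local quality in [0, 1]."""
    weights = weights or {"strong": 0.6, "general": 0.3, "weak": 0.1}
    total_w = sum(weights[c] for c in region_scores)
    return sum(weights[c] * q for c, q in region_scores.items()) / total_w

frame = {"strong": 0.55, "general": 0.80, "weak": 0.95}  # hypothetical local scores
print(f"frame quality = {weighted_quality(frame):.3f}")
```

    With these numbers the strongly salient regions dominate: even though the weakly salient regions score 0.95, the frame score is pulled down toward the 0.55 measured where viewers are most likely to look.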

  3. Quality of pharmaceutical care at the pharmacy counter: patients' experiences versus video observation.

    Science.gov (United States)

    Koster, Ellen S; Blom, Lyda; Overbeeke, Marloes R; Philbert, Daphne; Vervloet, Marcia; Koopman, Laura; van Dijk, Liset

    2016-01-01

    Consumer Quality Index questionnaires are used to assess quality of care from patients' experiences. To provide insight into the agreement about quality of pharmaceutical care, measured both by a patient questionnaire and video observations. Pharmaceutical encounters in four pharmacies were video-recorded. Patients completed a questionnaire based upon the Consumer Quality Index Pharmaceutical Care after the encounter containing questions about patients' experiences regarding information provision, medication counseling, and pharmacy staff's communication style. An observation protocol was used to code the recorded encounters. Agreement between video observation and patients' experiences was calculated. In total, 109 encounters were included for analysis. For the domains "medication counseling" and "communication style", agreement between patients' experiences and observations was very high (>90%). Less agreement (45%) was found for "information provision", which was rated more positive by patients compared to the observations, especially for the topic, encouragement of patients' questioning behavior. A questionnaire is useful to assess the quality of medication counseling and pharmacy staff's communication style, but might be less suitable to evaluate information provision and pharmacy staff's encouragement of patients' questioning behavior. Although patients may believe that they have received all necessary information to use their new medicine, some information on specific instructions was not addressed during the encounter. When using questionnaires to get insight into information provision, observations of encounters are very informative to validate the patient questionnaires and make necessary adjustments.
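
    The agreement figure reported per domain is a simple proportion: for each item, does the patient's questionnaire answer match what the coded video shows? A minimal sketch with invented yes/no answers (not the study's data):

```python
# Percent agreement between questionnaire answers and video-observation codes
# for one domain. The eight encounters below are hypothetical.

def percent_agreement(questionnaire, observation):
    pairs = list(zip(questionnaire, observation))
    return 100.0 * sum(q == o for q, o in pairs) / len(pairs)

patient_said = ["y", "y", "n", "y", "y", "y", "n", "y"]
video_showed = ["y", "n", "n", "y", "y", "n", "n", "y"]
print(f"agreement = {percent_agreement(patient_said, video_showed):.0f}%")
```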

  4. Quality of pharmaceutical care at the pharmacy counter: patients’ experiences versus video observation

    Science.gov (United States)

    Koster, Ellen S; Blom, Lyda; Overbeeke, Marloes R; Philbert, Daphne; Vervloet, Marcia; Koopman, Laura; van Dijk, Liset

    2016-01-01

    Introduction Consumer Quality Index questionnaires are used to assess quality of care from patients’ experiences. Objective To provide insight into the agreement about quality of pharmaceutical care, measured both by a patient questionnaire and video observations. Methods Pharmaceutical encounters in four pharmacies were video-recorded. Patients completed a questionnaire based upon the Consumer Quality Index Pharmaceutical Care after the encounter containing questions about patients’ experiences regarding information provision, medication counseling, and pharmacy staff’s communication style. An observation protocol was used to code the recorded encounters. Agreement between video observation and patients’ experiences was calculated. Results In total, 109 encounters were included for analysis. For the domains “medication counseling” and “communication style”, agreement between patients’ experiences and observations was very high (>90%). Less agreement (45%) was found for “information provision”, which was rated more positive by patients compared to the observations, especially for the topic, encouragement of patients’ questioning behavior. Conclusion A questionnaire is useful to assess the quality of medication counseling and pharmacy staff’s communication style, but might be less suitable to evaluate information provision and pharmacy staff’s encouragement of patients’ questioning behavior. Although patients may believe that they have received all necessary information to use their new medicine, some information on specific instructions was not addressed during the encounter. When using questionnaires to get insight into information provision, observations of encounters are very informative to validate the patient questionnaires and make necessary adjustments. PMID:27042025

  5. The BES III muon detector construction with the quality control database

    International Nuclear Information System (INIS)

    Yao Ning; Chinese Academy of Sciences, Beijing; Zheng Guoheng; Yang Lei; Zhang Jiawen; Han Jifeng; Xie Yuguang; Zhao Jianbing; Chen Jin

    2006-01-01

    Because of the characteristics of these software packages, the authors used MySQL, PHP and Apache to construct the quality control database. The authors describe the structure of the BES III muon detector and explain why the database had to be constructed, and present the results the database can deliver. Users can access the system through its web site, which retrieves data on request from the database and displays results in dynamically created images. The database is the transparent technical support platform for the maintenance of the detector. (authors)

  6. Constructing a no-reference H.264/AVC bitstream-based video quality metric using genetic programming-based symbolic regression

    OpenAIRE

    Staelens, Nicolas; Deschrijver, Dirk; Vladislavleva, E; Vermeulen, Brecht; Dhaene, Tom; Demeester, Piet

    2013-01-01

    In order to ensure optimal quality of experience toward end users during video streaming, automatic video quality assessment becomes an important field-of-interest to video service providers. Objective video quality metrics try to estimate perceived quality with high accuracy and in an automated manner. In traditional approaches, these metrics model the complex properties of the human visual system. More recently, however, it has been shown that machine learning approaches can also yield comp...

  7. Traffic and Quality Characterization of the H.264/AVC Scalable Video Coding Extension

    Directory of Open Access Journals (Sweden)

    Geert Van der Auwera

    2008-01-01

    Full Text Available The recent scalable video coding (SVC) extension to the H.264/AVC video coding standard has unprecedented compression efficiency while supporting a wide range of scalability modes, including temporal, spatial, and quality (SNR) scalability, as well as combined spatiotemporal SNR scalability. The traffic characteristics, especially the bit rate variabilities, of the individual layer streams critically affect their network transport. We study the SVC traffic statistics, including the bit rate-distortion and bit rate variability-distortion behavior, with long CIF resolution video sequences and compare them with the corresponding MPEG-4 Part 2 traffic statistics. We consider (i) temporal scalability with three temporal layers, (ii) spatial scalability with a QCIF base layer and a CIF enhancement layer, as well as (iii) quality scalability modes FGS and MGS. We find that the significant improvement in RD efficiency of SVC is accompanied by substantially higher traffic variabilities as compared to the equivalent MPEG-4 Part 2 streams. We find that separately analyzing the traffic of temporal-scalability-only encodings gives reasonable estimates of the traffic statistics of the temporal layers embedded in combined spatiotemporal encodings and in the base layer of combined FGS-temporal encodings. Overall, we find that SVC achieves significantly higher compression ratios than MPEG-4 Part 2, but produces unprecedented levels of traffic variability, thus presenting new challenges for the network transport of scalable video.
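
The bit rate variability discussed in this record can be quantified, for instance, by the coefficient of variation (CoV) of per-frame sizes. A minimal sketch, assuming hypothetical frame-size traces rather than the study's actual SVC data:

```python
import numpy as np

def bitrate_stats(frame_bytes, fps=30.0):
    """Mean bit rate (bit/s) and coefficient of variation from per-frame sizes in bytes."""
    sizes = np.asarray(frame_bytes, dtype=float)
    mean_rate = sizes.mean() * 8.0 * fps      # average bits per second
    cov = sizes.std() / sizes.mean()          # higher CoV means burstier traffic
    return mean_rate, cov

# Hypothetical traces: a bursty layered stream (large I-frames) vs. a smoother one
bursty = [12000, 800, 900, 11000, 700, 850, 11500, 750]
smooth = [4500, 4400, 4600, 4550, 4480, 4520, 4470, 4530]
_, cov_bursty = bitrate_stats(bursty)
_, cov_smooth = bitrate_stats(smooth)
```

A higher CoV for the bursty trace illustrates the kind of variability increase the authors report for SVC relative to MPEG-4 Part 2.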

  8. Quality-Based Backlight Optimization for Video Playback on Handheld Devices

    Directory of Open Access Journals (Sweden)

    Liang Cheng

    2007-01-01

    Full Text Available For a typical handheld device, the backlight accounts for a significant percentage of the total energy consumption (e.g., around 30% for a Compaq iPAQ 3650). Substantial energy savings can be achieved by dynamically adapting backlight intensity levels on such low-power portable devices. In this paper, we analyze the characteristics of video streaming services and propose a cross-layer optimization scheme called quality adapted backlight scaling (QABS) to achieve backlight energy savings for video playback applications on handheld devices. Specifically, we present a fast algorithm to optimize backlight dimming while keeping the degradation in image quality to a minimum so that the overall service quality is close to a specified threshold. Additionally, we propose two effective techniques to prevent frequent backlight switching, which negatively affects user perception of video. Our initial experimental results indicate that the energy used for backlight is significantly reduced, while the desired quality is satisfied. The proposed algorithms can be realized in real time.
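
The core trade-off of backlight dimming can be sketched with a simple transmittance-compensation model: dim the backlight to a fraction b, scale pixel values up by 1/b to compensate, and accept the clipping distortion in bright areas. This is an illustrative model with normalized pixels in [0, 1], not the QABS algorithm itself:

```python
import numpy as np

def clipping_distortion(frame, b):
    """MSE introduced by dimming the backlight to fraction `b` with pixel compensation."""
    compensated = np.clip(frame / b, 0.0, 1.0)   # brighten pixels, clip at display maximum
    restored = compensated * b                   # luminance actually shown to the viewer
    return float(np.mean((frame - restored) ** 2))

def min_backlight(frame, max_mse=1e-4, levels=np.linspace(0.3, 1.0, 15)):
    """Lowest backlight level keeping distortion under `max_mse` (linear scan)."""
    for b in levels:
        if clipping_distortion(frame, b) <= max_mse:
            return float(b)
    return 1.0
```

In a real player one would also add a hysteresis band around the chosen level, in the spirit of the paper's anti-switching techniques, so that small frame-to-frame changes do not toggle the backlight.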

  9. Quality of pharmaceutical care at the pharmacy counter: patients’ experiences versus video observation

    Directory of Open Access Journals (Sweden)

    Koster ES

    2016-03-01

    Full Text Available Ellen S Koster,1 Lyda Blom,1 Marloes R Overbeeke,1 Daphne Philbert,1 Marcia Vervloet,2 Laura Koopman,2,3 Liset van Dijk2 1Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht University, the Netherlands; 2Netherlands Institute of Health Services Research (NIVEL), Utrecht, the Netherlands; 3National Health Care Institute, Diemen, the Netherlands. Introduction: Consumer Quality Index questionnaires are used to assess quality of care from patients’ experiences. Objective: To provide insight into the agreement about quality of pharmaceutical care, measured both by a patient questionnaire and video observations. Methods: Pharmaceutical encounters in four pharmacies were video-recorded. Patients completed a questionnaire based upon the Consumer Quality Index Pharmaceutical Care after the encounter containing questions about patients’ experiences regarding information provision, medication counseling, and pharmacy staff’s communication style. An observation protocol was used to code the recorded encounters. Agreement between video observation and patients’ experiences was calculated. Results: In total, 109 encounters were included for analysis. For the domains “medication counseling” and “communication style”, agreement between patients’ experiences and observations was very high (>90%). Less agreement (45%) was found for “information provision”, which was rated more positively by patients compared to the observations, especially for the topic of encouraging patients’ questioning behavior. Conclusion: A questionnaire is useful to assess the quality of medication counseling and pharmacy staff’s communication style, but might be less suitable to evaluate information provision and pharmacy staff’s encouragement of patients’ questioning behavior. Although patients may believe that they have received all necessary information to use their new medicine, some information on specific instructions was not addressed during the encounter.

  10. Nationwide quality improvement of cholecystectomy: results from a national database

    DEFF Research Database (Denmark)

    Harboe, Kirstine M; Bardram, Linda

    2011-01-01

    To evaluate whether quality improvements in the performance of cholecystectomy have been achieved in Denmark since 2006, after revision of the Danish National Guidelines for treatment of gallstones.

  11. Air Quality Modelling and the National Emission Database

    DEFF Research Database (Denmark)

    Jensen, S. S.

    The project focuses on development of institutional strengthening to be able to carry out national air emission inventories based on the CORINAIR methodology. The present report describes the link between emission inventories and air quality modelling to ensure that the new national air emission inventory is able to take into account the data requirements of air quality models.

  12. Task-oriented quality assessment and adaptation in real-time mission critical video streaming applications

    Science.gov (United States)

    Nightingale, James; Wang, Qi; Grecos, Christos

    2015-02-01

    In recent years video traffic has become the dominant application on the Internet with global year-on-year increases in video-oriented consumer services. Driven by improved bandwidth in both mobile and fixed networks, steadily reducing hardware costs and the development of new technologies, many existing and new classes of commercial and industrial video applications are now being upgraded or emerging. Some of the use cases for these applications include areas such as public and private security monitoring for loss prevention or intruder detection, industrial process monitoring and critical infrastructure monitoring. The use of video is becoming commonplace in defence, security, commercial, industrial, educational and health contexts. Towards optimal performance, the design or optimisation in each of these applications should be context aware and task oriented, with the characteristics of the video stream (frame rate, spatial resolution, bandwidth etc.) chosen to match the use case requirements. For example, in the security domain, a task-oriented consideration may be that higher resolution video would be required to identify an intruder than to simply detect his presence. In the same case, contextual factors such as the requirement to transmit over a resource-limited wireless link may impose constraints on the selection of optimum task-oriented parameters. This paper presents a novel, conceptually simple and easily implemented method of assessing video quality relative to its suitability for a particular task and dynamically adapting video streams during transmission to ensure that the task can be successfully completed. Firstly we defined two principal classes of tasks: recognition tasks and event detection tasks. These task classes are further subdivided into a set of task-related profiles, each of which is associated with a set of task-oriented attributes (minimum spatial resolution, minimum frame rate etc.). For example, in the detection class
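
The profile-based selection described in this record can be illustrated with a small sketch. The profile names, thresholds and encoding ladder below are hypothetical, not taken from the paper:

```python
# Hypothetical task profiles: minimum attributes required for the task to succeed
PROFILES = {
    "detect_event":   {"min_width": 320,  "min_fps": 5},
    "recognize_face": {"min_width": 1280, "min_fps": 15},
}

# Available encodings of the stream: (width, fps, bitrate in kbps)
LADDER = [
    (320, 5, 150), (640, 15, 600), (1280, 15, 1800), (1920, 30, 4500),
]

def adapt(task, available_kbps):
    """Highest-bitrate encoding that fits the link and still satisfies the task profile.
    Returns None when no encoding can meet the task's minimum requirements."""
    p = PROFILES[task]
    feasible = [e for e in LADDER
                if e[0] >= p["min_width"] and e[1] >= p["min_fps"]
                and e[2] <= available_kbps]
    return max(feasible, key=lambda e: e[2]) if feasible else None
```

The None case captures the paper's point that contextual constraints (a resource-limited wireless link) can make a task such as recognition temporarily infeasible even when detection remains possible.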

  13. Subjective quality of video sequences rendered on LCD with local backlight dimming at different lighting conditions

    DEFF Research Database (Denmark)

    Mantel, Claire; Korhonen, Jari; Pedersen, Jesper Mørkhøj

    2015-01-01

    This paper focuses on the influence of ambient light on the perceived quality of videos displayed on Liquid Crystal Display (LCD) with local backlight dimming. A subjective test assessing the quality of videos with two backlight dimming methods and three lighting conditions, i.e. no light, low light level (5 lux) and higher light level (60 lux), was organized to collect subjective data. Results show that participants prefer the method exploiting local dimming possibilities to the conventional full backlight, but that this preference varies depending on the ambient light level. The clear preference for one method at the low light conditions decreases at the high ambient light, confirming that the ambient light significantly attenuates the perception of the leakage defect (light leaking through dark pixels). Results are also highly dependent on the content of the sequence, which can modulate...

  14. Quality optimization of H.264/AVC video transmission over noisy environments using a sparse regression framework

    Science.gov (United States)

    Pandremmenou, K.; Tziortziotis, N.; Paluri, S.; Zhang, W.; Blekas, K.; Kondi, L. P.; Kumar, S.

    2015-03-01

    We propose the use of the Least Absolute Shrinkage and Selection Operator (LASSO) regression method in order to predict the Cumulative Mean Squared Error (CMSE), incurred by the loss of individual slices in video transmission. We extract a number of quality-relevant features from the H.264/AVC video sequences, which are given as input to the LASSO. This method has the benefit of not only keeping a subset of the features that have the strongest effects on video quality, but also producing accurate CMSE predictions. In particular, we study the LASSO regression through two different architectures: the Global LASSO (G.LASSO) and the Local LASSO (L.LASSO). In G.LASSO, a single regression model is trained for all slice types together, while in L.LASSO, motivated by the fact that the values of some features depend closely on the considered slice type, each slice type has its own regression model, in an effort to improve LASSO's prediction capability. Based on the predicted CMSE values, we group the video slices into four priority classes. Additionally, we consider a video transmission scenario over a noisy channel, where Unequal Error Protection (UEP) is applied to all prioritized slices. The provided results demonstrate the efficiency of LASSO in estimating CMSE with high accuracy, using only a few features.
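
The feature-selection behavior that lets LASSO keep only the strongest predictors comes from soft-thresholding. A minimal coordinate-descent sketch, with synthetic features standing in for the paper's H.264/AVC slice features (this is an illustrative stand-in, not the G.LASSO/L.LASSO implementation):

```python
import numpy as np

def lasso_cd(X, y, alpha=0.05, n_iter=200):
    """Coordinate-descent LASSO: minimizes (1/2n)||y - Xw||^2 + alpha*||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]                 # residual excluding feature j
            rho = X[:, j] @ r / n
            z = (X[:, j] ** 2).sum() / n
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z   # soft-thresholding
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))                          # 6 candidate features
cmse = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.01 * rng.standard_normal(200)  # only 2 matter
w = lasso_cd(X, cmse)

# Slices can then be grouped into four priority classes by quantiles of predicted CMSE
pred = X @ w
classes = np.digitize(pred, np.quantile(pred, [0.25, 0.5, 0.75]))
```

The irrelevant coefficients are driven to (near) zero, mirroring how LASSO retains only a few quality-relevant features.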

  15. Modeling the Subjective Quality of Highly Contrasted Videos Displayed on LCD With Local Backlight Dimming

    DEFF Research Database (Denmark)

    Mantel, Claire; Bech, Søren; Korhonen, Jari

    2015-01-01

    Local backlight dimming is a technology aiming at both saving energy and improving visual quality on television sets. As the rendition of the image is specified locally, the numerical signal corresponding to the displayed image needs to be computed through a model of the display. This simulated signal can then be used as input to objective quality metrics. The focus of this paper is on determining which characteristics of locally backlit displays influence quality assessment. A subjective experiment assessing the quality of highly contrasted videos displayed with various local backlight-dimming algorithms is set up. Subjective results are then compared with both objective measures and objective quality metrics using different display models. The first analysis indicates that the most significant objective features are temporal variations, power consumption (probably representing leakage...

  16. Expert system for quality control in bibliographic databases

    International Nuclear Information System (INIS)

    Todeschini, C.; Farrell, M.P.

    1989-01-01

    An Expert System is presented that can identify errors in the intellectual decisions made by indexers when categorizing documents into an a priori category scheme. The system requires the compilation of a Knowledge Base that incorporates, in statistical form, the decisions on the linking of indexing and categorization derived from a preceding period of the bibliographic database. New input entering the database is checked against the Knowledge Base, using the descriptor indexing assigned to each record, and the system computes a value for the match of each record with the particular category chosen by the indexer. This category match value is used as a criterion for identifying those documents that have been erroneously categorized. The system was tested on a large sample of almost 26,000 documents, representing all the literature falling into ten of the subject categories of the Energy Data Base during the five-year period 1980-1984. For valid comparisons among categories, the Knowledge Base must be constructed with an approximately equal number of unique descriptors for each subject category. The system identified those items with high probability of having been erroneously categorized. These items, constituting up to 5% of the sample, were evaluated manually by subject specialists for correct categorization and then compared with the results of the Expert System. Of those pieces of literature deemed by the system to be erroneously categorized, about 75% did indeed belong to a different category. This percentage, however, depends on the level at which the threshold on the category match value is set. With a lower threshold value, the percentage can be raised to 90%, but this is accompanied by a lowering of the absolute number of wrongly categorized records caught by the system. The Expert System can be considered a first step toward a complete semiautomatic categorizing system.
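
The category match value described in this record can be sketched in a few lines: build descriptor-to-category frequencies from past indexing decisions, then score a new record's chosen category by how often its descriptors historically fell into that category. The descriptors and categories below are toy data standing in for the Energy Data Base knowledge base:

```python
from collections import defaultdict

def build_knowledge_base(records):
    """records: iterable of (descriptors, category) pairs from past indexing decisions."""
    kb = defaultdict(lambda: defaultdict(int))
    for descriptors, category in records:
        for d in descriptors:
            kb[d][category] += 1
    return kb

def category_match(kb, descriptors, category):
    """Mean fraction of past uses of each descriptor that fell into `category`.
    Low values flag records that were probably miscategorized."""
    scores = []
    for d in descriptors:
        total = sum(kb[d].values())
        if total:
            scores.append(kb[d][category] / total)
    return sum(scores) / len(scores) if scores else 0.0

# Toy knowledge base (hypothetical descriptors and categories)
history = [({"solar cells", "photovoltaics"}, "SOLAR"),
           ({"solar cells"}, "SOLAR"),
           ({"fuel rods", "reactors"}, "FISSION")]
kb = build_knowledge_base(history)
score = category_match(kb, {"fuel rods"}, "SOLAR")   # low score: likely miscategorized
```

Records whose score falls below a chosen threshold would be routed to a human reviewer, which is the role the threshold on the category match value plays in the paper.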

  17. The Quality of Open-Access Video-Based Orthopaedic Instructional Content for the Shoulder Physical Exam is Inconsistent.

    Science.gov (United States)

    Urch, Ekaterina; Taylor, Samuel A; Cody, Elizabeth; Fabricant, Peter D; Burket, Jayme C; O'Brien, Stephen J; Dines, David M; Dines, Joshua S

    2016-10-01

    The internet has an increasing role in both patient and physician education. While several recent studies critically appraised the quality and accuracy of web-based written information available to patients, no studies have evaluated such parameters for open-access video content designed for provider use. The primary goal of the study was to determine the accuracy of internet-based instructional videos featuring the shoulder physical examination. An assessment of quality and accuracy of said video content was performed using the basic shoulder examination as a surrogate for the "best-case scenario" due to its widely accepted components that are stable over time. Three search terms ("shoulder," "examination," and "shoulder exam") were entered into the four online video resources most commonly accessed by orthopaedic surgery residents (VuMedi, G9MD, Orthobullets, and YouTube). Videos were captured and independently reviewed by three orthopaedic surgeons. Quality and accuracy were assessed in accordance with previously published standards. Of the 39 video tutorials reviewed, 61% were rated as fair or poor. Specific maneuvers such as the Hawkins test, O'Brien sign, and Neer impingement test were accurately demonstrated in 50, 36, and 27% of videos, respectively. Inter-rater reliability was excellent (mean kappa 0.80, range 0.79-0.81). Our results suggest that information presented in open-access video tutorials featuring the physical examination of the shoulder is inconsistent. Trainee exposure to such potentially inaccurate information may have a significant impact on trainee education.

  18. A blinded assessment of video quality in wearable technology for telementoring in open surgery: the Google Glass experience.

    Science.gov (United States)

    Hashimoto, Daniel A; Phitayakorn, Roy; Fernandez-del Castillo, Carlos; Meireles, Ozanan

    2016-01-01

    The goal of telementoring is to recreate face-to-face encounters with a digital presence. Open-surgery telementoring is limited by the lack of surgeon's point-of-view cameras. Google Glass is a wearable computer that looks like a pair of glasses but is equipped with wireless connectivity, a camera, and a viewing screen for video conferencing. This study aimed to assess the safety of using Google Glass by assessing the video quality of a telementoring session. Thirty-four (n = 34) surgeons at a single institution were surveyed, blindly comparing video captured with Google Glass versus an Apple iPhone 5 during the open cholecystectomy portion of a Whipple procedure. Surgeons were asked to evaluate the quality of the video and its adequacy for safe use in telementoring. Thirty-four of 107 invited surgical attendings (32%) responded to the anonymous survey. A total of 50% rated the Google Glass video as fair, with the other 50% rating it as bad to poor. A total of 52.9% of respondents rated the Apple iPhone video as good. A significantly greater proportion of respondents felt Google Glass video quality was inadequate for telementoring versus the Apple iPhone's (82.4 vs 26.5%), suggesting that the current video quality is not yet sufficient for safe telementoring. As the device is still in initial phases of development, future iterations or competitor devices may provide a better telementoring application for wearable devices.

  19. Quality standards for DNA sequence variation databases to improve clinical management under development in Australia

    Directory of Open Access Journals (Sweden)

    B. Bennetts

    2014-09-01

    Full Text Available Despite the routine nature of comparing sequence variations identified during clinical testing to database records, few databases meet quality requirements for clinical diagnostics. To address this issue, The Royal College of Pathologists of Australasia (RCPA), in collaboration with the Human Genetics Society of Australasia (HGSA) and the Human Variome Project (HVP), is developing standards for DNA sequence variation databases intended for use in the Australian clinical environment. The outputs of this project will be promoted to other health systems and accreditation bodies by the Human Variome Project to support the development of similar frameworks in other jurisdictions.

  20. Preliminary study on effects of 60Co γ-irradiation on video quality and the image de-noising methods

    International Nuclear Information System (INIS)

    Yuan Mei; Zhao Jianbin; Cui Lei

    2011-01-01

    Variable noise appears in video images once the playback device is irradiated by γ-rays, degrading image clarity. To eliminate this image noise, the mechanism by which γ-irradiation affects the video playback device was studied in this paper, and methods to improve the image quality with both hardware and software were proposed, using a protection program and a de-noising algorithm. The experimental results show that the video de-noising scheme based on hardware and software can effectively improve the PSNR by 87.5 dB. (authors)

  1. Modeling the Quality of Videos Displayed With Local Dimming Backlight at Different Peak White and Ambient Light Levels

    DEFF Research Database (Denmark)

    Mantel, Claire; Søgaard, Jacob; Bech, Søren

    2016-01-01

    This paper investigates the impact of ambient light and peak white (maximum brightness of a display) on the perceived quality of videos displayed using local backlight dimming. Two subjective tests providing quality evaluations are presented and analyzed. The analyses of variance show significant... ...is computed using a model of the display. Widely used objective quality metrics are applied based on the rendering models of the videos to predict the subjective evaluations. As these predictions are not satisfying, three machine learning methods are applied: partial least square regression, elastic net...

  2. Historical return on investment and improved quality resulting from development and mining of a hospital laboratory relational database.

    Science.gov (United States)

    Brimhall, Bradley B; Hall, Timothy E; Walczak, Steven

    2006-01-01

    A hospital laboratory relational database, developed over eight years, has demonstrated significant cost savings and a substantial financial return on investment (ROI). In addition, the database has been used to measurably improve laboratory operations and the quality of patient care.

  3. Multi-frame super-resolution with quality self-assessment for retinal fundus videos.

    Science.gov (United States)

    Köhler, Thomas; Brost, Alexander; Mogalle, Katja; Zhang, Qianyi; Köhler, Christiane; Michelson, Georg; Hornegger, Joachim; Tornow, Ralf P

    2014-01-01

    This paper proposes a novel super-resolution framework to reconstruct high-resolution fundus images from multiple low-resolution video frames in retinal fundus imaging. Natural eye movements during an examination are used as a cue for super-resolution in a robust maximum a-posteriori scheme. In order to compensate for heterogeneous illumination on the fundus, we integrate retrospective illumination correction for photometric registration into the underlying imaging model. Our method utilizes quality self-assessment to provide objective quality scores for reconstructed images as well as to select regularization parameters automatically. In our evaluation on real data acquired from six human subjects with a low-cost video camera, the proposed method achieved considerable enhancement of low-resolution frames and improved noise and sharpness characteristics by 74%. In terms of image analysis, we demonstrate the importance of our method for the improvement of automatic blood vessel segmentation as an example application, where the sensitivity was increased by 13% using super-resolution reconstruction.

  4. Effects of music and music video interventions on sleep quality: A randomized controlled trial in adults with sleep disturbances.

    Science.gov (United States)

    Huang, Chiung-Yu; Chang, En-Ting; Hsieh, Yuan-Mei; Lai, Hui-Ling

    2017-10-01

    The present study aimed to compare the effects of music and music video interventions on objective and subjective sleep quality in adults with sleep disturbances. A randomized controlled trial was performed on 71 adults who were recruited from the outpatient department of a hospital with 1100 beds and randomly assigned to the control, music, and music video groups. During the 4 test days (Days 2-5), for 30 min before nocturnal sleep, the music group listened to Buddhist music and the music video group watched Buddhist music videos. They were instructed not to listen to the music or watch the music videos on the first night (pretest, Day 1) and the final night (Day 6). The control group received no intervention. Sleep was assessed using a one-channel electroencephalography machine in their homes and self-reported questionnaires. The music and music video interventions had no effect on any objective sleep parameters, as measured using electroencephalography. However, the music group had significantly longer subjective total sleep time than the music video group did (Wald χ²=6.23, p=0.04). Our study results increase knowledge regarding music interventions for sleep quality in adults with sleep disturbances. This study suggested that more research is required to strengthen the scientific knowledge of the effects of music intervention on sleep quality in adults with sleep disturbances. (ISRCTN94971645). Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Benford's Law for Quality Assurance of Manner of Death Counts in Small and Large Databases.

    Science.gov (United States)

    Daniels, Jeremy; Caetano, Samantha-Jo; Huyer, Dirk; Stephen, Andrew; Fernandes, John; Lytwyn, Alice; Hoppe, Fred M

    2017-09-01

    To assess whether Benford's law, a mathematical law used for quality assurance in accounting, can be applied as a quality assurance measure for the manner of death determination. We examined a regional forensic pathology service's monthly manner of death counts (N = 2352) from 2011 to 2013, and provincial monthly and weekly death counts from 2009 to 2013 (N = 81,831). We tested whether each dataset's leading digits followed Benford's law via the chi-square test. For each database, we assessed whether the number 1 was the most common leading digit. The manner of death counts' first digits followed Benford's law in all three datasets. Two of the three datasets had 1 as the most frequent leading digit. The manner of death data in this study showed qualities consistent with Benford's law. The law has potential as a quality assurance metric in the manner of death determination for both small and large databases. © 2017 American Academy of Forensic Sciences.
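
The leading-digit test used in this record can be reproduced in a few lines: tally the first digits of the counts and compare them to the Benford distribution with a chi-square statistic. The monthly counts below are made up for illustration, not the study's data:

```python
import math
from collections import Counter

def benford_chi_square(counts):
    """Chi-square statistic comparing the leading digits of `counts` to Benford's law."""
    digits = [int(str(c)[0]) for c in counts if c > 0]
    observed = Counter(digits)
    n = len(digits)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)   # Benford probability of digit d, times n
        chi2 += (observed.get(d, 0) - expected) ** 2 / expected
    return chi2

# Hypothetical monthly manner-of-death counts
sample = [112, 97, 130, 21, 154, 18, 110, 13, 162, 19, 143, 25]
stat = benford_chi_square(sample)
# Compare `stat` to the chi-square critical value with 8 degrees of freedom
# (15.51 at the 0.05 significance level); larger values reject conformance.
```

A dataset whose statistic stays below the critical value is consistent with Benford's law, which is the quality signal the authors propose.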

  6. Development of a quality assurance safety assessment database for near surface radioactive waste disposal

    International Nuclear Information System (INIS)

    Park, J. W.; Kim, C. L.; Park, J. B.; Lee, E. Y.; Lee, Y. M.; Kang, C. H.; Zhou, W.; Kozak, M. W.

    2003-01-01

    A quality assurance safety assessment database, called QUARK (QUality Assurance program for Radioactive waste management in Korea), has been developed to manage both analysis information and the parameter database for safety assessment of the Low- and Intermediate-Level radioactive Waste (LILW) disposal facility in Korea. QUARK serves QA purposes by managing safety assessment information properly and securely. In QUARK, the information is organized and linked to maximize the integrity of information and traceability. QUARK provides guidance for conducting safety assessment analysis, from scenario generation to result analysis, and provides a window to inspect and trace previous safety assessment analyses and parameter values. QUARK also provides a default database for safety assessment staff who construct input data files using SAGE (Safety Assessment Groundwater Evaluation), a safety assessment computer code.

  7. Subscribing to Databases: How Important Is Depth and Quality of Indexing?

    Science.gov (United States)

    Delong, Linwood

    2007-01-01

    This paper compares the subject indexing on articles pertaining to Immanuel Kant, agriculture, and aging that are found simultaneously in Humanities Index, Academic Search Elite (EBSCO) and Periodicals Research II (Micromedia ProQuest), in order to show that there are substantial variations in the depth and quality of indexing in these databases.…

  8. Can student-produced video transform university teaching?

    DEFF Research Database (Denmark)

    2011-01-01

    as preparation for the two week intensive field course. The overall objective of the redesign was to modernize and improve the quality of the students' learning experience by exploring the potential of video and online tools to create flexible, student-centered and student-activating education. The students produced three types of videos during the course: Video 1 was independently produced by the students, guided by online tasks and instructions. These videos were student-produced learning materials, showing cases from all over Europe. The videos were collected and presented in a "visual database" in Google...

  9. A Quality-Control-Oriented Database for a Mesoscale Meteorological Observation Network

    Science.gov (United States)

    Lussana, C.; Ranci, M.; Uboldi, F.

    2012-04-01

    In the operational context of a local weather service, data accessibility and quality related issues must be managed by taking into account a wide set of user needs. This work describes the structure of, and the choices made for, the operational implementation of a database system storing data from highly automated observing stations, together with metadata and information on data quality. Lombardy's environmental protection agency, ARPA Lombardia, manages a highly automated mesoscale meteorological network. A Quality Assurance System (QAS) ensures that reliable observational information is collected and disseminated to the users. The weather unit in ARPA Lombardia, at the same time an important QAS component and an intensive data user, has developed a database specifically aimed at: 1) providing quick access to data for operational activities and 2) ensuring data quality for real-time applications, by means of an Automatic Data Quality Control (ADQC) procedure. Quantities stored in the archive include hourly aggregated observations of: precipitation amount, temperature, wind, relative humidity, pressure, and global and net solar radiation. The ADQC performs several independent tests on raw data and compares their results in a decision-making procedure. An important ADQC component is the Spatial Consistency Test based on Optimal Interpolation. Interpolated and Cross-Validation analysis values are also stored in the database, providing further information to human operators and useful estimates in case of missing data. The technical solution adopted is based on a LAMP (Linux, Apache, MySQL and Php) system, constituting an open source environment suitable for both development and operational practice. The ADQC procedure itself is performed by R scripts directly interacting with the MySQL database. Users and network managers can access the database by using a set of web-based Php applications.
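
A drastically simplified version of such a spatial consistency check can be sketched by comparing each station against a leave-one-out estimate from its neighbors. Inverse-distance weighting is used here as an illustrative stand-in for the Optimal Interpolation the ADQC actually employs, and the stations and temperatures are hypothetical:

```python
import numpy as np

def spatial_consistency_flags(xy, obs, tol=3.0):
    """Flag station observations that deviate from a leave-one-out neighbor estimate.
    Inverse-distance weighting stands in for Optimal Interpolation (simplified)."""
    xy, obs = np.asarray(xy, float), np.asarray(obs, float)
    flags = []
    for i in range(len(obs)):
        d = np.linalg.norm(xy - xy[i], axis=1)
        w = np.where(d > 0, 1.0 / np.maximum(d, 1e-9) ** 2, 0.0)  # station i gets weight 0
        est = (w * obs).sum() / w.sum()      # cross-validation style analysis value
        flags.append(bool(abs(obs[i] - est) > tol))
    return flags

# Hypothetical temperatures (deg C) at four stations; the last one is an outlier
stations = [(0, 0), (1, 0), (0, 1), (5, 5)]
temps = [10.0, 10.5, 9.8, 25.0]
flags = spatial_consistency_flags(stations, temps)
```

As in the ADQC, the leave-one-out estimate doubles as a replacement value when an observation is missing or flagged.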

  10. Quality control in diagnostic radiology: software (Visual Basic 6) and database applications

    International Nuclear Information System (INIS)

    Md Saion Salikin; Muhammad Farid Abdul Khalid

    2002-01-01

    A Quality Assurance programme in diagnostic radiology is being implemented by the Ministry of Health (MoH) in Malaysia. Under this programme the performance of an x-ray machine used for diagnostic purposes is tested using an approved procedure, commonly known as quality control in diagnostic radiology. The quality control or performance tests are carried out by a class H licence holder issued under the Atomic Energy Licensing Act 1984. There are a few computer applications (software) available in the market which can be used for this purpose. A computer application (software) using Visual Basic 6 and Microsoft Access is being developed to expedite data handling, analysis and storage as well as report writing for the quality control tests. In this paper important features of the software for quality control tests are explained in brief. A simple database has been established for this purpose and linked to the software. Problems encountered in the preparation of the database are discussed in this paper. A few examples of practical usage of the software and database applications are presented in brief. (Author)

  11. Improving Indicators in a Brazilian Hospital Through Quality-Improvement Programs Based on STS Database Reports

    Directory of Open Access Journals (Sweden)

    Pedro Gabriel Melo de Barros e Silva

    2015-12-01

    Full Text Available ABSTRACT OBJECTIVE: To report the initial changes after quality-improvement programs based on the STS database in a Brazilian hospital. METHODS: In 2011 a Brazilian hospital joined the STS Database, and in 2012 multifaceted actions based on STS reports were implemented, aiming at reductions in the time on mechanical ventilation and in intensive care stay, as well as improvements in evidence-based perioperative therapies among patients who underwent coronary artery bypass graft surgery. RESULTS: All 947 patients who underwent coronary artery bypass graft surgery from July 2011 to June 2014 were analyzed, and there was an improvement in all three target endpoints after the implementation of the quality-improvement program, although the reduction in time on mechanical ventilation was not statistically significant after adjusting for prognostic characteristics. CONCLUSION: The initial experience with the STS registry in a Brazilian hospital was associated with improvement in most of the targeted quality indicators.

  12. Development of a data entry auditing protocol and quality assurance for a tissue bank database.

    Science.gov (United States)

    Khushi, Matloob; Carpenter, Jane E; Balleine, Rosemary L; Clarke, Christine L

    2012-03-01

    Human transcription error is an acknowledged risk when extracting information from paper records for entry into a database. For a tissue bank, it is critical that accurate data are provided to researchers with approved access to tissue bank material. The challenges of tissue bank data collection include manual extraction of data from complex medical reports that are accessed from a number of sources and that differ in style and layout. As a quality assurance measure, the Breast Cancer Tissue Bank (http://www.abctb.org.au) has implemented an auditing protocol and, in order to execute the process efficiently, has developed an open source database plug-in tool (eAuditor) to assist in auditing data held in our tissue bank database. Using eAuditor, we have identified that human entry errors range from 0.01% when entering donors' clinical follow-up details to 0.53% when entering pathological details, highlighting the importance of an audit protocol tool such as eAuditor in a tissue bank database. eAuditor was developed and tested on the Caisis open source clinical-research database; however, it can be integrated into other databases where similar functionality is required.
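
The core audit computation described above, comparing database entries against values re-extracted from source reports and reporting an error rate per data category, can be illustrated with a minimal sketch. The field names, records and category grouping below are hypothetical; the real eAuditor plug-in operates inside the Caisis database.

```python
def audit_error_rates(db_records, source_records, fields_by_category):
    """Compare paired records field by field; return {category: error rate}."""
    rates = {}
    for category, fields in fields_by_category.items():
        checked = errors = 0
        for db_row, src_row in zip(db_records, source_records):
            for field in fields:
                checked += 1
                if db_row[field] != src_row[field]:
                    errors += 1
        rates[category] = errors / checked
    return rates

# Hypothetical database entries paired with re-extracted source values:
db = [{"follow_up": "2010-01-05", "tumour_grade": "2"},
      {"follow_up": "2011-03-20", "tumour_grade": "3"}]
src = [{"follow_up": "2010-01-05", "tumour_grade": "2"},
       {"follow_up": "2011-03-20", "tumour_grade": "2"}]  # grade mis-keyed
categories = {"clinical": ["follow_up"], "pathology": ["tumour_grade"]}
rates = audit_error_rates(db, src, categories)
```

Reporting rates per category mirrors the abstract's finding that error rates differ between clinical follow-up and pathology fields.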

  13. Data Model and Relational Database Design for Highway Runoff Water-Quality Metadata

    Science.gov (United States)

    Granato, Gregory E.; Tessler, Steven

    2001-01-01

    A national highway and urban runoff water-quality metadatabase was developed by the U.S. Geological Survey in cooperation with the Federal Highway Administration as part of the National Highway Runoff Water-Quality Data and Methodology Synthesis (NDAMS). The database was designed to catalog available literature and to document results of the synthesis in a format that would facilitate current and future research on highway and urban runoff. This report documents the design and implementation of the NDAMS relational database, which was designed to provide a catalog of available information and the results of an assessment of the available data. All the citations and the metadata collected during the review process are presented in a stratified metadatabase that contains citations for relevant publications, abstracts, and report-review metadata for a sample of selected reports that document results of runoff quality investigations. The database is referred to as a metadatabase because it contains information about available data sets rather than a record of the original data. The database contains the metadata needed to evaluate and characterize how valid, current, complete, comparable, and technically defensible published and available information may be when evaluated for application to the different data-quality objectives as defined by decision makers. This is a relational database, in that all information is ultimately linked to a given citation in the catalog of available reports. The main database file contains 86 tables consisting of 29 data tables, 11 association tables, and 46 domain tables. The data tables all link to a particular citation, and each data table is focused on one aspect of the information collected in the literature search and the evaluation of available information. The database is implemented in the Microsoft (MS) Access database software because it is widely used within and outside of government and is familiar to many
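
The citation-centred design described above (data tables keyed to citations, association tables, and domain tables of controlled terms) can be illustrated with a toy schema. The table and column names below are invented and are not the actual NDAMS schema, and the sketch uses SQLite rather than MS Access for portability.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE citation (                 -- catalog of reviewed reports
    citation_id INTEGER PRIMARY KEY,
    title       TEXT NOT NULL,
    year        INTEGER
);
CREATE TABLE constituent (              -- domain table of controlled terms
    constituent_id INTEGER PRIMARY KEY,
    name           TEXT UNIQUE NOT NULL
);
CREATE TABLE report_review (            -- data table, keyed to a citation
    review_id         INTEGER PRIMARY KEY,
    citation_id       INTEGER NOT NULL REFERENCES citation(citation_id),
    data_quality_note TEXT
);
CREATE TABLE citation_constituent (     -- association table
    citation_id    INTEGER NOT NULL REFERENCES citation(citation_id),
    constituent_id INTEGER NOT NULL REFERENCES constituent(constituent_id),
    PRIMARY KEY (citation_id, constituent_id)
);
""")
conn.execute("INSERT INTO citation VALUES (1, 'Runoff study', 1998)")
conn.execute("INSERT INTO constituent VALUES (1, 'total lead')")
conn.execute("INSERT INTO citation_constituent VALUES (1, 1)")
rows = conn.execute("""
    SELECT c.title, k.name
    FROM citation c
    JOIN citation_constituent cc ON cc.citation_id = c.citation_id
    JOIN constituent k ON k.constituent_id = cc.constituent_id
""").fetchall()
```

The point of the design is visible in the query: every piece of information, even a domain term, is reached through a citation.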

  14. NM WAIDS: A PRODUCED WATER QUALITY AND INFRASTRUCTURE GIS DATABASE FOR NEW MEXICO OIL PRODUCERS

    Energy Technology Data Exchange (ETDEWEB)

    Martha Cather; Robert Lee; Ibrahim Gundiler; Andrew Sung; Naomi Davidson; Ajeet Kumar Reddy; Mingzhen Wei

    2003-04-01

    The New Mexico Water and Infrastructure Data System (NM WAIDS) seeks to alleviate a number of produced water-related issues in southeast New Mexico. The project calls for the design and implementation of a Geographical Information System (GIS) and integral tools that will provide operators and regulators with necessary data and useful information to help them make management and regulatory decisions. The major components of this system are: (1) databases on produced water quality, cultural and groundwater data, oil pipeline and infrastructure data, and corrosion information, (2) a web site capable of displaying produced water and infrastructure data in a GIS or accessing some of the data by text-based queries, (3) a fuzzy logic-based, site risk assessment tool that can be used to assess the seriousness of a spill of produced water, and (4) a corrosion management toolkit that will provide operators with data and information on produced waters that will aid them in deciding how to address corrosion issues. The various parts of NM WAIDS will be integrated into a website with a user-friendly interface that will provide access to previously difficult-to-obtain data and information. Primary attention during the first six months of this project has been focused on creating the water quality databases for produced water and surface water, along with collection of corrosion information and building parts of the corrosion toolkit. Work on the project to date includes: (1) Creation of a water quality database for produced water analyses. The database was compiled from a variety of sources and currently has over 4000 entries for southeast New Mexico. (2) Creation of a web-based data entry system for the water quality database. This system allows a user to view, enter, or edit data from a web page rather than having to directly access the database. (3) Creation of a semi-automated data capturing system for use with standard water quality analysis forms. 
This system improves the
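
The fuzzy-logic site risk assessment mentioned above can be sketched in miniature: crisp inputs are mapped to fuzzy memberships and combined by weighted rules into a single risk score. The inputs (spill volume, depth to groundwater), membership shapes and rule weights below are invented for illustration and do not reproduce the actual NM WAIDS tool.

```python
def spill_risk(volume_bbl, depth_to_gw_ft):
    """Fuzzy risk score in [0, 1] for a produced-water spill (illustrative)."""
    # Membership of the spill in the fuzzy set "large" (saturates at 500 bbl).
    large = min(volume_bbl / 500.0, 1.0)
    # Membership of the site in "shallow groundwater" (zero beyond 200 ft).
    shallow = max(0.0, 1.0 - depth_to_gw_ft / 200.0)
    # Weighted rule aggregation: the dominant rule fires when the spill is
    # large AND groundwater is shallow; each factor alone contributes less.
    return 0.6 * min(large, shallow) + 0.2 * large + 0.2 * shallow

high = spill_risk(500, 0)   # large spill directly over shallow groundwater
low = spill_risk(50, 400)   # small spill, deep groundwater
```

A regulator could rank reported spills by this score to prioritise inspection, which is the role the abstract describes for the tool.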

  15. Quality controls in integrative approaches to detect errors and inconsistencies in biological databases

    Directory of Open Access Journals (Sweden)

    Ghisalberti Giorgio

    2010-12-01

    Full Text Available Numerous biomolecular data are available, but they are scattered across many databases and only some of them are curated by experts. Most available data are computationally derived and include errors and inconsistencies. Effective use of available data to derive new knowledge hence requires data integration and quality improvement. Many approaches for data integration have been proposed. Data warehousing seems to be the most adequate when comprehensive analysis of the integrated data is required, which also makes it the most suitable for implementing comprehensive quality controls on integrated data. We previously developed GFINDer (http://www.bioinformatics.polimi.it/GFINDer/), a web system that supports scientists in effectively using available information. It allows comprehensive statistical analysis and mining of functional and phenotypic annotations of gene lists, such as those identified by high-throughput biomolecular experiments. The GFINDer backend is composed of a multi-organism genomic and proteomic data warehouse (GPDW). Within the GPDW, several controlled terminologies and ontologies, which describe gene and gene product related biomolecular processes, functions and phenotypes, are imported and integrated, together with their associations with genes and proteins of several organisms. In order to ease keeping the GPDW up to date and to ensure the best possible quality of the data integrated in subsequent updates of the data warehouse, we developed several automatic procedures. Within them, we implemented numerous data quality control techniques to test the integrated data for a variety of possible errors and inconsistencies. Among other features, the implemented controls check data structure and completeness, ontological data consistency, ID format and evolution, unexpected data quantification values, and consistency of data from single and multiple sources. We use the implemented controls to analyze the quality of data available from several
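
A minimal sketch of the kinds of automatic checks listed above (completeness of required fields, ID-format validation, and consistency of associations reported by multiple sources) might look as follows. The field names and ID pattern are hypothetical, not the actual GPDW schema.

```python
import re

GENE_ID_PATTERN = re.compile(r"^GENE:\d+$")  # hypothetical ID format

def check_record(record, required_fields):
    """Return a list of quality issues found in one record."""
    issues = []
    for field in required_fields:
        if not record.get(field):
            issues.append(f"missing required field '{field}'")
    gene_id = record.get("gene_id", "")
    if gene_id and not GENE_ID_PATTERN.match(gene_id):
        issues.append(f"malformed gene_id '{gene_id}'")
    return issues

def check_cross_source(associations):
    """Flag (gene, term) pairs asserted inconsistently by different sources.

    associations: iterable of (gene_id, term_id, source, asserted?) tuples.
    """
    first_seen = {}
    conflicts = set()
    for gene, term, _source, asserted in associations:
        key = (gene, term)
        if key in first_seen and first_seen[key] != asserted:
            conflicts.add(key)
        first_seen.setdefault(key, asserted)
    return conflicts

issues = check_record({"gene_id": "gene12", "symbol": "TP53"},
                      required_fields=["gene_id", "symbol"])
conflicts = check_cross_source([
    ("GENE:1", "GO:0008150", "sourceA", True),
    ("GENE:1", "GO:0008150", "sourceB", False),
])
```

Running such checks at every warehouse update, as the abstract describes, turns data loading into a gated quality pipeline rather than a blind import.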

  16. Subjective quality of videos displayed with local backlight dimming at different peak white and ambient light levels

    DEFF Research Database (Denmark)

    Mantel, Claire; Korhonen, Jari; Forchhammer, Søren

    2015-01-01

    In this paper the influence of ambient light and peak white (maximum brightness) of a display on the subjective quality of videos shown with local backlight dimming is examined. A subjective experiment investigating those factors is set up using high-contrast test sequences. The results are firstly...

  17. Monitoring outcomes with relational databases: does it improve quality of care?

    Science.gov (United States)

    Clemmer, Terry P

    2004-12-01

    There are three key ingredients in improving the quality of medical care: 1) using a scientific process of improvement, 2) executing the process at the lowest possible level in the organization, and 3) measuring the results of any change reliably. Relational databases, when used within these guidelines, are of great value in these efforts if they contain reliable information that is pertinent to the project and are used in a scientific process of quality improvement by a front-line team. Unfortunately, the data are frequently unreliable and/or not pertinent to the local process, and are used by persons at very high levels in the organization without a scientific process and without reliable measurement of the outcome. Under these circumstances the effectiveness of relational databases in improving care is marginal at best, frequently wasteful, and has the potential to be harmful. This article explores examples of these concepts.

  18. Quality Control Algorithms for the Kennedy Space Center 50-Megahertz Doppler Radar Wind Profiler Winds Database

    Science.gov (United States)

    Barbre, Robert E., Jr.

    2012-01-01

    This paper presents the process used by the Marshall Space Flight Center Natural Environments Branch (EV44) to quality control (QC) data from the Kennedy Space Center's 50-MHz Doppler Radar Wind Profiler for use in vehicle wind loads and steering commands. The database has been built to mitigate limitations of using the currently archived databases from weather balloons. The DRWP database contains wind measurements from approximately 2.7-18.6 km altitude at roughly five-minute intervals for the August 1997 to December 2009 period of record, and the extensive QC process was designed to remove spurious data from various forms of atmospheric and non-atmospheric artifacts. The QC process is largely based on DRWP literature, but two new algorithms have been developed to remove data contaminated by convection and excessive first guess propagations from the Median Filter First Guess Algorithm. In addition to describing the automated and manual QC process in detail, this paper describes the extent of the data retained. Roughly 58% of all possible wind observations exist in the database, with approximately 100 times as many complete profile sets existing relative to the EV44 balloon databases. This increased sample of near-continuous wind profile measurements may help increase launch availability by reducing the uncertainty of wind changes during launch countdown.
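
One common building block of such QC, a median comparison that rejects values deviating too far from neighbouring range gates, can be sketched as follows. The window size, threshold and profile values are illustrative; the actual EV44 algorithms (including the Median Filter First Guess) are considerably more involved.

```python
def median(values):
    """Median of a non-empty list."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def median_filter_qc(profile, window=5, threshold=10.0):
    """Return (value, accepted?) pairs for one wind-component profile.

    Each gate is compared against the median of its neighbours within a
    sliding window; values deviating by more than `threshold` are rejected.
    """
    half = window // 2
    flagged = []
    for i, v in enumerate(profile):
        lo, hi = max(0, i - half), min(len(profile), i + half + 1)
        neighbours = profile[lo:i] + profile[i + 1:hi]
        flagged.append((v, abs(v - median(neighbours)) <= threshold))
    return flagged

# Wind speed (m/s) by altitude gate, with one spurious spike at index 3:
profile = [12.0, 13.5, 14.0, 55.0, 15.0, 15.5, 16.0]
qc = median_filter_qc(profile)
```

Vertical continuity makes the median of neighbouring gates a robust reference, which is why spikes from convection or interference stand out against it.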

  19. The Effect of Signal Quality and Contiguous Word of Mouth on Customer Acquisition for a Video-on-Demand Service

    OpenAIRE

    Sungjoon Nam; Puneet Manchanda; Pradeep K. Chintagunta

    2010-01-01

    This paper documents the existence and magnitude of contiguous word-of-mouth effects of signal quality of a video-on-demand (VOD) service on customer acquisition. We operationalize contiguous word-of-mouth effect based on geographic proximity and use behavioral data to quantify the effect. The signal quality for this VOD service is exogenously determined, objectively measured, and spatially uncorrelated. Furthermore, it is unobserved to the potential subscriber and is revealed postadoption. F...

  20. A Public Database of Immersive VR Videos with Corresponding Ratings of Arousal, Valence, and Correlations between Head Movements and Self Report Measures

    Directory of Open Access Journals (Sweden)

    Benjamin J. Li

    2017-12-01

    Full Text Available Virtual reality (VR) has been proposed as a methodological tool to study the basic science of psychology and other fields. One key advantage of VR is that sharing of virtual content can lead to more robust replication and representative sampling. A database of standardized content will help fulfill this vision. There are two objectives to this study. First, we seek to establish and allow public access to a database of immersive VR video clips that can act as a potential resource for studies on emotion induction using virtual reality. Second, given the large sample size of participants needed to get reliable valence and arousal ratings for our videos, we were able to explore the possible links between the head movements of the observer and the emotions he or she feels while viewing immersive VR. To accomplish our goals, we sourced and tested 73 immersive VR clips, which participants rated on valence and arousal dimensions using self-assessment manikins. We also tracked participants' rotational head movements as they watched the clips, allowing us to correlate head movements and affect. Based on past research, we predicted relationships between the standard deviation of head yaw and valence and arousal ratings. Results showed that the stimuli varied reasonably well along the dimensions of valence and arousal, with a slight underrepresentation of clips that are of negative valence and highly arousing. The standard deviation of yaw correlated positively with valence, and a significant positive relationship was found between head pitch and arousal. The immersive VR clips tested are available online as supplemental material.

  1. NM WAIDS: A PRODUCED WATER QUALITY AND INFRASTRUCTURE GIS DATABASE FOR NEW MEXICO OIL PRODUCERS

    Energy Technology Data Exchange (ETDEWEB)

    Martha Cather; Robert Lee; Ibrahim Gundiler; Andrew Sung

    2003-09-24

    The New Mexico Water and Infrastructure Data System (NM WAIDS) seeks to alleviate a number of produced water-related issues in southeast New Mexico. The project calls for the design and implementation of a Geographical Information System (GIS) and integral tools that will provide operators and regulators with necessary data and useful information to help them make management and regulatory decisions. The major components of this system are: (1) Databases on produced water quality, cultural and groundwater data, oil pipeline and infrastructure data, and corrosion information. (2) A web site capable of displaying produced water and infrastructure data in a GIS or accessing some of the data by text-based queries. (3) A fuzzy logic-based, site risk assessment tool that can be used to assess the seriousness of a spill of produced water. (4) A corrosion management toolkit that will provide operators with data and information on produced waters that will aid them in deciding how to address corrosion issues. The various parts of NM WAIDS will be integrated into a website with a user-friendly interface that will provide access to previously difficult-to-obtain data and information. Primary attention during the first six months of this project was focused on creating the water quality databases for produced water and surface water, along with the collection of corrosion information and building parts of the corrosion toolkit. Work on the project to date includes: (1) Creation of a water quality database for produced water analyses. The database was compiled from a variety of sources and currently has over 7000 entries for New Mexico. (2) Creation of a web-based data entry system for the water quality database. This system allows a user to view, enter, or edit data from a web page rather than having to directly access the database. (3) Creation of a semi-automated data capturing system for use with standard water quality analysis forms. 
This system improves the accuracy and speed

  2. High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images

    Science.gov (United States)

    Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2006-10-01

    Information processing and communication technologies are progressing quickly and spreading throughout various technological fields, and their development should respond to the need for higher quality in e-learning systems. The authors propose a new video-image compression processing system that ingeniously exploits the features of the lecturing scene. While the dynamic lecturing scene is shot by a digital video camera, screen images are stored electronically by PC screen-capture software at relatively long intervals during an actual class. Then, the lecturer and the lecture stick are extracted from the digital video images by pattern recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. Thus, we have succeeded in creating high-quality, small-capacity (HQ/SC) video-on-demand educational content featuring the advantages of high image sharpness, small electronic file capacity, and realistic lecturer motion.
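
The superimposition step can be illustrated with a toy mask-based compositing sketch: camera pixels classified as foreground (the lecturer) replace the corresponding pixels of the captured screen image. Real extraction relies on pattern recognition rather than the simple background-colour difference assumed here, and the images below are tiny invented grids.

```python
BACKGROUND = (0, 0, 0)  # assumed uniform studio background colour

def superimpose(screen, camera, background=BACKGROUND, tol=10):
    """Overlay non-background camera pixels onto the screen capture.

    Both images are lists of rows of (r, g, b) tuples of equal size.
    """
    def is_background(px):
        return all(abs(a - b) <= tol for a, b in zip(px, background))
    return [
        [cam if not is_background(cam) else scr
         for scr, cam in zip(screen_row, camera_row)]
        for screen_row, camera_row in zip(screen, camera)
    ]

W = (255, 255, 255)  # white slide pixel from the PC screen capture
L = (200, 150, 120)  # pixel belonging to the extracted lecturer
B = (0, 0, 0)        # studio background pixel in the camera image
screen = [[W, W], [W, W]]
camera = [[B, L], [B, B]]
composite = superimpose(screen, camera)
```

Because only the lecturer mask changes per frame while the slide changes rarely, the composite stream compresses far better than full-frame video, which is the capacity saving the abstract claims.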

  3. A precipitation database of station-based daily and monthly measurements for West Africa: Overview, quality control and harmonization

    Science.gov (United States)

    Bliefernicht, Jan; Waongo, Moussa; Annor, Thompson; Laux, Patrick; Lorenz, Manuel; Salack, Seyni; Kunstmann, Harald

    2017-04-01

    West Africa is a data-sparse region. High-quality, long-term precipitation data are often not readily available for applications in hydrology, agriculture, meteorology and other fields. To close this gap, we use multiple data sources to develop a precipitation database with long-term daily and monthly time series. This database was compiled from 16 archives, including global databases (e.g. the Global Historical Climatology Network, GHCN), databases from research projects (e.g. the AMMA database) and databases of the national meteorological services of several West African countries. The collection consists of more than 2000 precipitation gauges with measurements dating from 1850 to 2015. Due to erroneous measurements (e.g. temporal offsets, unit conversion errors), missing values and inconsistent metadata, merging this precipitation dataset is not straightforward and requires thorough quality control and harmonization. To this end, we developed geostatistics-based algorithms for quality control of the individual databases and for harmonization into a joint database. The algorithms are based on a pairwise comparison of the correspondence of precipitation time series as a function of the distance between stations. They were tested on precipitation time series from gauges located in a rectangular domain covering Burkina Faso, Ghana, Benin and Togo. This harmonized and quality-controlled precipitation database was recently used for several applications, such as the validation of a high-resolution regional climate model and the bias correction of precipitation projections provided by the Coordinated Regional Climate Downscaling Experiment (CORDEX). In this presentation, we will give an overview of the novel daily and monthly precipitation database and the algorithms used for quality control and harmonization. We will also highlight the quality of global and regional archives (e.g. GHCN, GSOD, the AMMA database) in comparison to the precipitation databases provided by the
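
The pairwise-comparison idea can be sketched as follows: nearby stations should show correlated precipitation series, so a station whose best correlation with close neighbours is unusually low is flagged for inspection. The station names, coordinates, series and thresholds are invented for illustration and do not reproduce the authors' geostatistical algorithms.

```python
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

def flag_inconsistent(stations, max_km=50.0, min_r=0.5):
    """stations: {name: ((x_km, y_km), series)} -> set of suspect names.

    A station is flagged when even its best correlation with neighbours
    inside `max_km` falls below `min_r`.
    """
    suspects = set()
    names = list(stations)
    for i, a in enumerate(names):
        pos_a, series_a = stations[a]
        rs = []
        for b in names[:i] + names[i + 1:]:
            pos_b, series_b = stations[b]
            if math.hypot(pos_a[0] - pos_b[0], pos_a[1] - pos_b[1]) <= max_km:
                rs.append(pearson(series_a, series_b))
        if rs and max(rs) < min_r:
            suspects.add(a)
    return suspects

# Monthly precipitation totals (mm) at four hypothetical gauges:
stations = {
    "GAUGE_A": ((0, 0), [10, 60, 120, 200, 90, 20]),
    "GAUGE_B": ((30, 0), [12, 55, 130, 190, 85, 25]),
    "GAUGE_C": ((0, 30), [8, 65, 110, 210, 95, 18]),
    "GAUGE_D": ((10, 10), [200, 10, 90, 15, 300, 5]),  # e.g. offset error
}
suspects = flag_inconsistent(stations)
```

Temporal offsets and unit errors of the kind the abstract mentions destroy correlation with neighbours while leaving the series plausible in isolation, which is exactly what this test catches.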

  4. EXFOR-CINDA-ENDF: Migration of Databases to Give Higher-Quality Nuclear Data Services

    International Nuclear Information System (INIS)

    Zerkin, V.V.; McLane, V.; Herman, M.W.; Dunford, C.L.

    2005-01-01

    Extensive work began in 1999 to migrate the EXFOR, CINDA, and ENDF nuclear reaction databases, and to convert the available nuclear data services from VMS to a modern computing environment. This work has been performed through co-operative efforts between the IAEA Nuclear Data Section (IAEA-NDS) and the National Nuclear Data Center (NNDC), Brookhaven National Laboratory. The project also afforded the opportunity to make general revisions and improvements to the nuclear reaction data services by taking account of past experience with the old system and users' feedback. A main goal of the project was to implement the databases in a relational form that provides full functionality for maintenance by data centre staff and improved retrieval capability for external users. As a result, the quality of the nuclear data services has significantly improved, with better functionality of the system, accessibility of data, and improved data retrieval functions for users involved in a wide range of applications.

  5. Quality assurance for the IAEA International Database on Irradiated Nuclear Graphite Properties

    International Nuclear Information System (INIS)

    Wickham, A.J.; Humbert, D.

    2006-06-01

    Consideration has been given to the process of Quality Assurance applied to data entered into current versions of the IAEA International Database on Irradiated Nuclear Graphite Properties. Originally conceived simply as a means of collecting and preserving data on irradiation experiments and reactor operation, the data are increasingly being utilised for the preparation of safety arguments and in the design of new graphites for forthcoming generations of graphite-moderated plant. Under these circumstances, regulatory agencies require assurances that the data are of appropriate accuracy and correctly transcribed, that obvious errors in the original documentation are either highlighted or corrected, etc., before they are prepared to accept analyses built upon these data. The processes employed in the data transcription are described in this document, and proposals are made for the categorisation of data and for error reporting by Database users. (author)

  6. Enhancing Perceived Quality of Compressed Images and Video with Anisotropic Diffusion and Fuzzy Filtering

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Korhonen, Jari; Forchhammer, Søren

    2013-01-01

    and subjective results on JPEG compressed images, as well as MJPEG and H.264/AVC compressed video, indicate that the proposed algorithms employing directional and spatial fuzzy filters achieve better artifact reduction than other methods. In particular, robust improvements with H.264/AVC video have been gained...

  7. Resolution enhancement of low quality videos using a high-resolution frame

    NARCIS (Netherlands)

    Pham, T.Q.; Van Vliet, L.J.; Schutte, K.

    2006-01-01

    This paper proposes an example-based Super-Resolution (SR) algorithm of compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of

  8. Patient-Reported Outcome and Quality of Life Instruments Database (PROQOLID): Frequently asked questions

    Directory of Open Access Journals (Sweden)

    Perrier Laure-Lou

    2005-03-01

    Full Text Available Abstract The exponential development of Patient-Reported Outcome (PRO) measures in clinical research has led to the creation of the Patient-Reported Outcome and Quality of Life Instruments Database (PROQOLID) to facilitate the selection process of PRO measures in clinical research. The project was initiated by Mapi Research Trust in Lyon, France. Initially called QOLID (Quality of Life Instruments Database), the project's purpose was to provide all those involved in health care evaluation with a comprehensive and unique source of information on PRO and HRQOL measures available through the Internet. PROQOLID currently describes more than 470 PRO instruments in a structured format. It is available at two levels, for non-subscribers and subscribers, at http://www.proqolid.org. The first level is free of charge and contains 14 categories of basic useful information on the instruments (e.g. author, objective, original language, list of existing translations, etc.). The second level provides significantly more information about the instruments. It includes review copies of over 350 original instruments, 120 user manuals and 350 translations. Most are available in PDF format. This level is only accessible to annual subscribers. PROQOLID is updated on a regular basis in close collaboration with the instruments' authors. Fifty or more new instruments are added to the database annually. Today, all of the major pharmaceutical companies, prestigious institutions (such as the FDA, the NIH's National Cancer Institute and the U.S. Veterans Administration), dozens of universities, public institutions and researchers subscribe to PROQOLID on a yearly basis. More than 800 users per day routinely visit the database.

  9. A Survey of Standardized Approaches towards the Quality of Experience Evaluation for Video Services: An ITU Perspective

    Directory of Open Access Journals (Sweden)

    Debajyoti Pal

    2018-01-01

    Full Text Available Over the past few years there has been an exponential increase in the amount of multimedia data being streamed over the Internet. At the same time, we are also witnessing a change in the way the quality of any particular service is interpreted, with more emphasis being given to the end-users. Thus, there has silently been a paradigm shift from the traditional Quality of Service (QoS) approach towards a Quality of Experience (QoE) model for evaluating service quality. A lot of work has been done that tries to evaluate the quality of audio, video, and multimedia services over the Internet. At the same time, research is also going on that tries to map the two different domains of quality metrics, i.e., the QoS and QoE domains. Apart from the work done by individual researchers, the International Telecommunication Union (ITU) has been quite active in this area of quality assessment. This is obvious from the large number of ITU standards that are available for different application types. The sheer variety of techniques employed by ITU as well as other researchers sometimes tends to be too complex and diversified. Although there are survey papers that try to present the current state-of-the-art methodologies for video quality evaluation, none has focused on the ITU perspective. In this work, we try to fill this void by presenting up-to-date information on the different measurement methods that are currently employed by ITU for a video streaming scenario. We highlight the outline of each method in sufficient detail and try to analyze the challenges being faced, along with the direction of future research.

  10. The EDEN-IW ontology model for sharing knowledge and water quality data between heterogenous databases

    DEFF Research Database (Denmark)

    Stjernholm, M.; Poslad, S.; Zuo, L.

    2004-01-01

    The Environmental Data Exchange Network for Inland Water (EDEN-IW) project's main aim is to develop a system for making disparate and heterogeneous databases of Inland Water quality more accessible to users. The core technology is based upon a combination of: an ontological model to represent... a Semantic Web based data model for IW; software agents as an infrastructure to share and reason about the IW semantic data model and XML to make the information accessible to Web portals and mainstream Web services. This presentation focuses on the Semantic Web or Ontological model. Currently, we have...

  11. Imagining life with an ostomy: Does a video intervention improve quality-of-life predictions for a medical condition that may elicit disgust?

    Science.gov (United States)

    Angott, Andrea M.; Comerford, David A.; Ubel, Peter A.

    2014-01-01

    Objective To test a video intervention as a way to improve predictions of mood and quality-of-life with an emotionally evocative medical condition. Such predictions are typically inaccurate, which can be consequential for decision making. Method In Part 1, people presently or formerly living with ostomies predicted how watching a video depicting a person changing his ostomy pouch would affect mood and quality-of-life forecasts for life with an ostomy. In Part 2, participants from the general public read a description about life with an ostomy; half also watched a video depicting a person changing his ostomy pouch. Participants’ quality-of-life and mood forecasts for life with an ostomy were assessed. Results Contrary to our expectations, and the expectations of people presently or formerly living with ostomies, the video did not reduce mood or quality-of-life estimates, even among participants high in trait disgust sensitivity. Among low-disgust participants, watching the video increased quality-of-life predictions for ostomy. Conclusion Video interventions may improve mood and quality-of-life forecasts for medical conditions, including those that may elicit disgust, such as ostomy. Practice implications Video interventions focusing on patients’ experience of illness continue to show promise as components of decision aids, even for emotionally charged health states such as ostomy. PMID:23177398

  12. Editorial and scientific quality in the parameters for inclusion of journals in commercial and open access databases

    Directory of Open Access Journals (Sweden)

    Cecilia Rozemblum

    2015-04-01

    Full Text Available In this article, the parameters used by RedALyC, Catalogo Latindex, SciELO, Scopus and Web of Science for the incorporation of scientific journals into their collections are analyzed, with the goal of establishing their relation to the objectives of each database and of debating the value that the scientific community assigns to these systems as arbiters of "scientific quality". The indicators used are classified into: (1) editorial quality (formal aspects or editorial management); (2) content quality (peer review or originality); and (3) visibility (prestige of editors and publisher, use and impact, accessibility and indexing). It is revealed that: (a) between 9 and 16% of the indicators are related to content quality; (b) the indicators lack specificity in their definition and in the determination of measurement systems; and (c) they match the goals of each database, although a marked trend toward formal and visibility-related aspects is observed. This makes it clear that these systems pursue their own objectives, building a core of “quality” journals for their readership. We conclude, therefore, that the presence or absence of a journal in these collections is not a sufficient parameter for determining the quality of a scientific journal or its contents.

  13. Rheumatoid Arthritis Educational Video Series

    Science.gov (United States)

    ... Corner / Patient Webcasts / Rheumatoid Arthritis Educational Video Series Rheumatoid Arthritis Educational Video Series This series of five videos ... Your Arthritis Managing Chronic Pain and Depression in Arthritis Nutrition & Rheumatoid Arthritis Arthritis and Health-related Quality of Life ...

  14. Native Health Research Database

    Science.gov (United States)

    ... Indian Health Board) Welcome to the Native Health Database. Please enter your search terms. Basic Search Advanced ... To learn more about searching the Native Health Database, click here. Tutorial Video The NHD has made ...

  15. A Framework for Video Modeling

    NARCIS (Netherlands)

    Petkovic, M.; Jonker, Willem

    In recent years, research in video databases has increased greatly, but relatively little work has been done in the area of semantic content-based retrieval. In this paper, we present a framework for video modelling with emphasis on semantic content of video data. The video data model presented

  16. The need for high-quality whole-genome sequence databases in microbial forensics.

    Science.gov (United States)

    Sjödin, Andreas; Broman, Tina; Melefors, Öjar; Andersson, Gunnar; Rasmusson, Birgitta; Knutsson, Rickard; Forsman, Mats

    2013-09-01

    Microbial forensics is an important part of a strengthened capability to respond to biocrime and bioterrorism incidents to aid in the complex task of distinguishing between natural outbreaks and deliberate acts. The goal of a microbial forensic investigation is to identify and criminally prosecute those responsible for a biological attack, and it involves a detailed analysis of the weapon--that is, the pathogen. The recent development of next-generation sequencing (NGS) technologies has greatly increased the resolution that can be achieved in microbial forensic analyses. It is now possible to identify, quickly and in an unbiased manner, previously undetectable genome differences between closely related isolates. This development is particularly relevant for the most deadly bacterial diseases that are caused by bacterial lineages with extremely low levels of genetic diversity. Whole-genome analysis of pathogens is envisaged to be increasingly essential for this purpose. In a microbial forensic context, whole-genome sequence analysis is the ultimate method for strain comparisons as it is informative during identification, characterization, and attribution--all 3 major stages of the investigation--and at all levels of microbial strain identity resolution (ie, it resolves the full spectrum from family to isolate). Given these capabilities, one bottleneck in microbial forensics investigations is the availability of high-quality reference databases of bacterial whole-genome sequences. To be of high quality, databases need to be curated and accurate in terms of sequences, metadata, and genetic diversity coverage. The development of whole-genome sequence databases will be instrumental in successfully tracing pathogens in the future.

  17. An Analysis of Quality of Service (QoS) in Live Video Streaming Using Evolved HSPA Network Media

    Directory of Open Access Journals (Sweden)

    Achmad Zakaria Azhar

    2016-10-01

    Full Text Available Evolved High Speed Packet Access (HSPA+) is a mobile telecommunication system technology and the evolution of HSPA. It is a packet-data-based service with downlink speeds up to 21.1 Mbps and uplink speeds up to 11.5 Mbps on a 5 MHz bandwidth. This technology is expected to fulfill and support the need for information involving all aspects of multimedia such as video and audio, especially live video streaming. It facilitates communicating information in real time, for example monitoring the situation of a house, news coverage of a particular area, and other events. This thesis aims to identify and test Quality of Service (QoS) performance on a network used for live video streaming, with the parameters throughput, delay, jitter and packet loss. The software used for monitoring the data traffic of the live video streaming network is the Wireshark network analyzer. The test results show that the average throughput of provider B is 5,295 Kbps, larger than provider A's; the average delay of provider B is 0.618 ms, smaller than provider A's; the average jitter of provider B is 0.420 ms, smaller than provider A's; and the average packet loss of provider B is 0.451%, smaller than provider A's.
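The four metrics named in this study (throughput, delay, jitter, packet loss) can be computed directly from a packet trace. A minimal sketch, assuming a hypothetical list of (sequence, send time, receive time, size) tuples such as might be exported from a tool like Wireshark:

```python
def qos_metrics(packets, expected_count):
    """packets: time-sorted list of (seq, send_time_s, recv_time_s, size_bytes)."""
    if not packets:
        return None
    delays = [recv - send for _, send, recv, _ in packets]
    # Jitter here is the mean absolute difference between consecutive
    # one-way delays (a common simplification of the RFC 3550 estimator).
    jitter = (sum(abs(a - b) for a, b in zip(delays, delays[1:])) /
              max(len(delays) - 1, 1))
    duration = packets[-1][2] - packets[0][2]
    total_bits = sum(size * 8 for *_, size in packets)
    return {
        "throughput_kbps": total_bits / duration / 1000 if duration > 0 else 0.0,
        "avg_delay_ms": sum(delays) / len(delays) * 1000,
        "jitter_ms": jitter * 1000,
        "packet_loss_pct": 100.0 * (expected_count - len(packets)) / expected_count,
    }

# Example trace: 4 of 5 packets arrive (seq 4 is lost).
trace = [
    (1, 0.000, 0.040, 1200),
    (2, 0.020, 0.061, 1200),
    (3, 0.040, 0.079, 1200),
    (5, 0.080, 0.121, 1200),
]
m = qos_metrics(trace, expected_count=5)
```

With this trace the sketch reports 20% packet loss and an average delay of about 40 ms.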

  18. Video over cognitive radio networks when quality of service meets spectrum

    CERN Document Server

    Mao, Shiwen

    2014-01-01

    This book focuses on the problem of video streaming over emerging cognitive radio (CR) networks. The book discusses the problems and techniques for scalable video streaming over cellular cognitive radio networks, ad hoc CR networks, cooperative CR networks, and femtocell CR networks. The author formulates these problems and proposes optimal algorithms to solve these problems. Also, the book analyzes the proposed algorithms and validates the algorithms with simulations.

  19. Training value of laparoscopic colorectal videos on the World Wide Web: a pilot study on the educational quality of laparoscopic right hemicolectomy videos.

    Science.gov (United States)

    Celentano, V; Browning, M; Hitchins, C; Giglio, M C; Coleman, M G

    2017-11-01

    Instructive laparoscopy videos with appropriate exposition could be ideal for initial training in laparoscopic surgery, but unfortunately there are no guidelines for annotating these videos or agreed methods to measure the educational content and the safety of the procedure presented. The aim of this study is to systematically search the World Wide Web to determine the availability of laparoscopic colorectal surgery videos and to objectively establish their potential training value. A search for laparoscopic right hemicolectomy videos was performed on the three most used English-language web search engines, Google.com, Bing.com, and Yahoo.com; moreover, a survey among 25 local trainees was performed to identify additional websites for inclusion. All laparoscopic right hemicolectomy videos with an English language title were included. Videos of open surgery, single incision laparoscopic surgery, robotic, and hand-assisted surgery were excluded. The safety of the demonstrated procedure was assessed with a validated competency assessment tool specifically designed for laparoscopic colorectal surgery, and data on the educational content of the video were extracted. Thirty-one websites were identified and 182 surgical videos were included. One hundred and seventy-three videos (95%) detailed the year of publication; this demonstrated a significant increase in the number of videos published per year from 2009. Characteristics of the patient were rarely presented; only 10 videos (5.4%) reported operating time and only 6 videos (3.2%) reported 30-day morbidity; 34 videos (18.6%) underwent a peer-review process prior to publication. Formal case presentation, the presence of audio narration, the use of diagrams and snapshots, and a step-by-step approach are all characteristics of peer-reviewed videos, but no significant difference was found in the safety of the procedure. Laparoscopic videos can be a useful adjunct to operative training. There is a large and increasing amount of

  20. [Systematic review of studies on quality of life indexed on the SciELO database].

    Science.gov (United States)

    Landeiro, Graziela Macedo Bastos; Pedrozo, Celine Cristina Raimundo; Gomes, Maria José; Oliveira, Elizabete Regina de Araújo

    2011-10-01

    Interest in the quality of life construct has increased in the same proportion as the output of instruments to measure it. In order to analyze the scientific literature on the subject to provide a reflection on this construct in Brazil, a systematic review of the SciELO database covering the period from January 2001 to December 2006 was conducted. It was divided into 3 phases: the first involving 180 publications, the second 124, and the third 10. Of the 180 publications, 77.4% consisted of production in the last three years, with growth of 32.4% from 2001 to 2006. Of these, 124 were selected for methodological analysis in accordance with the category of the study: 79 (63.9%) instrument application articles; 25 (20.1%) translation, validation, adaptation and construction of a QOL instrument; 10 (8%) qualitative studies on QOL; 5 (4%) bibliographical review, 5 (4%) on the quality of life concept. The next stage involved the use of questionnaires and/or interview scripts in order to obtain a broader consensus on perceived quality of life from the interviewees. It was seen that there was significant scientific output in the period under scrutiny, with diversification of approaches and methodologies, highlighting the complexity of the quality of life construct.

  1. cDNA sequence quality data - Budding yeast cDNA sequencing project | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Budding yeast cDNA sequencing project cDNA sequence quality data Data detail Data name cDNA sequence quality... data DOI 10.18908/lsdba.nbdc00838-003 Description of data contents Phred's quality score. P...tion Download License Update History of This Database Site Policy | Contact Us cDNA sequence quality

  2. Database Quality and Access Issues Relevant to Research Using Anesthesia Information Management System Data.

    Science.gov (United States)

    Epstein, Richard H; Dexter, Franklin

    2018-07-01

    For this special article, we reviewed the computer code used to extract the data, and the text, of all 47 studies published between January 2006 and August 2017 using anesthesia information management system (AIMS) data from Thomas Jefferson University Hospital (TJUH). Data from this institution were used in the largest number (P = .0007) of papers describing the use of AIMS published in this time frame. The AIMS was replaced in April 2017, making this a finite sample. The objective of the current article was to identify factors that made TJUH successful in publishing anesthesia informatics studies. We examined the structured query language used for each study to examine the extent to which databases outside of the AIMS were used. We examined data quality from the perspectives of completeness, correctness, concordance, plausibility, and currency. Our results were that most studies could not have been completed without external database sources (36/47, 76.6%; P = .0003 compared with 50%). The operating room management system was linked to the AIMS and was used significantly more frequently (26/36, 72%) than other external sources. Access to these external data sources was provided, allowing exploration of data quality. The TJUH AIMS used high-resolution timestamps (to the nearest 3 milliseconds) and created audit tables to track changes to clinical documentation. Automatic data were recorded at 1-minute intervals and were not editable; data cleaning occurred during analysis. Few paired events with an expected order were out of sequence. Although most data elements were of high quality, there were notable exceptions, such as frequent missing values for estimated blood loss, height, and weight. Some values were duplicated with different units, and others were stored in varying locations. Our conclusions are that linking the TJUH AIMS to the operating room management system was a critical step in enabling publication of multiple studies using AIMS data.
Access to this and
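The data-quality dimensions this record names (completeness, plausibility, paired events with an expected order) lend themselves to simple automated checks. A minimal sketch; the record fields (`anes_start`, `anes_end`, `ebl_ml`) are hypothetical stand-ins, not the actual TJUH AIMS schema:

```python
def completeness(records, field):
    """Fraction of records with a non-missing value for `field`."""
    present = sum(1 for r in records if r.get(field) is not None)
    return present / len(records)

def out_of_sequence(records, first_event, second_event):
    """Plausibility check: case ids where the expected event order is violated."""
    bad = []
    for r in records:
        t1, t2 = r.get(first_event), r.get(second_event)
        if t1 is not None and t2 is not None and t2 < t1:
            bad.append(r["case_id"])
    return bad

# Toy dataset: case 2 has anesthesia end before start, and estimated
# blood loss (ebl_ml) is frequently missing.
cases = [
    {"case_id": 1, "anes_start": 10.0, "anes_end": 95.0, "ebl_ml": 50},
    {"case_id": 2, "anes_start": 12.0, "anes_end": 11.0, "ebl_ml": None},
    {"case_id": 3, "anes_start": 9.0,  "anes_end": 80.0, "ebl_ml": None},
]
```

Here `completeness(cases, "ebl_ml")` is 1/3 and `out_of_sequence(cases, "anes_start", "anes_end")` flags case 2.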

  3. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    Science.gov (United States)

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is probably because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficient vector as a header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficient vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficient matrix generation method such that there is no linear dependency in the generated coefficient matrix. Using the proposed framework, each peer encapsulates one instead of n coefficient entries into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficient matrix using a small number of simple arithmetic operations. In this regard, peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNeT++ show that it substantially outperforms RNC using the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.
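For readers unfamiliar with the baseline this abstract criticizes, the following sketch shows conventional RNC decoding by Gauss-Jordan elimination: encoded blocks are random linear combinations of source blocks, and recovery requires inverting the coefficient matrix. Arithmetic is done in the prime field GF(257) for readability; real codecs typically work in GF(2^8), and the block sizes here are toy values:

```python
P = 257  # prime modulus; every byte value 0..255 fits in GF(257)

def encode(coeffs, blocks):
    """Each encoded block is a linear combination of the source blocks."""
    n = len(blocks[0])
    return [[sum(c * blk[j] for c, blk in zip(row, blocks)) % P
             for j in range(n)] for row in coeffs]

def decode(coeffs, encoded):
    """Gauss-Jordan elimination over GF(257) to invert the combination."""
    n = len(coeffs)
    aug = [row[:] + enc[:] for row, enc in zip(coeffs, encoded)]
    for col in range(n):
        # Find a pivot; failure would mean the coefficient rows are
        # linearly dependent and more encoded blocks are needed.
        pivot = next(r for r in range(col, n) if aug[r][col] % P != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        inv = pow(aug[col][col], P - 2, P)  # modular inverse (Fermat)
        aug[col] = [x * inv % P for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [(x - f * y) % P for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

blocks = [[1, 2, 3], [4, 5, 6]]   # source blocks
coeffs = [[3, 1], [2, 5]]         # linearly independent coefficient rows
assert decode(coeffs, encode(coeffs, blocks)) == blocks
```

The pivot search and per-element eliminations are the per-peer cost that MATIN's one-entry headers and dependency-free coefficient matrices are designed to avoid.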

  4. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    Directory of Open Access Journals (Sweden)

    Behrang Barekatain

    Full Text Available In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is probably because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficient vector as a header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficient vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficient matrix generation method such that there is no linear dependency in the generated coefficient matrix. Using the proposed framework, each peer encapsulates one instead of n coefficient entries into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficient matrix using a small number of simple arithmetic operations. In this regard, peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNeT++ show that it substantially outperforms RNC using the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.

  5. Resolution enhancement of low-quality videos using a high-resolution frame

    Science.gov (United States)

    Pham, Tuan Q.; van Vliet, Lucas J.; Schutte, Klamer

    2006-01-01

    This paper proposes an example-based Super-Resolution (SR) algorithm for compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of corresponding LR-HR pairs of image patches from the HR still image, high-frequency details are transferred from the HR source to the LR video. The DCT-domain algorithm is much faster than example-based SR in the spatial domain [6] because of a reduction in search dimensionality, which is a direct result of the compact and uncorrelated DCT representation. Fast searching techniques like tree-structure vector quantization [16] and coherence search [1] are also key to the improved efficiency. Preliminary results on an MJPEG sequence show promising results for the DCT-domain SR synthesis approach.

  6. Australia's continental-scale acoustic tracking database and its automated quality control process

    Science.gov (United States)

    Hoenner, Xavier; Huveneers, Charlie; Steckenreuter, Andre; Simpfendorfer, Colin; Tattersall, Katherine; Jaine, Fabrice; Atkins, Natalia; Babcock, Russ; Brodie, Stephanie; Burgess, Jonathan; Campbell, Hamish; Heupel, Michelle; Pasquer, Benedicte; Proctor, Roger; Taylor, Matthew D.; Udyawer, Vinay; Harcourt, Robert

    2018-01-01

    Our ability to predict species responses to environmental changes relies on accurate records of animal movement patterns. Continental-scale acoustic telemetry networks are increasingly being established worldwide, producing large volumes of information-rich geospatial data. During the last decade, the Integrated Marine Observing System's Animal Tracking Facility (IMOS ATF) established a permanent array of acoustic receivers around Australia. Simultaneously, IMOS developed a centralised national database to foster collaborative research across the user community and quantify individual behaviour across a broad range of taxa. Here we present the database and quality control procedures developed to collate 49.6 million valid detections from 1891 receiving stations. This dataset consists of detections for 3,777 tags deployed on 117 marine species, with distances travelled ranging from a few to thousands of kilometres. Connectivity between regions was only made possible by the joint contribution of IMOS infrastructure and researcher-funded receivers. This dataset constitutes a valuable resource facilitating meta-analysis of animal movement, distributions, and habitat use, and is important for relating species distribution shifts with environmental covariates.
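One representative automated quality-control test for acoustic detections of the kind this database applies is a speed filter: flag any detection that would require implausibly fast travel from the previous one. A sketch under assumed inputs; the field layout and the 10 m/s threshold are illustrative, not the IMOS ATF quality-control rules:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flag_implausible(detections, max_speed_ms=10.0):
    """detections: time-sorted (epoch_s, lat, lon); returns indices to flag."""
    flagged = []
    for i in range(1, len(detections)):
        t0, la0, lo0 = detections[i - 1]
        t1, la1, lo1 = detections[i]
        dt = t1 - t0
        if dt > 0 and haversine_m(la0, lo0, la1, lo1) / dt > max_speed_ms:
            flagged.append(i)
    return flagged

track = [
    (0,    -35.00, 151.00),
    (3600, -35.05, 151.00),  # ~5.6 km in 1 h: plausible
    (3660, -36.00, 151.00),  # ~105 km in 1 min: flagged
]
```

On this toy track, `flag_implausible(track)` returns `[2]`: only the last detection fails the speed test.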

  7. Identifying Measures Used for Assessing Quality of YouTube Videos with Patient Health Information: A Review of Current Literature.

    Science.gov (United States)

    Gabarron, Elia; Fernandez-Luque, Luis; Armayones, Manuel; Lau, Annie Ys

    2013-02-28

    Recent publications on YouTube have advocated its potential for patient education. However, a reliable description of what could be considered quality information for patient education on YouTube is missing. The objective was to identify topics associated with the concept of quality information for patient education on YouTube in the scientific literature. A literature review was performed in MEDLINE, ISI Web of Knowledge, Scopus, and PsycINFO. Abstract selection was first conducted by two independent reviewers; discrepancies were discussed in a second abstract review with two additional independent reviewers. Full texts of selected papers were analyzed, looking for concepts, definitions, and topics used by their authors that focused on the quality of information on YouTube for patient education. In total, 456 abstracts were extracted and 13 papers meeting eligibility criteria were analyzed. Concepts identified related to quality of information for patient education are categorized as expert-driven, popularity-driven, or heuristic-driven measures. These include (in descending order): (1) quality of content in 10/13 (77%), (2) view count in 9/13 (69%), (3) health professional opinion in 8/13 (62%), (4) adequate length or duration in 6/13 (46%), (5) public ratings in 5/13 (39%), (6) adequate title, tags, and description in 5/13 (39%), (7) good description or a comprehensive narrative in 4/13 (31%), (8) evidence-based practices included in video in 4/13 (31%), (9) suitability as a teaching tool in 4/13 (31%), (10) technical quality in 4/13 (31%), (11) credentials provided in video in 4/13 (31%), (12) enough amount of content to identify its objective in 3/13 (23%), and (13) viewership share in 2/13 (15%). Our review confirms that the current topics linked to quality of information for patient education on YouTube are unclear and not standardized. Although expert-driven, popularity-driven, or heuristic-driven measures are used as proxies to estimate the quality of video information

  8. Delivering stable high-quality video: an SDN architecture with DASH assisting network elements

    NARCIS (Netherlands)

    J.W.M. Kleinrouweler (Jan Willem); S. Cabrero Barros (Sergio); P.S. Cesar Garcia (Pablo Santiago)

    2016-01-01

    textabstractDynamic adaptive streaming over HTTP (DASH) is a simple, but effective, technology for video streaming over the Internet. It provides adaptive streaming while being highly scalable at the side of the content providers. However, the mismatch between TCP and the adaptive bursty nature of

  9. Dependency of human target detection performance on clutter and quality of supporting image analysis algorithms in a video surveillance task

    Science.gov (United States)

    Huber, Samuel; Dunau, Patrick; Wellig, Peter; Stein, Karin

    2017-10-01

    Background: In target detection, success rates depend strongly on human observer performance. Two prior studies tested the contributions of target detection algorithms and prior training sessions. The aim of this Swiss-German cooperation study was to evaluate the dependency of human observer performance on the quality of supporting image analysis algorithms. Methods: The participants were presented with 15 different video sequences. Their task was to detect all targets in the shortest possible time. Each video sequence showed a heavily cluttered simulated public area from a different viewing angle. In each video sequence, the number of avatars in the area was altered between 100, 150 and 200 subjects. The proportion of targets appearing was kept at 10%. The number of marked targets varied from 0, 5, 10, 20 up to 40 marked subjects while keeping the positive predictive value of the detection algorithm at 20%. During the task, workload level was assessed by applying an acoustic secondary task. Detection rates and detection times for the targets were analyzed using inferential statistics. Results: The study found Target Detection Time to increase and Target Detection Rate to decrease with increasing numbers of avatars. The same is true for the Secondary Task Reaction Time, while there was no effect on Secondary Task Hit Rate. Furthermore, we found a trend toward a u-shaped correlation between the number of markings and the Secondary Task Reaction Time, indicating increased workload. Conclusion: The trial results may indicate useful criteria for the design of training and support of observers in observational tasks.

  10. SU-E-T-255: Development of a Michigan Quality Assurance (MQA) Database for Clinical Machine Operations

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, D [University of Michigan Hospital, Ann Arbor, MI (United States)

    2015-06-15

    Purpose: A unified database system was developed to allow accumulation, review and analysis of quality assurance (QA) data for measurement, treatment, imaging and simulation equipment in our department. Recording these data in a database allows a unified and structured approach to review and analysis of data gathered using commercial database tools. Methods: A clinical database was developed to track records of quality assurance operations on linear accelerators, a computed tomography (CT) scanner, a high dose rate (HDR) afterloader and imaging systems such as on-board imaging (OBI) and Calypso in our department. The database was developed using Microsoft Access and the Visual Basic for Applications (VBA) programming interface. Separate modules were written for accumulation, review and analysis of daily, monthly and annual QA data. All modules were designed to use structured query language (SQL) as the basis of data accumulation and review. The SQL strings are dynamically re-written at run time. The database also features embedded documentation, storage of documents produced during QA activities and the ability to annotate all data within the database. Tests are defined in a set of tables that define test type, specific value, and schedule. Results: Daily, monthly and annual QA data have been taken in parallel with established procedures to test MQA. The database has been used to aggregate data across machines to examine the consistency of machine parameters and operations within the clinic for several months. Conclusion: The MQA application has been developed as an interface to a commercially available SQL engine (JET 5.0) and a standard database back-end. The MQA system has been used for several months for routine data collection. The system is robust, relatively simple to extend and can be migrated to a commercial SQL server.
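The pattern this record describes, QA tests defined in tables (type, value, schedule) with review queries assembled as SQL at run time, can be sketched as follows. SQLite stands in for the Access/JET back-end, and the schema, machine names, and tolerance values are hypothetical:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE tests (test_id INTEGER PRIMARY KEY, name TEXT,
                        schedule TEXT, tolerance REAL);
    CREATE TABLE results (test_id INTEGER, machine TEXT,
                          measured REAL, taken_on TEXT);
""")
db.executemany("INSERT INTO tests VALUES (?, ?, ?, ?)",
               [(1, "output_constancy", "daily", 2.0),
                (2, "laser_alignment", "monthly", 1.0)])
db.executemany("INSERT INTO results VALUES (?, ?, ?, ?)",
               [(1, "LINAC1", 1.5, "2015-06-01"),
                (1, "LINAC1", 2.7, "2015-06-02"),
                (1, "LINAC2", 0.4, "2015-06-01")])

def out_of_tolerance(schedule):
    # The query string is assembled at run time, as in the MQA review
    # modules; values are bound as parameters rather than spliced in.
    sql = ("SELECT r.machine, r.measured, r.taken_on "
           "FROM results r JOIN tests t USING (test_id) "
           "WHERE t.schedule = ? AND ABS(r.measured) > t.tolerance")
    return db.execute(sql, (schedule,)).fetchall()
```

`out_of_tolerance("daily")` returns only the LINAC1 reading of 2.7, the one measurement exceeding its test's tolerance.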

  11. SU-E-T-255: Development of a Michigan Quality Assurance (MQA) Database for Clinical Machine Operations

    International Nuclear Information System (INIS)

    Roberts, D

    2015-01-01

    Purpose: A unified database system was developed to allow accumulation, review and analysis of quality assurance (QA) data for measurement, treatment, imaging and simulation equipment in our department. Recording these data in a database allows a unified and structured approach to review and analysis of data gathered using commercial database tools. Methods: A clinical database was developed to track records of quality assurance operations on linear accelerators, a computed tomography (CT) scanner, a high dose rate (HDR) afterloader and imaging systems such as on-board imaging (OBI) and Calypso in our department. The database was developed using Microsoft Access and the Visual Basic for Applications (VBA) programming interface. Separate modules were written for accumulation, review and analysis of daily, monthly and annual QA data. All modules were designed to use structured query language (SQL) as the basis of data accumulation and review. The SQL strings are dynamically re-written at run time. The database also features embedded documentation, storage of documents produced during QA activities and the ability to annotate all data within the database. Tests are defined in a set of tables that define test type, specific value, and schedule. Results: Daily, monthly and annual QA data have been taken in parallel with established procedures to test MQA. The database has been used to aggregate data across machines to examine the consistency of machine parameters and operations within the clinic for several months. Conclusion: The MQA application has been developed as an interface to a commercially available SQL engine (JET 5.0) and a standard database back-end. The MQA system has been used for several months for routine data collection. The system is robust, relatively simple to extend and can be migrated to a commercial SQL server.

  12. Video x-ray progressive scanning: new technique for decreasing x-ray exposure without decreasing image quality during cardiac catheterization

    International Nuclear Information System (INIS)

    Holmes, D.R. Jr.; Bove, A.A.; Wondrow, M.A.; Gray, J.E.

    1986-01-01

    A newly developed video x-ray progressive scanning system improves image quality, decreases radiation exposure, and can be added to any pulsed fluoroscopic x-ray system using a video display without major system modifications. With use of progressive video scanning, the radiation entrance exposure rate measured with a vascular phantom was decreased by 32 to 53% in comparison with a conventional fluoroscopic x-ray system. In addition to this substantial decrease in radiation exposure, the quality of the image was improved because of less motion blur and artifact. Progressive video scanning has the potential for widespread application to all pulsed fluoroscopic x-ray systems. Use of this technique should make cardiac catheterization procedures and all other fluoroscopic procedures safer for the patient and the involved medical and paramedical staff.

  13. Broadcast-quality-stereoscopic video in a time-critical entertainment and corporate environment

    Science.gov (United States)

    Gay, Jean-Philippe

    1995-03-01

    "reality present: Peter Gabriel and Cirque du Soleil" is a 12 minute original work directed and produced by Doug Brown, Jean-Philippe Gay & A. Coogan, which showcases creative content applications of commercial stereoscopic video equipment. For production, a complete equipment package including a Steadicam mount was used in support of the Ikegami LK-33 camera. Remote production units were fielded in the time-critical, on-stage and off-stage environments of two major live concerts: Peter Gabriel's Secret World performance at the San Diego Sports Arena, and Cirque du Soleil's Saltimbanco performance in Chicago. Twin 60 Hz video channels were captured on Beta SP for maximum post-production flexibility. Digital post-production and field-sequential mastering were effected in D-2 format at studio facilities in Los Angeles. The program had its world premiere before a large public at the World of Music, Arts and Dance festivals in Los Angeles and San Francisco in late 1993. It was presented to the artists in Los Angeles, Montreal and Washington D.C. Additional presentations have been made using a broad range of commercial and experimental stereoscopic video equipment, including projection systems, LCD and passive eyewear, and digital signal processors. Technical packages for live presentation have been fielded on site and off, through to the present.

  14. An automated DICOM database capable of arbitrary data mining (including radiation dose indicators) for quality monitoring.

    Science.gov (United States)

    Wang, Shanshan; Pavlicek, William; Roberts, Catherine C; Langer, Steve G; Zhang, Muhong; Hu, Mengqi; Morin, Richard L; Schueler, Beth A; Wellnitz, Clinton V; Wu, Teresa

    2011-04-01

    The U.S. National Press has brought to full public discussion concerns regarding the use of medical radiation, specifically x-ray computed tomography (CT), in diagnosis. A need exists for developing methods whereby assurance is given that all diagnostic medical radiation use is properly prescribed, and all patients' radiation exposure is monitored. The "DICOM Index Tracker©" (DIT) transparently captures desired digital imaging and communications in medicine (DICOM) tags from CT, nuclear imaging equipment, and other DICOM devices across an enterprise. Its initial use is recording, monitoring, and providing automatic alerts to medical professionals of excursions beyond internally determined trigger action levels of radiation. A flexible knowledge base, aware of equipment in use, enables automatic alerts to system administrators of newly identified equipment models or software versions so that DIT can be adapted to the new equipment or software. A dosimetry module accepts mammography breast organ dose, skin air kerma values from XA modalities, exposure indices from computed radiography, etc. upon receipt. The American Association of Physicists in Medicine recommended a methodology for effective dose calculations which are performed with CT units having DICOM structured dose reports. Web interface reporting is provided for accessing the database in real-time. DIT is DICOM-compliant and, thus, is standardized for international comparisons. Automatic alerts currently in use include: email, cell phone text message, and internal pager text messaging. This system extends the utility of DICOM for standardizing the capturing and computing of radiation dose as well as other quality measures.
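The effective-dose calculation this record mentions for CT follows the widely used approximation E ≈ k × DLP, where DLP is the dose-length product from the structured dose report and k is a body-region conversion coefficient in mSv per mGy·cm. A sketch of that computation plus a trigger-action check in the spirit of DIT's automatic alerts; the k values below are the commonly tabulated adult coefficients, and the 20 mSv trigger level is purely illustrative:

```python
# Adult DLP-to-effective-dose conversion coefficients (mSv / (mGy*cm)),
# as tabulated in AAPM Report 96; pediatric values differ.
K_FACTORS = {
    "head": 0.0021,
    "neck": 0.0059,
    "chest": 0.014,
    "abdomen_pelvis": 0.015,
}

def effective_dose_msv(dlp_mgy_cm, region):
    """Estimate effective dose (mSv) from DLP for a scanned body region."""
    return K_FACTORS[region] * dlp_mgy_cm

def dose_alert(dlp_mgy_cm, region, trigger_msv=20.0):
    """True when the estimated dose exceeds the trigger action level."""
    return effective_dose_msv(dlp_mgy_cm, region) > trigger_msv
```

For example, a chest CT with a DLP of 1000 mGy·cm gives an estimated effective dose of 14 mSv, below the illustrative trigger level.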

  15. Everything is ok on YouTube! Quality assessment of YouTube videos on the topic of phacoemulsification in eyes with small pupil.

    Science.gov (United States)

    Aykut, Aslan; Kukner, Amber Senel; Karasu, Bugra; Palancıglu, Yeliz; Atmaca, Fatih; Aydogan, Tumay

    2018-01-22

    Usage of YouTube as an educational tool is gaining attention in academic research. To date, there has been no study of the content and quality of eye surgery videos on YouTube. The aim of this study was to analyze YouTube videos on phacoemulsification in eyes with small pupil. We searched for the phrases "small pupil cataract surgery," "small pupil phacoemulsification," "small pupil cataract surgery complications," and "small pupil phacoemulsification complications" in January 2015. Each resulting video was evaluated by all authors, and Krippendorff's alpha was calculated to measure agreement. Videos were classified according to pupil size (small/very small) at the beginning of the surgery, and according to whether pupillary diameter was large enough to continue surgery safely after pupillary dilation by the surgeon in the video (safe/not safe). Methods of dilatation were also analyzed. Any stated ocular comorbidity or surgical complications were noted. A total of 96 videos were reviewed. No mechanical intervention for pupillary dilatation was performed in 46 videos. Fifty-eight operated eyes had no stated ocular comorbidity. Ninety-five operations ended successfully without major complication. There was fair agreement between the evaluators regarding pupil sizes (Kα = 0.670) but poor agreement regarding safety (Kα = 0.337). YouTube videos on small pupil phacoemulsification have low complication rates compared with the literature, although no reliable mechanical dilatation methods are used in almost half of these videos. Until YouTube's place in e-learning becomes clearer, we suggest that viewers be cautious regarding small pupil phacoemulsification videos on YouTube.

  16. Impact of database quality in knowledge-based treatment planning for prostate cancer.

    Science.gov (United States)

    Wall, Phillip D H; Carver, Robert L; Fontenot, Jonas D

    2018-03-13

    This article investigates dose-volume prediction improvements in a common knowledge-based planning (KBP) method using a Pareto plan database compared with a conventional clinical plan database. Two plan databases were created using retrospective, anonymized data from 124 volumetric modulated arc therapy (VMAT) prostate cancer patients. The clinical plan database (CPD) contained planning data from each patient's clinically treated VMAT plan, manually optimized by various planners. The multicriteria optimization database (MCOD) contained Pareto-optimal plan data from VMAT plans created using a standardized multicriteria optimization protocol. Overlap volume histograms, incorporating fractional organ-at-risk volumes only within the treatment fields, were computed for each patient and used to match new patient anatomy to similar database patients. For each database patient, CPD and MCOD KBP predictions were generated for D10, D30, D50, D65, and D80 of the bladder and rectum in a leave-one-out manner. Prediction achievability was evaluated through a replanning study on a subset of 31 randomly selected database patients, using the best KBP predictions, regardless of plan database origin, as planning goals. MCOD predictions were significantly lower than CPD predictions for all five bladder dose-volumes and for rectum D50 (P = .004) and D65. The quality of the plan database affects the performance and achievability of dose-volume predictions from a common knowledge-based planning approach for prostate cancer. Bladder and rectum dose-volume predictions derived from a database of standardized Pareto-optimal plans were compared with those derived from clinical plans manually designed by various planners. Dose-volume predictions from the Pareto plan database were significantly lower overall than those from the clinical plan database, without compromising achievability. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Coverage and quality: A comparison of Web of Science and Scopus databases for reporting faculty nursing publication metrics.

    Science.gov (United States)

    Powell, Kimberly R; Peterson, Shenita R

    Web of Science and Scopus are the leading databases of scholarly impact. Recent studies outside the field of nursing report differences in their journal coverage and quality. This study comparatively analyzed the reported impact of nursing publications. Journal coverage of the field of nursing was compared between the two databases. Additionally, publications by 2014 nursing faculty were collected from both databases and compared for overall coverage and reported quality, as modeled by SCImago Journal Rank, peer review status, and MEDLINE inclusion. Individual author impact, modeled by the h-index, was calculated in each database for comparison. Scopus offered significantly higher journal coverage. For 2014 faculty publications, 100% of journals were found in Scopus; Web of Science offered 82%. No significant difference was found in the quality of reported journals. Author h-indices were found to be higher in Scopus. When reporting faculty publications and scholarly impact, academic nursing programs may be better represented by Scopus, without compromising journal quality. Programs with strong interdisciplinary work should examine all areas of strength to ensure appropriate coverage. Copyright © 2017 Elsevier Inc. All rights reserved.
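    The h-index used above for author impact has a simple operational definition: the largest h such that h of the author's papers each have at least h citations. A minimal sketch (function name and citation lists are illustrative, not from the study):

```python
def h_index(citations):
    """h-index: largest h such that h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h
```

    Because Scopus typically indexes more of an author's papers (and citing papers) than Web of Science, the same author can receive a higher h-index there, as the study observed.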

  18. A quality assurance study on the accuracy of measuring physical function under current conditions for use of clinical video telehealth.

    Science.gov (United States)

    Hoenig, Helen; Tate, Latoya; Dumbleton, Sarina; Montgomery, Christy; Morgan, Michelle; Landerman, Lawrence R; Caves, Kevin

    2013-05-01

    To determine whether conditions for use of clinical video telehealth technology might affect the accuracy of measures of physical function. Repeated measures. Veterans Administration Medical Center. Three healthy adult volunteers for a sample size of n=30 independent trials for each of 3 physical function tasks. None. Three tasks capturing differing aspects of physical function: fine-motor coordination (number of finger taps in 30s), gross-motor coordination (number of gait deviations in 10ft [3.05m]), and clinical spatial relations (identifying the proper height for a cane randomly preset ±0-2in [5.1cm] from optimal), with performance simultaneously assessed in person and video recorded. Interrater reliability and criterion validity were determined for the measurement of these 3 tasks scored according to 5 methods: (1) in person (community standard), (2) slow motion review of the video recording (criterion standard), and (3-5) full speed review at 3 Internet bandwidths (64 kbps, 384 kbps, and 768 kbps). Fine-motor coordination-Interrater reliability was variable (r=.43-.81) and criterion validity was poor at 64 kbps and 384 kbps, but both were acceptable at 768 kbps (reliability r=.74, validity β=.81). Gross-motor coordination-Interrater reliability was variable (range r=.53-.75) and criterion validity was poor at all bandwidths (β=.28-.47). Motionless spatial relations-Excellent reliability (r=.92-.97) and good criterion validity (β=.84-.89) at all the tested bandwidths. Internet bandwidth had differing effects on measurement validity and reliability for the fine-motor task, the gross-motor task, and spatial relations, with results for some tasks at some transmission speeds well below acceptable quality standards and community standards. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
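    The reliability figures above (r values) are correlations between scores from two measurement methods, e.g. in-person versus video review. A minimal Pearson correlation sketch (function name and rating series are illustrative):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length score series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

    Values near 1 indicate the two scoring methods rank trials almost identically; the study's .43-.81 range for fine-motor scoring reflects substantial disagreement at the lower bandwidths.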

  19. An Autonomic Framework for Integrating Security and Quality of Service Support in Databases

    Science.gov (United States)

    Alomari, Firas

    2013-01-01

    The back-end databases of multi-tiered applications are a major data security concern for enterprises. The abundance of these systems and the emergence of new and different threats require multiple and overlapping security mechanisms. Therefore, providing multiple and diverse database intrusion detection and prevention systems (IDPS) is a critical…

  20. Quality, language, subdiscipline and promotion were associated with article accesses on Physiotherapy Evidence Database (PEDro).

    Science.gov (United States)

    Yamato, Tiê P; Arora, Mohit; Stevens, Matthew L; Elkins, Mark R; Moseley, Anne M

    2018-03-01

    To quantify the relationship between the number of times articles are accessed on the Physiotherapy Evidence Database (PEDro) and the articles' characteristics. A secondary aim was to examine the relationship between accesses and the number of citations of articles. The study was conducted to derive prediction models for the number of accesses of articles indexed on PEDro from factors that may influence an article's accesses. All articles available on PEDro from August 2014 to January 2015 were included. We extracted variables relating to the algorithm used to present PEDro search results (research design, year of publication, PEDro score, and source of systematic review (Cochrane or non-Cochrane)) plus language, subdiscipline of physiotherapy, and whether articles were promoted to PEDro users. Three predictive models were examined using multiple regression analysis. Citation counts and journal impact factors were also downloaded. There were 29,313 articles indexed in this period. We identified seven factors that predicted the number of accesses. More accesses were noted for factors related to the algorithm used to present PEDro search results (synthesis research (i.e., guidelines and reviews), recent articles, Cochrane reviews, and higher PEDro scores) plus publication in English and promotion to PEDro users. The musculoskeletal, neurology, orthopaedics, sports, and paediatrics subdisciplines were associated with more accesses. We also found no association between the number of accesses and citations. The number of times an article is accessed on PEDro is partly predicted by how condensed and high-quality the evidence it contains is. Copyright © 2017 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  1. Content-based video retrieval by example video clip

    Science.gov (United States)

    Dimitrova, Nevenka; Abdel-Mottaleb, Mohamed

    1997-01-01

    This paper presents a novel approach for video retrieval from a large archive of MPEG or Motion JPEG compressed video clips. We introduce a retrieval algorithm that takes a video clip as a query and searches the database for clips with similar contents. Video clips are characterized by a sequence of representative frame signatures, which are constructed from DC coefficients and motion information ('DC+M' signatures). The similarity between two video clips is determined by using their respective signatures. This method facilitates retrieval of clips for the purpose of video editing, broadcast news retrieval, or copyright violation detection.

  2. MOnthly TEmperature DAtabase of Spain 1951-2010: MOTEDAS. (1) Quality control

    Science.gov (United States)

    Peña-Angulo, Dhais; Cortesi, Nicola; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; González-Hidalgo, José Carlos

    2014-05-01

    The HIDROCAES project (Impactos Hidrológicos del Calentamiento Global en España, Spanish Ministry of Research CGL2011-27574-C02-01) focuses on high-resolution analysis of warming processes over continental Spain during 1951-2010. To this end, the Department of Geography (University of Zaragoza, Spain), the Hydrometeorological Service (Brno Division, Czech Republic) and the ISAC-CNR (Bologna, Italy) are developing the new dataset MOTEDAS (MOnthly TEmperature DAtabase of Spain), for which we present a collection of posters showing (1) the general structure of the dataset and its quality control; (2) analyses of the spatial correlation of monthly mean values of maximum (Tmax) and minimum (Tmin) temperature; (3) the series reconstruction process and the development of a high-resolution grid; (4) initial results of trend analyses of annual, seasonal and monthly mean range values. MOTEDAS was created after exhaustive analysis and quality control of the original digitized data of the Spanish National Meteorological Agency (Agencia Estatal de Meteorología, AEMET). Quality control was applied without any prior reconstruction, i.e. on the original series. From the total number of series stored in the AEMET archives (more than 4680), we selected only those with at least 10 years of data (i.e. 120 months; 3066 series) for quality control and reconstruction (see Poster MOTEDAS 3). Quality control included internal coherence checks (Tmax > Tmin, upper and lower thresholds of absolute values, etc.) and comparison with reference series (see Poster MOTEDAS 3, about reconstruction). Data were considered anomalous when the difference between the candidate and reference series was greater than three times the interquartile distance. The total number of monthly suspicious data recognized and discarded at the end of these analyses was 7832 for Tmin and 8063 for Tmax; they represent less than 0.8% of the original monthly data for both Tmax and Tmin. No spatial pattern was
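    The MOTEDAS anomaly rule, flagging a month when the candidate-minus-reference difference exceeds three times the interquartile distance, can be sketched as follows (the quartile estimator and function names are assumptions; the poster does not specify them):

```python
def interquartile_range(values):
    """IQR via linear interpolation between order statistics
    (an assumed estimator; the source does not name one)."""
    s = sorted(values)
    n = len(s)
    def quantile(q):
        pos = q * (n - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        return s[lo] + (s[hi] - s[lo]) * (pos - lo)
    return quantile(0.75) - quantile(0.25)

def flag_anomalies(candidate, reference, k=3.0):
    """Flag months where |candidate - reference| exceeds k times the
    IQR of the candidate-minus-reference differences."""
    diffs = [c - r for c, r in zip(candidate, reference)]
    threshold = k * interquartile_range(diffs)
    return [abs(d) > threshold for d in diffs]
```

    Flagged months would be the "suspicious data" discarded in the MOTEDAS quality control step.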

  3. Efficient data replication for the delivery of high-quality video content over P2P VoD advertising networks

    Science.gov (United States)

    Ho, Chien-Peng; Yu, Jen-Yu; Lee, Suh-Yin

    2011-12-01

    Recent advances in modern television systems have had profound consequences for the scalability, stability, and quality of transmitted digital data signals. This is of particular significance for peer-to-peer (P2P) video-on-demand (VoD) related platforms, faced with an immediate and growing demand for reliable service delivery. In response to demands for high-quality video, the key objectives in the construction of the proposed framework were user satisfaction with perceived video quality and the effective utilization of available resources on P2P VoD networks. This study developed a peer-based promoter to support online advertising in P2P VoD networks based on an estimation of video distortion prior to the replication of data stream chunks. The proposed technology enables the recovery of lost video using replicated stream chunks in real time. Load balance is achieved by adjusting the replication level of each candidate group according to the degree-of-distortion, thereby enabling a significant reduction in server load and increased scalability in the P2P VoD system. This approach also promotes the use of advertising as an efficient tool for commercial promotion. Results indicate that the proposed system efficiently satisfies the given fault tolerances.

  4. Development of a Coding Instrument to Assess the Quality and Content of Anti-Tobacco Video Games

    Science.gov (United States)

    Alber, Julia M.; Watson, Anna M.; Barnett, Tracey E.; Mercado, Rebeccah

    2015-01-01

    Previous research has shown the use of electronic video games as an effective method for increasing content knowledge about the risks of drugs and alcohol use for adolescents. Although best practice suggests that theory, health communication strategies, and game appeal are important characteristics for developing games, no instruments are currently available to examine the quality and content of tobacco prevention and cessation electronic games. This study presents the systematic development of a coding instrument to measure the quality, use of theory, and health communication strategies of tobacco cessation and prevention electronic games. Using previous research and expert review, a content analysis coding instrument measuring 67 characteristics was developed with three overarching categories: type and quality of games, theory and approach, and type and format of messages. Two trained coders applied the instrument to 88 games on four platforms (personal computer, Nintendo DS, iPhone, and Android phone) to field test the instrument. Cohen's kappa for each item ranged from 0.66 to 1.00, with an average kappa value of 0.97. Future research can adapt this coding instrument to games addressing other health issues. In addition, the instrument questions can serve as a useful guide for evidence-based game development. PMID:26167842

  5. Risk Factor, Job Stress and Quality of Life in Workers With Lower Extremity Pain Who Use Video Display Terminals.

    Science.gov (United States)

    Choi, Sehoon; Jang, Seong Ho; Lee, Kyu Hoon; Kim, Mi Jung; Park, Si-Bog; Han, Seung Hoon

    2018-02-01

    To investigate the general characteristics of video display terminal (VDT) workers with lower extremity pain, to identify the risk factors for work-related lower extremity pain, and to examine the relationship between work stress and health-related quality of life. A questionnaire about the general characteristics of the survey group and their musculoskeletal symptoms was used. Job stress was assessed with the Korean Occupational Stress Scale, and health-related quality of life with the medical outcome study 36-item Short Form Health Survey (SF-36). There were 1,711 subjects in the lower extremity pain group and 2,208 subjects in the control group. Age, sex, hobbies, and feeling of loading affected lower extremity pain, as determined in a crossover analysis of all variables with and without lower extremity pain. There was no statistically significant difference between the pain and control groups in job stress or SF-36 values. Job stress in VDT workers was higher than average, and quality of life decreased as stress increased. Factors such as younger age, female sex, hobbies other than exercise, and feeling of loading influenced workers' lower extremity pain. Further long-term follow-up and supplementary studies are needed to identify risk factors for future lower extremity pain, taking into account ergonomic factors such as worker posture.

  6. A novel frame-level constant-distortion bit allocation for smooth H.264/AVC video quality

    Science.gov (United States)

    Liu, Li; Zhuang, Xinhua

    2009-01-01

    It is known that quality fluctuation has a major negative effect on visual perception. In previous work, we introduced a constant-distortion bit allocation method [1] for the H.263+ encoder. However, the method in [1] cannot be adapted directly to the newer H.264/AVC encoder because of the well-known chicken-and-egg dilemma resulting from the rate-distortion optimization (RDO) decision process. To solve this problem, we propose a new two-stage constant-distortion bit allocation (CDBA) algorithm with enhanced rate control for the H.264/AVC encoder. In stage 1, the algorithm performs the RD optimization process with a constant quantization parameter QP. Based on the prediction residual signals from stage 1 and the target distortion for smooth video quality, the frame-level bit target is allocated using a closed-form approximation of the rate-distortion relationship similar to [1], and a fast stage-2 encoding process is performed with enhanced basic-unit rate control. Experimental results show that, compared with the original rate control algorithm provided by the H.264/AVC reference software JM12.1, the proposed constant-distortion frame-level bit allocation scheme reduces quality fluctuation and delivers much smoother PSNR on all test sequences.
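    The PSNR smoothness the abstract reports is measured per frame with the standard definition from mean squared error. A minimal sketch (flat pixel lists and names are illustrative):

```python
import math

def psnr(orig, recon, max_val=255.0):
    """PSNR in dB between an original and a reconstructed frame,
    given as flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(max_val ** 2 / mse)
```

    Constant-distortion bit allocation aims to keep this per-frame value nearly constant across a sequence rather than maximizing its average.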

  7. Development of a Coding Instrument to Assess the Quality and Content of Anti-Tobacco Video Games.

    Science.gov (United States)

    Alber, Julia M; Watson, Anna M; Barnett, Tracey E; Mercado, Rebeccah; Bernhardt, Jay M

    2015-07-01

    Previous research has shown the use of electronic video games as an effective method for increasing content knowledge about the risks of drugs and alcohol use for adolescents. Although best practice suggests that theory, health communication strategies, and game appeal are important characteristics for developing games, no instruments are currently available to examine the quality and content of tobacco prevention and cessation electronic games. This study presents the systematic development of a coding instrument to measure the quality, use of theory, and health communication strategies of tobacco cessation and prevention electronic games. Using previous research and expert review, a content analysis coding instrument measuring 67 characteristics was developed with three overarching categories: type and quality of games, theory and approach, and type and format of messages. Two trained coders applied the instrument to 88 games on four platforms (personal computer, Nintendo DS, iPhone, and Android phone) to field test the instrument. Cohen's kappa for each item ranged from 0.66 to 1.00, with an average kappa value of 0.97. Future research can adapt this coding instrument to games addressing other health issues. In addition, the instrument questions can serve as a useful guide for evidence-based game development.
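    The per-item Cohen's kappa reported above measures agreement between the two trained coders corrected for chance. A minimal two-rater sketch (function name and code labels are illustrative):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items (nominal codes)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # chance agreement: product of each rater's marginal label frequencies
    expected = sum((ca[lab] / n) * (cb[lab] / n) for lab in set(ca) | set(cb))
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1.0 - expected)
```

    A kappa of 0 means agreement no better than chance; the study's average of 0.97 indicates near-perfect coder agreement.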

  8. Patient characteristics of smokers undergoing lumbar spine surgery: an analysis from the Quality Outcomes Database.

    Science.gov (United States)

    Asher, Anthony L; Devin, Clinton J; McCutcheon, Brandon; Chotai, Silky; Archer, Kristin R; Nian, Hui; Harrell, Frank E; McGirt, Matthew; Mummaneni, Praveen V; Shaffrey, Christopher I; Foley, Kevin; Glassman, Steven D; Bydon, Mohamad

    2017-12-01

    OBJECTIVE In this analysis the authors compare the characteristics of smokers and nonsmokers using demographic, socioeconomic, and comorbidity variables. They also investigate which of these characteristics are most strongly associated with smoking status. Finally, the authors investigate whether the association between known patient risk factors and disability outcome is differentially modified by patient smoking status in those who have undergone surgery for lumbar degeneration. METHODS A total of 7547 patients undergoing degenerative lumbar surgery were entered into a prospective multicenter registry (Quality Outcomes Database [QOD]). A retrospective analysis of the prospectively collected data was conducted. Patients were dichotomized as smokers (current smokers) and nonsmokers. Multivariable logistic regression analysis fitted for patient smoking status, with subsequent measurement of variable importance, was performed to identify the patient characteristics most strongly associated with smoking status. Multivariable linear regression models fitted for 12-month Oswestry Disability Index (ODI) scores in subsets of smokers and nonsmokers were used to investigate whether differential effects of risk factors by smoking status might be present. RESULTS In total, 18% (n = 1365) of patients were smokers and 82% (n = 6182) were nonsmokers. In a multivariable logistic regression analysis, sex was among the factors significantly associated with patients' smoking status, and patients with coronary artery disease had greater odds of being a smoker (p = 0.044). Patients' propensity for smoking was also significantly associated with higher American Society of Anesthesiologists (ASA) class, with differential risk-factor effects observed between smokers and nonsmokers. CONCLUSIONS Using a large, national, multiinstitutional registry, the authors described the profile of patients who undergo lumbar spine surgery and its association with their smoking status.
Compared with nonsmokers, smokers were younger, male

  9. The Nencki Affective Picture System (NAPS): introduction to a novel, standardized, wide-range, high-quality, realistic picture database.

    Science.gov (United States)

    Marchewka, Artur; Zurawski, Łukasz; Jednoróg, Katarzyna; Grabowska, Anna

    2014-06-01

    Selecting appropriate stimuli to induce emotional states is essential in affective research. Only a few standardized affective stimulus databases have been created for auditory, language, and visual materials. Numerous studies have extensively employed these databases using both behavioral and neuroimaging methods. However, some limitations of the existing databases have recently been reported, including limited numbers of stimuli in specific categories or poor picture quality of the visual stimuli. In the present article, we introduce the Nencki Affective Picture System (NAPS), which consists of 1,356 realistic, high-quality photographs that are divided into five categories (people, faces, animals, objects, and landscapes). Affective ratings were collected from 204 mostly European participants. The pictures were rated according to the valence, arousal, and approach-avoidance dimensions using computerized bipolar semantic slider scales. Normative ratings for the categories are presented for each dimension. Validation of the ratings was obtained by comparing them to ratings generated using the Self-Assessment Manikin and the International Affective Picture System. In addition, the physical properties of the photographs are reported, including luminance, contrast, and entropy. The new database, with accompanying ratings and image parameters, allows researchers to select a variety of visual stimulus materials specific to their experimental questions of interest. The NAPS system is freely accessible to the scientific community for noncommercial use by request at http://naps.nencki.gov.pl.
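    Among the physical properties NAPS reports is image entropy. A common formulation is the Shannon entropy of the grayscale intensity histogram, sketched below (this exact estimator is an assumption, not confirmed by the abstract):

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    """Shannon entropy (bits) of a grayscale intensity histogram,
    given a flat list of pixel values."""
    n = len(pixels)
    counts = Counter(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

    Higher entropy corresponds to a more varied intensity distribution, one of several low-level properties researchers may want to balance across stimulus categories.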

  10. A No-Reference Modular Video Quality Prediction Model for H.265/HEVC and VP9 Codecs on a Mobile Device

    Directory of Open Access Journals (Sweden)

    Debajyoti Pal

    2017-01-01

    We propose a modular no-reference video quality prediction model for videos encoded with the H.265/HEVC and VP9 codecs and viewed on mobile devices. The impairments that can affect video transmission are classified into two broad types, depending on which layer of the TCP/IP model they originate from. Impairments from the network layer are called network QoS factors, while those from the application layer are called application/payload QoS factors. Initially, we treat the network and application QoS factors separately and determine the 1:1 relationship between each QoS factor and the corresponding perceived video quality (QoE). The mapping from the QoS to the QoE domain is based upon a decision variable that gives optimal performance. Next, within each group we choose multiple QoS factors and estimate the QoE of such multifactor-impaired videos using additive, multiplicative, and regressive approaches; we refer to the results as the integrated network and application QoE, respectively. Finally, we use a multiple regression approach to combine the network and application QoE into the final model. We also build the model with an Artificial Neural Network approach and compare its performance with the regressive approach.

  11. Improving quality of breast cancer surgery through development of a national breast cancer surgical outcomes (BRCASO) research database

    Directory of Open Access Journals (Sweden)

    Aiello Bowles Erin J

    2012-04-01

    Background Common measures of surgical quality are 30-day morbidity and mortality, which poorly describe breast cancer surgical quality given its extremely low morbidity and mortality rates. Several national quality programs have collected additional surgical quality measures; however, program participation is voluntary and results may not be generalizable to all surgeons. We developed the Breast Cancer Surgical Outcomes (BRCASO) database to capture meaningful breast cancer surgical quality measures among a non-voluntary sample, and to study variation in these measures across providers, facilities, and health plans. This paper describes our study protocol and data collection methods, and summarizes the strengths and limitations of these data. Methods We included 4524 women ≥18 years diagnosed with breast cancer between 2003 and 2008. All women whose initial breast cancer surgery was performed by a surgeon employed at the University of Vermont or at one of three Cancer Research Network (CRN) health plans were eligible for inclusion. From the CRN institutions, we collected electronic administrative data including tumor registry information; Current Procedural Terminology codes for breast cancer surgeries; and surgeon, surgical facility, and patient demographics. We supplemented electronic data with medical record abstraction to collect additional pathology and surgery detail. All data were manually abstracted at the University of Vermont. Results The CRN institutions pre-filled 30% (22 of 72) of elements using electronic data. The remaining elements, including detailed pathology margin status and breast and lymph node surgeries, required chart abstraction. The mean age was 61 years (range, 20-98 years); 70% of women were diagnosed with invasive ductal carcinoma, 20% with ductal carcinoma in situ, and 10% with invasive lobular carcinoma. 
Conclusions The BRCASO database is one of the largest, multi-site research resources of meaningful breast cancer surgical quality data

  12. Objective assessment of the impact of frame rate on video quality

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Korhonen, Jari; Forchhammer, Søren

    2012-01-01

    In this paper, we present a novel objective quality metric that takes the impact of frame rate into account. The proposed metric uses PSNR, frame rate and a content dependent parameter that can easily be obtained from spatial and temporal activity indices. The results have been validated on data ...

  13. Understanding the role of social context and user factors in video quality of experience

    NARCIS (Netherlands)

    Zhu, Y.; Heynderickx, I.E.J.; Redi, J.A.

    2015-01-01

    Quality of Experience is a concept to reflect the level of satisfaction of a user with a multimedia content, service or system. So far, the objective (i.e., computational) approaches to measure QoE have been mostly based on the analysis of the media technical properties. However, recent studies have

  14. Does peer coaching with video feedback improve the quality of teachers' reflections on own professional behaviour?

    NARCIS (Netherlands)

    J. van den Akker; Dr. Rita Schildwacht; Dr. S. Bolhuis

    2008-01-01

    Meetings with other professionals are considered crucial for enhancing the quality of teachers' reflections. However, little is yet known about how any beneficial effects of such meetings are brought about. This study explores the peer coach's roles and their influences on the learning processes of

  15. The Quality Control Algorithms Used in the Creation of NASA Kennedy Space Center Lightning Protection System Towers Meteorological Database

    Science.gov (United States)

    Orcutt, John M.; Brenton, James C.

    2016-01-01

    An accurate database of meteorological data is essential for designing any aerospace vehicle and for preparing launch commit criteria. Meteorological instrumentation was recently placed on the three Lightning Protection System (LPS) towers at Kennedy Space Center (KSC) launch complex 39B (LC-39B), providing a unique meteorological dataset at the launch complex over an extensive altitude range. Data records of temperature, dew point, relative humidity, wind speed, and wind direction are produced at 40, 78, 116, and 139 m on each tower. The Marshall Space Flight Center Natural Environments Branch (EV44) received an archive consisting of one-minute averaged measurements for the period of record January 2011 - April 2015. However, before the received database could be used, EV44 needed to remove any erroneous data through a comprehensive quality control (QC) process. The QC process applied to the LPS towers' meteorological data is similar to other QC processes developed by EV44, which were used in the creation of meteorological databases for other towers at KSC; it has been modified specifically for use with the LPS tower database. The QC process first includes a check of each individual sensor, removing any unrealistic data and checking the temporal consistency of each variable. Next, data from all three sensors at each height are checked against each other, checked against climatology, and checked for sensors that erroneously report a constant value. Then, a vertical consistency check of each variable at each tower is completed. Finally, the upwind sensor at each level is selected to minimize the influence of the towers and other structures at LC-39B on the measurements. The selection process for the upwind sensor implemented a study of tower-induced turbulence. 
This paper describes in detail the QC process, QC results, and the attributes of the LPS towers meteorological
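
The per-sensor checks described above (removal of unrealistic values, then a temporal-consistency test between consecutive one-minute records) can be sketched roughly as follows. The function name and thresholds are illustrative assumptions, not the actual EV44 algorithm:

```python
def qc_flags(temps, t_min=-20.0, t_max=45.0, max_step=2.0):
    """Flag one-minute temperature records that fail a gross range check or a
    temporal-consistency (spike) check. Thresholds are illustrative only."""
    flags = [False] * len(temps)
    for i, t in enumerate(temps):
        if t is None or not (t_min <= t <= t_max):
            flags[i] = True            # missing or unrealistic value
        elif i > 0 and temps[i - 1] is not None and abs(t - temps[i - 1]) > max_step:
            flags[i] = True            # implausible minute-to-minute jump
    return flags
```

The same pattern extends to the cross-sensor and vertical-consistency checks by comparing the three sensors at one height, or adjacent heights on one tower, instead of consecutive minutes.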

  16. Index of prolonged air leak score validation in case of video-assisted thoracoscopic surgery anatomical lung resection: results of a nationwide study based on the French national thoracic database, EPITHOR.

    Science.gov (United States)

    Orsini, Bastien; Baste, Jean Marc; Gossot, Dominique; Berthet, Jean Philippe; Assouad, Jalal; Dahan, Marcel; Bernard, Alain; Thomas, Pascal Alexandre

    2015-10-01

The incidence rate of prolonged air leak (PAL) after lobectomy, defined as any air leak persisting beyond 7 days, is estimated at between 6% and 15%. In 2011, the Epithor group elaborated an accurate predictive score for PAL after open lung resections, the so-called IPAL (index of prolonged air leak), from a nation-based surgical cohort constituted between 2004 and 2008. Since 2008, video-assisted thoracic surgery (VATS) has become popular in France among the thoracic surgical community, accounting for almost 14% of lobectomies in 2012. This minimally invasive approach has been reported to reduce the duration of chest tube drainage. The aim of our study was thus to validate the IPAL scoring system in patients who underwent VATS anatomical lung resections. We collected all anatomical VATS lung resections (lobectomy and segmentectomy) registered in the French national general thoracic surgery database (EPITHOR) between 2009 and 2012. The area under the receiver operating characteristic (ROC) curve estimated the discriminating value of the IPAL score. The slope value described the relation between the predicted and observed incidences of PAL. The Hosmer-Lemeshow test was also used to estimate the quality of adequacy between predicted and observed values. A total of 1233 patients were included: 1037 (84%) lobectomies and 196 (16%) segmentectomies. In 1099 cases (89.1%), the resection was performed for a malignant disease. Ninety-six patients (7.7%) presented with a PAL. The IPAL score provided a satisfactory predictive value, with an area under the ROC curve of 0.72 (0.67-0.77). The value of the slope, 1.25 (0.9-1.58), and the Hosmer-Lemeshow test (χ² = 11, P = 0.35) showed that predicted and observed values were adequate. The IPAL score is valid for estimating the predictive risk of PAL after VATS lung resections. It may thus a priori be used to characterize any surgical population submitted to potential preventive measures
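
The discrimination statistic reported here, the area under the ROC curve, can be computed from predicted risks and observed outcomes with the rank-based (Mann-Whitney) identity. This is a generic sketch on made-up data, not the EPITHOR analysis:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney identity:
    AUC = P(score of a random positive > score of a random negative),
    with ties counted as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 means the score discriminates no better than chance; the 0.72 reported above indicates moderate discrimination.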

  17. Assessing the quality of life history information in publicly available databases.

    Science.gov (United States)

    Thorson, James T; Cope, Jason M; Patrick, Wesley S

    2014-01-01

    Single-species life history parameters are central to ecological research and management, including the fields of macro-ecology, fisheries science, and ecosystem modeling. However, there has been little independent evaluation of the precision and accuracy of the life history values in global and publicly available databases. We therefore develop a novel method based on a Bayesian errors-in-variables model that compares database entries with estimates from local experts, and we illustrate this process by assessing the accuracy and precision of entries in FishBase, one of the largest and oldest life history databases. This model distinguishes biases among seven life history parameters, two types of information available in FishBase (i.e., published values and those estimated from other parameters), and two taxa (i.e., bony and cartilaginous fishes) relative to values from regional experts in the United States, while accounting for additional variance caused by sex- and region-specific life history traits. For published values in FishBase, the model identifies a small positive bias in natural mortality and negative bias in maximum age, perhaps caused by unacknowledged mortality caused by fishing. For life history values calculated by FishBase, the model identified large and inconsistent biases. The model also demonstrates greatest precision for body size parameters, decreased precision for values derived from geographically distant populations, and greatest between-sex differences in age at maturity. We recommend that our bias and precision estimates be used in future errors-in-variables models as a prior on measurement errors. This approach is broadly applicable to global databases of life history traits and, if used, will encourage further development and improvements in these databases.
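
The core comparison underlying the model (database entries versus local expert estimates, per life history parameter) can be illustrated with a crude fixed-effects stand-in: the mean log-ratio of database to expert values. The function below is a hypothetical simplification that ignores the measurement-error structure the Bayesian errors-in-variables model accounts for:

```python
import math
import statistics as st

def log_bias(db_values, expert_values):
    """Mean log-ratio of database entries to paired expert estimates for one
    life history parameter: 0 = unbiased, >0 = database biased high."""
    ratios = [math.log(d / e) for d, e in zip(db_values, expert_values)]
    return st.mean(ratios)
```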

  18. The scientific production on data quality in big data: a study in the Web of Science database

    Directory of Open Access Journals (Sweden)

    Priscila Basto Fagundes

    2017-11-01

Full Text Available The big data theme has attracted growing interest from researchers in different areas of knowledge, among them information scientists, who need to understand its concepts and applications in order to contribute new proposals for managing the information generated from the data stored in these environments. The objective of this article is to present a survey of publications on data quality in big data indexed in the Web of Science database through 2016. It presents the total number of publications indexed in the database, the number of publications per year, the geographic origin of the research, and a synthesis of the studies found. The survey of the database was conducted in July 2017 and returned a total of 23 publications. To make a summary of the publications possible, the full texts were searched for on the Internet and those available were read. The survey shows that publications on data quality in big data began in 2013, and that most present literature reviews, with few effective proposals for monitoring and managing data quality in environments with large volumes of data. This survey is therefore intended to contribute to and foster new research on data quality in big data environments.

  19. A new multimodal interactive way of subjective scoring of 3D video quality of experience

    Science.gov (United States)

    Kim, Taewan; Lee, Kwanghyun; Lee, Sanghoon; Bovik, Alan C.

    2014-03-01

People who watch today's 3D visual programs, such as 3D cinema, 3D TV and 3D games, experience wide and dynamically varying ranges of 3D visual immersion and 3D quality of experience (QoE). It is necessary to be able to deploy reliable methodologies that measure each viewer's subjective experience. We propose a new methodology that we call Multimodal Interactive Continuous Scoring of Quality (MICSQ). MICSQ is composed of a device interaction process between the 3D display and a separate device (PC, tablet, etc.) used as an assessment tool, and a human interaction process between the subject(s) and the device. The scoring process is multimodal, using aural and tactile cues to help engage and focus the subject(s) on their tasks. Moreover, the wireless device interaction process makes it possible for multiple subjects to assess 3D QoE simultaneously in a large space such as a movie theater, and at different visual angles and distances.

  20. Sound for digital video

    CERN Document Server

    Holman, Tomlinson

    2013-01-01

Achieve professional-quality sound on a limited budget! Harness all-new, Hollywood-style audio techniques to bring your independent film and video productions to the next level. In Sound for Digital Video, Second Edition, industry experts Tomlinson Holman and Arthur Baum give you the tools and knowledge to apply recent advances in audio capture, video recording, editing workflow, and mixing to your own film or video with stunning results. This fresh edition is chock-full of techniques, tricks, and workflow secrets that you can apply to your own projects from preproduction

  1. Big hits on the small screen: an evaluation of concussion-related videos on YouTube.

    Science.gov (United States)

    Williams, David; Sullivan, S John; Schneiders, Anthony G; Ahmed, Osman Hassan; Lee, Hopin; Balasundaram, Arun Prasad; McCrory, Paul R

    2014-01-01

    YouTube is one of the largest social networking websites, allowing users to upload and view video content that provides entertainment and conveys many messages, including those related to health conditions, such as concussion. However, little is known about the content of videos relating to concussion. To identify and classify the content of concussion-related videos available on YouTube. An observational study using content analysis. YouTube's video database was systematically searched using 10 search terms selected from MeSH and Google Adwords. The 100 videos with the largest view counts were chosen from the identified videos. These videos and their accompanying text were analysed for purpose, source and description of content by a panel of assessors who classified them into data-driven thematic categories. 434 videos met the inclusion criteria and the 100 videos with the largest view counts were chosen. The most common categories of the videos were the depiction of a sporting injury (37%) and news reports (25%). News and media organisations were the predominant source (51%) of concussion-related videos on YouTube, with very few being uploaded by professional or academic organisations. The median number of views per video was 26 191. Although a wide range of concussion-related videos were identified, there is a need for healthcare and educational organisations to explore YouTube as a medium for the dissemination of quality-controlled information on sports concussion.

  2. On use of image quality metrics for perceptual blur modeling: image/video compression case

    Science.gov (United States)

    Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn

    2018-02-01

    Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed from a nonlinear degradation process. Previous research relying on image quality metrics (IQM) methods, which heuristically estimate perceived MTF has supported that an average perceived MTF can be used to model some types of degradation such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x.264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.

  3. Quality of sickness certification in primary health care: a retrospective database study.

    Science.gov (United States)

    Skånér, Ylva; Arrelöv, Britt; Backlund, Lars G; Fresk, Magdalena; Aström, Amanda Waleh; Nilsson, Gunnar H

    2013-04-12

In the period 2004-2009, national and regional initiatives were developed in Sweden to improve the quality of sickness certificates. Parameters for assessing the quality of sickness certificates in primary health care have been proposed. The aim of this study was to measure the quality of sickness certification in primary health care by assessing sickness certificates issued between 2004 and 2009 in Stockholm. This was a retrospective study using data retrieved from sickness certificates contained in the electronic patient records of 21 primary health care centres in Stockholm County covering six consecutive years. A total of 236,441 certificates were used in the current study. Seven quality parameters were chosen as outcome measures. Descriptive statistics and regression models with time, sex and age group as explanatory variables were used. During the study period, the quality of sickness certification practice improved, as the number of days on first certification decreased and the proportion of duly completed and acceptable certificates increased. Assessment of need for vocational rehabilitation and giving a prognosis for return to work were not significantly improved during the same period. Time was the most influential variable. The quality of sickness certification practice improved for most of the parameters, although additional efforts to improve the quality of sickness certificates are needed. Measures such as reminders, compulsory certificate fields and structured guidance could be useful tools to achieve this objective.

  4. Tidying up international nucleotide sequence databases: ecological, geographical and sequence quality annotation of ITS sequences of mycorrhizal fungi.

    Science.gov (United States)

    Tedersoo, Leho; Abarenkov, Kessy; Nilsson, R Henrik; Schüssler, Arthur; Grelet, Gwen-Aëlle; Kohout, Petr; Oja, Jane; Bonito, Gregory M; Veldre, Vilmar; Jairus, Teele; Ryberg, Martin; Larsson, Karl-Henrik; Kõljalg, Urmas

    2011-01-01

    Sequence analysis of the ribosomal RNA operon, particularly the internal transcribed spacer (ITS) region, provides a powerful tool for identification of mycorrhizal fungi. The sequence data deposited in the International Nucleotide Sequence Databases (INSD) are, however, unfiltered for quality and are often poorly annotated with metadata. To detect chimeric and low-quality sequences and assign the ectomycorrhizal fungi to phylogenetic lineages, fungal ITS sequences were downloaded from INSD, aligned within family-level groups, and examined through phylogenetic analyses and BLAST searches. By combining the fungal sequence database UNITE and the annotation and search tool PlutoF, we also added metadata from the literature to these accessions. Altogether 35,632 sequences belonged to mycorrhizal fungi or originated from ericoid and orchid mycorrhizal roots. Of these sequences, 677 were considered chimeric and 2,174 of low read quality. Information detailing country of collection, geographical coordinates, interacting taxon and isolation source were supplemented to cover 78.0%, 33.0%, 41.7% and 96.4% of the sequences, respectively. These annotated sequences are publicly available via UNITE (http://unite.ut.ee/) for downstream biogeographic, ecological and taxonomic analyses. In European Nucleotide Archive (ENA; http://www.ebi.ac.uk/ena/), the annotated sequences have a special link-out to UNITE. We intend to expand the data annotation to additional genes and all taxonomic groups and functional guilds of fungi.

  5. Traffic Management of Video on Demand: An Analysis of Investments for Improving the End User’s Quality of Experience

    Directory of Open Access Journals (Sweden)

    Francesca Di Pillo

    2016-05-01

Full Text Available The current escalation in user demand for web contents, particularly Video on Demand (VoD), is causing a continuing increase in both the types of web traffic and the volumes of data transmitted. The greater demand arises from the new means of communication employed by individuals and companies, as well as the development of readily usable applications distributed by 'app stores'. In this paper, we suggest that the stakeholders of a VoD framework, the Content Providers (CPs) and the Internet Service Providers (telcos/ISPs), should guarantee a solid Quality of Experience (QoE) to the end user through two potential investments: either in ultra-broadband (UBB) or in the technologies for the acceleration of web content, known as the Content Delivery Network (CDN) and Transparent Internet Caching (TIC). The aim of the paper is to analyse these investments in terms of providers' profits. The base hypothesis is that the investments are subsidized by the CPs, which, in recent years, have indeed been directing a large part of their revenues towards investments in network infrastructure.

  6. A New Database Facilitates Characterization of Flavonoid Intake, Sources, and Positive Associations with Diet Quality among US Adults.

    Science.gov (United States)

    Sebastian, Rhonda S; Wilkinson Enns, Cecilia; Goldman, Joseph D; Martin, Carrie L; Steinfeldt, Lois C; Murayi, Theophile; Moshfegh, Alanna J

    2015-06-01

    Epidemiologic studies demonstrate inverse associations between flavonoid intake and chronic disease risk. However, lack of comprehensive databases of the flavonoid content of foods has hindered efforts to fully characterize population intakes and determine associations with diet quality. Using a newly released database of flavonoid values, this study sought to describe intake and sources of total flavonoids and 6 flavonoid classes and identify associations between flavonoid intake and the Healthy Eating Index (HEI) 2010. One day of 24-h dietary recall data from adults aged ≥ 20 y (n = 5420) collected in What We Eat in America (WWEIA), NHANES 2007-2008, were analyzed. Flavonoid intakes were calculated using the USDA Flavonoid Values for Survey Foods and Beverages 2007-2008. Regression analyses were conducted to provide adjusted estimates of flavonoid intake, and linear trends in total and component HEI scores by flavonoid intake were assessed using orthogonal polynomial contrasts. All analyses were weighted to be nationally representative. Mean intake of flavonoids was 251 mg/d, with flavan-3-ols accounting for 81% of intake. Non-Hispanic whites had significantly higher (P empty calories increased (P < 0.001) across flavonoid intake quartiles. A new database that permits comprehensive estimation of flavonoid intakes in WWEIA, NHANES 2007-2008; identification of their major food/beverage sources; and determination of associations with dietary quality will lead to advances in research on relations between flavonoid intake and health. Findings suggest that diet quality, as measured by HEI, is positively associated with flavonoid intake. © 2015 American Society for Nutrition.

  7. Rheumatoid Arthritis Educational Video Series

    Medline Plus

Full Text Available This series of five videos covers managing your arthritis, managing chronic pain and depression in arthritis, nutrition and rheumatoid arthritis, and arthritis and health-related quality of life.

  8. SU-D-BRB-02: Combining a Commercial Autoplanning Engine with Database Dose Predictions to Further Improve Plan Quality

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, SP; Moore, JA; Hui, X; Cheng, Z; McNutt, TR [Johns Hopkins University, Baltimore, MD (United States); DeWeese, TL; Tran, P; Quon, H [John Hopkins Hospital, Baltimore, MD (United States); Bzdusek, K [Philips, Fitchburg, WI (United States); Kumar, P [Philips India Limited, Bangalore, Karnataka (India)

    2016-06-15

Purpose: Database dose predictions and a commercial autoplanning engine both improve treatment plan quality in different but complementary ways. The combination of these planning techniques is hypothesized to further improve plan quality. Methods: Four treatment plans were generated for each of 10 head and neck (HN) and 10 prostate cancer patients, including Plan-A: traditional IMRT optimization using clinically relevant default objectives; Plan-B: traditional IMRT optimization using database dose predictions; Plan-C: autoplanning using default objectives; and Plan-D: autoplanning using database dose predictions. One optimization was used for each planning method. Dose distributions were normalized to 95% of the planning target volume (prostate: 8000 cGy; HN: 7000 cGy). Objectives used in plan optimization and analysis were the larynx (25%, 50%, 90%), left and right parotid glands (50%, 85%), spinal cord (0%, 50%), rectum and bladder (0%, 20%, 50%, 80%), and left and right femoral heads (0%, 70%). Results: All objectives except larynx 25% and 50% resulted in statistically significant differences between plans (Friedman's χ² ≥ 11.2; p ≤ 0.011). Maximum dose to the rectum (Plans A-D: 8328, 8395, 8489, 8537 cGy) and bladder (Plans A-D: 8403, 8448, 8527, 8569 cGy) were significantly increased. All other significant differences reflected a decrease in dose. Plans B-D were significantly different from Plan-A for 3, 17, and 19 objectives, respectively. Plans C-D were also significantly different from Plan-B for 8 and 13 objectives, respectively. In one case (cord 50%), Plan-D provided significantly lower dose than Plan-C (p = 0.003). Conclusion: Combining database dose predictions with a commercial autoplanning engine resulted in significant plan quality differences for the greatest number of objectives. This translated to plan quality improvements in most cases, although special care may be needed for maximum dose constraints. Further evaluation is warranted.
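
The comparisons above use Friedman's test on matched samples (four plans per patient, ranked within each patient). A minimal sketch of the statistic, ignoring tie corrections, might look like this (illustrative only, not the study's analysis code):

```python
def friedman_chi2(data):
    """Friedman chi-squared statistic for matched samples. `data` is a list of
    rows, one per patient, each row holding one dose value per plan.
    No tie correction is applied."""
    n, k = len(data), len(data[0])
    rank_sums = [0.0] * k
    for row in data:
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank               # within-patient rank of each plan
    return 12.0 * sum(r * r for r in rank_sums) / (n * k * (k + 1)) - 3.0 * n * (k + 1)
```

The statistic is compared to a χ² distribution with k−1 degrees of freedom; in practice `scipy.stats.friedmanchisquare` handles ties and the p-value.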

  9. Dashboard Videos

    Science.gov (United States)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  10. Comparing subjective image quality measurement methods for the creation of public databases

    Science.gov (United States)

    Redi, Judith; Liu, Hantao; Alers, Hani; Zunino, Rodolfo; Heynderickx, Ingrid

    2010-01-01

The Single Stimulus (SS) method is often chosen to collect subjective data testing no-reference objective metrics, as it is straightforward to implement and well standardized. At the same time, it exhibits some drawbacks: spread between different assessors is relatively large, and the measured ratings depend on the quality range spanned by the test samples, hence the results from different experiments cannot easily be merged. The Quality Ruler (QR) method has been proposed to overcome these inconveniences. This paper compares the performance of the SS and QR methods for pictures impaired by Gaussian blur. The research goal is, on one hand, to analyze the advantages and disadvantages of both methods for quality assessment and, on the other, to make quality data of blur-impaired images publicly available. The obtained results show that the confidence intervals of the QR scores are narrower than those of the SS scores. This indicates that the QR method enhances consistency across assessors. Moreover, QR scores exhibit a higher linear correlation with the distortion applied. In summary, for the purpose of building datasets of subjective quality, the QR approach seems promising from the viewpoint of both consistency and repeatability.
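
The confidence-interval comparison rests on a standard normal-approximation interval for each image's mean opinion score; a narrower half-width means better agreement among assessors. A minimal sketch (the 1.96 multiplier assumes enough assessors for the normal approximation to hold):

```python
import statistics as st

def ci95_halfwidth(scores):
    """Half-width of a normal-approximation 95% confidence interval for the
    mean opinion score of one test image, given per-assessor scores."""
    return 1.96 * st.stdev(scores) / len(scores) ** 0.5
```

Comparing `ci95_halfwidth(ss_scores)` against `ci95_halfwidth(qr_scores)` per image reproduces the kind of consistency comparison described above.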

  11. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

Microblogging is a recently popular phenomenon and, with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.

  12. Using Maslow's pyramid and the national database of nursing quality indicators(R) to attain a healthier work environment.

    Science.gov (United States)

    Groff-Paris, Lisa; Terhaar, Mary

    2010-12-07

The strongest predictor of nurse job dissatisfaction and intent to leave is stress in the practice environment. Good communication, control over practice, decision making at the bedside, teamwork, and nurse empowerment have been found to increase nurse satisfaction and decrease turnover. In this article we share our experience of developing a rapid-design process to change the approach to performance improvement so as to increase engagement, empowerment, effectiveness, and the quality of the professional practice environment. Meal and non-meal breaks were identified as the target area for improvement. Qualitative and quantitative data support the success of this project. We begin this article with a review of literature related to work environment and retention and a presentation of the frameworks used to improve the work environment, specifically Maslow's theory of the Hierarchy of Inborn Needs and the National Database of Nursing Quality Indicators Survey. We then describe our performance improvement project and share our conclusion and recommendations.

  13. The relationship between overall quality of life and its subdimensions was influenced by culture: analysis of an international database

    DEFF Research Database (Denmark)

    Scott, Neil W; Fayers, Peter M; Aaronson, Neil K

    2008-01-01

OBJECTIVE: To investigate whether geographic and cultural factors influence the relationship between the global health status quality of life (QL) scale score of the European Organisation for Research and Treatment of Cancer QLQ-C30 questionnaire and seven other subscales representing fatigue, pain, physical, role, emotional, cognitive, and social functioning. STUDY DESIGN AND SETTING: A large international database of QLQ-C30 responses was assembled. A linear regression model was developed predicting the QL scale score and including interactions between geographical/cultural groupings and the seven other scale scores. RESULTS: The pain subscale appeared to have relatively greater influence, and fatigue relatively lower influence, for those from other European regions compared with respondents from the UK when predicting overall quality of life (QoL). For Scandinavia, physical functioning appeared

  14. Learning a Continuous-Time Streaming Video QoE Model.

    Science.gov (United States)

    Ghadiyaram, Deepti; Pan, Janice; Bovik, Alan C

    2018-05-01

    Over-the-top adaptive video streaming services are frequently impacted by fluctuating network conditions that can lead to rebuffering events (stalling events) and sudden bitrate changes. These events visually impact video consumers' quality of experience (QoE) and can lead to consumer churn. The development of models that can accurately predict viewers' instantaneous subjective QoE under such volatile network conditions could potentially enable the more efficient design of quality-control protocols for media-driven services, such as YouTube, Amazon, Netflix, and so on. However, most existing models only predict a single overall QoE score on a given video and are based on simple global video features, without accounting for relevant aspects of human perception and behavior. We have created a QoE evaluator, called the time-varying QoE Indexer, that accounts for interactions between stalling events, analyzes the spatial and temporal content of a video, predicts the perceptual video quality, models the state of the client-side data buffer, and consequently predicts continuous-time quality scores that agree quite well with human opinion scores. The new QoE predictor also embeds the impact of relevant human cognitive factors, such as memory and recency, and their complex interactions with the video content being viewed. We evaluated the proposed model on three different video databases and attained standout QoE prediction performance.
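
The memory and recency effects mentioned above are commonly modeled with exponentially weighted pooling of instantaneous quality, with stalls pushing the state downward. The sketch below illustrates only that general idea, with made-up parameters; it is not the published QoE Indexer:

```python
def continuous_qoe(frame_quality, stalled, alpha=0.9, stall_penalty=1.5):
    """Illustrative continuous-time QoE trace: an exponentially weighted
    memory of per-frame quality, with an additive penalty while the player
    is stalled. Higher alpha = longer memory."""
    qoe, trace = frame_quality[0], []
    for q, s in zip(frame_quality, stalled):
        target = q - (stall_penalty if s else 0.0)
        qoe = alpha * qoe + (1 - alpha) * target   # drift slowly toward current state
        trace.append(qoe)
    return trace
```

In such a model the trace dips during a rebuffering event and only gradually recovers afterward, mimicking the recency effect in human opinion scores.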

  15. Automated Quality Control of in Situ Soil Moisture from the North American Soil Moisture Database Using NLDAS-2 Products

    Science.gov (United States)

    Ek, M. B.; Xia, Y.; Ford, T.; Wu, Y.; Quiring, S. M.

    2015-12-01

    The North American Soil Moisture Database (NASMD) was initiated in 2011 to provide support for developing climate forecasting tools, calibrating land surface models and validating satellite-derived soil moisture algorithms. The NASMD has collected data from over 30 soil moisture observation networks providing millions of in situ soil moisture observations in all 50 states as well as Canada and Mexico. It is recognized that the quality of measured soil moisture in NASMD is highly variable due to the diversity of climatological conditions, land cover, soil texture, and topographies of the stations and differences in measurement devices (e.g., sensors) and installation. It is also recognized that error, inaccuracy and imprecision in the data set can have significant impacts on practical operations and scientific studies. Therefore, developing an appropriate quality control procedure is essential to ensure the data is of the best quality. In this study, an automated quality control approach is developed using the North American Land Data Assimilation System phase 2 (NLDAS-2) Noah soil porosity, soil temperature, and fraction of liquid and total soil moisture to flag erroneous and/or spurious measurements. Overall results show that this approach is able to flag unreasonable values when the soil is partially frozen. A validation example using NLDAS-2 multiple model soil moisture products at the 20 cm soil layer showed that the quality control procedure had a significant positive impact in Alabama, North Carolina, and West Texas. It had a greater impact in colder regions, particularly during spring and autumn. Over 433 NASMD stations have been quality controlled using the methodology proposed in this study, and the algorithm will be implemented to control data quality from the other ~1,200 NASMD stations in the near future.
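
The frozen-soil and porosity checks described above can be sketched as a simple per-observation flag; the variable names and thresholds are illustrative, not the NASMD/NLDAS-2 implementation:

```python
def flag_soil_moisture(vwc, soil_temp_c, porosity):
    """Flag volumetric water content (VWC) readings that are negative, exceed
    the modeled soil porosity, or were taken while the soil was at or below
    freezing, when dielectric sensors misread liquid water content."""
    return [(v < 0.0) or (v > p) or (t <= 0.0)
            for v, t, p in zip(vwc, soil_temp_c, porosity)]
```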

  16. Proceedings of the International Workshop on Quality in Databases and Management of Uncertain Data (QDBMUD2008)

    NARCIS (Netherlands)

    de Keijzer, Ander; van Keulen, Maurice; Missier, P.; Lin, X.

    2008-01-01

    The ability to detect and correct errors in the data, and more broadly to develop techniques for data quality assessment, has long been recognized as critical to the functionality of a large number of applications, in areas ranging from business management to data-intensive science. While many of

  17. Dynamic Database for Quality Indicators Comparison in Education. Working Paper N. 04/2010

    Science.gov (United States)

    Poliandri, Donatella; Cardone, Michele; Muzzioli, Paola; Romiti, Sara

    2010-01-01

    The purpose of this study is to explore aspects and indicators most commonly used to assess the quality of education systems in different countries through the comparison of 12 national publications describing the state of the educational system. To compare indicators the CIPP model was chosen. This model is organized in four main parts: Context,…

  18. Comprehensive national database of tree effects on air quality and human health in the United States

    Science.gov (United States)

    Satoshi Hirabayashi; David J. Nowak

    2016-01-01

    Trees remove air pollutants through dry deposition processes depending upon forest structure, meteorology, and air quality that vary across space and time. Employing nationally available forest, weather, air pollution and human population data for 2010, computer simulations were performed for deciduous and evergreen trees with varying leaf area index for rural and...

  19. Risk factors for 30-day reoperation and 3-month readmission: analysis from the Quality and Outcomes Database lumbar spine registry.

    Science.gov (United States)

    Wadhwa, Rishi K; Ohya, Junichi; Vogel, Todd D; Carreon, Leah Y; Asher, Anthony L; Knightly, John J; Shaffrey, Christopher I; Glassman, Steven D; Mummaneni, Praveen V

    2017-08-01

OBJECTIVE The aim of this paper was to use a prospective, longitudinal, multicenter outcome registry of patients undergoing surgery for lumbar degenerative disease in order to assess the incidence and factors associated with 30-day reoperation and 90-day readmission. METHODS Prospectively collected data from 9853 patients from the Quality and Outcomes Database (QOD; formerly known as the N²QOD [National Neurosurgery Quality and Outcomes Database]) lumbar spine registry were retrospectively analyzed. Multivariate binomial regression analysis was performed to identify factors associated with 30-day reoperation and 90-day readmission after surgery for lumbar degenerative disease. A subgroup analysis of Medicare patients stratified by age (readmission rate was 6.3%. Multivariate analysis demonstrated that higher ASA class (OR 1.46 per class, 95% CI 1.25-1.70) and history of depression (OR 1.27, 95% CI 1.04-1.54) were factors associated with 90-day readmission. Medicare beneficiaries had a higher rate of 90-day readmissions compared with those who had private insurance (OR 1.43, 95% CI 1.17-1.76). Medicare patients readmission included higher ASA class and a history of depression. The 90-day readmission rates were higher for Medicare beneficiaries than for those who had private insurance. Medicare patients < 65 years of age were more likely to undergo reoperation within 30 days and to be readmitted within 90 days after their index surgery.
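
The adjusted odds ratios and 95% confidence intervals reported above are standard transforms of logistic-regression coefficients. A generic sketch (the coefficient and standard error below are placeholders, not values from the QOD analysis):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error into
    an odds ratio with a 95% Wald confidence interval (OR, lower, upper)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))
```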

  20. Management of forest vegetation data series: the role of database in the frame of Quality Assurance procedure

    Directory of Open Access Journals (Sweden)

    Vincenzo SMARGIASSI

    2002-09-01

    Full Text Available If data from diachronic records on permanent areas are to be made available, the quality of the historic sequences must be standardised, preserved, organised and checked in such a way as to permit continuous input and comparison. The "Ground Vegetation Assessment" group of the CONECOFOR programme designed a database with extended search capability to ensure rapid and precise access to data. The vegetation is analysed within a network of permanent plots, based on field surveys conducted at community and population level. Assessments include specific, stratified and overall cover estimates as well as detailed cover scores and density of aboveground shoots (on 24 100-m² and 100 0.25-m² sampling units, respectively). In addition to archiving data, the database runs functions to check their validity. The integrity of the dataset and its conformity to user-defined ranges can be assessed, and the entire sequence can be validated before the new data are saved in the database. Subsequent cross-checks among attributes allow further tests of validity and precision. These functions are an integral part of the overall Quality Assurance Control system. The data are organised into seasonal surveys, plots and sampling units. Each species has a field code, with reference to a second archive of coded nomenclature established at European level. A section for addition and deletion of data makes output available according to the appropriate EC regulations. The system guarantees the visualisation of a certain number of simple statistics, and also permits export of analytic data to external statistical tools.
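
    The validate-before-save idea described above (range checks plus a coded-nomenclature lookup, with the whole batch rejected if any record fails) can be sketched as follows; the species codes and field names are illustrative, not the CONECOFOR schema.

    ```python
    # Hypothetical coded nomenclature and user-defined range for percent cover.
    SPECIES_CODES = {"FAGSYL", "ABIALB", "PICABI"}
    COVER_RANGE = (0.0, 100.0)

    def validate_batch(records):
        """Return [] if every record is valid, else a list of error strings.
        An invalid batch is rejected in full, mirroring validate-before-save."""
        errors = []
        for i, rec in enumerate(records):
            if rec["species"] not in SPECIES_CODES:
                errors.append(f"record {i}: unknown species code {rec['species']!r}")
            lo, hi = COVER_RANGE
            if not lo <= rec["cover"] <= hi:
                errors.append(f"record {i}: cover {rec['cover']} outside [{lo}, {hi}]")
        return errors

    batch = [{"species": "FAGSYL", "cover": 42.5},
             {"species": "XXXXXX", "cover": 150.0}]
    problems = validate_batch(batch)   # two errors: bad code, out-of-range cover
    ```

    Only when `validate_batch` returns an empty list would the sequence be committed; subsequent cross-checks among attributes would run on the stored data.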

  1. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available, and includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one-stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all-new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all-new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  2. Assessing the Impact of Fires on Air Quality in the Southeastern U.S. with a Unified Prescribed Burning Database

    Science.gov (United States)

    Garcia Menendez, F.; Afrin, S.

    2017-12-01

    Prescribed fires are used extensively across the Southeastern United States and are a major source of air pollutant emissions in the region. These land management projects can adversely impact local and regional air quality. However, the emissions and air pollution impacts of prescribed fires remain largely uncertain. Satellite data, commonly used to estimate fire emissions, is often unable to detect the low-intensity, short-lived prescribed fires characteristic of the region. Additionally, existing ground-based prescribed burn records are incomplete, inconsistent and scattered. Here we present a new unified database of prescribed fire occurrence and characteristics developed from systemized digital burn permit records collected from public and private land management organizations in the Southeast. This bottom-up fire database is used to analyze the correlation between high PM2.5 concentrations measured by monitoring networks in southern states and prescribed fire occurrence at varying spatial and temporal scales. We show significant associations between ground-based records of prescribed fire activity and the observational air quality record at numerous sites by applying regression analysis and controlling confounding effects of meteorology. Furthermore, we demonstrate that the response of measured PM2.5 concentrations to prescribed fire estimates based on burning permits is significantly stronger than their response to satellite fire observations from MODIS (moderate-resolution imaging spectroradiometer) and geostationary satellites or prescribed fire emissions data in the National Emissions Inventory. These results show the importance of bottom-up smoke emissions estimates and reflect the need for improved ground-based fire data to advance air quality impacts assessments focused on prescribed burning.
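
    The core analysis described above, regressing measured PM2.5 on permit-based burn activity while controlling for confounding meteorology, can be sketched as an ordinary least-squares fit. All data below are synthetic and the covariates are illustrative; the study's actual regression specification is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 365
    permits = rng.poisson(3, n).astype(float)   # daily prescribed-burn permit count
    temp = rng.normal(18, 7, n)                 # temperature, deg C
    wind = rng.gamma(2.0, 1.5, n)               # wind speed, m/s

    # Synthetic PM2.5 with a built-in response of 1.4 ug/m3 per permit.
    pm25 = 8 + 1.4 * permits - 0.05 * temp - 0.6 * wind + rng.normal(0, 1, n)

    # OLS: PM2.5 ~ intercept + permits + meteorological covariates.
    X = np.column_stack([np.ones(n), permits, temp, wind])
    beta, *_ = np.linalg.lstsq(X, pm25, rcond=None)
    # beta[1] estimates the PM2.5 response per additional permit,
    # net of the meteorological confounders.
    ```

    With real monitoring data, the coefficient on the permit term (and its comparison against a satellite-derived fire covariate) is what supports the paper's claim that permit records explain PM2.5 better than MODIS detections.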

  3. Video pedagogy

    OpenAIRE

    Länsitie, Janne; Stevenson, Blair; Männistö, Riku; Karjalainen, Tommi; Karjalainen, Asko

    2016-01-01

    The short film is an introduction to the concept of video pedagogy. The five categories of video pedagogy further elaborate how videos can be used as a part of instruction and learning process. Most pedagogical videos represent more than one category. A video itself doesn’t necessarily define the category – the ways in which the video is used as a part of pedagogical script are more defining factors. What five categories did you find? Did you agree with the categories, or are more...

  4. Intelligent control for scalable video processing

    NARCIS (Netherlands)

    Wüst, C.C.

    2006-01-01

    In this thesis we study a problem related to cost-effective video processing in software by consumer electronics devices, such as digital TVs. Video processing is the task of transforming an input video signal into an output video signal, for example to improve the quality of the signal. This...

  5. Video enhancement : content classification and model selection

    NARCIS (Netherlands)

    Hu, H.

    2010-01-01

    The purpose of video enhancement is to improve the subjective picture quality. The field of video enhancement includes a broad category of research topics, such as removing noise in the video, highlighting some specified features and improving the appearance or visibility of the video content. The...

  6. A User's Guide to the Comprehensive Water Quality Database for Groundwater in the Vicinity of the Nevada Test Site, Rev. No.: 1

    International Nuclear Information System (INIS)

    Farnham, Irene

    2006-01-01

    This water quality database (viz., GeochemXX.mdb) has been developed as part of the Underground Test Area (UGTA) Program with the cooperation of several agencies actively participating in ongoing evaluation and characterization activities under contract to the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office (NNSA/NSO). The database has been constructed to provide up-to-date, comprehensive, and quality-controlled data in a uniform format for the support of current and future projects. This database provides a valuable tool for geochemical and hydrogeologic evaluations of the Nevada Test Site (NTS) and surrounding region. Chemistry data have been compiled for groundwater within the NTS and the surrounding region. These data include major ions, organic compounds, trace elements, radionuclides, various field parameters, and environmental isotopes. Colloid data are also included in the database. The GeochemXX.mdb database is distributed on an annual basis. The extension "XX" within the database title is replaced by the last two digits of the release year (e.g., Geochem06 for the version released during the 2006 fiscal year). The database is distributed via compact disc (CD) and is also uploaded to the Common Data Repository (CDR) in order to make it available to all agencies with DOE intranet access. This report provides an explanation of the database configuration and summarizes the general content and utility of the individual data tables. In addition to describing the data, subsequent sections of this report provide the data user with an explanation of the quality assurance/quality control (QA/QC) protocols for this database.
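
    The release-naming rule described above ("XX" replaced by the last two digits of the fiscal year) can be expressed directly; the helper name is our own, not part of the UGTA tooling.

    ```python
    def geochem_db_name(fiscal_year: int) -> str:
        """Build the annual release filename, e.g. 2006 -> 'Geochem06.mdb'."""
        return f"Geochem{fiscal_year % 100:02d}.mdb"

    name = geochem_db_name(2006)   # the FY2006 release cited in the abstract
    ```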

  7. [The National Database of the Regional Collaborative Rheumatic Centers as a tool for clinical epidemiology and quality assessment in rheumatology].

    Science.gov (United States)

    Zink, Angela; Huscher, Dörte; Listing, Joachim

    2003-01-01

    The national database of the German Collaborative Arthritis Centres is a well-established tool for the observation and assessment of health care delivery to patients with rheumatic diseases in Germany. The discussion of variations in treatment practices contributes to the internal quality assessment in the participating arthritis centres. This documentation has shown deficits in primary health care, including late referral to a rheumatologist and undertreatment with disease-modifying drugs and complementary therapies. In rheumatology, there is a trend towards early, intensive medical treatment, including combination therapy. The frequency and length of inpatient hospital and rehabilitation treatments are decreasing, while active physiotherapy in outpatient care has increased. Specific deficits have been identified concerning the provision of occupational therapy services and patient education.

  8. LAGOS-NE: a multi-scaled geospatial and temporal database of lake ecological context and water quality for thousands of US lakes

    Science.gov (United States)

    Soranno, Patricia A.; Bacon, Linda C.; Beauchene, Michael; Bednar, Karen E.; Bissell, Edward G.; Boudreau, Claire K.; Boyer, Marvin G.; Bremigan, Mary T.; Carpenter, Stephen R.; Carr, Jamie W.; Cheruvelil, Kendra S.; Christel, Samuel T.; Claucherty, Matt; Collins, Sarah M.; Conroy, Joseph D.; Downing, John A.; Dukett, Jed; Fergus, C. Emi; Filstrup, Christopher T.; Funk, Clara; Gonzalez, Maria J.; Green, Linda T.; Gries, Corinna; Halfman, John D.; Hamilton, Stephen K.; Hanson, Paul C.; Henry, Emily N.; Herron, Elizabeth M.; Hockings, Celeste; Jackson, James R.; Jacobson-Hedin, Kari; Janus, Lorraine L.; Jones, William W.; Jones, John R.; Keson, Caroline M.; King, Katelyn B.S.; Kishbaugh, Scott A.; Lapierre, Jean-Francois; Lathrop, Barbara; Latimore, Jo A.; Lee, Yuehlin; Lottig, Noah R.; Lynch, Jason A.; Matthews, Leslie J.; McDowell, William H.; Moore, Karen E.B.; Neff, Brian; Nelson, Sarah J.; Oliver, Samantha K.; Pace, Michael L.; Pierson, Donald C.; Poisson, Autumn C.; Pollard, Amina I.; Post, David M.; Reyes, Paul O.; Rosenberry, Donald; Roy, Karen M.; Rudstam, Lars G.; Sarnelle, Orlando; Schuldt, Nancy J.; Scott, Caren E.; Skaff, Nicholas K.; Smith, Nicole J.; Spinelli, Nick R.; Stachelek, Joseph J.; Stanley, Emily H.; Stoddard, John L.; Stopyak, Scott B.; Stow, Craig A.; Tallant, Jason M.; Tan, Pang-Ning; Thorpe, Anthony P.; Vanni, Michael J.; Wagner, Tyler; Watkins, Gretchen; Weathers, Kathleen C.; Webster, Katherine E.; White, Jeffrey D.; Wilmes, Marcy K.; Yuan, Shuai

    2017-01-01

    Understanding the factors that affect water quality and the ecological services provided by freshwater ecosystems is an urgent global environmental issue. Predicting how water quality will respond to global changes not only requires water quality data, but also information about the ecological context of individual water bodies across broad spatial extents. Because lake water quality is usually sampled in limited geographic regions, often for limited time periods, assessing the environmental controls of water quality requires compilation of many data sets across broad regions and across time into an integrated database. LAGOS-NE accomplishes this goal for lakes in the northeastern-most 17 US states. LAGOS-NE contains data for 51 101 lakes and reservoirs larger than 4 ha in 17 lake-rich US states. The database includes 3 data modules for: lake location and physical characteristics for all lakes; ecological context (i.e., the land use, geologic, climatic, and hydrologic setting of lakes) for all lakes; and in situ measurements of lake water quality for a subset of the lakes from the past 3 decades for approximately 2600–12 000 lakes depending on the variable. The database contains approximately 150 000 measures of total phosphorus, 200 000 measures of chlorophyll, and 900 000 measures of Secchi depth. The water quality data were compiled from 87 lake water quality data sets from federal, state, tribal, and non-profit agencies, university researchers, and citizen scientists. This database is one of the largest and most comprehensive databases of its type because it includes both in situ measurements and ecological context data. Because ecological context can be used to study a variety of other questions about lakes, streams, and wetlands, this database can also be used as the foundation for other studies of freshwaters at broad spatial and ecological scales.
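
    The three-module design described above lends itself to a join on a common lake identifier. The sketch below uses plain dictionaries with illustrative field names; it is not the actual LAGOS-NE schema (which documents its own table and column names).

    ```python
    # One hypothetical lake spread across the three modules.
    location = {101: {"lat": 45.1, "lon": -89.4, "area_ha": 120.0}}
    context  = {101: {"land_use": "forest", "hydro_class": "drainage"}}
    quality  = {101: [{"date": "2010-07-15", "tp_ugL": 12.0, "secchi_m": 3.4}]}

    def lake_record(lake_id):
        """Merge per-lake rows from each module into one analysis record."""
        rec = {"lagoslakeid": lake_id}
        rec.update(location.get(lake_id, {}))
        rec.update(context.get(lake_id, {}))
        rec["samples"] = quality.get(lake_id, [])   # water quality is 1-to-many
        return rec

    rec = lake_record(101)
    ```

    Keeping location/context one-to-one and water-quality samples one-to-many mirrors why only a subset of lakes carries in situ measurements.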

  11. Role of Video Games in Improving Health-Related Outcomes

    Science.gov (United States)

    Primack, Brian A.; Carroll, Mary V.; McNamara, Megan; Klem, Mary Lou; King, Brandy; Rich, Michael O.; Chan, Chun W.; Nayak, Smita

    2012-01-01

    Context Video games represent a multibillion-dollar industry in the U.S. Although video gaming has been associated with many negative health consequences, it may also be useful for therapeutic purposes. The goal of this study was to determine whether video games may be useful in improving health outcomes. Evidence acquisition Literature searches were performed in February 2010 in six databases: the Center on Media and Child Health Database of Research, MEDLINE, CINAHL, PsycINFO, EMBASE, and the Cochrane Central Register of Controlled Trials. Reference lists were hand-searched to identify additional studies. Only RCTs that tested the effect of video games on a positive, clinically relevant health consequence were included. Study selection criteria were strictly defined and applied by two researchers working independently. Study background information (e.g., location, funding source), sample data (e.g., number of study participants, demographics), intervention and control details, outcomes data, and quality measures were abstracted independently by two researchers. Evidence synthesis Of 1452 articles retrieved using the current search strategy, 38 met all criteria for inclusion. Eligible studies used video games to provide physical therapy, psychological therapy, improved disease self-management, health education, distraction from discomfort, increased physical activity, and skills training for clinicians. Among the 38 studies, a total of 195 health outcomes were examined. Video games improved 69% of psychological therapy outcomes, 59% of physical therapy outcomes, 50% of physical activity outcomes, 46% of clinician skills outcomes, 42% of health education outcomes, 42% of pain distraction outcomes, and 37% of disease self-management outcomes. Study quality was generally poor; for example, two-thirds (66%) of studies had follow-up periods of <12 weeks. Conclusions There is potential promise for video games to improve health outcomes, particularly in the areas of psychological therapy and physical therapy. RCTs with...

  12. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel values calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in...

  13. Environmental geochemistry and sediment quality in Lake Pontchartrain: database development and review

    Science.gov (United States)

    Manheim, Frank T.; Flowers, George C.; McIntire, Andrew G.; Marot, Marcie; Holmes, Charles

    1997-01-01

    This paper reports preliminary results of a project to develop a comprehensive database of chemical and environmental information on sediments from Lake Pontchartrain, Louisiana, and surrounding water bodies. The goal is to evaluate all data for reliability and comparability, and to make them widely accessible and useful to all users. Methods for processing heterogeneous historical data follow those previously employed in the Boston Harbor and Massachusetts Bay area. Data from 11 different data sets, encompassing about 900 total samples, have been entered to date. Questionable or anomalous data were noted in a minority of cases. Problems tend to follow distinct patterns and are relatively easy to identify. Hence, comparability of data has not proven to be the major obstacle to synthesis efforts that was anticipated in earlier years (NRC, 1989). Quality-controlled data sets show that the bulk of sediment samples in the more central parts of Lake Pontchartrain have values within normal background for heavy metals such as Cu, Pb, and Zn. The same or lower concentrations were found in the vicinity of the Bonnet Carre Spillway, representing influx from the Mississippi River. Mean concentrations for Cu, Pb, and Zn were 17, 21, and 74 µg/g (total dissolution analyses), respectively. However, values as high as 267 µg/g Pb and comparable increases for other metal and organic contaminants are found in sediments within 2 km of the coastal strip of New Orleans. Additional sampling in such areas and in other inland coastal waterways is needed, since such levels are above the threshold for potential toxic effects on benthic organisms, according to effects-based screening criteria. The most contaminated sites, Bayou Trepagnier and Bayou Bonfouca, involve industrial areas where waste discharge has now been controlled or remediated, but where sediments may retain large concentrations of contaminants, e.g. tenths of a percent of Pb, Cr, and Zn or more for Bayou Trepagnier.

  14. Video on the Internet: An introduction to the digital encoding, compression, and transmission of moving image data.

    Science.gov (United States)

    Boudier, T; Shotton, D M

    1999-01-01

    In this paper, we seek to provide an introduction to the fast-moving field of digital video on the Internet, from the viewpoint of the biological microscopist who might wish to store or access videos, for instance in image databases such as the BioImage Database (http://www.bioimage.org). We describe and evaluate the principal methods used for encoding and compressing moving image data for digital storage and transmission over the Internet, which involve compromises between compression efficiency and retention of image fidelity, and describe the existing alternate software technologies for downloading or streaming compressed digitized videos using a Web browser. We report the results of experiments on video microscopy recordings and three-dimensional confocal animations of biological specimens to evaluate the compression efficiencies of the principal video compression-decompression algorithms (codecs) and to document the artefacts associated with each of them. Because MPEG-1 gives very high compression while yet retaining reasonable image quality, these studies lead us to recommend that video databases should store both a high-resolution original version of each video, ideally either uncompressed or losslessly compressed, and a separate edited and highly compressed MPEG-1 preview version that can be rapidly downloaded for interactive viewing by the database user. Copyright 1999 Academic Press.
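
    The compromise the authors describe between compression efficiency and image fidelity is commonly quantified with PSNR (peak signal-to-noise ratio): the higher the value, the closer a decoded frame is to the original. This is a generic metric sketch on synthetic frames, not the paper's codec comparison.

    ```python
    import numpy as np

    def psnr(original, decoded, peak=255.0):
        """PSNR in dB between two 8-bit frames of the same shape."""
        mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
        if mse == 0:
            return float("inf")    # identical frames: lossless round trip
        return 10.0 * np.log10(peak ** 2 / mse)

    frame = np.full((32, 32), 128, dtype=np.uint8)   # flat synthetic frame
    noisy = frame.copy()
    noisy[0, 0] = 138                                # a single 10-level error
    value = psnr(frame, noisy)
    ```

    A database following the paper's recommendation would report such fidelity scores for the highly compressed MPEG-1 preview relative to the lossless archival copy.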

  15. NIRO: a database of all X-ray units in use in Lower Saxony to improve radiation protection and quality control

    International Nuclear Information System (INIS)

    Brueggemeyer, H.; Siewert, T.

    1995-01-01

    The paper gives an overview of the structure and intention of a database covering all X-ray units in Lower Saxony. For every X-ray unit tested, technical and administrative data are sent to the database. All institutions and authorities in Lower Saxony concerned with X-ray safety are connected to the database. As all X-ray units are re-inspected every 5 years, these inspections are also used to keep the records current. The database is a tool to identify and supervise all X-ray units, to find all units of a particular type in case of defect clusters, for statistical purposes, and for administrative demands. Some examples of possible statistical evaluations are given. This database has become a very important and helpful tool for securing good and complete performance of radiation protection and quality control throughout Lower Saxony. (Author)
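
    The supervisory use described above, flagging units due for the 5-yearly re-inspection from their last inspection date on record, can be sketched as a simple filter; the unit IDs and field names are illustrative, not the NIRO schema.

    ```python
    from datetime import date

    REINSPECTION_YEARS = 5

    units = [
        {"id": "NI-0001", "last_inspection": date(1989, 6, 1)},
        {"id": "NI-0002", "last_inspection": date(1993, 2, 15)},
    ]

    def due_for_reinspection(units, today):
        """IDs of units whose last inspection is 5+ years in the past."""
        return [u["id"] for u in units
                if (today - u["last_inspection"]).days >= REINSPECTION_YEARS * 365]

    due = due_for_reinspection(units, date(1995, 1, 1))
    ```

    The same records, grouped by unit type, would support the defect-cluster searches the abstract mentions.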

  16. A comparison of the quality of image acquisition between the incident dark field and sidestream dark field video-microscopes

    NARCIS (Netherlands)

    E. Gilbert-Kawai; J. Coppel (Jonny); V. Bountziouka (Vassiliki); C. Ince (Can); D. Martin (Daniel)

    2016-01-01

    Background: The 'Cytocam' is a third-generation video-microscope, which enables real-time visualisation of the in vivo microcirculation. Based upon the principle of incident dark field (IDF) illumination, this handheld computer-controlled device was designed to address the...

  18. Surgery Risk Assessment (SRA) Database

    Data.gov (United States)

    Department of Veterans Affairs — The Surgery Risk Assessment (SRA) database is part of the VA Surgical Quality Improvement Program (VASQIP). This database contains assessments of selected surgical...

  19. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and 'walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  20. 'Trifecta' outcomes of robot-assisted partial nephrectomy in solitary kidney: a Vattikuti Collective Quality Initiative (VCQI) database analysis.

    Science.gov (United States)

    Arora, Sohrab; Abaza, Ronney; Adshead, James M; Ahlawat, Rajesh K; Challacombe, Benjamin J; Dasgupta, Prokar; Gandaglia, Giorgio; Moon, Daniel A; Yuvaraja, Thyavihally B; Capitanio, Umberto; Larcher, Alessandro; Porpiglia, Francesco; Porter, James R; Mottrie, Alexander; Bhandari, Mahendra; Rogers, Craig

    2018-01-01

    To analyse the outcomes of robot-assisted partial nephrectomy (RAPN) in patients with a solitary kidney in a large multi-institutional database. In all, 2755 patients in the Vattikuti Collective Quality Initiative database underwent RAPN by 22 surgeons at 14 centres in nine countries. Of these patients, 74 underwent RAPN with a solitary kidney between 2007 and 2016. We retrospectively analysed the functional and oncological outcomes of these 74 patients. A 'trifecta' of outcomes was assessed, with trifecta defined as a warm ischaemia time (WIT) of <20 min, negative surgical margins, and no complications intraoperatively or within 3 months of RAPN. All 74 patients underwent RAPN successfully, with one conversion to radical nephrectomy. The median (interquartile range [IQR]) operative time was 180 (142-230) min. Early unclamping was used in 11 (14.9%) patients and zero ischaemia in 12 (16.2%). Trifecta outcomes were achieved in 38 of 66 patients (57.6%). The median (IQR) WIT was 15.5 (8.75-20.0) min for the entire cohort. The overall complication rate was 24.1% and the rate of Clavien-Dindo grade ≤II complications was 16.3%. Positive surgical margins were present in four cases (5.4%). The median (IQR) follow-up was 10.5 (2.12-24.0) months. The median drop in estimated glomerular filtration rate at 3 months was 7.0 mL/min/1.72 m² (11.01%). Our findings suggest that RAPN is a safe and effective treatment option for select renal tumours in solitary kidneys in terms of a trifecta of negative surgical margins, WIT of <20 min, and low operative and perioperative morbidity. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.
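
    The composite 'trifecta' endpoint used above (WIT <20 min, negative margins, no intraoperative or 3-month complications) is just a conjunction and can be expressed as a simple predicate; the parameter names are our own illustration.

    ```python
    def trifecta(wit_min, margins_negative, complication_free):
        """True only if all three trifecta criteria are met."""
        return wit_min < 20 and margins_negative and complication_free

    ok = trifecta(15.5, True, True)    # WIT equal to the cohort's median
    fail = trifecta(25.0, True, True)  # WIT alone breaks the trifecta
    ```

    Because the endpoint is all-or-nothing, a single long ischaemia time or one Clavien-Dindo complication removes a patient from the 57.6% achieving trifecta.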

  1. Conformationally selective multidimensional chemical shift ranges in proteins from a PACSY database purged using intrinsic quality criteria

    International Nuclear Information System (INIS)

    Fritzsching, Keith J.; Hong, Mei; Schmidt-Rohr, Klaus

    2016-01-01

We have determined refined multidimensional chemical shift ranges for intra-residue correlations (¹³C–¹³C, ¹⁵N–¹³C, etc.) in proteins, which can be used to gain type-assignment and/or secondary-structure information from experimental NMR spectra. The chemical-shift ranges are the result of a statistical analysis of the PACSY database of >3000 proteins with 3D structures (1,200,207 ¹³C chemical shifts and >3 million chemical shifts in total); these data were originally derived from the Biological Magnetic Resonance Data Bank. Using relatively simple non-parametric statistics to find peak maxima in the distributions of helix, sheet, coil and turn chemical shifts, and without the use of limited “hand-picked” data sets, we show that ∼94% of the ¹³C NMR data and almost all ¹⁵N data are quite accurately referenced and assigned, with smaller standard deviations (0.2 and 0.8 ppm, respectively) than recognized previously. On the other hand, approximately 6% of the ¹³C chemical shift data in the PACSY database are shown to be clearly misreferenced, mostly by ca. −2.4 ppm. The removal of the misreferenced data and other outliers by this purging by intrinsic quality criteria (PIQC) allows for reliable identification of secondary maxima in the two-dimensional chemical-shift distributions already pre-separated by secondary structure. We demonstrate that some of these correspond to specific regions in the Ramachandran plot, including left-handed helix dihedral angles, reflect unusual hydrogen bonding, or are due to the influence of a following proline residue. With appropriate smoothing, significantly more tightly defined chemical shift ranges are obtained for each amino acid type in the different secondary structures. These chemical shift ranges, which may be defined at any statistical threshold, can be used for amino-acid type assignment and secondary-structure analysis of chemical shifts from intra-residue cross peaks by inspection or by using a
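The non-parametric peak-finding described above can be illustrated with a short sketch. This is not the authors' actual PIQC code; the bin width, smoothing window, height threshold, and the synthetic two-population data below are all illustrative assumptions:

```python
import numpy as np

def peak_maxima(shifts, bin_width=0.1, smooth_bins=5, min_rel_height=0.05):
    """Find local maxima in a 1-D chemical-shift distribution.

    Histogram the shifts, smooth with a moving average, and return the
    bin centres of strict local maxima whose height exceeds a minimum
    fraction of the tallest peak (to discard noise)."""
    shifts = np.asarray(shifts, dtype=float)
    lo, hi = shifts.min(), shifts.max()
    nbins = max(1, int(round((hi - lo) / bin_width)))
    counts, edges = np.histogram(shifts, bins=nbins)
    kernel = np.ones(smooth_bins) / smooth_bins
    smoothed = np.convolve(counts, kernel, mode="same")
    centres = 0.5 * (edges[:-1] + edges[1:])
    floor = min_rel_height * smoothed.max()
    return [
        (centres[i], smoothed[i])
        for i in range(1, len(smoothed) - 1)
        if smoothed[i - 1] < smoothed[i] >= smoothed[i + 1] and smoothed[i] >= floor
    ]

# Synthetic data: a correctly referenced population near 56 ppm plus a
# smaller population misreferenced by ca. -2.4 ppm (values are invented).
rng = np.random.default_rng(0)
data = np.concatenate([
    rng.normal(56.0, 0.5, 9400),
    rng.normal(53.6, 0.5, 600),
])
peaks = peak_maxima(data)
```

With the 6% misreferenced fraction used here, the offset population shows up as a distinct secondary maximum near 53.6 ppm rather than vanishing into the tail of the main peak.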

  2. Study of recycled concrete aggregate quality and its relationship with recycled concrete compressive strength using database analysis

    Directory of Open Access Journals (Sweden)

    González-Taboada, I.

    2016-09-01

This work studies the physical and mechanical properties of recycled concrete aggregate (recycled aggregate from concrete waste) and their influence on structural recycled concrete compressive strength. For this purpose, a database has been developed with the experimental results of 152 works selected from over 250 international references. The processed database results indicate that the properties most indicative of recycled aggregate quality are density and absorption. Moreover, the study analyses how the recycled aggregate (both percentage and quality) and the mixing procedure (pre-soaking or adding extra water) influence the recycled concrete strength of different categories (high or low water-to-cement ratios). When recycled aggregate absorption is low (under 5%), pre-soaking or adding extra water to avoid loss in workability will negatively affect concrete strength (due to the bleeding effect), whereas with high water absorption this does not occur and both of the aforementioned correcting methods can be accurately employed.

  3. Conformationally selective multidimensional chemical shift ranges in proteins from a PACSY database purged using intrinsic quality criteria

    Energy Technology Data Exchange (ETDEWEB)

    Fritzsching, Keith J., E-mail: kfritzsc@brandeis.edu [Brandeis University, Department of Chemistry (United States); Hong, Mei [Massachusetts Institute of Technology, Department of Chemistry (United States); Schmidt-Rohr, Klaus, E-mail: srohr@brandeis.edu [Brandeis University, Department of Chemistry (United States)

    2016-02-15

We have determined refined multidimensional chemical shift ranges for intra-residue correlations (¹³C–¹³C, ¹⁵N–¹³C, etc.) in proteins, which can be used to gain type-assignment and/or secondary-structure information from experimental NMR spectra. The chemical-shift ranges are the result of a statistical analysis of the PACSY database of >3000 proteins with 3D structures (1,200,207 ¹³C chemical shifts and >3 million chemical shifts in total); these data were originally derived from the Biological Magnetic Resonance Data Bank. Using relatively simple non-parametric statistics to find peak maxima in the distributions of helix, sheet, coil and turn chemical shifts, and without the use of limited “hand-picked” data sets, we show that ∼94% of the ¹³C NMR data and almost all ¹⁵N data are quite accurately referenced and assigned, with smaller standard deviations (0.2 and 0.8 ppm, respectively) than recognized previously. On the other hand, approximately 6% of the ¹³C chemical shift data in the PACSY database are shown to be clearly misreferenced, mostly by ca. −2.4 ppm. The removal of the misreferenced data and other outliers by this purging by intrinsic quality criteria (PIQC) allows for reliable identification of secondary maxima in the two-dimensional chemical-shift distributions already pre-separated by secondary structure. We demonstrate that some of these correspond to specific regions in the Ramachandran plot, including left-handed helix dihedral angles, reflect unusual hydrogen bonding, or are due to the influence of a following proline residue. With appropriate smoothing, significantly more tightly defined chemical shift ranges are obtained for each amino acid type in the different secondary structures. These chemical shift ranges, which may be defined at any statistical threshold, can be used for amino-acid type assignment and secondary-structure analysis of chemical shifts from intra

  4. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. First, it analyzes the current academic discussion on this subject and confronts the differing opinions of both supporters and opponents of the idea that video games can be a full-fledged art form. The second aim of this paper is to analyze the properties that are inherent to video games, in order to find the reason why the cultural elite considers video games as i...

  5. Elective Stoma Reversal Has a Higher Incidence of Postoperative Clostridium Difficile Infection Compared With Elective Colectomy: An Analysis Using the American College of Surgeons National Surgical Quality Improvement Program and Targeted Colectomy Databases.

    Science.gov (United States)

    Skancke, Matthew; Vaziri, Khashayar; Umapathi, Bindu; Amdur, Richard; Radomski, Michal; Obias, Vincent

    2018-05-01

Clostridium difficile infection is caused by the proliferation of a gram-positive anaerobic bacterium after medical or surgical intervention and can result in toxic complications, emergent surgery, and death. This analysis evaluates the incidence of C difficile infection in elective restoration of intestinal continuity compared with elective colon resection. This was a retrospective database review of the 2015 American College of Surgeons National Surgical Quality Improvement Program and targeted colectomy database. The intervention cohort was defined by the primary Current Procedural Terminology codes for ileostomy/colostomy reversal (44227, 44620, 44625, and 44626) and International Classification of Diseases codes for ileostomy/colostomy status (V44.2, V44.3, V55.2, V55.3, Z93.2, Z93.3, Z43.3, and Z43.2). A total of 2235 patients underwent elective stoma reversal compared with 10403 patients who underwent elective colon resection. Multivariate regression modeling of the impact of stoma reversal on postoperative C difficile infection risk was used as the study intervention. The incidence of C difficile infection in the 30 days after surgery was measured. The incidence of C difficile infection in the 30-day postoperative period was significantly higher after stoma reversal (3.04% vs 1.25%), and on multivariate regression stoma reversal remained independently associated with C difficile infection incidence in the 30-day postoperative period. The study was limited because it was a retrospective database review with observational bias. Patients who undergo elective stoma reversal have a higher incidence of postoperative C difficile infection compared with patients who undergo an elective colectomy. Given the impact of postoperative C difficile infection, a heightened sense of suspicion should be given to symptomatic patients after stoma reversal. See Video Abstract at http://links.lww.com/DCR/A553.

  6. Recommendations of the DNA Commission of the International Society for Forensic Genetics (ISFG) on quality control of autosomal Short Tandem Repeat allele frequency databasing (STRidER)

    DEFF Research Database (Denmark)

    Bodner, Martin; Bastisch, Ingo; Butler, John M.

    2016-01-01

There is currently no agreed procedure for performing quality control of STR allele frequency databases, and the reliability and accuracy of the data are largely based on the responsibility of the individual contributing research groups. It has been demonstrated with databases of haploid markers (EMPOP for mitochondrial DNA, and YHRD for Y-chromosomal loci) that centralized quality control and data curation is essential to minimize error. The concepts employed for quality control involve software-aided likelihood-of-genotype, phylogenetic, and population genetic checks that allow the researchers to compare … STRidER builds on the previously established ENFSI DNA WG STRbASE and applies standard concepts established for haploid and autosomal markers as well as novel tools to reduce error and increase the quality of autosomal STR data. The platform constitutes a significant improvement and innovation for the scientific community.

  7. Introspection into institutional database allows for focused quality improvement plan in cardiac surgery: example for a new global healthcare system.

    Science.gov (United States)

    Lancaster, Elizabeth; Postel, Mackenzie; Satou, Nancy; Shemin, Richard; Benharash, Peyman

    2013-10-01

Reducing readmission rates is vital to improving quality of care and reducing healthcare costs. In accordance with the Patient Protection and Affordable Care Act, Medicare will cut payments to hospitals with high 30-day readmission rates. We retrospectively reviewed an institutional database to identify risk factors predisposing adult cardiac surgery patients to rehospitalization within 30 days of discharge. Of 2302 adult cardiac surgery patients within the study period from 2008 to 2011, a total of 218 patients (9.5%) were readmitted within 30 days. Factors found to be significant predictors of readmission were nonwhite race (P = 0.003), government health insurance (P = 0.02), ejection fraction less than 40 per cent (P = 0.001), and chronic lung disease; identifying such risk factors is a first step toward improving patient care. Our data suggest that optimizing cardiopulmonary status in patients with comorbidities such as heart failure and chronic obstructive pulmonary disease, increasing directed pneumonia prophylaxis, patient education tailored to specific patient social needs, earlier patient follow-up, and better communication between inpatient and outpatient physicians may reduce readmission rates.

  8. Outcomes of operations for benign foregut disease in elderly patients: a National Surgical Quality Improvement Program database analysis.

    Science.gov (United States)

    Molena, Daniela; Mungo, Benedetto; Stem, Miloslawa; Feinberg, Richard L; Lidor, Anne O

    2014-08-01

The development of minimally invasive operative techniques and improvements in postoperative care have made surgery a viable option for a greater number of elderly patients. Our objective was to evaluate the outcomes of laparoscopic and open foregut operations in relation to patient age. Patients who underwent gastric fundoplication, paraesophageal hernia repair, and Heller myotomy were identified via the National Surgical Quality Improvement Program (NSQIP) database (2005-2011). Patient characteristics and outcomes were compared between five age groups (group I: <65 years; II: 65-69 years; III: 70-74 years; IV: 75-79 years; and V: ≥80 years). Multivariable logistic regression analysis was used to predict the impact of age and operative approach on the studied outcomes. A total of 19,388 patients were identified. Advanced age was associated with increased rates of 30-day mortality, overall morbidity, serious morbidity, and extended length of stay, regardless of the operative approach. After we adjusted for other variables, advanced age was associated with increased odds of 30-day mortality compared with patients <65 years (III: odds ratio 2.70, 95% confidence interval 1.34-5.44, P = .01; IV: 2.80, 1.35-5.81, P = .01; V: 6.12, 3.41-10.99, P < .001). Surgery for benign foregut disease in elderly patients carries a burden of mortality and morbidity that needs to be acknowledged. Copyright © 2014 Mosby, Inc. All rights reserved.

  9. The World Database for Pediatric and Congenital Heart Surgery: The Dawn of a New Era of Global Communication and Quality Improvement in Congenital Heart Disease.

    Science.gov (United States)

    St Louis, James D; Kurosawa, Hiromi; Jonas, Richard A; Sandoval, Nestor; Cervantes, Jorge; Tchervenkov, Christo I; Jacobs, Jeffery P; Sakamoto, Kisaburo; Stellin, Giovanni; Kirklin, James K

    2017-09-01

The World Society for Pediatric and Congenital Heart Surgery (WSPCHS) was founded with the mission to "promote the highest quality comprehensive cardiac care to all patients with congenital heart disease, from the fetus to the adult, regardless of the patient's economic means, with an emphasis on excellence in teaching, research, and community service." Early on, the Society's members realized that a crucial step in meeting this goal was to establish a global database that would collect vital information, allowing cardiac surgical centers worldwide to benchmark their outcomes and improve the quality of congenital heart disease care. With tireless efforts from all corners of the globe and utilizing the vast experience and invaluable input of multiple international experts, such a platform of global information exchange was created: The World Database for Pediatric and Congenital Heart Surgery went live on January 1, 2017. This database has been thoughtfully designed to produce meaningful performance and quality analyses of surgical outcomes extending beyond immediate hospital survival, allowing capture of important morbidities and mortalities for up to 1 year postoperatively. In order to advance the societal mission, this quality improvement program is available free of charge to WSPCHS members. In establishing the World Database, the Society has taken an essential step to further the process of global improvement in care for children with congenital heart disease.

  10. Academic video

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    This chapter focuses on methodological issues that arise when using (digital) video for research communication, not least online. Video has long been used in research for data collection and for communicating research. With digitisation and the internet ...

  11. Understanding why an active video game intervention did not improve motor skill and physical activity in children with developmental coordination disorder: A quantity or quality issue?

    Science.gov (United States)

    Howie, Erin K; Campbell, Amity C; Abbott, Rebecca A; Straker, Leon M

    2017-01-01

Active video games (AVGs) have been identified as a novel strategy to improve motor skill and physical activity in clinical populations. A recent cross-over randomized trial found AVGs to be ineffective at improving motor skill and physical activity in the home environment for children with or at risk for developmental coordination disorder (DCD). The purpose of this study was to better understand why the intervention had been ineffective by examining the quantity and quality of AVG play during an AVG intervention for children with or at risk for DCD. Participants (n = 21, ages 9-12) completed the 16-week AVG intervention. Detailed quantitative and qualitative data were systematically triangulated to obtain the quantity of exposure (AVG exposure over time, patterns of exposure) and the quality of use (game selection, facilitators and barriers to play). The median AVG dose (range 30-35 min/day) remained relatively stable across the intervention and met the prescribed dose. Play quality was impacted by game selection, difficulty playing games, lack of time, illness, technical difficulties, and boredom. The ineffectiveness of a home-based AVG intervention may be due to the quality of play. Strategies to improve the quality of game play may help realize the potential benefits of AVGs as a clinical tool for children with DCD. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Bronchial blocker versus left double-lumen endotracheal tube in video-assisted thoracoscopic surgery: a randomized-controlled trial examining time and quality of lung deflation.

    Science.gov (United States)

    Bussières, Jean S; Somma, Jacques; Del Castillo, José Luis Carrasco; Lemieux, Jérôme; Conti, Massimo; Ugalde, Paula A; Gagné, Nathalie; Lacasse, Yves

    2016-07-01

Double-lumen endotracheal tubes (DL-ETT) and bronchial blockers (BB) have both been used for lung isolation in video-assisted thoracic surgery (VATS). Though not well studied, it is widely thought that a DL-ETT provides faster and better quality lung collapse. The aim of this study was to compare a BB technique vs a left-sided DL-ETT strategy with regard to the time and quality of lung collapse during one-lung ventilation (OLV) for elective VATS. Forty patients requiring OLV for VATS were randomized to receive a BB (n = 20) or a left-sided DL-ETT (n = 20). The primary endpoint was the time from pleural opening (performed by the surgeon) until complete lung collapse. The time was evaluated offline by reviewing video recorded during the VATS. The quality of lung deflation was also graded offline using a visual scale (1 = no lung collapse; 2 = partial lung collapse; and 3 = total lung collapse) and was recorded at several time points after pleural incision. The surgeon also graded the time to complete lung collapse and quality of lung deflation during the procedure. The surgeon's guess as to which device was used for lung isolation was also recorded. Of the 40 patients enrolled in the study, 20 patients in the DL-ETT group and 18 in the BB group were analyzed. The mean (standard deviation) time to complete collapse of the operative lung was significantly shorter using the BB than using the DL-ETT [7.5 (3.8) min vs 36.6 (29.1) min, respectively; mean difference, 29.1 min; 95% confidence interval, 1.8 to 7.2; P < 0.001]. Overall, a higher proportion of patients in the BB group than in the DL-ETT group achieved a quality of lung collapse score of 3 at five minutes (57% vs 6%, respectively; P < 0.004), ten minutes (73% vs 14%, respectively; P = 0.005), and 20 min (100% vs 25%, respectively; P = 0.002) after opening the pleura. The surgeon incorrectly guessed the type of device used in 78% of the BB group and 50% of the DL-ETT group (P = 0.10). The time to complete lung collapse was thus shorter, and the quality of lung deflation better, with the BB than with the left-sided DL-ETT.

  13. With a better connection between the utility and its customers and a higher-quality database toward a more efficient DSM program

    International Nuclear Information System (INIS)

    Tomasic-Skevin, S.

    1996-01-01

    In this paper, new demand-side technologies and their influence on the power system are described. A better connection between the utility and its customers is essential for building a good database, and that database is the basis for efficient use of DSM programs. (author)

  14. Influence of Pro-Qura-generated Plans on Postimplant Dosimetric Quality: A Review of a Multi-Institutional Database

    International Nuclear Information System (INIS)

    Allen, Zachariah; Merrick, Gregory S.; Grimm, Peter; Blasko, John; Sylvester, John; Butler, Wayne; Chaudry, Usman-Ul-Haq; Sitter, Michael

    2008-01-01

The influence of Pro-Qura-generated plans vs. community-generated plans on post-prostate brachytherapy dosimetric quality was compared. In the Pro-Qura database, 2933 postplans from 57 institutions were evaluated. A total of 1803 plans were generated by Pro-Qura and 1130 by community institutions. Iodine-125 (¹²⁵I) plans outnumbered Palladium-103 (¹⁰³Pd) plans by a ratio of 3:1. Postimplant dosimetry was performed in a standardized fashion by overlapping the preimplant ultrasound and the postimplant computed tomography (CT). In this analysis, adequacy was defined as a V₁₀₀ > 80% and a D₉₀ of 90% to 140% for both isotopes, along with V₁₅₀ limits for ¹²⁵I and ¹⁰³Pd. The mean postimplant V₁₀₀ and D₉₀ were 88.6% and 101.6% vs. 89.3% and 102.3% for Pro-Qura and community plans, respectively. When analyzed in terms of the first 8 sequence groups (10 patients/sequence group) for each institution, Pro-Qura planning resulted in less postimplant variability for V₁₀₀ (86.2-89.5%) and for D₉₀ (97.4-103.2%), while community-generated plans had greater V₁₀₀ (85.3-91.2%) and D₉₀ (95.9-105.2%) ranges. In terms of sequence groups, postimplant dosimetry was deemed 'too cool' in 11% to 30% of cases and 'too hot' in 12% to 27%. On average, no clinically significant postimplant dosimetric differences were discerned between Pro-Qura and community-based planning. However, substantially greater variability was identified in the community-based plan cohort. It is possible that the Pro-Qura plan and/or the routine postimplant dosimetric evaluation may have influenced dosimetric outcomes at community-based centers
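The dose-volume metrics compared above have simple definitions that can be sketched in a few lines: V_x is the percentage of the target volume receiving at least x% of the prescription dose, and D_x is the minimum dose received by the hottest x% of the volume. The dose array below is a hypothetical illustration, not output from Pro-Qura's evaluation software:

```python
import numpy as np

def v_metric(doses, threshold):
    """V_x: percentage of the target volume receiving >= `threshold` dose."""
    doses = np.asarray(doses, dtype=float)
    return 100.0 * np.mean(doses >= threshold)

def d_metric(doses, volume_pct):
    """D_x: minimum dose received by the hottest `volume_pct`% of the volume,
    i.e. the (100 - volume_pct)th percentile of the dose distribution."""
    doses = np.asarray(doses, dtype=float)
    return np.percentile(doses, 100.0 - volume_pct)

# Illustrative dose samples over a target, in percent of prescription dose.
doses = np.array([80, 95, 100, 105, 110, 120, 130, 140, 150, 160], dtype=float)

v100 = v_metric(doses, 100.0)  # volume fraction at >= 100% prescription
d90 = d_metric(doses, 90.0)    # dose covering 90% of the volume
```

On real postplans these sums run over CT voxels inside the contoured prostate, but the definitions are the same.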

  15. Associations with HIV testing in Uganda: an analysis of the Lot Quality Assurance Sampling database 2003-2012.

    Science.gov (United States)

    Jeffery, Caroline; Beckworth, Colin; Hadden, Wilbur C; Ouma, Joseph; Lwanga, Stephen K; Valadez, Joseph J

    2016-01-01

Beginning in 2003, Uganda used Lot Quality Assurance Sampling (LQAS) to help district managers collect and use data to improve their human immunodeficiency virus (HIV)/AIDS program. Uganda's LQAS database (2003-2012) covers up to 73 of 112 districts. Our multidistrict analysis of the LQAS dataset at 2003-2004 and 2012 examined gender variation among adults who had ever tested for HIV over time, and attributes associated with testing. Conditional logistic regression matched men and women by community, with seven model effect variables. HIV testing prevalence rose from 14% (men) and 12% (women) in 2003-2004 to 62% (men) and 80% (women) in 2012. In 2003-2004, knowing the benefits of testing (Odds Ratio [OR] = 6.09, 95% CI = 3.01-12.35), knowing where to get tested (OR = 2.83, 95% CI = 1.44-5.56), and secondary education (OR = 3.04, 95% CI = 1.19-7.77) were significantly associated with HIV testing. By 2012, knowing the benefits of testing (OR = 3.63, 95% CI = 2.25-5.83), where to get tested (OR = 5.15, 95% CI = 3.26-8.14), primary education (OR = 2.01, 95% CI = 1.39-2.91), being female (OR = 3.03, 95% CI = 2.53-3.62), and being married (OR = 1.81, 95% CI = 1.17-2.8) were significantly associated with HIV testing. HIV testing prevalence in Uganda has increased dramatically, more for women than men. Our results concur with other authors that education, knowledge of HIV, and marriage (women only) are associated with testing for HIV, and suggest that couples testing is more prevalent than other authors have reported.
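The association measures reported above are odds ratios with 95% confidence intervals. As a rough sketch of how an unadjusted odds ratio and its Wald interval are computed from a 2×2 table (the study itself used conditional logistic regression with community matching; the counts below are hypothetical):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table.

    a: exposed & tested      b: exposed & not tested
    c: unexposed & tested    d: unexposed & not tested
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 90 of 120 respondents who knew where to get tested
# had ever tested for HIV, vs 45 of 130 who did not know.
or_, lo, hi = odds_ratio_ci(90, 30, 45, 85)
```

For the matched design used in the study, a conditional logistic model is needed; the sketch above only illustrates the unadjusted measure behind the reported ORs.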

  16. Building a Quality Controlled Database of Meteorological Data from NASA Kennedy Space Center and the United States Air Force's Eastern Range

    Science.gov (United States)

    Brenton, James C.; Barbre, Robert E., Jr.; Decker, Ryan K.; Orcutt, John M.

    2018-01-01

The National Aeronautics and Space Administration's (NASA) Marshall Space Flight Center (MSFC) Natural Environments Branch (EV44) has provided atmospheric databases and analysis in support of space vehicle design and day-of-launch operations for NASA and commercial launch vehicle programs launching from the NASA Kennedy Space Center (KSC), co-located on the United States Air Force's Eastern Range (ER) at the Cape Canaveral Air Force Station. The ER complex is one of the most heavily instrumented sites in the United States, with over 31 towers measuring various atmospheric parameters on a continuous basis. An inherent challenge with large sets of data is ensuring that erroneous data are removed from databases, and thus excluded from launch vehicle design analyses. EV44 has put forth great effort in developing quality control (QC) procedures for individual meteorological instruments; however, no standard QC procedure currently exists for all databases, resulting in QC'd databases that have inconsistencies in variables, methodologies, and periods of record. The goal of this activity is to build on previous EV44 efforts to develop a standardized set of QC procedures with which to build meteorological databases from KSC and the ER, while maintaining open communication with end users from the launch community to develop ways to improve, adapt and grow the QC database. Details of the QC procedures will be described. As the launch rate increases with additional launch vehicle programs, it is becoming more important that weather databases are continually updated and checked for data quality before use in launch vehicle design and certification analyses.
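The kind of automated screening such a QC pipeline applies can be sketched as simple range and persistence checks on each instrument's time series. The variable limits and the "stuck sensor" rule below are illustrative assumptions, not EV44's actual criteria:

```python
import math

# Plausible physical-range limits per variable (illustrative values only).
RANGE_LIMITS = {
    "temperature_C": (-20.0, 45.0),
    "wind_speed_mps": (0.0, 75.0),
    "relative_humidity_pct": (0.0, 100.0),
}

def qc_flags(variable, samples, max_repeat=4):
    """Flag each sample: 'ok', 'range' (missing or outside physical limits),
    or 'stuck' (value repeated more than `max_repeat` consecutive times,
    suggesting a frozen sensor)."""
    lo, hi = RANGE_LIMITS[variable]
    flags = []
    run, prev = 0, None
    for x in samples:
        run = run + 1 if x == prev else 1
        prev = x
        if math.isnan(x) or not (lo <= x <= hi):
            flags.append("range")
        elif run > max_repeat:
            flags.append("stuck")
        else:
            flags.append("ok")
    return flags

# A spike (99.9 C) and a stuck reading are both caught:
flags = qc_flags("temperature_C", [21.3, 21.4, 99.9, 22.0, 22.0, 22.0, 22.0, 22.0, 22.0])
```

Production QC would add cross-tower consistency and rate-of-change checks, but flagging rather than silently deleting, as here, preserves the raw period of record.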

  17. Video Podcasts

    DEFF Research Database (Denmark)

    Nortvig, Anne Mette; Sørensen, Birgitte Holm

    2016-01-01

This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively...

  18. Video games.

    Science.gov (United States)

    Funk, Jeanne B

    2005-06-01

    The video game industry insists that it is doing everything possible to provide information about the content of games so that parents can make informed choices; however, surveys indicate that ratings may not reflect consumer views of the nature of the content. This article describes some of the currently popular video games, as well as developments that are on the horizon, and discusses the status of research on the positive and negative impacts of playing video games. Recommendations are made to help parents ensure that children play games that are consistent with their values.

  19. Robust Watermarking of Video Streams

    Directory of Open Access Journals (Sweden)

    T. Polyák

    2006-01-01

In the past few years there has been an explosion in the use of digital video data. Many people have personal computers at home, and with the help of the Internet users can easily share video files on their computers. This makes possible the unauthorized use of digital media, and without adequate protection systems the authors and distributors have no means to prevent it. Digital watermarking techniques can help these systems be more effective by embedding secret data right into the video stream. This makes minor changes in the frames of the video, but these changes are almost imperceptible to the human visual system. The embedded information can include copyright data, access control information, etc. A robust watermark is resistant to various distortions of the video, so it cannot be removed without affecting the quality of the host medium. In this paper I propose a video watermarking scheme that fulfills the requirements of a robust watermark.
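A minimal sketch of the additive spread-spectrum idea behind many robust watermarks: a key-derived pseudo-random pattern is added to the luminance of a frame and later detected by correlation. This is a toy spatial-domain illustration, not the scheme proposed in the paper; practical robust schemes usually embed in a transform domain (e.g. DCT) to better survive compression:

```python
import numpy as np

def keyed_pattern(key, shape):
    """Pseudo-random +/-1 pattern derived from a secret key."""
    return np.random.default_rng(key).choice([-1.0, 1.0], size=shape)

def embed_watermark(frame, key, alpha=3.0):
    """Additively embed the key's pattern into a luminance frame.

    alpha trades imperceptibility (small) against robustness (large)."""
    marked = frame + alpha * keyed_pattern(key, frame.shape)
    return np.clip(marked, 0.0, 255.0)

def detect_watermark(frame, key, threshold=1.5):
    """Correlate the (mean-removed) frame with the key's pattern.

    For a marked frame the correlation concentrates near alpha; for an
    unmarked frame it stays near zero."""
    pattern = keyed_pattern(key, frame.shape)
    corr = float(np.mean((frame - frame.mean()) * pattern))
    return corr > threshold, corr

# Synthetic 256x256 "luminance" frame standing in for one video frame.
frame = np.random.default_rng(123).integers(30, 220, size=(256, 256)).astype(float)

marked = embed_watermark(frame, key=42)
present, _ = detect_watermark(marked, key=42)  # detected with the right key
absent, _ = detect_watermark(frame, key=42)    # original frame: no watermark
```

Because detection is a blind correlation against the keyed pattern, an attacker without the key cannot easily subtract the watermark without visibly degrading the frame.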

  20. Video Comparator

    International Nuclear Information System (INIS)

    Rose, R.P.

    1978-01-01

The Video Comparator is a comparative gage that uses electronic images from two sources, a standard and an unknown. Two matched video cameras are used to obtain the electronic images. The video signals are mixed and displayed on a single video receiver (CRT). The video system, manufactured by ITP of Chatsworth, CA, is a Tele-Microscope II, Model 148. One of the cameras is mounted on a toolmaker's microscope stand and produces a 250X image of a cast. The other camera is mounted on a stand and produces an image of a 250X template. The two video images are mixed in a control box provided by ITP and displayed on a CRT. The template or the cast can be moved to align the desired features. Vertical reference lines are provided on the CRT, and a feature on the cast can be aligned with a line on the CRT screen. The stage containing the casts can be moved using a Boeckeler micrometer equipped with a digital readout; a second feature is then aligned with the reference line and the distance moved is obtained from the digital display

  1. Quality-controlled sea surface temperature, salinity and other measurements from the NCEI Global Thermosalinographs Database (NCEI-TSG)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This collection contains global in-situ sea surface temperature (SST), salinity (SSS) and other measurements from the NOAA NCEI Global Thermosalinographs Database...

  2. Minimally invasive versus open fusion for Grade I degenerative lumbar spondylolisthesis: analysis of the Quality Outcomes Database.

    Science.gov (United States)

    Mummaneni, Praveen V; Bisson, Erica F; Kerezoudis, Panagiotis; Glassman, Steven; Foley, Kevin; Slotkin, Jonathan R; Potts, Eric; Shaffrey, Mark; Shaffrey, Christopher I; Coric, Domagoj; Knightly, John; Park, Paul; Fu, Kai-Ming; Devin, Clinton J; Chotai, Silky; Chan, Andrew K; Virk, Michael; Asher, Anthony L; Bydon, Mohamad

    2017-08-01

OBJECTIVE Lumbar spondylolisthesis is a degenerative condition that can be surgically treated with either open or minimally invasive decompression and instrumented fusion. Minimally invasive surgery (MIS) approaches may shorten recovery, reduce blood loss, and minimize soft-tissue damage with resultant reduced postoperative pain and disability. METHODS The authors queried the national, multicenter Quality Outcomes Database (QOD) registry for patients undergoing posterior lumbar fusion between July 2014 and December 2015 for Grade I degenerative spondylolisthesis. The authors recorded baseline and 12-month patient-reported outcomes (PROs), including Oswestry Disability Index (ODI), EQ-5D, numeric rating scale (NRS)-back pain (NRS-BP), NRS-leg pain (NRS-LP), and satisfaction (North American Spine Society satisfaction questionnaire). Multivariable regression models were fitted for hospital length of stay (LOS), 12-month PROs, and 90-day return to work, after adjusting for an array of preoperative and surgical variables. RESULTS A total of 345 patients (open surgery, n = 254; MIS, n = 91) from 11 participating sites were identified in the QOD. The follow-up rate at 12 months was 84% (83.5% [open surgery]; 85% [MIS]). Overall, baseline patient demographics, comorbidities, and clinical characteristics were similarly distributed between the cohorts. Two hundred fifty-seven patients underwent 1-level fusion (open surgery, n = 181; MIS, n = 76), and 88 patients underwent 2-level fusion (open surgery, n = 73; MIS, n = 15). Patients in both groups reported significant improvement in all primary outcomes, with broadly similar improvements in the MIS and open surgical groups. However, the change in functional outcome scores for patients undergoing 2-level fusion was notably larger in the MIS cohort for ODI (-27 vs -16, p = 0.1), EQ-5D (0.27 vs 0.15, p = 0.08), and NRS-BP (-3.5 vs -2.7, p = 0.41); statistical significance was shown only for changes in NRS-LP scores (-4.9 vs -2.8, p = 0.02). On risk-adjusted analysis for 1

  3. Defining the minimum clinically important difference for grade I degenerative lumbar spondylolisthesis: insights from the Quality Outcomes Database.

    Science.gov (United States)

    Asher, Anthony L; Kerezoudis, Panagiotis; Mummaneni, Praveen V; Bisson, Erica F; Glassman, Steven D; Foley, Kevin T; Slotkin, Jonathan; Potts, Eric A; Shaffrey, Mark E; Shaffrey, Christopher I; Coric, Domagoj; Knightly, John J; Park, Paul; Fu, Kai-Ming; Devin, Clinton J; Archer, Kristin R; Chotai, Silky; Chan, Andrew K; Virk, Michael S; Bydon, Mohamad

    2018-01-01

    OBJECTIVE Patient-reported outcomes (PROs) play a pivotal role in defining the value of surgical interventions for spinal disease. The concept of minimum clinically important difference (MCID) is considered the new standard for determining the effectiveness of a given treatment and describing patient satisfaction in response to that treatment. The purpose of this study was to determine the MCID associated with surgical treatment for degenerative lumbar spondylolisthesis. METHODS The authors queried the Quality Outcomes Database registry from July 2014 through December 2015 for patients who underwent posterior lumbar surgery for grade I degenerative spondylolisthesis. Recorded PROs included scores on the Oswestry Disability Index (ODI), EQ-5D, and numeric rating scale (NRS) for leg pain (NRS-LP) and back pain (NRS-BP). Anchor-based (using the North American Spine Society satisfaction scale) and distribution-based (half a standard deviation, small Cohen's effect size, standard error of measurement, and minimum detectable change [MDC]) methods were used to calculate the MCID for each PRO. RESULTS A total of 441 patients (80 who underwent laminectomies alone and 361 who underwent fusion procedures) from 11 participating sites were included in the analysis. The changes in functional outcome scores between baseline and the 1-year postoperative evaluation were as follows: 23.5 ± 17.4 points for ODI, 0.24 ± 0.23 for EQ-5D, 4.1 ± 3.5 for NRS-LP, and 3.7 ± 3.2 for NRS-BP. The different calculation methods generated a range of MCID values for each PRO: 3.3-26.5 points for ODI, 0.04-0.3 points for EQ-5D, 0.6-4.5 points for NRS-LP, and 0.5-4.2 points for NRS-BP. The MDC approach appeared to be the most appropriate for calculating MCID because it provided a threshold greater than the measurement error and was closest to the average change difference between the satisfied and not-satisfied patients. On subgroup analysis, the MCID thresholds for laminectomy-alone patients were
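The distribution-based methods named in this abstract (half a standard deviation, small Cohen's effect size, standard error of measurement, and minimum detectable change) follow standard formulas and can be sketched from a sample of change scores. The change scores and the test-retest reliability coefficient below are invented placeholders for illustration, not values from the study:

```python
import math
from statistics import stdev

def distribution_based_mcid(change_scores, reliability=0.9):
    """Distribution-based MCID estimates from a sample of PRO change scores.
    `reliability` is a hypothetical test-retest coefficient, not a study value."""
    sd = stdev(change_scores)
    sem = sd * math.sqrt(1.0 - reliability)   # standard error of measurement
    return {
        "half_sd": 0.5 * sd,                  # half a standard deviation
        "small_effect": 0.2 * sd,             # small Cohen's effect size
        "sem": sem,
        "mdc": 1.96 * sem * math.sqrt(2.0),   # minimum detectable change (95% CI)
    }

# Hypothetical 1-year ODI change scores (points), for illustration only
odi_changes = [10, 25, 30, 5, 40, 22, 18, 35, 12, 28]
est = distribution_based_mcid(odi_changes)
```

Note that the MDC is by construction larger than the SEM, which is consistent with the abstract's observation that the MDC provides a threshold greater than the measurement error.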

  4. USDA's National Food and Nutrient Analysis Program (NFNAP) produces high-quality data for USDA food composition databases: Two decades of collaboration.

    Science.gov (United States)

    Haytowitz, David B; Pehrsson, Pamela R

    2018-01-01

    For nearly 20 years, the National Food and Nutrient Analysis Program (NFNAP) has expanded and improved the quantity and quality of data in the US Department of Agriculture's (USDA) food composition databases (FCDB) through the collection and analysis of nationally representative food samples. NFNAP employs statistically valid sampling plans, the Key Foods approach to identify and prioritize foods and nutrients, comprehensive quality control protocols, and analytical oversight to generate new and updated analytical data for food components. NFNAP has allowed the Nutrient Data Laboratory to keep up with the dynamic US food supply and emerging scientific research. Recently generated results for nationally representative food samples show marked changes compared to previous database values for selected nutrients. Monitoring changes in the composition of foods is critical in keeping FCDB up-to-date, so that they remain a vital tool in assessing the nutrient intake of national populations, as well as for providing dietary advice. Published by Elsevier Ltd.

  5. Facilitating quality control for spectra assignments of small organic molecules: nmrshiftdb2--a free in-house NMR database with integrated LIMS for academic service laboratories.

    Science.gov (United States)

    Kuhn, Stefan; Schlörer, Nils E

    2015-08-01

    With its laboratory information management system (LIMS), nmrshiftdb2 supports the integration of electronic lab administration and management into academic NMR facilities. It also offers the setup of a local database, while full access to nmrshiftdb2's World Wide Web database is retained. For lab users, this freely available system allows the submission of orders for measurement, transfers recorded data automatically or manually, enables download of spectra via a web interface, and provides integrated access to the prediction, search, and assignment tools of the NMR database. For staff and lab administration, the flow of all orders can be supervised; administrative tools also include user and hardware management, a statistics function for accounting purposes, and a 'QuickCheck' function for assignment control, facilitating quality control of assignments submitted to the (local) database. Both the laboratory information management system and the database use a web interface as front end and are therefore independent of the operating system in use. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Measurement of the Inter-Rater Reliability Rate Is Mandatory for Improving the Quality of a Medical Database: Experience with the Paulista Lung Cancer Registry.

    Science.gov (United States)

    Lauricella, Leticia L; Costa, Priscila B; Salati, Michele; Pego-Fernandes, Paulo M; Terra, Ricardo M

    2018-06-01

    Database quality measurement should be considered a mandatory step to ensure an adequate level of confidence in data used for research and quality improvement. Several metrics have been described in the literature, but no standardized approach has been established. We aimed to describe a methodological approach applied to measure the quality and inter-rater reliability of a regional multicentric thoracic surgical database (Paulista Lung Cancer Registry). Data from the first 3 years of the Paulista Lung Cancer Registry underwent an audit process with 3 metrics: completeness, consistency, and inter-rater reliability. The first 2 methods were applied to the whole data set, and the last method was calculated using 100 cases randomized for direct auditing. Inter-rater reliability was evaluated using percentage of agreement between the data collector and auditor and through calculation of Cohen's κ and intraclass correlation. The overall completeness per section ranged from 0.88 to 1.00, and the overall consistency was 0.96. Inter-rater reliability showed many variables with high disagreement (>10%). For numerical variables, intraclass correlation was a better metric than inter-rater reliability. Cohen's κ showed that most variables had moderate to substantial agreement. The methodological approach applied to the Paulista Lung Cancer Registry showed that completeness and consistency metrics did not sufficiently reflect the real quality status of a database. The inter-rater reliability associated with κ and intraclass correlation was a better quality metric than completeness and consistency metrics because it could determine the reliability of specific variables used in research or benchmark reports. This report can be a paradigm for future studies of data quality measurement. Copyright © 2018 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
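The agreement metrics this abstract relies on (percentage agreement and Cohen's κ for categorical variables) follow standard definitions and can be sketched directly. The collector-versus-auditor codings below are invented toy data, not registry values:

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of records where the two raters coded the same category."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two raters corrected for chance."""
    n = len(a)
    p_o = percent_agreement(a, b)                      # observed agreement
    fa, fb = Counter(a), Counter(b)
    p_e = sum(fa[c] * fb.get(c, 0) for c in fa) / (n * n)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical collector-vs-auditor codings for one registry variable
collector = ["yes", "yes", "no", "no"]
auditor   = ["yes", "no",  "no", "no"]
kappa = cohens_kappa(collector, auditor)  # 0.5 for this toy example
```

In this toy example the raw agreement is 75%, but κ is only 0.5 once chance agreement is removed, which illustrates why the authors found κ a better quality metric than completeness or consistency alone.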

  7. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.

  8. Mobile Video in Everyday Social Interactions

    Science.gov (United States)

    Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi

    Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. The Internet connectivity of mobile phones enables fluent sharing of captured material, even in real time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera on the social environment in everyday life situations, based mainly on a study in which four groups of people used digital video cameras in their normal settings. We also reflect on another study of ours on real-time mobile video communication and discuss future views. The aim of our research is to understand the possibilities in the domain of mobile video. Live and delayed sharing seem to have their own special characteristics: live video is used as a virtual window between places, whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns for privacy and trust between participants in all roles, largely because videos can spread so widely. Video in a social situation affects cameramen (who record), targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or recording situations), but the participants also affect the video through their varying and evolving personal and communicational motivations for recording.

  9. Unsupervised deep learning for real-time assessment of video streaming services

    NARCIS (Netherlands)

    Torres Vega, M.; Mocanu, D.C.; Liotta, A.

    2017-01-01

    Evaluating quality of experience in video streaming services requires a quality metric that works in real time and for a broad range of video types and network conditions. This means that subjective video quality assessment studies, or complex objective video quality assessment metrics, which would

  10. Video Golf

    Science.gov (United States)

    1995-01-01

    George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.

  11. Effects of music videos on sleep quality in middle-aged and older adults with chronic insomnia: a randomized controlled trial.

    Science.gov (United States)

    Lai, Hui-Ling; Chang, En-Ting; Li, Yin-Ming; Huang, Chiung-Yu; Lee, Li-Hua; Wang, Hsiu-Mei

    2015-05-01

    Listening to soothing music has been used as a complementary therapy to improve sleep quality. However, there is no empirical evidence for the effects of music videos (MVs) on sleep quality in adults with insomnia as assessed by polysomnography (PSG). In this randomized crossover controlled trial, we compared the effects of a peaceful Buddhist MV intervention to a usual-care control condition before bedtime on subjective and objective sleep quality in middle-aged and older adults with chronic insomnia. The study was conducted in a hospital's sleep laboratory. We randomly assigned 38 subjects, aged 50-75 years, to an MV/usual-care sequence or a usual-care/MV sequence. After pretest data collection, testing was held on two consecutive nights, with subjects participating in one condition each night according to their assigned sequence. Each intervention lasted 30 min. Sleep was assessed using PSG and self-report questionnaires. After controlling for baseline data, sleep-onset latency was significantly shorter by approximately 2 min in the MV condition than in the usual-care condition (p = .002). The MV intervention had no significant effects relative to the usual care on any other sleep parameters assessed by PSG or self-reported sleep quality. These results suggest that an MV intervention may be effective in promoting sleep. However, the effectiveness of a Buddhist MV on sleep needs further study to develop a culturally specific insomnia intervention. Our findings also suggest that an MV intervention can serve as another option for health care providers to improve sleep onset in people with insomnia. © The Author(s) 2014.

  12. Watermarking textures in video games

    Science.gov (United States)

    Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin

    2014-02-01

    Digital watermarking is a promising solution to video game piracy. In this paper, based on an analysis of the special challenges and requirements of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements of video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be straightforwardly applied in the watermark container technique for real-time embedding. Furthermore, the embedding approach achieves a high watermark payload, enough to handle collusion-secure fingerprinting codes of extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in terms of transparency, robustness, security, and performance. In particular, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games are assessed subjectively during game play.
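The paper's embedding operates on DDS-compressed blocks, which is format-specific. As a generic illustration of the underlying idea of embedding message bits directly in stored data, here is a simple least-significant-bit sketch over a byte buffer; this is purely illustrative and is not the proposed DDS scheme:

```python
def embed_bits(buffer, bits):
    """Embed a bit sequence into the LSBs of the leading bytes of a buffer
    (illustrative only; the paper embeds in DDS-compressed blocks instead)."""
    out = bytearray(buffer)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear LSB, then set it to the message bit
    return out

def extract_bits(buffer, n):
    """Recover the first n embedded bits."""
    return [b & 1 for b in buffer[:n]]

msg = [1, 0, 1, 1, 0, 0, 1, 0]          # hypothetical fingerprint bits
carrier = bytearray(range(16))          # hypothetical texture bytes
stego = embed_bits(carrier, msg)
```

Each carrier byte changes by at most one intensity step, which is the transparency/payload trade-off any such scheme must balance.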

  13. Video Classification and Adaptive QoP/QoS Control for Multiresolution Video Applications on IPTV

    Directory of Open Access Journals (Sweden)

    Huang Shyh-Fang

    2012-01-01

    Full Text Available With the development of heterogeneous networks and video coding standards, multiresolution video applications over networks have become important. It is critical to ensure the service quality of the network for time-sensitive video services. Worldwide Interoperability for Microwave Access (WiMAX) is a good candidate for delivering video signals because through WiMAX the delivery quality based on the quality-of-service (QoS) setting can be guaranteed. The selection of suitable QoS parameters is, however, not trivial for service users. Instead, what a video service user is really concerned with is the video quality of presentation (QoP), which includes the video resolution, the fidelity, and the frame rate. In this paper, we present a quality control mechanism for multiresolution video coding structures over WiMAX networks and also investigate the relationship between QoP and QoS in end-to-end connections. Consequently, the video presentation quality can simply be mapped to the network requirements by a mapping table, and the end-to-end QoS is thereby achieved. We performed experiments with multiresolution MPEG coding over WiMAX networks. In addition to the QoP parameters, video characteristics such as picture activity and video mobility also affect the QoS significantly.
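The mapping-table idea amounts to a lookup from QoP parameters (resolution, frame rate) to network QoS requirements. A minimal sketch follows; all entries are invented placeholders for illustration, not the paper's measured values:

```python
# Hypothetical QoP-to-QoS mapping table (illustrative values only)
QOP_TO_QOS = {
    ("480p", 15): {"min_rate_kbps": 400,  "max_latency_ms": 300},
    ("480p", 30): {"min_rate_kbps": 800,  "max_latency_ms": 300},
    ("720p", 30): {"min_rate_kbps": 2500, "max_latency_ms": 200},
}

def qos_for(resolution, frame_rate):
    """Look up the network QoS requirements for a requested presentation quality."""
    try:
        return QOP_TO_QOS[(resolution, frame_rate)]
    except KeyError:
        raise ValueError(f"no QoS profile for {resolution}@{frame_rate}fps")

profile = qos_for("720p", 30)
```

The table keeps QoS selection out of the user's hands: the user states a presentation quality and the network requirements follow mechanically.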

  14. Part Two: Learning Science Through Digital Video: Student Views on Watching and Creating Videos

    Science.gov (United States)

    Wade, P.; Courtney, A. R.

    2014-12-01

    The use of digital video for science education has become common with the wide availability of video imagery. This study continues research into aspects of using digital video as a primary teaching tool to enhance student learning in undergraduate science courses. Two survey instruments were administered to undergraduate non-science majors. Survey One focused on: a) What science is being learned from watching science videos such as a "YouTube" clip of a volcanic eruption or an informational video on geologic time, and b) What are student preferences with regard to their learning (e.g., using video versus traditional modes of delivery)? Survey Two addressed students' perspectives on the storytelling aspect of the video with respect to: a) sustaining interest, b) providing science information, c) style of video, and d) quality of the video. Undergraduate non-science majors were the primary focus group in this study. Students were asked to view video segments and respond to a survey focused on what they learned from the segments. The storytelling aspect of each video was also addressed by students. Students watched 15-20 shorter science videos (3-15 minutes each) created within the last four years. Initial results of this research support that shorter video segments were preferred and that the storytelling quality of each video related to student learning.

  15. Innovative Video Diagnostic Equipment for Material Science

    Science.gov (United States)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced Wavelet algorithms, storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras acquiring high-resolution video images up to 4 Mpixels @ 60 fps or high-frame-rate video images up to about 1000 fps @ 512×512 pixels.

  16. OAS :: Videos

    Science.gov (United States)


  17. Learning to Swim Using Video Modelling and Video Feedback within a Self-Management Program

    Science.gov (United States)

    Lao, So-An; Furlonger, Brett E.; Moore, Dennis W.; Busacca, Margherita

    2016-01-01

    Although many adults who cannot swim are primarily interested in learning by direct coaching, there are options that focus on self-directed learning. As an alternative, a self-management program combined with video modelling, video feedback, and high-quality, affordable video technology was used to assess its effectiveness in assisting an…

  18. Ranking Highlights in Personal Videos by Analyzing Edited Videos.

    Science.gov (United States)

    Sun, Min; Farhadi, Ali; Chen, Tseng-Hung; Seitz, Steve

    2016-11-01

    We present a fully automatic system for ranking domain-specific highlights in unconstrained personal videos by analyzing online edited videos. A novel latent linear ranking model is proposed to handle noisy training data harvested online. Specifically, given a targeted domain such as "surfing," our system mines the YouTube database to find pairs of raw videos and their corresponding edited videos. Leveraging the assumption that an edited video is more likely to contain highlights than the trimmed parts of the raw video, we obtain pair-wise ranking constraints to train our model. The learning task is challenging due to the amount of noise and variation in the mined data. Hence, a latent loss function is incorporated to mitigate the issues caused by the noise. We efficiently learn the latent model on a large number of videos (about 870 min in total) using a novel EM-like procedure. Our latent ranking model outperforms its classification counterpart and is fairly competitive compared with a fully supervised ranking system that requires labels from Amazon Mechanical Turk. We further show that a state-of-the-art audio feature (mel-frequency cepstral coefficients) is inferior to a state-of-the-art visual feature. By combining both audio-visual features, we obtain the best performance in the dog activity, surfing, skating, and viral video domains. Finally, we show that impressive highlights can be detected without additional human supervision for seven domains (i.e., skating, surfing, skiing, gymnastics, parkour, dog activity, and viral video) in unconstrained personal videos.
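The pair-wise ranking constraints described above can be illustrated with a plain linear ranker trained by hinge-style updates. This is a deliberately simplified stand-in for the paper's latent ranking model, and the feature vectors are invented toy data:

```python
def train_pairwise_ranker(pairs, dim, epochs=50, lr=0.1):
    """Learn weights w so that w.pos > w.neg for each (pos, neg) feature pair.
    A plain hinge-loss linear ranker -- a simplified stand-in for the paper's
    latent ranking model, not its actual implementation."""
    w = [0.0] * dim
    for _ in range(epochs):
        for pos, neg in pairs:
            margin = sum(wi * (p - n) for wi, p, n in zip(w, pos, neg))
            if margin < 1.0:                      # ranking constraint violated
                for k in range(dim):
                    w[k] += lr * (pos[k] - neg[k])
    return w

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Hypothetical features: edited-video segments (pos) vs trimmed-out parts (neg)
pairs = [([1.0, 0.2], [0.1, 0.9]), ([0.8, 0.1], [0.2, 0.7])]
w = train_pairwise_ranker(pairs, dim=2)
```

Each constraint only says "this segment should score higher than that one", which is exactly the weak supervision the raw/edited video pairs provide.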

  19. Low-latency video transmission over high-speed WPANs based on low-power video compression

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Ann

    2010-01-01

    This paper presents latency-constrained video transmission over high-speed wireless personal area networks (WPANs). Low-power video compression is proposed as an alternative to uncompressed video transmission. A video source rate control based on MINMAX quality criteria is introduced. Practical...

  20. Performance Analysis of Video Transmission Using Sequential Distortion Minimization Method for Digital Video Broadcasting Terrestrial

    Directory of Open Access Journals (Sweden)

    Novita Astin

    2016-12-01

    Full Text Available This paper presents the transmission of a Digital Video Broadcasting system with streaming video at 640x480 resolution over different IQ rates and modulations. Distortion often occurs during video transmission, leaving the received video with poor quality. A key-frame selection algorithm is flexible to changes in the video, but with such methods the temporal information of a video sequence is omitted. To minimize distortion between the original video and the received video, we added a methodology using a sequential distortion minimization algorithm. Its aim was to create a new video, better than the received one, corrected sequentially and without significant loss of content relative to the original. The reliability of video transmission was observed on a constellation diagram, with the best result at an IQ rate of 2 MHz and 8-QAM modulation. Video transmission was also investigated with and without SEDIM (Sequential Distortion Minimization Method). The experimental results showed that the average PSNR (Peak Signal to Noise Ratio) of video transmission using SEDIM increased from 19.855 dB to 48.386 dB, and the average SSIM (Structural Similarity) increased by 10.49%. The experimental results and comparison show that the proposed method obtained good performance. A USRP board was used as the RF front end at 2.2 GHz.
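PSNR, the fidelity measure reported above, is computed directly from the pixel-wise mean squared error between the original and received frames. A minimal sketch, with invented toy pixel values:

```python
import math

def psnr(original, received, max_val=255):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    mse = sum((o - r) ** 2 for o, r in zip(original, received)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy 8-bit "frames" for illustration (not data from the paper)
clean = [200, 120, 90, 255]
noisy = [199, 121, 89, 254]
quality_db = psnr(clean, noisy)
```

Here every pixel is off by one intensity step (MSE = 1), giving about 48.1 dB; larger errors lower the score on the same logarithmic scale.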

  1. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  2. Tile-in-ONE An integrated framework for the data quality assessment and database management for the ATLAS Tile Calorimeter

    International Nuclear Information System (INIS)

    Cunha, R; Sivolella, A; Ferreira, F; Maidantchik, C; Solans, C

    2014-01-01

    In order to ensure the proper operation of the ATLAS Tile Calorimeter and assess the quality of data, many tasks are performed by means of several tools which have been developed independently. The features are displayed into standard dashboards, dedicated to each working group, covering different areas, such as Data Quality and Calibration.

  3. Role of video games in improving health-related outcomes: a systematic review.

    Science.gov (United States)

    Primack, Brian A; Carroll, Mary V; McNamara, Megan; Klem, Mary Lou; King, Brandy; Rich, Michael; Chan, Chun W; Nayak, Smita

    2012-06-01

    Video games represent a multibillion-dollar industry in the U.S. Although video gaming has been associated with many negative health consequences, it also may be useful for therapeutic purposes. The goal of this study was to determine whether video games may be useful in improving health outcomes. Literature searches were performed in February 2010 in six databases: the Center on Media and Child Health Database of Research, MEDLINE, CINAHL, PsycINFO, EMBASE, and the Cochrane Central Register of Controlled Trials. Reference lists were hand-searched to identify additional studies. Only RCTs that tested the effect of video games on a positive, clinically relevant health consequence were included. Study selection criteria were strictly defined and applied by two researchers working independently. Study background information (e.g., location, funding source); sample data (e.g., number of study participants, demographics); intervention and control details; outcomes data; and quality measures were abstracted independently by two researchers. Of 1452 articles retrieved using the current search strategy, 38 met all criteria for inclusion. Eligible studies used video games to provide physical therapy, psychological therapy, improved disease self-management, health education, distraction from discomfort, increased physical activity, and skills training for clinicians. Among the 38 studies, a total of 195 health outcomes were examined. Video games improved 69% of psychological therapy outcomes, 59% of physical therapy outcomes, 50% of physical activity outcomes, 46% of clinician skills outcomes, 42% of health education outcomes, 42% of pain distraction outcomes, and 37% of disease self-management outcomes. Study quality was generally poor; for example, two-thirds (66%) of studies had follow-up periods of under 12 weeks. These results indicate potential promise for the use of video games to improve health outcomes, particularly in the areas of psychological therapy and physical therapy. RCTs with appropriate rigor will help build evidence in this emerging area.

  4. Video Inpainting of Complex Scenes

    OpenAIRE

    Newson, Alasdair; Almansa, Andrés; Fradet, Matthieu; Gousseau, Yann; Pérez, Patrick

    2015-01-01

    We propose an automatic video inpainting algorithm which relies on the optimisation of a global, patch-based functional. Our algorithm is able to deal with a variety of challenging situations which naturally arise in video inpainting, such as the correct reconstruction of dynamic textures, multiple moving objects and moving background. Furthermore, we achieve this in an order of magnitude less execution time with respect to the state-of-the-art. We are also able to achieve good quality result...

  5. National Assessment Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The National Assessment Database stores and tracks state water quality assessment decisions, Total Maximum Daily Loads (TMDLs) and other watershed plans designed to...

  6. Mobile Source Observation Database (MSOD)

    Science.gov (United States)

    The Mobile Source Observation Database (MSOD) is a relational database developed by the Assessment and Standards Division (ASD) of the U.S. EPA Office of Transportation and Air Quality (formerly the Office of Mobile Sources).

  7. Social image quality

    Science.gov (United States)

    Qiu, Guoping; Kheiri, Ahmed

    2011-01-01

    Current subjective image quality assessments have been developed in laboratory environments, under controlled conditions, and are dependent on the participation of limited numbers of observers. In this research, with the help of Web 2.0 and social media technology, a new method for building a subjective image quality metric has been developed where the observers are the Internet users. A website with a simple user interface that enables Internet users from anywhere at any time to vote for the better-quality version of a pair of the same image has been constructed. Users' votes are recorded and used to rank the images according to their perceived visual qualities. We have developed three rank aggregation algorithms to process the recorded pair comparison data: the first uses a naive approach, the second employs a Condorcet method, and the third uses the Dykstra's extension of Bradley-Terry method. The website has been collecting data for about three months and has accumulated over 10,000 votes at the time of writing this paper. Results show that the Internet and its allied technologies such as crowdsourcing offer a promising new paradigm for image and video quality assessment, where hundreds of thousands of Internet users can contribute to building more robust image quality metrics. We have made Internet-user-generated social image quality (SIQ) data of a public image database available online (http://www.hdri.cs.nott.ac.uk/siq/) to provide the image quality research community with a new source of ground truth data. The website continues to collect votes and will include more public image databases; it will also be extended to include videos to collect social video quality (SVQ) data. All data will be publicly available on the website in due course.
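Of the rank aggregation methods mentioned, the Bradley-Terry model can be fitted from pairwise vote counts with the standard minorization-maximization update. The vote counts below are invented for illustration, not data from the SIQ website:

```python
def bradley_terry(wins, items, iterations=200):
    """Minorization-maximization fit of Bradley-Terry strengths.
    wins[(i, j)] = number of votes preferring image i over image j."""
    p = {it: 1.0 for it in items}
    for _ in range(iterations):
        new = {}
        for i in items:
            w_i = sum(wins.get((i, j), 0) for j in items if j != i)  # total wins of i
            denom = sum((wins.get((i, j), 0) + wins.get((j, i), 0)) / (p[i] + p[j])
                        for j in items if j != i)
            new[i] = w_i / denom if denom else p[i]
        total = sum(new.values())                  # renormalize for stability
        p = {k: v * len(items) / total for k, v in new.items()}
    return p

# Hypothetical pairwise votes between three versions of an image
votes = {("A", "B"): 3, ("B", "A"): 1,
         ("A", "C"): 4, ("C", "A"): 1,
         ("B", "C"): 2, ("C", "B"): 2}
strengths = bradley_terry(votes, ["A", "B", "C"])
```

Sorting items by the fitted strengths yields the aggregate quality ranking implied by all recorded pair comparisons.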

  8. A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos.

    Science.gov (United States)

    Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian

    2016-04-01

    Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable

  9. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The...

  10. Web Audio/Video Streaming Tool

    Science.gov (United States)

    Guruvadoo, Eranna K.

    2003-01-01

    In order to promote a NASA-wide educational outreach program to educate and inform the public about space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more content to the web by streaming audio/video files. This project proposes a high level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, the prototype of a web-based tool is designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled user interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database while the assets reside on a separate repository. The prototype tool is designed using ColdFusion 5.0.

  11. Tracking and recognition face in videos with incremental local sparse representation model

    Science.gov (United States)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

    This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed that employs local sparse appearance and a covariance pooling method. In the subsequent face recognition stage, a novel template update strategy combining incremental subspace learning allows the recognition algorithm to adapt the template to appearance changes and reduce the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition on real-world noisy videos from the YouTube database, which includes 47 celebrities. Our proposed method achieves a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. On the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.

  12. Improved image quality in abdominal CT in patients who underwent treatment for hepatocellular carcinoma with small metal implants using a raw data-based metal artifact reduction algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Sofue, Keitaro; Sugimura, Kazuro [Kobe University Graduate School of Medicine, Department of Radiology, Kobe, Hyogo (Japan); Yoshikawa, Takeshi; Ohno, Yoshiharu [Kobe University Graduate School of Medicine, Advanced Biomedical Imaging Research Center, Kobe, Hyogo (Japan); Kobe University Graduate School of Medicine, Division of Functional and Diagnostic Imaging Research, Department of Radiology, Kobe, Hyogo (Japan); Negi, Noriyuki [Kobe University Hospital, Division of Radiology, Kobe, Hyogo (Japan); Inokawa, Hiroyasu; Sugihara, Naoki [Toshiba Medical Systems Corporation, Otawara, Tochigi (Japan)

    2017-07-15

    To determine the value of a raw data-based metal artifact reduction (SEMAR) algorithm for image quality improvement in abdominal CT for patients with small metal implants. Fifty-eight patients with small metal implants (3-15 mm in size) who underwent treatment for hepatocellular carcinoma were imaged with CT. CT data were reconstructed by filtered back projection with and without SEMAR algorithm in axial and coronal planes. To evaluate metal artefact reduction, mean CT number (HU and SD) and artefact index (AI) values within the liver were calculated. Two readers independently evaluated image quality of the liver and pancreas and visualization of vasculature using a 5-point visual score. HU and AI values and image quality on images with and without SEMAR were compared using the paired Student's t-test and Wilcoxon signed rank test. Interobserver agreement was evaluated using linear-weighted κ test. Mean HU and AI on images with SEMAR were significantly lower than those without SEMAR (P < 0.0001). Liver and pancreas image qualities and visualizations of vasculature were significantly improved on CT with SEMAR (P < 0.0001) with substantial or almost perfect agreement (0.62 ≤ κ ≤ 0.83). SEMAR can improve image quality in abdominal CT in patients with small metal implants by reducing metallic artefacts. (orig.)
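
    The artefact index (AI) used in studies like this one is typically derived from the noise (SD of CT numbers) in an artefact-affected region and an artefact-free reference region. A minimal sketch of that common formulation follows; the numeric values are illustrative only, not the study's data, and the exact ROI placement used by the authors is not specified here.

    ```python
    import math

    def artifact_index(sd_roi, sd_reference):
        """Common formulation: AI = sqrt(SD_roi^2 - SD_ref^2).

        sd_roi: standard deviation of CT numbers (HU) in the artefact-affected ROI
        sd_reference: standard deviation in an artefact-free reference ROI
        The difference is clamped at zero so AI stays real when noise levels match.
        """
        return math.sqrt(max(sd_roi ** 2 - sd_reference ** 2, 0.0))

    # Illustrative values only (not from the study):
    print(artifact_index(50.0, 30.0))  # 40.0
    ```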

  13. Improved image quality in abdominal CT in patients who underwent treatment for hepatocellular carcinoma with small metal implants using a raw data-based metal artifact reduction algorithm

    International Nuclear Information System (INIS)

    Sofue, Keitaro; Sugimura, Kazuro; Yoshikawa, Takeshi; Ohno, Yoshiharu; Negi, Noriyuki; Inokawa, Hiroyasu; Sugihara, Naoki

    2017-01-01

    To determine the value of a raw data-based metal artifact reduction (SEMAR) algorithm for image quality improvement in abdominal CT for patients with small metal implants. Fifty-eight patients with small metal implants (3-15 mm in size) who underwent treatment for hepatocellular carcinoma were imaged with CT. CT data were reconstructed by filtered back projection with and without SEMAR algorithm in axial and coronal planes. To evaluate metal artefact reduction, mean CT number (HU and SD) and artefact index (AI) values within the liver were calculated. Two readers independently evaluated image quality of the liver and pancreas and visualization of vasculature using a 5-point visual score. HU and AI values and image quality on images with and without SEMAR were compared using the paired Student's t-test and Wilcoxon signed rank test. Interobserver agreement was evaluated using linear-weighted κ test. Mean HU and AI on images with SEMAR were significantly lower than those without SEMAR (P < 0.0001). Liver and pancreas image qualities and visualizations of vasculature were significantly improved on CT with SEMAR (P < 0.0001) with substantial or almost perfect agreement (0.62 ≤ κ ≤ 0.83). SEMAR can improve image quality in abdominal CT in patients with small metal implants by reducing metallic artefacts. (orig.)

  14. Segmentation of object-based video of gaze communication

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Stegmann, Mikkel Bille; Forchhammer, Søren

    2005-01-01

    Aspects of video communication based on gaze interaction are considered. The overall idea is to use gaze interaction to control video, e.g. for video conferencing. Towards this goal, animation of a facial mask is demonstrated. The animation is based on images using Active Appearance Models (AAM......). Good quality reproduction of (low-resolution) coded video of an animated facial mask as low as 10-20 kbit/s using MPEG-4 object based video is demonstrated....

  15. The emerging High Efficiency Video Coding standard (HEVC)

    International Nuclear Information System (INIS)

    Raja, Gulistan; Khan, Awais

    2013-01-01

    High definition video (HDV) is becoming popular day by day. This paper describes the performance analysis of the latest upcoming video standard, known as High Efficiency Video Coding (HEVC). HEVC is designed to fulfil all the requirements for future high definition videos. In this paper, three configurations (intra only, low delay and random access) of HEVC are analyzed using various 480p, 720p and 1080p high definition test video sequences. Simulation results show the superior objective and subjective quality of HEVC.

  16. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  17. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...

  18. The quality of cholecystectomy in Denmark: outcome and risk factors for 20,307 patients from the national database

    DEFF Research Database (Denmark)

    Harboe, Kirstine Moll; Bardram, Linda

    2011-01-01

    Background: Laparoscopic cholecystectomy is the standard treatment for symptomatic gallstones. The quality of the procedure frequently is included in quality improvement programs, but outcome values have not been described to define the standard of care for a general population. This study included... 20,307 patients (82% of all cholecystectomies). The conversion rate was 7.6%. Male sex, acute cholecystitis, and previous upper abdominal surgery were risk factors for conversion, with respective odds ratios of 1.50, 4.61, and 3.54. The mean LOS was 1.5 days, and 37.3% of the patients had same.......27%. Age older than 60 years, American Society of Anesthesiology (ASA) score exceeding 1, and open procedure were significant risk factors for all the outcomes. Body mass index (BMI) was not a risk factor for any of the outcomes. Conclusion: The quality of cholecystectomy is high in Denmark, with a low...

  19. The NASA Fireball Network Database

    Science.gov (United States)

    Moser, Danielle E.

    2011-01-01

    The NASA Meteoroid Environment Office (MEO) has been operating an automated video fireball network since late-2008. Since that time, over 1,700 multi-station fireballs have been observed. A database containing orbital data and trajectory information on all these events has recently been compiled and is currently being mined for information. Preliminary results are presented here.

  20. Is Video-Based Education an Effective Method in Surgical Education? A Systematic Review.

    Science.gov (United States)

    Ahmet, Akgul; Gamze, Kus; Rustem, Mustafaoglu; Sezen, Karaborklu Argut

    2018-02-12

    Visual cues draw more attention during the learning process, and video is one of the most effective tools for delivering them. This systematic review set out to explore the influence of video in surgical education. We reviewed the current evidence for video-based surgical education methods and discuss their advantages and disadvantages for the teaching of technical and nontechnical surgical skills. This systematic review was conducted according to the guidelines defined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The electronic databases the Cochrane Library, Medline (PubMed), and ProQuest were searched from their inception to 30 January 2016. The Medical Subject Headings (MeSH) terms and keywords used were "video," "education," and "surgery." We analyzed all full-text randomised and nonrandomised clinical trials and observational studies involving video-based education methods about any surgery. "Education" here means a medical resident's or student's training and teaching process, not patient education. We did not impose restrictions on language or publication date. A total of nine articles that met the inclusion criteria were included. These trials enrolled 507 participants, and the number of participants per trial ranged from 10 to 172. Nearly all of the studies reviewed report significant knowledge gain from video-based education techniques. The findings of this systematic review provide fair- to good-quality studies demonstrating significant gains in knowledge compared with traditional teaching. Adding video to simulator exercises or 3D animations has beneficial effects on training time, learning duration, acquisition of surgical skills, and trainees' satisfaction. Video-based education has potential for use in surgical education as trainees face significant barriers in their practice. This method is effective according to the recent literature. Video should be used in addition to standard techniques.

  1. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  2. Content-based TV sports video retrieval using multimodal analysis

    Science.gov (United States)

    Yu, Yiqing; Liu, Huayong; Wang, Hongbin; Zhou, Dongru

    2003-09-01

    In this paper, we propose content-based video retrieval, a kind of retrieval based on semantic content. Because video data is composed of multimodal information streams such as visual, auditory and textual streams, we describe a strategy of using multimodal analysis for automatically parsing sports video. The paper first defines the basic structure of a sports video database system, and then introduces a new approach that integrates visual stream analysis, speech recognition, speech signal processing and text extraction to realize video retrieval. The experimental results for TV sports video of football games indicate that the multimodal analysis is effective for video retrieval by quickly browsing tree-like video clips or inputting keywords within a predefined domain.

  3. The Danish Fracture Database can monitor quality of fracture-related surgery, surgeons' experience level and extent of supervision

    DEFF Research Database (Denmark)

    Andersen, M. J.; Gromov, K.; Brix, M.

    2014-01-01

    INTRODUCTION: The importance of supervision and of surgeons' level of experience in relation to patient outcome have been demonstrated in both hip fracture and arthroplasty surgery. The aim of this study was to describe the surgeons' experience level and the extent of supervision for: 1) fracture-related...... surgery in general; 2) the three most frequent primary operations and reoperations; and 3) primary operations during and outside regular working hours. MATERIAL AND METHODS: A total of 9,767 surgical procedures were identified from the Danish Fracture Database (DFDB). Procedures were grouped based...... procedures by junior residents grew from 30% during to 40% (p related surgery. The extent of supervision was generally high; however, a third of the primary procedures performed by junior...

  4. The Danish Fracture Database can monitor quality of fracture-related surgery, surgeons' experience level and extent of supervision

    DEFF Research Database (Denmark)

    Andersen, Morten Jon; Gromov, Kirill; Brix, Michael

    2014-01-01

    INTRODUCTION: The importance of supervision and of surgeons' level of experience in relation to patient outcome have been demonstrated in both hip fracture and arthroplasty surgery. The aim of this study was to describe the surgeons' experience level and the extent of supervision for: 1) fracture......-related surgery in general; 2) the three most frequent primary operations and reoperations; and 3) primary operations during and outside regular working hours. MATERIAL AND METHODS: A total of 9,767 surgical procedures were identified from the Danish Fracture Database (DFDB). Procedures were grouped based...... on the surgeons' level of experience, extent of supervision, type (primary, planned secondary or reoperation), classification (AO Müller), and whether they were performed during or outside regular hours. RESULTS: Interns and junior residents combined performed 46% of all procedures. A total of 90% of surgeries...

  5. The Danish Fracture Database can monitor quality of fracture-related surgery, surgeons' experience level and extent of supervision

    DEFF Research Database (Denmark)

    Andersen, M. J.; Gromov, K.; Brix, M.

    2014-01-01

    INTRODUCTION: The importance of supervision and of surgeons' level of experience in relation to patient outcome have been demonstrated in both hip fracture and arthroplasty surgery. The aim of this study was to describe the surgeons' experience level and the extent of supervision for: 1) fracture......-related surgery in general; 2) the three most frequent primary operations and reoperations; and 3) primary operations during and outside regular working hours. MATERIAL AND METHODS: A total of 9,767 surgical procedures were identified from the Danish Fracture Database (DFDB). Procedures were grouped based...... on the surgeons' level of experience, extent of supervision, type (primary, planned secondary or reoperation), classification (AO Müller), and whether they were performed during or outside regular hours. RESULTS: Interns and junior residents combined performed 46% of all procedures. A total of 90% of surgeries...

  6. The relationship between overall quality of life and its subdimensions was influenced by culture: analysis of an international database

    NARCIS (Netherlands)

    Scott, Neil W.; Fayers, Peter M.; Aaronson, Neil K.; Bottomley, Andrew; de Graeff, Alexander; Groenvold, Mogens; Koller, Michael; Petersen, Morten A.; Sprangers, Mirjam A. G.

    2008-01-01

    OBJECTIVE: To investigate whether geographic and cultural factors influence the relationship between the global health status quality of life (QL) scale score of the European Organisation for Research and Treatment of Cancer QLQ-C30 questionnaire and seven other subscales representing fatigue, pain,

  7. The relationship between overall quality of life and its subdimensions was influenced by culture : analysis of an international database

    NARCIS (Netherlands)

    Scott, Neil W.; Fayers, Peter M.; Aaronson, Neil K.; Bottomley, Andrew; de Graeff, Alexander; Groenvold, Mogens; Koller, Michael; Petersen, Morten A.; Sprangers, Mirjam A. G.

    Objective: To investigate whether geographic and cultural factors influence the relationship between the global health status quality of life (QL) scale score of the European Organisation for Research and Treatment of Cancer QLQ-C30 questionnaire and seven other subscales representing fatigue, pain,

  8. Translating visions of transparency and quality development: The transformation of clinical databases in the Danish hospital field

    DEFF Research Database (Denmark)

    Kousgaard, Lars Marius Brostrøm

    2011-01-01

    One of the most significant developments in the quest for quality, transparency, and accountability in healthcare is the construction and the implementation of indicator-based technologies. In Denmark, this development has been relatively pronounced, and based on an extensive document study...

  9. Database Optimizing Services

    Directory of Open Access Journals (Sweden)

    Adrian GHENCEA

    2010-12-01

    Almost every organization has a database at its centre. The database supports different activities, whether production, sales and marketing, or internal operations. Every day, databases are consulted to inform strategic decisions, so these needs must be met with high-quality security and availability. They can be met using a DBMS (Database Management System), which is, in essence, the software layer around a database. Technically speaking, it is software that uses standard methods for cataloguing, recovering, and running different data queries. A DBMS manages the input data, organizes it, and provides ways for its users or other programs to modify or extract the data. Managing a database is an operation that requires periodic updates, optimization, and monitoring.
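
    The DBMS role described above, cataloguing data and then modifying or extracting it through standard queries, can be sketched with Python's built-in SQLite module. The table name and values below are illustrative, not taken from the article.

    ```python
    import sqlite3

    # Catalogue data: define a schema and load records (illustrative values).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("north", 120.0), ("south", 80.0), ("north", 60.0)])

    # Extraction: an aggregate query of the kind that supports strategic decisions.
    rows = conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
    ).fetchall()
    print(rows)  # [('north', 180.0), ('south', 80.0)]

    # Modification: a periodic update, as in routine database maintenance.
    conn.execute("UPDATE sales SET amount = amount * 1.1 WHERE region = 'south'")
    conn.close()
    ```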

  10. National Database of Geriatrics

    DEFF Research Database (Denmark)

    Kannegaard, Pia Nimann; Vinding, Kirsten L; Hare-Bruun, Helle

    2016-01-01

    AIM OF DATABASE: The aim of the National Database of Geriatrics is to monitor the quality of interdisciplinary diagnostics and treatment of patients admitted to a geriatric hospital unit. STUDY POPULATION: The database population consists of patients who were admitted to a geriatric hospital unit....... Geriatric patients cannot be defined by specific diagnoses. A geriatric patient is typically a frail multimorbid elderly patient with decreasing functional ability and social challenges. The database includes 14-15,000 admissions per year, and the database completeness has been stable at 90% during the past......, percentage of discharges with a rehabilitation plan, and the part of cases where an interdisciplinary conference has taken place. Data are recorded by doctors, nurses, and therapists in a database and linked to the Danish National Patient Register. DESCRIPTIVE DATA: Descriptive patient-related data include...

  11. Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Hansen, Ulla Darling; Gradel, Kim Oren; Larsen, Michael Due

    2016-01-01

    The Danish Urogynaecological Database is established in order to ensure high quality of treatment for patients undergoing urogynecological surgery. The database contains details of all women in Denmark undergoing incontinence surgery or pelvic organ prolapse surgery, amounting to ~5,200 procedures...: complications if relevant, implants used if relevant, and a 3-6-month postoperative recording of symptoms, if any. A set of clinical quality indicators is being maintained by the steering committee for the database and is published in an annual report which also contains extensive descriptive statistics. The database... has a completeness of over 90% of all urogynecological surgeries performed in Denmark. Some of the main variables have been validated using medical records as the gold standard. The positive predictive value was above 90%. The data are used as a quality monitoring tool by the hospitals and in a number...

  12. Smart Video Communication for Social Groups - The Vconect Project

    NARCIS (Netherlands)

    M. Ursu; P. Stollenmayer; D. Williams; P. Torres; P.S. Cesar Garcia (Pablo Santiago); N. Farber; E. Geelhoed

    2014-01-01

    This article introduces the Vconect project. Vconect (Video Communications for Networked Communities) is a collaborative European research and development project dealing with high-quality enriched video as a medium for mass communication within social communities. The technical...

  13. Digital cinema video compression

    Science.gov (United States)

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film-based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offer the prospect of increasing the quality of the theatrical experience for the audience, reducing distribution costs for distributors, and creating new business opportunities for theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for content owners and theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  14. Improved image quality in abdominal CT in patients who underwent treatment for hepatocellular carcinoma with small metal implants using a raw data-based metal artifact reduction algorithm.

    Science.gov (United States)

    Sofue, Keitaro; Yoshikawa, Takeshi; Ohno, Yoshiharu; Negi, Noriyuki; Inokawa, Hiroyasu; Sugihara, Naoki; Sugimura, Kazuro

    2017-07-01

    To determine the value of a raw data-based metal artifact reduction (SEMAR) algorithm for image quality improvement in abdominal CT for patients with small metal implants. Fifty-eight patients with small metal implants (3-15 mm in size) who underwent treatment for hepatocellular carcinoma were imaged with CT. CT data were reconstructed by filtered back projection with and without SEMAR algorithm in axial and coronal planes. To evaluate metal artefact reduction, mean CT number (HU and SD) and artefact index (AI) values within the liver were calculated. Two readers independently evaluated image quality of the liver and pancreas and visualization of vasculature using a 5-point visual score. HU and AI values and image quality on images with and without SEMAR were compared using the paired Student's t-test and Wilcoxon signed rank test. Interobserver agreement was evaluated using linear-weighted κ test. Mean HU and AI on images with SEMAR were significantly lower than those without SEMAR (P < 0.0001). Liver and pancreas image qualities and visualizations of vasculature were significantly improved on CT with SEMAR (P < 0.0001) with substantial or almost perfect agreement (0.62 ≤ κ ≤ 0.83). SEMAR can improve image quality in abdominal CT in patients with small metal implants by reducing metallic artefacts. • SEMAR algorithm significantly reduces metallic artefacts from small implants in abdominal CT. • SEMAR can improve image quality of the liver in dynamic CECT. • Confident visualization of hepatic vascular anatomies can also be improved by SEMAR.

  15. The Educational Efficacy of Distinct Information Delivery Systems in Modified Video Games

    Science.gov (United States)

    Moshirnia, Andrew; Israel, Maya

    2010-01-01

    Despite the increasing popularity of many commercial video games, this popularity is not shared by educational video games. Modified video games, however, can bridge the gap in quality between commercial and educational video games by embedding educational content into popular commercial video games. This study examined how different information…

  16. Interpreting the quality of health care database studies on the comparative effectiveness of oral anticoagulants in routine care

    Directory of Open Access Journals (Sweden)

    Schneeweiss S

    2013-09-01

    Sebastian Schneeweiss, Krista F Huybrechts, Joshua J Gagne, Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA. Background: Dabigatran, an oral direct thrombin inhibitor, has now been available for 2 years in the US for the prevention of stroke in patients with nonvalvular atrial fibrillation, and direct Xa inhibitors are also starting to enter the market. Studies examining the effects of new oral anticoagulants in health care databases are beginning to emerge. The purpose of this study was to describe the validity of early published observational studies on the comparative safety and effectiveness of new oral anticoagulants in patients with atrial fibrillation. Methods: We identified published nonrandomized post-marketing studies (articles or conference abstracts or posters) and critically appraised their internal validity, with a particular focus on their ability to control confounding and other biases. Results: Two full-length journal articles, three conference posters, two conference presentation abstracts, and a US Food and Drug Administration analysis form the basis of the early comparative effectiveness and safety experience with new oral anticoagulants. Some published studies exhibit substantial biases and have insufficient precision for several important endpoints. Several studies suffer from biases arising from comparing ongoing users of the older drug, warfarin, who seem to tolerate it, to initiators of the new treatment who may have switched from warfarin or have had no prior experience with anticoagulants. Analyses tended to not adjust, or not adjust adequately, for confounding, and unsound propensity score application was also observed. Several studies introduced selection bias by excluding patients who died during follow-up and by restricting the study population to those with continuous database enrollment following cohort entry. We...

  17. Semantic web technologies for video surveillance metadata

    OpenAIRE

    Poppe, Chris; Martens, Gaëtan; De Potter, Pieterjan; Van de Walle, Rik

    2012-01-01

    Video surveillance systems are growing in size and complexity. Such systems typically consist of integrated modules of different vendors to cope with the increasing demands on network and storage capacity, intelligent video analytics, picture quality, and enhanced visual interfaces. Within a surveillance system, relevant information (like technical details on the video sequences, or analysis results of the monitored environment) is described using metadata standards. However, different module...

  18. Selectively De-animating and Stabilizing Videos

    Science.gov (United States)

    2014-12-11

    motions intact. Video textures [97, 65, 7, 77] are a well-known approach for seamlessly looping stochastic motions. Like cinemagraphs, a video... domain of input videos to portraits. We all use portrait photographs to express our identities online. Portraits are often the first visuals seen by... quality of our result, we show some comparisons of our automated cinemagraphs against our user-driven method described in Chapter 3 in Figure 4.7

  19. The Danish Anaesthesia Database

    DEFF Research Database (Denmark)

    Antonsen, Kristian; Rosenstock, Charlotte Vallentin; Lundstrøm, Lars Hyldborg

    2016-01-01

    AIM OF DATABASE: The aim of the Danish Anaesthesia Database (DAD) is the nationwide collection of data on all patients undergoing anesthesia. Collected data are used for quality assurance, quality development, and serve as a basis for research projects. STUDY POPULATION: The DAD was founded in 2004....... In addition, an annual DAD report is a benchmark for departments nationwide. CONCLUSION: The DAD is covering the anesthetic process for the majority of patients undergoing anesthesia in Denmark. Data in the DAD are increasingly used for both quality and research projects....

  20. From patient care to research: a validation study examining the factors contributing to data quality in a primary care electronic medical record database.

    Science.gov (United States)

    Coleman, Nathan; Halas, Gayle; Peeler, William; Casaclang, Natalie; Williamson, Tyler; Katz, Alan

    2015-02-05

    Electronic Medical Records (EMRs) are increasingly used in the provision of primary care and have been compiled into databases which can be utilized for surveillance, research and informing practice. The primary purpose of these records is for the provision of individual patient care; validation and examination of underlying limitations is crucial for use for research and data quality improvement. This study examines and describes the validity of chronic disease case definition algorithms and factors affecting data quality in a primary care EMR database. A retrospective chart audit of an age stratified random sample was used to validate and examine diagnostic algorithms applied to EMR data from the Manitoba Primary Care Research Network (MaPCReN), part of the Canadian Primary Care Sentinel Surveillance Network (CPCSSN). The presence of diabetes, hypertension, depression, osteoarthritis and chronic obstructive pulmonary disease (COPD) was determined by review of the medical record and compared to algorithm identified cases to identify discrepancies and describe the underlying contributing factors. The algorithm for diabetes had high sensitivity, specificity and positive predictive value (PPV) with all scores being over 90%. Specificities of the algorithms were greater than 90% for all conditions except for hypertension at 79.2%. The largest deficits in algorithm performance included poor PPV for COPD at 36.7% and limited sensitivity for COPD, depression and osteoarthritis at 72.0%, 73.3% and 63.2% respectively. Main sources of discrepancy included missing coding, alternative coding, inappropriate diagnosis detection based on medications used for alternate indications, inappropriate exclusion due to comorbidity and loss of data. Comparison to medical chart review shows that at MaPCReN the CPCSSN case finding algorithms are valid with a few limitations. This study provides the basis for the validated data to be utilized for research and informs users of its
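
    Agreement figures like the sensitivity, specificity, and positive predictive value reported above come from cross-tabulating the algorithm's output against the chart-review gold standard. A minimal sketch of that computation follows; the counts are illustrative only, not the MaPCReN figures.

    ```python
    def validation_metrics(tp, fp, fn, tn):
        """Agreement of a case-finding algorithm against chart review.

        tp: algorithm-positive and chart-positive
        fp: algorithm-positive but chart-negative
        fn: algorithm-negative but chart-positive
        tn: algorithm-negative and chart-negative
        """
        return {
            "sensitivity": tp / (tp + fn),   # share of true cases the algorithm finds
            "specificity": tn / (tn + fp),   # share of non-cases correctly excluded
            "ppv": tp / (tp + fp),           # share of flagged cases that are real
        }

    # Illustrative counts only (not from the study):
    m = validation_metrics(tp=90, fp=10, fn=10, tn=890)
    print(m)  # sensitivity 0.9, ppv 0.9, specificity ≈ 0.9889
    ```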

  1. Development of P4140 video data wall projector; Video data wall projector

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, H.; Inoue, H. [Toshiba Corp., Tokyo (Japan)

    1998-12-01

    The P4140 is a 3 cathode-ray tube (CRT) video data wall projector for super video graphics array (SVGA) signals. It is used as an image display unit, providing a large screen when several sets are put together. A high-quality picture has been realized through higher resolution and improved color uniformity technology. A new convergence adjustment system has also been developed through the optimal combination of digital and analog technologies. Installation of the video data wall has been greatly simplified by the automation of cube and cube-performance settings. The P4140 video data wall projector can be used for displaying not only data but video as well. (author)

  2. Modeling 3D Unknown object by Range Finder and Video Camera ...

    African Journals Online (AJOL)

    real world); proprioceptive and exteroceptive sensors allowing the recreating of the 3D geometric database of an environment (virtual world). The virtual world is projected onto a video display terminal (VDT). Computer-generated and video ...

  3. A Directory of English Language Teaching Videos.

    Science.gov (United States)

    Falsetti, Julie, Comp.

    This third edition of the video directory updates previous editions and lists videos alphabetically by title. It is designed to assist in the teaching of English or the training of teachers of English. Information included covers format, standard, variety, use, target, level, price, duration, quality, support materials included, distributor, year…

  4. Skype resilience to high motion videos

    NARCIS (Netherlands)

    Exarchakos, G.; Druda, L.; Menkovski, V.; Bellavista, P.; Liotta, A.

    Skype is one of the most popular video call services in the current Internet world. One of its strengths is the use of an adaptive mechanism to match the constraints of the underlying network. This work is focused on how this mechanism can maximize the video quality as perceived by the viewers using

  5. Large-Scale Query-by-Image Video Retrieval Using Bloom Filters

    OpenAIRE

    Araujo, Andre; Chaves, Jason; Lakshman, Haricharan; Angst, Roland; Girod, Bernd

    2016-01-01

    We consider the problem of using image queries to retrieve videos from a database. Our focus is on large-scale applications, where it is infeasible to index each database video frame independently. Our main contribution is a framework based on Bloom filters, which can be used to index long video segments, enabling efficient image-to-video comparisons. Using this framework, we investigate several retrieval architectures, by considering different types of aggregation and different functions to ...
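    The core data structure named above can be sketched in a few lines; the visual-word identifiers are hypothetical, and real systems hash binarized frame descriptors rather than strings:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an m-bit array set by k salted SHA-1 hashes."""
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, item):
        for salt in range(self.k):
            digest = hashlib.sha1(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def query(self, item):
        # False is definitive; True may be a false positive.
        return all(self.bits[p] for p in self._positions(item))

# Index one long video segment by inserting the quantized descriptors
# ("visual words", hypothetical here) of all its frames, then test an
# image query's words for membership in the segment.
segment = BloomFilter()
for word in ["w12", "w45", "w87", "w203"]:
    segment.add(word)

print(segment.query("w87"))    # True: the word occurs in the segment
print(segment.query("w999"))   # almost surely False (no false-negative risk)
```

    Because a whole segment shares one filter, an image query costs a handful of bit tests per segment instead of a comparison against every frame.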

  6. DEIMOS – an Open Source Image Database

    Directory of Open Access Journals (Sweden)

    M. Blazek

    2011-12-01

    Full Text Available DEIMOS (DatabasE of Images: Open Source) is created as an open-source database of images and videos for testing, verification and comparison of various image and/or video processing techniques such as enhancement, compression and reconstruction. The main advantage of DEIMOS is its orientation to various application fields – multimedia, television, security, assistive technology, biomedicine, astronomy etc. DEIMOS is being built gradually, step by step, based upon the contributions of team members. The paper describes the basic parameters of the DEIMOS database, including application examples.

  7. The Quality Control Algorithms Used in the Process of Creating the NASA Kennedy Space Center Lightning Protection System Towers Meteorological Database

    Science.gov (United States)

    Orcutt, John M.; Brenton, James C.

    2016-01-01

    The methodology and the results of the quality control (QC) process of the meteorological data from the Lightning Protection System (LPS) towers located at Kennedy Space Center (KSC) Launch Complex 39B (LC-39B) are documented in this paper. Meteorological data are used to design a launch vehicle, determine operational constraints, and apply defined constraints on day-of-launch (DOL). To properly accomplish these tasks, a representative climatological database of meteorological records is needed, one that reflects the climate the vehicle will encounter. Numerous meteorological measurement towers exist at KSC; however, the engineering tasks need measurements at specific heights, some of which can only be provided by a few towers. Other than the LPS towers, Tower 313 is the only tower that provides observations up to 150 m, and it is located approximately 3.5 km from LC-39B. In addition, the data need to be QC'ed to remove erroneous reports that could pollute the results of an engineering analysis, mislead the development of operational constraints, or provide a false image of the atmosphere at the tower's location.
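    The paper's specific QC algorithms are not listed here, but two checks common for tower data can be sketched as follows; the thresholds and wind-speed series are illustrative assumptions:

```python
def qc_flags(series, lo=0.0, hi=75.0, max_step=10.0):
    """Flag each observation: 'ok', 'range' (outside climatological limits),
    or 'spike' (implausible jump from the previous record)."""
    flags = []
    for i, v in enumerate(series):
        if not (lo <= v <= hi):
            flags.append("range")    # physically implausible value
        elif i > 0 and abs(v - series[i - 1]) > max_step:
            flags.append("spike")    # step test against the prior record
        else:
            flags.append("ok")
    return flags

wind = [5.2, 5.8, 6.1, 31.0, 6.3, -2.0]   # m/s, synthetic example
print(qc_flags(wind))
# ['ok', 'ok', 'ok', 'spike', 'spike', 'range']
```

    Note the second 'spike': a step test also flags the recovery after a bad point, which is why operational QC usually compares against the last *good* value.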

  8. Airborne Video Surveillance

    National Research Council Canada - National Science Library

    Blask, Steven

    2002-01-01

    The DARPA Airborne Video Surveillance (AVS) program was established to develop and promote technologies to make airborne video more useful, providing capabilities that achieve a UAV force multiplier...

  9. High Quality Unigenes and Microsatellite Markers from Tissue Specific Transcriptome and Development of a Database in Clusterbean (Cyamopsis tetragonoloba, L. Taub.)

    Directory of Open Access Journals (Sweden)

    Hukam C. Rawal

    2017-11-01

    Full Text Available Clusterbean (Cyamopsis tetragonoloba L. Taub. is an important industrial, vegetable and forage crop. This crop owes its commercial importance to the presence of guar gum (galactomannans) in its endosperm, which is used as a lubricant in a range of industries. Despite its relevance to agriculture and industry, genomic resources available in this crop are limited. Therefore, the present study was undertaken to generate an RNA-Seq based transcriptome from leaf, shoot, and flower tissues. A total of 145 million high quality Illumina reads were assembled using Trinity into 127,706 transcripts and 48,007 non-redundant high quality (HQ unigenes. We annotated 79% of the unigenes against Plant Genes from the National Center for Biotechnology Information (NCBI, Swiss-Prot, Pfam, gene ontology (GO and KEGG databases. Among the annotated unigenes, 30,020 were assigned 116,964 GO terms, 9984 were assigned EC numbers and 6111 were mapped to 137 KEGG pathways. At different fragments per kilobase of transcript per million fragments sequenced (FPKM levels, expression was highest in flower tissue, followed by shoot and leaf. Additionally, we identified 8687 potential simple sequence repeats (SSRs with an average frequency of one SSR per 8.75 kb. A total of 28 SSRs amplified in 21 clusterbean genotypes revealed polymorphism in 13 markers, with an average polymorphic information content (PIC of 0.21. We also constructed a database named ‘ClustergeneDB’ for easy retrieval of the unigenes and microsatellite markers. The tissue specific genes identified and the molecular marker resources developed in this study are expected to aid in genetic improvement of clusterbean for its end use.
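    The average PIC of 0.21 quoted above is computed from allele frequencies; a sketch assuming the standard Botstein et al. definition (the abstract does not give the formula):

```python
from itertools import combinations

def pic(freqs):
    """Polymorphic information content for one marker's allele frequencies,
    PIC = 1 - sum(p_i^2) - sum over i<j of 2 * p_i^2 * p_j^2."""
    assert abs(sum(freqs) - 1.0) < 1e-9, "frequencies must sum to 1"
    homozygosity = sum(p * p for p in freqs)
    correction = sum(2 * (p * p) * (q * q) for p, q in combinations(freqs, 2))
    return 1 - homozygosity - correction

# Biallelic marker with allele frequencies 0.7 / 0.3:
print(round(pic([0.7, 0.3]), 4))   # 0.3318
```

    For a biallelic marker the maximum is reached at equal frequencies, pic([0.5, 0.5]) = 0.375, which puts the reported average of 0.21 in context.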

  10. SU-F-P-35: A Multi-Institutional Plan Quality Checking Tool Built On Oncospace: A Shared Radiation Oncology Database System

    International Nuclear Information System (INIS)

    Bowers, M; Robertson, S; Moore, J; Wong, J; Phillips, M; Hendrickson, K; Evans, K; McNutt, T

    2016-01-01

    Purpose: Late toxicity from radiation to critical structures limits the possible dose in radiation therapy. Perfectly conformal treatment of a target is not realizable, so the clinician must accept a certain level of collateral radiation to nearby OARs. But how much? General guidelines for healthy-tissue sparing guide RT treatment planning, but are these guidelines good enough to create the optimal plan given the individualized patient anatomy? We propose a means to evaluate the planned dose level to an OAR using a multi-institutional data-store of previously treated patients, so a clinician might reconsider planning objectives. Methods: The tool is built on Oncospace, a federated data-store system, which consists of planning data import, web-based analysis tools, and a database containing: 1) DVHs: dose by percent volume delivered to each ROI for each patient previously treated and included in the database; 2) Overlap Volume Histograms (OVHs): an anatomical measure defined as the percent volume of an ROI within a given distance to target structures. Clinicians know which OARs are important to spare. For any ROI, Oncospace knows for which previous patients' anatomy that ROI was harder to plan (the OVH is smaller). The planned dose should be close to the lowest dose achieved for those patients. The tool displays the dose those OARs were subjected to, and the clinician can make a determination about the planning objectives used. Multiple institutions contribute to the Oncospace Consortium, and their DVH and OVH data are combined and color coded in the output. Results: The Oncospace website provides a plan quality display tool which identifies harder-to-treat patients and graphically displays the dose delivered to them for comparison with the proposed plan. Conclusion: The Oncospace Consortium manages a data-store of previously treated patients which can be used for quality checking new plans. Grant funding by Elekta.
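    The per-ROI DVH stored in the database can be sketched as a cumulative histogram over voxel doses. A toy version assuming equal voxel volumes (not the Oncospace implementation; dose values are made up):

```python
def cumulative_dvh(roi_doses, bin_width=1.0):
    """Cumulative DVH: percent of ROI volume receiving at least each dose level.
    Assumes every voxel has equal volume."""
    levels, volume_pct = [], []
    d, n = 0.0, len(roi_doses)
    while d <= max(roi_doses):
        levels.append(d)
        volume_pct.append(100.0 * sum(1 for v in roi_doses if v >= d) / n)
        d += bin_width
    return levels, volume_pct

# Toy ROI of five voxels with doses in Gy:
levels, vol = cumulative_dvh([10, 20, 20, 30, 40])
print(vol[levels.index(20)])   # 80.0 -> 80% of the ROI receives >= 20 Gy
```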

  11. SU-F-P-35: A Multi-Institutional Plan Quality Checking Tool Built On Oncospace: A Shared Radiation Oncology Database System

    Energy Technology Data Exchange (ETDEWEB)

    Bowers, M; Robertson, S; Moore, J; Wong, J [Johns Hopkins University, Baltimore, MD (United States); Phillips, M [University Washington, Seattle, WA (United States); Hendrickson, K; Evans, K [University of Washington, Seattle, WA (United States); McNutt, T [Johns Hopkins University, Severna Park, MD (United States)

    2016-06-15

    Purpose: Late toxicity from radiation to critical structures limits the possible dose in radiation therapy. Perfectly conformal treatment of a target is not realizable, so the clinician must accept a certain level of collateral radiation to nearby OARs. But how much? General guidelines for healthy-tissue sparing guide RT treatment planning, but are these guidelines good enough to create the optimal plan given the individualized patient anatomy? We propose a means to evaluate the planned dose level to an OAR using a multi-institutional data-store of previously treated patients, so a clinician might reconsider planning objectives. Methods: The tool is built on Oncospace, a federated data-store system, which consists of planning data import, web-based analysis tools, and a database containing: 1) DVHs: dose by percent volume delivered to each ROI for each patient previously treated and included in the database; 2) Overlap Volume Histograms (OVHs): an anatomical measure defined as the percent volume of an ROI within a given distance to target structures. Clinicians know which OARs are important to spare. For any ROI, Oncospace knows for which previous patients' anatomy that ROI was harder to plan (the OVH is smaller). The planned dose should be close to the lowest dose achieved for those patients. The tool displays the dose those OARs were subjected to, and the clinician can make a determination about the planning objectives used. Multiple institutions contribute to the Oncospace Consortium, and their DVH and OVH data are combined and color coded in the output. Results: The Oncospace website provides a plan quality display tool which identifies harder-to-treat patients and graphically displays the dose delivered to them for comparison with the proposed plan. Conclusion: The Oncospace Consortium manages a data-store of previously treated patients which can be used for quality checking new plans. Grant funding by Elekta.
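    The OVH defined above lends itself to a point-cloud approximation: for each distance, the percent of ROI points within that distance of the nearest target point. A 1-D toy sketch (a real implementation would use 3-D distance transforms over voxel grids):

```python
def ovh(roi_points, target_points, distances):
    """Overlap Volume Histogram sketch: for each distance d, the percent of
    ROI points lying within d of the nearest target point."""
    def nearest(p):
        return min(abs(p - t) for t in target_points)   # 1-D toy geometry

    d_near = [nearest(p) for p in roi_points]
    return [100.0 * sum(1 for x in d_near if x <= d) / len(d_near)
            for d in distances]

# OAR voxels at 1-D positions 3, 4, 5, 8; target occupies positions 0-2.
print(ovh([3, 4, 5, 8], [0, 1, 2], [1, 2, 3, 6]))
# [25.0, 50.0, 75.0, 100.0]
```

    A smaller OVH at a given distance means less of the OAR sits near the target, i.e. the geometry is easier to spare, which is how the tool ranks prior patients.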

  12. Video Tutorial of Continental Food

    Science.gov (United States)

    Nurani, A. S.; Juwaedah, A.; Mahmudatussa'adah, A.

    2018-02-01

    This research is motivated by the belief in the importance of media in a learning process. Media, as an intermediary, serve to focus the attention of learners. Selection of appropriate learning media strongly influences the success of the delivery of information, in cognitive, affective and skill terms alike. Continental food is a course that studies food originating from Europe and is very complex. To reduce verbalism and provide more concrete learning, tutorial media are needed. Audio-visual tutorial media can provide a more concrete learning experience. The purpose of this research is to develop tutorial media in the form of video. The method used is the development method, with the stages of analyzing the learning objectives, creating a storyboard, validating the storyboard, revising the storyboard and producing the video tutorial media. The results show that the making of storyboards should be thorough and detailed, in accordance with the learning objectives, to reduce errors in video capture and thus save time, cost and effort. During video capture, lighting, shooting angles and soundproofing contribute greatly to the quality of the tutorial video produced; shooting should focus on the tools, materials and processing steps. Video tutorials should be interactive and two-way.

  13. Federal databases

    International Nuclear Information System (INIS)

    Welch, M.J.; Welles, B.W.

    1988-01-01

    Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on an examination of the accident databases conducted through personal contact with the federal staff responsible for administering the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of the database to others, ease of access, costs, and whom to contact were prime questions put to each of the database program managers. Additionally, how each agency uses the accident data was of major interest.

  14. Physics and Video Analysis

    Science.gov (United States)

    Allain, Rhett

    2016-05-01

    We currently live in a world filled with videos. There are videos on YouTube, feature movies and even videos recorded with our own cameras and smartphones. These videos present an excellent opportunity to not only explore physical concepts, but also inspire others to investigate physics ideas. With video analysis, we can explore the fantasy world in science-fiction films. We can also look at online videos to determine if they are genuine or fake. Video analysis can be used in the introductory physics lab and it can even be used to explore the make-believe physics embedded in video games. This book covers the basic ideas behind video analysis along with the fundamental physics principles used in video analysis. The book also includes several examples of the unique situations in which video analysis can be used.
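    A minimal example of the kind of video analysis the book covers: reading an object's position off successive frames and differencing at the known frame rate (positions and frame rate are made up):

```python
def velocities(positions_m, fps):
    """Finite-difference speeds between consecutive frames of a tracked object."""
    dt = 1.0 / fps   # time between frames
    return [(b - a) / dt for a, b in zip(positions_m, positions_m[1:])]

# A ball tracked at 30 fps; x-positions in meters read off four frames:
vs = velocities([0.00, 0.05, 0.10, 0.15], fps=30)
print([round(v, 3) for v in vs])   # [1.5, 1.5, 1.5] -> uniform motion at 1.5 m/s
```

    The same differencing applied twice gives acceleration, which is how video analysis tests whether an online clip obeys real gravity.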

  15. NEI You Tube Videos: Amblyopia

    Medline Plus


  16. A video authentication technique

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1987-01-01

    Unattended video surveillance systems are particularly vulnerable to the substitution of false video images into the cable that connects the camera to the video recorder. New technology has made it practical to insert a solid-state video memory into the video cable, freeze a video image from the camera, and hold this image as long as desired. Various techniques, such as line supervision and sync detection, have been used to detect video cable tampering. The video authentication technique described in this paper uses the actual video image from the camera as the basis for detecting any image substitution made during the transmission of the video image to the recorder. The technique, designed for unattended video systems, can be used for any video transmission system where a two-way digital data link can be established. The technique uses similar microprocessor circuitry at the video camera and at the video recorder to select sample points in the video image for comparison. The gray scale value of these points is compared at the recorder controller and, if the values agree within limits, the image is authenticated. If a significantly different image was substituted, the comparison would fail at a number of points and the video image would not be authenticated. The video authentication system can run as a stand-alone system or at the request of another system.
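    The sample-point comparison described above can be sketched as follows; frame layout, point count, and tolerance are assumptions, and the shared seed stands in for the two-way data link that synchronizes the sample points:

```python
import random

def authenticate(frame_cam, frame_rec, n_points=16, tol=8, max_fail=2, seed=1234):
    """Compare gray values at shared pseudo-random sample points; camera and
    recorder derive the same points from a seed agreed over the data link."""
    h, w = len(frame_cam), len(frame_cam[0])
    rng = random.Random(seed)
    points = [(rng.randrange(h), rng.randrange(w)) for _ in range(n_points)]
    failures = sum(1 for r, c in points
                   if abs(frame_cam[r][c] - frame_rec[r][c]) > tol)
    return failures <= max_fail   # authenticated if most points agree

# Identical frames authenticate; a substituted (inverted) frame does not.
genuine = [[(r * 7 + c * 13) % 256 for c in range(32)] for r in range(24)]
forged = [[255 - v for v in row] for row in genuine]
print(authenticate(genuine, genuine))   # True
print(authenticate(genuine, forged))    # False
```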

  17. The Danish Sarcoma Database

    Directory of Open Access Journals (Sweden)

    Jorgensen PH

    2016-10-01

    Full Text Available Peter Holmberg Jørgensen,1 Gunnar Schwarz Lausten,2 Alma B Pedersen3 1Tumor Section, Department of Orthopedic Surgery, Aarhus University Hospital, Aarhus, 2Tumor Section, Department of Orthopedic Surgery, Rigshospitalet, Copenhagen, 3Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark Aim: The aim of the database is to gather information about sarcomas treated in Denmark in order to continuously monitor and improve the quality of sarcoma treatment in a local, a national, and an international perspective. Study population: Patients in Denmark diagnosed with a sarcoma, both skeletal and extraskeletal, have been registered since 2009. Main variables: The database contains information about appearance of symptoms; date of receiving referral to a sarcoma center; date of first visit; whether surgery has been performed elsewhere before referral, diagnosis, and treatment; tumor characteristics such as location, size, malignancy grade, and growth pattern; details on treatment (kind of surgery, amount of radiation therapy, type and duration of chemotherapy); complications of treatment; local recurrence and metastases; and comorbidity. In addition, several quality indicators are registered in order to measure the quality of care provided by the hospitals and make comparisons between hospitals and with international standards. Descriptive data: Demographic patient-specific data such as age, sex, region of living, comorbidity, World Health Organization's International Classification of Diseases – tenth edition codes and TNM Classification of Malignant Tumours, and date of death (after yearly coupling to the Danish Civil Registration System. Data quality and completeness are currently secured. Conclusion: The Danish Sarcoma Database is population based and includes sarcomas occurring in Denmark since 2009. It is a valuable tool for monitoring sarcoma incidence and quality of treatment and its improvement, postoperative

  18. Danish Palliative Care Database

    DEFF Research Database (Denmark)

    Grønvold, Mogens; Adsersen, Mathilde; Hansen, Maiken Bang

    2016-01-01

    Aims: The aim of the Danish Palliative Care Database (DPD) is to monitor, evaluate, and improve the clinical quality of specialized palliative care (SPC) (ie, the activity of hospital-based palliative care teams/departments and hospices) in Denmark. Study population: The study population is all...... patients were registered in DPD during the 5 years 2010–2014. Of those registered, 96% had cancer. Conclusion: DPD is a national clinical quality database for SPC having clinically relevant variables and high data and patient completeness....

  19. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed clients, if data copies are located close to clients. Despite its advantages, replication is not a straightforward technique to apply, and

  20. Applied learning-based color tone mapping for face recognition in video surveillance system

    Science.gov (United States)

    Yew, Chuu Tian; Suandi, Shahrel Azmin

    2012-04-01

    In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. The technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and to remap the color or intensity of the input image so that its statistics match those of the training dataset. It is well known that differences in commercial surveillance camera models and in the signal processing chipsets used by different manufacturers cause the color and intensity of the images to differ from one another, creating additional challenges for face recognition in video surveillance systems. Using multi-class support vector machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared to the results of using a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of video surveillance images for face recognition.
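    The remapping step can be approximated by histogram matching an input image to the training-set statistics; a grayscale toy sketch, not the paper's learned mapping:

```python
def match_histogram(source, reference, levels=256):
    """Remap source gray levels so their distribution matches the reference's,
    via a lookup table built from the two cumulative histograms."""
    def cdf(pixels):
        hist = [0] * levels
        for v in pixels:
            hist[v] += 1
        acc, out = 0, []
        for h in hist:
            acc += h
            out.append(acc / len(pixels))
        return out

    src_cdf, ref_cdf = cdf(source), cdf(reference)
    # For each gray level, pick the reference level with the closest CDF value.
    lut = [min(range(levels), key=lambda u: abs(ref_cdf[u] - src_cdf[v]))
           for v in range(levels)]
    return [lut[v] for v in source]

dark = [10, 20, 20, 30]            # under-exposed surveillance pixels
reference = [100, 150, 150, 200]   # intensity statistics of the training set
print(match_histogram(dark, reference))   # [100, 150, 150, 200]
```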

  1. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  2. Intelligent keyframe extraction for video printing

    Science.gov (United States)

    Zhang, Tong

    2004-10-01

    Nowadays most digital cameras have the functionality of taking short video clips, with the length of video ranging from several seconds to a couple of minutes. The purpose of this research is to develop an algorithm which extracts an optimal set of keyframes from each short video clip so that the user could obtain proper video frames to print out. In current video printing systems, keyframes are normally obtained by evenly sampling the video clip over time. Such an approach, however, may not reflect highlights or regions of interest in the video. Keyframes derived in this way may also be improper for video printing in terms of either content or image quality. In this paper, we present an intelligent keyframe extraction approach to derive an improved keyframe set by performing semantic analysis of the video content. For a video clip, a number of video and audio features are analyzed to first generate a candidate keyframe set. These features include accumulative color histogram and color layout differences, camera motion estimation, moving object tracking, face detection and audio event detection. Then, the candidate keyframes are clustered and evaluated to obtain a final keyframe set. The objective is to automatically generate a limited number of keyframes to show different views of the scene; to show different people and their actions in the scene; and to tell the story in the video shot. Moreover, frame extraction for video printing, which is a rather subjective problem, is considered in this work for the first time, and a semi-automatic approach is proposed.
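    One of the cues listed above, the accumulative color-histogram difference, can be sketched as follows; the bin count, threshold, and synthetic frames are illustrative, not the paper's settings:

```python
def candidate_keyframes(frames, bins=4, threshold=0.3):
    """Select frames whose gray-level histogram differs enough from the last
    keyframe -- one of several cues combined in full keyframe extraction."""
    def hist(frame):
        h = [0] * bins
        for v in frame:
            h[v * bins // 256] += 1    # map 0..255 gray value to a bin
        return [x / len(frame) for x in h]

    def l1(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    keys, last = [0], hist(frames[0])
    for i in range(1, len(frames)):
        h = hist(frames[i])
        if l1(h, last) > threshold:
            keys.append(i)
            last = h
    return keys

dark, bright = [10] * 8, [200] * 8    # flat synthetic "frames"
print(candidate_keyframes([dark, dark, bright, bright, dark]))
# [0, 2, 4] -> one keyframe per scene change
```

    The full system then clusters and re-scores such candidates with motion, face, and audio cues before printing.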

  3. RDD Databases

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...

  4. Snowstorm Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Snowstorm Database is a collection of over 500 snowstorms dating back to 1900 and updated operationally. Only storms having large areas of heavy snowfall (10-20...

  5. Dealer Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May 1,...

  6. Digital video for the desktop

    CERN Document Server

    Pender, Ken

    1999-01-01

    Practical introduction to creating and editing high quality video on the desktop. Using examples from a variety of video applications, benefit from a professional's experience, step-by-step, through a series of workshops demonstrating a wide variety of techniques. These include producing short films, multimedia and internet presentations, animated graphics and special effects.The opportunities for the independent videomaker have never been greater - make sure you bring your understanding fully up to date with this invaluable guide.No prior knowledge of the technology is assumed, with explanati

  7. Video based OER: Production, discovery, dissemination

    OpenAIRE

    Gibbs, Graham R.

    2012-01-01

    This paper reports lessons learned from a range of ESRC, HEA and Jisc funded projects. Four dimensions will be discussed: economic costs, quality, dissemination and pedagogy. Cost issues include the expense of making video, and the variety of skills and expertise required, such as interviewing, scripting and editing. Quality issues are similar to those in broadcast video but not as great. However, there are specific requirements for special needs and issues around copyright and licensin...

  8. Rare Disease Video Portal

    OpenAIRE

    Sánchez Bocanegra, Carlos Luis

    2011-01-01

    Rare Disease Video Portal (RD Video) is a web portal that contains videos from YouTube, including full details from 12 YouTube channels.

  9. National database

    DEFF Research Database (Denmark)

    Kristensen, Helen Grundtvig; Stjernø, Henrik

    1995-01-01

    Article about the national database for nursing research established at the Danish Institute for Health and Nursing Research. The aim of the database is to gather knowledge about research and development activities within nursing.

  10. Distributed video coding with multiple side information

    DEFF Research Database (Denmark)

    Huang, Xin; Brites, C.; Ascenso, J.

    2009-01-01

    Distributed Video Coding (DVC) is a new video coding paradigm which mainly exploits the source statistics at the decoder based on the availability of some decoder side information. The quality of the side information has a major impact on the DVC rate-distortion (RD) performance in the same way...... the quality of the predictions had a major impact in predictive video coding. In this paper, a DVC solution exploiting multiple side information is proposed; the multiple side information is generated by frame interpolation and frame extrapolation targeting to improve the side information of a single...
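    The two side-information generators mentioned above can be caricatured pixel-wise; real DVC codecs use motion-compensated interpolation and extrapolation, so this is only a sketch:

```python
def side_info_interpolation(prev_frame, next_frame):
    """Side information by temporal interpolation: pixel-wise average of the
    two key frames surrounding the Wyner-Ziv frame."""
    return [(a + b) / 2 for a, b in zip(prev_frame, next_frame)]

def side_info_extrapolation(older, newer):
    """Side information by extrapolation: continue the linear pixel trend
    from the two most recent decoded frames."""
    return [b + (b - a) for a, b in zip(older, newer)]

# Interpolate frame 1 from decoded frames 0 and 2:
print(side_info_interpolation([10, 20, 30], [30, 40, 50]))   # [20.0, 30.0, 40.0]
# Extrapolate frame 2 from decoded frames 0 and 1:
print(side_info_extrapolation([10, 20, 30], [20, 30, 40]))   # [30, 40, 50]
```

    The decoder can then fuse or switch between the candidate side-information frames, which is the multiple-side-information idea the paper exploits.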

  11. [Efficacy of interventions with video games consoles in stroke patients: a systematic review].

    Science.gov (United States)

    Ortiz-Huerta, J H; Perez-de-Heredia-Torres, M; Guijo-Blanco, V; Santamaria-Vazquez, M

    2018-01-16

    In recent years video games and games consoles have been developed that are potentially useful in rehabilitation, which has led to studies conducted to evaluate the degree of efficacy of these treatments for people following a stroke. To analyse the literature available related to the effectiveness of applying video games consoles in the functional recovery of the upper extremities in subjects who have survived a stroke. A review of the literature was conducted in the CINHAL, Medline, PEDro, PsycArticles, PsycInfo, Science Direct, Scopus and Web of Science databases, using the query terms 'video game', 'stroke', 'hemiplegia', 'upper extremity' and 'hemiparesis'. After applying the eligibility criteria (clinical trials published between 2007 and 2017, whose participants were adults who had suffered a stroke with involvement of the upper extremity and who used video games), the scientific quality of the selected studies was rated by means of the PEDro scale. Eleven valid clinical trials were obtained for the systematic review. The studies that were selected, all of which were quantitative, presented different data and the inferential results indicated different levels of significance between control and experimental groups (82%) or between the different types of treatment (18%). The use of video games consoles is a useful complement for the conventional rehabilitation of the upper extremities of persons who have survived a stroke, since it increases rehabilitation time and enhances the recovery of motor functioning. Nevertheless, homogeneous intervention protocols need to be implemented in order to standardise the intervention.

  12. Scalable video on demand adaptive Internet-based distribution

    CERN Document Server

    Zink, Michael

    2013-01-01

    In recent years, the proliferation of available video content and the popularity of the Internet have encouraged service providers to develop new ways of distributing content to clients. Increasing video scaling ratios and advanced digital signal processing techniques have led to Internet Video-on-Demand applications, but these currently lack efficiency and quality. Scalable Video on Demand: Adaptive Internet-based Distribution examines how current video compression and streaming can be used to deliver high-quality applications over the Internet. In addition to analysing the problems

  13. The Danish Nonmelanoma Skin Cancer Dermatology Database

    DEFF Research Database (Denmark)

    Lamberg, Anna Lei; Sølvsten, Henrik; Lei, Ulrikke

    2016-01-01

    AIM OF DATABASE: The Danish Nonmelanoma Skin Cancer Dermatology Database was established in 2008. The aim of this database was to collect data on nonmelanoma skin cancer (NMSC) treatment and improve its treatment in Denmark. NMSC is the most common malignancy in the western countries and represents...... treatment. The database has revealed that overall, the quality of care of NMSC in Danish dermatological clinics is high, and the database provides the necessary data for continuous quality assurance....

  14. Enhancement system of nighttime infrared video image and visible video image

    Science.gov (United States)

    Wang, Yue; Piao, Yan

    2016-11-01

    Visibility of nighttime video images has great significance for military and medical applications, but nighttime video images are of such poor quality that the target and background cannot be recognized. We therefore enhance the nighttime video image by fusing an infrared video image with a visible video image. According to the characteristics of infrared and visible images, we propose an improved SIFT algorithm and an αβ weighted algorithm to fuse heterologous nighttime images. A transfer matrix is deduced from the improved SIFT algorithm; the transfer matrix rapidly registers the heterologous nighttime images, and the αβ weighted algorithm can be applied to any scene. In the video image fusion system, we used the transfer matrix to register every frame and then the αβ weighted method to fuse every frame, which meets the timing requirements of video. The fused video not only retains the clear target information of the infrared video image, but also retains the detail and color information of the visible video image, and the fused video plays fluently.
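    For registered frames, an αβ weighted fusion reduces to a per-pixel weighted sum. A minimal sketch in Python/NumPy, assuming 8-bit grayscale frames that have already been registered; the weight value here is illustrative, not the paper's tuning:

```python
import numpy as np

def weighted_fuse(ir_frame, vis_frame, alpha=0.6):
    """Pixel-wise weighted fusion of a registered infrared frame and a
    visible-light frame; alpha weights the infrared contribution and
    (1 - alpha) the visible one."""
    ir = ir_frame.astype(np.float64)
    vis = vis_frame.astype(np.float64)
    fused = alpha * ir + (1.0 - alpha) * vis
    return np.clip(fused, 0, 255).astype(np.uint8)

# Toy 2x2 grayscale frames standing in for registered video frames.
ir = np.array([[200, 10], [10, 200]], dtype=np.uint8)
vis = np.array([[50, 100], [100, 50]], dtype=np.uint8)
print(weighted_fuse(ir, vis, alpha=0.5))  # → [[125 55] [55 125]]
```

    In a full pipeline the same weighted sum would be applied per frame after SIFT-based registration, which is what keeps the per-frame cost low enough for video rates.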

  15. Dynamic Video Streaming in Caching-enabled Wireless Mobile Networks

    OpenAIRE

    Liang, C.; Hu, S.

    2017-01-01

    Recent advances in software-defined mobile networks (SDMNs), in-network caching, and mobile edge computing (MEC) can have great effects on video services in next generation mobile networks. In this paper, we jointly consider SDMNs, in-network caching, and MEC to enhance the video service in next generation mobile networks. With the objective of maximizing the mean measurement of video quality, an optimization problem is formulated. Due to the coupling of video data rate, computing resource, a...

  16. Dress like a Star: Retrieving Fashion Products from Videos

    OpenAIRE

    Garcia, Noa; Vogiatzis, George

    2017-01-01

    This work proposes a system for retrieving clothing and fashion products from video content. Although films and television are the perfect showcase for fashion brands to promote their products, spectators are not always aware of where to buy the latest trends they see on screen. Here, a framework for breaking the gap between fashion products shown on videos and users is presented. By relating clothing items and video frames in an indexed database and performing frame retrieval with temporal a...

  17. The Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Guldberg, Rikke; Brostrøm, Søren; Hansen, Jesper Kjær

    2013-01-01

    INTRODUCTION AND HYPOTHESIS: The Danish Urogynaecological Database (DugaBase) is a nationwide clinical database established in 2006 to monitor, ensure and improve the quality of urogynaecological surgery. We aimed to describe its establishment and completeness and to validate selected variables. This is the first study based on data from the DugaBase. METHODS: The database completeness was calculated as a comparison between urogynaecological procedures reported to the Danish National Patient Registry and to the DugaBase. Validity was assessed for selected variables from a random sample of 200 women in the DugaBase from 1 January 2009 to 31 October 2010, using medical records as a reference. RESULTS: A total of 16,509 urogynaecological procedures were registered in the DugaBase by 31 December 2010. The database completeness has increased by calendar time, from 38.2 % in 2007 to 93.2 % in 2010 for public......
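    The completeness measure used here is simply the share of registry-reported procedures that were also reported to the clinical database. A minimal sketch with made-up counts (not the DugaBase figures):

```python
def completeness(n_in_database, n_in_registry):
    """Completeness of a clinical database: percentage of procedures known
    to the national patient registry that were also reported to the
    clinical database."""
    if n_in_registry <= 0:
        raise ValueError("registry count must be positive")
    return 100.0 * n_in_database / n_in_registry

# Illustrative (hypothetical) yearly counts.
print(round(completeness(932, 1000), 1))  # → 93.2
```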

  18. INIST: databases reorientation

    International Nuclear Information System (INIS)

    Bidet, J.C.

    1995-01-01

    INIST is a CNRS (Centre National de la Recherche Scientifique) laboratory devoted to the processing of scientific and technical information and to the management of this information compiled in a database. A reorientation of the database content was proposed in 1994 to increase the transfer of research towards enterprises and services, to develop more automated access to the information, and to create a quality assurance plan. The catalog of publications comprises 5800 periodical titles (1300 for fundamental research and 4500 for applied research). A multi-thematic science and technology database will be created in 1995 for the retrieval of applied and technical information. ''Grey literature'' (reports, theses, proceedings..) and human and social sciences data will be added to the base through information selected from the existing GRISELI and Francis databases. Strong modifications are also planned in the thematic coverage of Earth sciences, which will considerably reduce the geological information content. (J.S.). 1 tab

  19. The effects of video observation of chewing during lunch on masticatory ability, food intake, cognition, activities of daily living, depression, and quality of life in older adults with dementia: a study protocol of an adjusted randomized controlled trial.

    Science.gov (United States)

    Douma, Johanna G; Volkers, Karin M; Vuijk, Pieter Jelle; Scherder, Erik J A

    2016-02-04

    Masticatory functioning alters with age. However, mastication has been found to be related to, for example, cognitive functioning, food intake, and some aspects of activities of daily living. Since cognitive functioning and activities of daily living show a decline in older adults with dementia, improving masticatory functioning may be of relevance to them. A possible way to improve mastication may be showing videos of people who are chewing. Observing chewing movements may activate the mirror neuron system, which is also activated during the execution of that same movement. The primary hypothesis is that the observation of chewing has a beneficial effect on masticatory functioning, or, more specifically, the masticatory ability of older adults with dementia. Secondarily, the intervention is hypothesized to have beneficial effects on food intake, cognition, activities of daily living, depression, and quality of life. An adjusted parallel randomized controlled trial is being performed in dining rooms of residential care settings. Older adults with dementia, for whom additional eligibility criteria also apply, are randomly assigned to the experimental (videos of chewing people) or control condition (videos of nature and buildings) by drawing folded pieces of paper. Participants who are able to watch each other's videos are assigned to the same study condition. The intervention takes place during lunchtime, from Monday to Friday, for 3 months. During four moments of measurement, masticatory ability, food intake, cognitive functioning, activities of daily living, depression, and quality of life are assessed. Test administrators blind to the group allocation administer the tests to participants. The goal of this study is to examine the effects of video observation of chewing on masticatory ability and several secondary outcome measures. In this study, the observation of chewing is added to the execution of the same action (i.e., during eating). Beneficial effects on

  20. Mass-storage management for distributed image/video archives

    Science.gov (United States)

    Franchi, Santina; Guarda, Roberto; Prampolini, Franco

    1993-04-01

    The realization of an image/video database requires a specific design for both the database structures and mass storage management. This issue was addressed in the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog the image/video coding technique with its related parameters and the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server: because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they allow cataloging devices and modifying device status and device network location. The medium level manages image/video files on a physical basis: it manages file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to fit delivery/visualization requirements and to reduce archiving costs.
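    The medium-level migration idea (frequently accessed files move to fast media, cold files stay on high-capacity media) can be sketched as follows; the device names and the access-count threshold are illustrative assumptions, not details from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    access_time_ms: float               # lower = faster media
    files: set = field(default_factory=set)

class StorageHierarchy:
    """Minimal sketch of access-driven file migration between a slow,
    high-capacity device and a fast, low-access-time device."""
    def __init__(self, fast, slow, hot_threshold=3):
        self.fast, self.slow = fast, slow
        self.hot_threshold = hot_threshold
        self.access_counts = {}

    def access(self, filename):
        self.access_counts[filename] = self.access_counts.get(filename, 0) + 1
        if (self.access_counts[filename] >= self.hot_threshold
                and filename in self.slow.files):
            self.slow.files.discard(filename)   # migrate hot file
            self.fast.files.add(filename)       # to the fast tier

fast = Device("magnetic-disk", access_time_ms=10)
slow = Device("optical-jukebox", access_time_ms=5000, files={"clip001.mpg"})
h = StorageHierarchy(fast, slow)
for _ in range(3):
    h.access("clip001.mpg")
print("clip001.mpg" in fast.files)  # → True
```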

  1. The Danish Sarcoma Database

    DEFF Research Database (Denmark)

    Jørgensen, Peter Holmberg; Lausten, Gunnar Schwarz; Pedersen, Alma B

    2016-01-01

    AIM: The aim of the database is to gather information about sarcomas treated in Denmark in order to continuously monitor and improve the quality of sarcoma treatment in a local, a national, and an international perspective. STUDY POPULATION: Patients in Denmark diagnosed with a sarcoma, both...... skeletal and extraskeletal, have been registered since 2009. MAIN VARIABLES: The database contains information about appearance of symptoms; date of receiving referral to a sarcoma center; date of first visit; whether surgery has been performed elsewhere before referral, diagnosis, and treatment; tumor...... of Diseases - tenth edition codes and TNM Classification of Malignant Tumours, and date of death (after yearly coupling to the Danish Civil Registration System). Data quality and completeness are currently secured. CONCLUSION: The Danish Sarcoma Database is population based and includes sarcomas occurring...

  2. Danish Gynecological Cancer Database

    DEFF Research Database (Denmark)

    Sørensen, Sarah Mejer; Bjørn, Signe Frahm; Jochumsen, Kirsten Marie

    2016-01-01

    AIM OF DATABASE: The Danish Gynecological Cancer Database (DGCD) is a nationwide clinical cancer database and its aim is to monitor the treatment quality of Danish gynecological cancer patients, and to generate data for scientific purposes. DGCD also records detailed data on the diagnostic measures...... data forms as follows: clinical data, surgery, pathology, pre- and postoperative care, complications, follow-up visits, and final quality check. DGCD is linked with additional data from the Danish "Pathology Registry", the "National Patient Registry", and the "Cause of Death Registry" using the unique...... Danish personal identification number (CPR number). DESCRIPTIVE DATA: Data from DGCD and registers are available online in the Statistical Analysis Software portal. The DGCD forms cover almost all possible clinical variables used to describe gynecological cancer courses. The only limitation...

  3. Experiment Databases

    Science.gov (United States)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.

  4. Adaptive live multicast video streaming of SVC with UEP FEC

    Science.gov (United States)

    Lev, Avram; Lasry, Amir; Loants, Maoz; Hadar, Ofer

    2014-09-01

    Ideally, video streaming systems should provide the best quality video a user's device can handle without compromising downloading speed. In this article, an improved video transmission system is presented which dynamically adapts the video quality to a user's current network state and repairs errors from data lost in the video transmission. The system incorporates three main components: Scalable Video Coding (SVC) with three layers, multicast based on Receiver Layered Multicast (RLM), and an UnEqual Forward Error Correction (FEC) algorithm. SVC provides an efficient method for offering different levels of video quality, stored as enhancement layers. In the presented system, a proportional-integral-derivative (PID) controller was implemented to dynamically adjust the video quality, adding or subtracting quality layers as appropriate. In addition, an FEC algorithm was added to compensate for data lost in transmission; a two-dimensional FEC taken from the Pro MPEG code of practice #3 release 2 was used. Several bit error scenarios (step function, cosine wave) with different bandwidth sizes and error values were simulated. The suggested scheme, which includes SVC video encoding with 3 layers over IP multicast with an unequal FEC algorithm, was investigated under different channel conditions, variable bandwidths and different bit error rates. The results indicate improvement of the video quality in terms of PSNR over previous transmission schemes.
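    The PID-driven layer adaptation can be sketched roughly as below; the gains, the thresholds, and the mapping from controller output to a layer count are illustrative guesses, not the paper's tuning:

```python
class PIDLayerController:
    """Sketch of a PID controller choosing how many SVC quality layers to
    send, driven by the error between available and consumed bandwidth."""
    def __init__(self, kp=0.05, ki=0.01, kd=0.0, max_layers=3):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.max_layers = max_layers
        self.integral = 0.0
        self.prev_error = 0.0
        self.layers = 1

    def update(self, available_kbps, sending_kbps):
        error = available_kbps - sending_kbps      # headroom (+) / overload (-)
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        signal = self.kp * error + self.ki * self.integral + self.kd * derivative
        if signal > 100 and self.layers < self.max_layers:
            self.layers += 1        # bandwidth headroom: add an enhancement layer
        elif signal < -100 and self.layers > 1:
            self.layers -= 1        # congestion: drop an enhancement layer
        return self.layers

ctrl = PIDLayerController()
print(ctrl.update(available_kbps=4000, sending_kbps=1000))  # → 2
```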

  5. Teaching Historians with Databases.

    Science.gov (United States)

    Burton, Vernon

    1993-01-01

    Asserts that, although pressures to publish have detracted from the quality of teaching at the college level, recent innovations in educational technology have created opportunities for instructional improvement. Describes the use of computer-assisted instruction and databases in college-level history courses. (CFR)

  6. LabData database sub-systems for post-processing and quality control of stable isotope and gas chromatography measurements

    Science.gov (United States)

    Suckow, A. O.

    2013-12-01

    Measurements need post-processing to obtain results that are comparable between laboratories. Raw data may need to be corrected for blank, memory, drift (change of reference values with time) and linearity (dependence of the reference on signal height), and normalized to international reference materials. Post-processing parameters need to be stored for traceability of results. State-of-the-art stable isotope correction schemes are available based on MS Excel (Geldern and Barth, 2012; Gröning, 2011) or MS Access (Coplen, 1998). These are specialized to stable isotope measurements only, often only to the post-processing of a single run. Embedding of the algorithms into a multipurpose database system was missing. This is necessary to combine results of different tracers (3H, 3He, 2H, 18O, CFCs, SF6...) or geochronological tools (sediment dating, e.g. with 210Pb, 137Cs), to relate them to attribute data (submitter, batch, project, geographical origin, depth in core, well information etc.) and for further interpretation tools (e.g. lumped-parameter modelling). Database sub-systems to the LabData laboratory management system (Suckow and Dumke, 2001) are presented for stable isotopes and for gas chromatographic CFC and SF6 measurements. The sub-system for stable isotopes allows the following post-processing: 1. automated import from measurement software (Isodat, Picarro, LGR); 2. correction for sample-to-sample memory, linearity and drift, and renormalization of the raw data. The sub-system for gas chromatography covers: 1. storage of all raw data; 2. storage of peak integration parameters; 3. correction for blank, efficiency and linearity. The user interface allows interactive and graphical control of the post-processing and all corrections by export to and plotting in MS Excel and is a valuable tool for quality control. The sub-databases are integrated into LabData, a multi-user client-server architecture using MS SQL Server as back-end and an MS Access front-end, and installed in four
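    Two of the corrections named above, a linear drift correction and a two-point normalization to reference materials, can be sketched as follows; all numeric values are made-up illustrations, not LabData defaults:

```python
def drift_correct(raw, run_position, drift_per_position):
    """Linear drift correction: remove the estimated per-sample change of
    the reference reading over the measurement sequence."""
    return raw - run_position * drift_per_position

def normalize_delta(raw, ref1_raw, ref1_true, ref2_raw, ref2_true):
    """Two-point normalization of a raw delta value against two reference
    materials with known (true) values; the usual final step after blank,
    memory, and drift corrections."""
    slope = (ref2_true - ref1_true) / (ref2_raw - ref1_raw)
    return ref1_true + slope * (raw - ref1_raw)

# Hypothetical raw delta measured 10 positions into a run that drifts
# +0.02 per position, then normalized against two reference materials.
corrected = drift_correct(-9.8, run_position=10, drift_per_position=0.02)
print(round(normalize_delta(corrected, ref1_raw=0.3, ref1_true=0.0,
                            ref2_raw=-55.1, ref2_true=-55.5), 2))
```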

  7. Implementation of an interactive database interface utilizing HTML, PHP, JavaScript, and MySQL in support of water quality assessments in the Northeastern North Carolina Pasquotank Watershed

    Science.gov (United States)

    Guion, A., Jr.; Hodgkins, H.

    2015-12-01

    The Center of Excellence in Remote Sensing Education and Research (CERSER) has implemented three research projects during the summer Research Experience for Undergraduates (REU) program gathering water quality data for local waterways. The data had been compiled manually using pen and paper and then entered into a spreadsheet. With the spread of electronic devices capable of interacting with databases, the development of an electronic method of entering and manipulating the water quality data was pursued during this project. This project focused on the development of an interactive database to gather, display, and analyze data collected from local waterways. The database and entry form were built in MySQL on a PHP server, allowing participants to enter data from anywhere Internet access is available. The project then researched applying these data to the Google Maps site to provide labeling and information to users. The NIA server at http://nia.ecsu.edu is used to host the application for download and for storage of the databases. Water Quality Database team members included the authors plus Derek Morris Jr., Kathryne Burton and Mr. Jeff Wood as mentor.
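    A water-quality table of the kind this project describes can be sketched as below, using SQLite in place of MySQL so the example is self-contained; the column names are hypothetical, not CERSER's actual schema:

```python
import sqlite3

# In-memory database standing in for the project's MySQL backend.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE water_quality (
        id          INTEGER PRIMARY KEY,
        site        TEXT NOT NULL,         -- sampling site on the waterway
        sampled_at  TEXT NOT NULL,         -- ISO-8601 timestamp
        latitude    REAL,
        longitude   REAL,                  -- coordinates for map markers
        ph          REAL,
        dissolved_oxygen_mgl REAL
    )""")
conn.execute(
    "INSERT INTO water_quality "
    "(site, sampled_at, latitude, longitude, ph, dissolved_oxygen_mgl) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("Pasquotank River", "2015-07-14T10:30:00", 36.30, -76.22, 7.1, 8.4))
row = conn.execute(
    "SELECT site, ph FROM water_quality "
    "WHERE ph BETWEEN 6.5 AND 8.5").fetchone()
print(row)  # → ('Pasquotank River', 7.1)
```

    The latitude/longitude columns are what a Google Maps overlay would read to place labeled markers for each sample.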

  8. Appending Limited Clinical Data to an Administrative Database for Acute Myocardial Infarction Patients: The Impact on the Assessment of Hospital Quality.

    Science.gov (United States)

    Hannan, Edward L; Samadashvili, Zaza; Cozzens, Kimberly; Jacobs, Alice K; Venditti, Ferdinand J; Holmes, David R; Berger, Peter B; Stamato, Nicholas J; Hughes, Suzanne; Walford, Gary

    2016-05-01

    Hospitals' risk-standardized mortality rates and outlier status (significantly higher/lower rates) are reported by the Centers for Medicare and Medicaid Services (CMS) for acute myocardial infarction (AMI) patients using Medicare claims data. New York now has AMI claims data with blood pressure and heart rate added. The objective of this study was to see whether the appended database yields different hospital assessments than standard claims data. New York State clinically appended claims data for AMI were used to create 2 different risk models based on CMS methods: 1 with and 1 without the added clinical data. Model discrimination was compared, and differences between the models in hospital outlier status and tertile status were examined. Mean arterial pressure and heart rate were both significant predictors of mortality in the clinically appended model. The C statistic for the model with the clinical variables added was significantly higher (0.803 vs. 0.773), and the two models differed in the assessment of hospital mortality outliers for AMI. The strategy of adding limited but important clinical data elements to administrative datasets should be considered when evaluating hospital quality for procedures and other medical conditions.
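    The C statistic compared above is the probability that a randomly chosen patient who died received a higher predicted risk than a randomly chosen survivor. A small illustrative computation on toy data (not the study's):

```python
def c_statistic(scores, outcomes):
    """C statistic (concordance) of predicted risks against binary outcomes:
    the fraction of (event, non-event) pairs in which the event received the
    higher predicted risk, counting ties as half."""
    events = [s for s, y in zip(scores, outcomes) if y == 1]
    nonevents = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = 0.0
    for e in events:
        for n in nonevents:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / (len(events) * len(nonevents))

scores = [0.9, 0.8, 0.3, 0.2]   # predicted mortality risks (made up)
deaths = [1,   0,   1,   0]     # observed outcomes (made up)
print(c_statistic(scores, deaths))  # → 0.75
```

    A value of 0.5 means the model discriminates no better than chance; the study's 0.773 vs. 0.803 comparison is this quantity computed for the two risk models.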

  9. ACE I/D polymorphism and response to treatment in coronary artery disease: a comprehensive database and meta-analysis involving study quality evaluation

    Directory of Open Access Journals (Sweden)

    Kitsios Georgios

    2009-06-01

    Background: The role of the angiotensin-converting enzyme (ACE) gene insertion/deletion (I/D) polymorphism in modifying the response to treatment modalities in coronary artery disease is controversial. Methods: PubMed was searched and a database of 58 studies with detailed information regarding the ACE I/D polymorphism and response to treatment in coronary artery disease was created. Eligible studies were synthesized using meta-analysis methods, including cumulative meta-analysis. Heterogeneity and study quality issues were explored. Results: Forty studies involved invasive treatments (coronary angioplasty or coronary artery bypass grafting) and 18 used conservative treatment options (including anti-hypertensive drugs, lipid-lowering therapy and cardiac rehabilitation procedures). Clinical outcomes were investigated by 11 studies, while 47 studies focused on surrogate endpoints. The most studied outcome was restenosis following coronary angioplasty (34 studies). Heterogeneity among studies was present. For the effect of the ACE I/D polymorphism on the response to treatment for the remaining outcomes (coronary events, endothelial dysfunction, left ventricular remodeling, progression/regression of atherosclerosis), individual studies showed significance; however, results were discrepant and inconsistent. Conclusion: In view of the available evidence, genetic testing of the ACE I/D polymorphism prior to clinical decision making is not currently justified. The relation between ACE genetic variation and response to treatment in CAD remains an unresolved issue. The results of long-term and properly designed prospective studies hold the promise for pharmacogenetically tailored therapy in CAD.

  10. Effect of Playing Video Games on Laparoscopic Skills Performance: A Systematic Review.

    Science.gov (United States)

    Glassman, Daniel; Yiasemidou, Marina; Ishii, Hiro; Somani, Bhaskar Kumar; Ahmed, Kamran; Biyani, Chandra Shekhar

    2016-02-01

    The advances in both video games and minimally invasive surgery have led many to consider the potential positive relationship between the two. This review aims to evaluate the outcomes of studies that investigated the correlation between video game skills and performance in laparoscopic surgery. A systematic search was conducted on the PubMed/Medline and EMBASE databases for MeSH terms and keywords including "video games and laparoscopy," "computer games and laparoscopy," "Xbox and laparoscopy," "Nintendo Wii and laparoscopy," and "PlayStation and laparoscopy." Cohort studies, case reports, letters, editorials, bulletins, and reviews were excluded. Studies in English, with task performance as the primary outcome, were included. The search period for this review was 1950 to December 2014. There were 57 abstracts identified: 4 of these were found to be duplicates and 32 were found to be nonrelevant to the research question. Overall, 21 full texts were assessed; 15 were excluded according to the Medical Education Research Study Quality Instrument quality assessment criteria. The five studies included in this review were randomized controlled trials. Playing video games was found to reduce error in two studies (P values of 0.002 and 0.045). For the same studies, however, several other metrics assessed were not significantly different between the control and intervention groups. One study showed a decrease in time for the group that played video games (P value 0.037) for one of two laparoscopic tasks performed. In the same study, however, when the groups were reversed (the initial control group became the intervention group and vice versa), a difference was not demonstrated (P for peg transfer 0.465, P for cobra rope 0.185). Finally, two further studies found no statistical difference between the game-playing group and the control group's performance. There is a very limited amount of evidence to support that the use of video games enhances surgical simulation performance.

  11. Semantic-based surveillance video retrieval.

    Science.gov (United States)

    Hu, Weiming; Xie, Dan; Fu, Zhouyu; Zeng, Wenrong; Maybank, Steve

    2007-04-01

    Visual surveillance produces large amounts of video data. Effective indexing and retrieval from surveillance video databases are very important. Although there are many ways to represent the content of video clips in current video retrieval algorithms, there still exists a semantic gap between users and retrieval systems. Visual surveillance systems supply a platform for investigating semantic-based video retrieval. In this paper, a semantic-based video retrieval framework for visual surveillance is proposed. A cluster-based tracking algorithm is developed to acquire motion trajectories. The trajectories are then clustered hierarchically using the spatial and temporal information, to learn activity models. A hierarchical structure of semantic indexing and retrieval of object activities, where each individual activity automatically inherits all the semantic descriptions of the activity model to which it belongs, is proposed for accessing video clips and individual objects at the semantic level. The proposed retrieval framework supports various queries including queries by keywords, multiple object queries, and queries by sketch. For multiple object queries, succession and simultaneity restrictions, together with depth and breadth first orders, are considered. For sketch-based queries, a method for matching trajectories drawn by users to spatial trajectories is proposed. The effectiveness and efficiency of our framework are tested in a crowded traffic scene.
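    One plausible way to score a user-drawn sketch against stored spatial trajectories (an illustrative metric, not the matching method the paper actually proposes) is to resample both trajectories to a fixed number of points and average the point-wise distances:

```python
import numpy as np

def trajectory_distance(a, b, n_points=20):
    """Mean point-to-point distance between two 2-D trajectories after
    resampling both to n_points via linear interpolation."""
    def resample(t):
        t = np.asarray(t, dtype=float)
        idx = np.linspace(0, len(t) - 1, n_points)
        xs = np.interp(idx, np.arange(len(t)), t[:, 0])
        ys = np.interp(idx, np.arange(len(t)), t[:, 1])
        return np.stack([xs, ys], axis=1)
    return float(np.mean(np.linalg.norm(resample(a) - resample(b), axis=1)))

# A user-drawn sketch query against a tiny "database" of stored trajectories.
sketch = [[0, 0], [1, 1], [2, 2]]
stored = {"obj1": [[0, 0], [1, 1], [2, 2]],
          "obj2": [[0, 5], [1, 6], [2, 7]]}
best = min(stored, key=lambda k: trajectory_distance(sketch, stored[k]))
print(best)  # → obj1
```

    Resampling makes the score insensitive to how many points each trajectory has, which matters when comparing a hand-drawn sketch to tracker output.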

  12. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Wireless and Internet video applications are inherently subject to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  13. Pressure Ulcer Risk and Prevention Practices in Pediatric Patients: A Secondary Analysis of Data from the National Database of Nursing Quality Indicators®.

    Science.gov (United States)

    Razmus, Ivy; Bergquist-Beringer, Sandra

    2017-01-01

    Little is known about pressure ulcer prevention practice among pediatric patients. To describe the frequency of pressure ulcer risk assessment in pediatric patients and pressure ulcer prevention intervention use overall and by hospital unit type, a descriptive secondary analysis was performed of data submitted to the National Database for Nursing Quality Indicators® (NDNQI®) for at least 3 of the 4 quarters in 2012. Relevant data on pressure ulcer risk from 271 hospitals across the United States extracted from the NDNQI database included patient skin and pressure ulcer risk assessment on admission, time since the last pressure ulcer risk assessment, method used to assess pressure ulcer risk, and risk status. Extracted data on pressure ulcer prevention included skin assessment, pressure-redistribution surface use, routine repositioning, nutritional support, and moisture management. These data were organized by unit type and merged with data on hospital characteristics for the analysis. The sample included 39 984 patients ages 1 day to 18 years on 678 pediatric acute care units (general pediatrics, pediatric critical care units, neonatal intensive care units, pediatric step-down units, and pediatric rehabilitation units). Descriptive statistics were used to analyze study data. Most of the pediatric patients (33 644; 89.2%) were assessed for pressure ulcer risk within 24 hours of admission. The Braden Q Scale was frequently used to assess risk on general pediatrics units (75.4%), pediatric step-down units (85.5%), pediatric critical care units (81.3%), and pediatric rehabilitation units (56.1%). In the neonatal intensive care units, another scale or method was used more often (55% to 60%) to assess pressure ulcer risk. Of the 11 203 pediatric patients (39%) determined to be at risk for pressure ulcers, the majority (10 741, 95.8%) received some kind of pressure ulcer prevention intervention during the 24 hours preceding the NDNQI pressure ulcer survey. 
The frequency

  14. Specialist Bibliographic Databases.

    Science.gov (United States)

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.


  16. 4K Video Traffic Prediction using Seasonal Autoregressive Modeling

    Directory of Open Access Journals (Sweden)

    D. R. Marković

    2017-06-01

    Full Text Available From the perspective of the average viewer, high-definition video streams such as HD (High Definition) and UHD (Ultra HD) are increasing their Internet presence year over year. This is not surprising, given the expansion of HD streaming services such as YouTube, Netflix, etc. High-definition video streams are therefore starting to challenge network resource allocation with their bandwidth requirements and statistical characteristics. Analysis and modeling of this demanding video traffic is essential for better quality of service and quality of experience support. In this paper we use an easy-to-apply statistical model for prediction of 4K video traffic. Namely, seasonal autoregressive modeling is applied to the prediction of 4K video traffic encoded with HEVC (High Efficiency Video Coding). Analysis and modeling were performed within the R programming environment using over 17,000 high-definition video frames. It is shown that the proposed methodology provides good accuracy in high-definition video traffic modeling.
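
    The seasonal autoregressive idea above can be sketched with ordinary least squares, assuming a hypothetical per-frame byte trace whose period mimics a GoP-like structure (the paper's actual model, data, and R workflow are not reproduced here):

```python
import numpy as np

def fit_seasonal_ar(y, season):
    """Fit y[t] = c + a1*y[t-1] + a2*y[t-season] by least squares."""
    t = np.arange(season, len(y))
    X = np.column_stack([np.ones_like(t, dtype=float), y[t - 1], y[t - season]])
    coef, *_ = np.linalg.lstsq(X, y[t], rcond=None)
    return coef

def predict_next(y, coef, season):
    c, a1, a2 = coef
    return c + a1 * y[-1] + a2 * y[-season]

# Synthetic per-frame sizes with a GoP-like period of 12 frames (assumption)
rng = np.random.default_rng(0)
season = 12
base = np.tile(np.linspace(40_000, 10_000, season), 100)  # I/P/B-like pattern
y = base + rng.normal(0, 500, base.size)

coef = fit_seasonal_ar(y, season)
pred = predict_next(y, coef, season)  # one-step-ahead traffic forecast
```

    The seasonal lag carries most of the predictive power here, mirroring the periodic frame-size pattern that GoP-structured encoders produce.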

  17. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available

  18. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available

  19. Detection of Upscale-Crop and Partial Manipulation in Surveillance Video Based on Sensor Pattern Noise

    Science.gov (United States)

    Hyun, Dai-Kyung; Ryu, Seung-Jin; Lee, Hae-Yeoun; Lee, Heung-Kyu

    2013-01-01

    In many court cases, surveillance videos are used as significant evidence. Because surveillance videos can easily be forged, such forgeries may cause serious social problems, such as the conviction of an innocent person. Nevertheless, little research has been done on the forgery of surveillance videos. This paper proposes a forensic technique to detect forgeries of surveillance video based on sensor pattern noise (SPN). We exploit the scaling invariance of the minimum average correlation energy Mellin radial harmonic (MACE-MRH) correlation filter to reliably unveil traces of upscaling in videos. By excluding the high-frequency components of the investigated video and adaptively choosing the size of the local search window, the proposed method effectively localizes partially manipulated regions. Empirical evidence from a large database of test videos, including RGB (Red, Green, Blue)/infrared video, dynamic-/static-scene video and compressed video, indicates the superior performance of the proposed method. PMID:24051524
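
    The SPN principle this record builds on (a camera's fixed pattern noise acting as a fingerprint) can be illustrated with a toy NumPy sketch. The 3x3-mean residual filter, the averaged reference estimate, and plain normalized correlation below are deliberate simplifications, not the paper's MACE-MRH filter:

```python
import numpy as np

def noise_residual(img):
    """Crude noise residual: image minus its 3x3 local mean (a stand-in
    for the wavelet denoising filter typically used in SPN work)."""
    acc = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return img - acc / 9.0

def ncc(a, b):
    """Normalized cross-correlation between two same-sized arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(1)
spn = rng.normal(0.0, 5.0, (64, 64))  # the camera's fixed pattern noise
frames = [rng.normal(128.0, 10.0, (64, 64)) + spn for _ in range(20)]

# Reference SPN estimate: average residuals of many frames from one camera,
# so scene content averages out while the fixed pattern survives
reference = np.mean([noise_residual(f) for f in frames], axis=0)

same_cam = rng.normal(128.0, 10.0, (64, 64)) + spn   # new frame, same camera
other_cam = rng.normal(128.0, 10.0, (64, 64))        # frame lacking this SPN

c_same = ncc(noise_residual(same_cam), reference)
c_other = ncc(noise_residual(other_cam), reference)
```

    A frame from the fingerprinted camera correlates clearly with the reference pattern, while a foreign frame does not; forgery detectors localize regions where that correlation collapses.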

  20. Content-based video indexing and searching with wavelet transformation

    Science.gov (United States)

    Stumpf, Florian; Al-Jawad, Naseer; Du, Hongbo; Jassim, Sabah

    2006-05-01

    Biometric databases form an essential tool in the fight against international terrorism, organised crime and fraud. Various government and law enforcement agencies have their own biometric databases consisting of combinations of fingerprints, iris codes, face images/videos and speech records for an increasing number of persons. In many cases the personal data linked to biometric records are incomplete and/or inaccurate. Moreover, biometric data in different databases for the same individual may be recorded with different personal details. Following the recent terrorist atrocities, law enforcement agencies collaborate more than before and rely more heavily on database sharing. In such an environment, reliable biometric-based identification must determine not only who you are but also who else you are. In this paper we propose a compact content-based video signature and indexing scheme that can facilitate the retrieval of multiple records in face biometric databases that belong to the same person even if their associated personal data are inconsistent. We assess the performance of our system using a benchmark audiovisual face biometric database that has multiple videos for each subject but with different identity claims. We demonstrate that retrieving the relatively small number of videos that are nearest, in terms of the proposed index, to any video in the database yields a significant proportion of that individual's biometric data.

  1. Video Design Games

    DEFF Research Database (Denmark)

    Smith, Rachel Charlotte; Christensen, Kasper Skov; Iversen, Ole Sejer

    We introduce Video Design Games to train educators in teaching design. The Video Design Game is a workshop format consisting of three rounds in which participants observe, reflect and generalize based on video snippets from their own practice. The paper reports on a Video Design Game workshop...... in which 25 educators as part of a digital fabrication and design program were able to critically reflect on their teaching practice....

  2. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence is generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  3. Video Browsing on Handheld Devices

    Science.gov (United States)

    Hürst, Wolfgang

    Recent improvements in processing power, storage space, and video codec development now enable users to play back video on their handheld devices in reasonable quality. However, given the form factor restrictions of such a mobile device, screen size remains a natural limit and - as the term "handheld" implies - always will be a critical resource. This is true not only for video but for any data processed on such devices. For this reason, developers have come up with new and innovative ways to deal with large documents in such limited scenarios. For example, on the iPhone, innovative techniques such as flicking have been introduced to skim large lists of text (e.g. hundreds of entries in your music collection). Automatically adapting the zoom level to, for example, the width of table cells when double tapping on the screen enables reasonable browsing of web pages that were originally designed for large, desktop-PC-sized screens. A multi-touch interface allows you to easily zoom in and out of large text documents and images using two fingers. In the next section, we will show that advanced techniques to browse large video files have been developed in recent years as well. However, if you look at state-of-the-art video players on mobile devices, normally just simple, VCR-like controls are supported (at least at the time of this writing) that only allow users to start, stop, and pause video playback. If supported at all, browsing and navigation functionality is often restricted to simple skipping of chapters via two single buttons for backward and forward navigation and a small, and thus not very sensitive, timeline slider.

  4. The Danish Testicular Cancer database

    DEFF Research Database (Denmark)

    Daugaard, Gedske; Kier, Maria Gry Gundgaard; Bandak, Mikkel

    2016-01-01

    AIM: The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC......) in Denmark, that is, by identifying risk factors for relapse, toxicity related to treatment, and focusing on late effects. STUDY POPULATION: All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data...... collection has been performed from 1984 to 2007 and from 2013 onward, respectively. MAIN VARIABLES AND DESCRIPTIVE DATA: The retrospective DaTeCa database contains detailed information with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function...

  5. ADAPTIVE STREAMING OVER HTTP (DASH) FOR VIDEO STREAMING APPLICATIONS

    Directory of Open Access Journals (Sweden)

    I Made Oka Widyantara

    2015-12-01

    Full Text Available This paper analyzes an Internet-based video streaming service over communication media with variable bit rates. The proposed scheme uses Dynamic Adaptive Streaming over HTTP (DASH) on an Internet network, building on the Hyper Text Transfer Protocol (HTTP). DASH technology allows a video to be segmented into several packages that will be streamed. The initial DASH stage compresses the source video to lower bit rates using the H.26 video codec. The compressed video is then segmented using MP4Box, which generates streaming packets of a specified duration. These packets are assembled into a streaming manifest, the Media Presentation Description (MPD), known as MPEG-DASH. The MPEG-DASH video streams run on a platform with the integrated bitdash player. With this scheme, the video has several bit-rate variants, which gives rise to scalability of the streaming video service on the client side. The main target of the mechanism is smooth MPEG-DASH video playback on the client. The simulation results show that the MPEG-DASH-based scalable video streaming scheme is able to improve display quality on the client side, where video buffering can be kept constant and smooth for the duration of playback.
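
    The client-side scalability described above rests on the client picking among the bit-rate variants advertised in the MPD. A minimal throughput-based selection sketch, assuming an illustrative bit-rate ladder and safety margin (not values from the paper):

```python
# Representations a server might advertise in the MPD, in kbit/s
# (illustrative ladder, not taken from the paper)
LADDER = [235, 750, 1750, 4300, 8000]

def pick_representation(throughput_kbps, ladder=LADDER, safety=0.8):
    """Throughput-based adaptation: choose the highest advertised bit rate
    that fits within a safety margin of the measured download throughput."""
    usable = safety * throughput_kbps
    candidates = [r for r in ladder if r <= usable]
    # fall back to the lowest representation when even it does not fit
    return candidates[-1] if candidates else ladder[0]
```

    For example, a measured throughput of 3000 kbit/s with the 0.8 margin leaves 2400 kbit/s usable, so the client fetches the 1750 kbit/s segments; real players add buffer-level logic on top of this.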

  6. The Children's Video Marketplace.

    Science.gov (United States)

    Ducey, Richard V.

    This report examines a growing submarket, the children's video marketplace, which comprises broadcast, cable, and video programming for children 2 to 11 years old. A description of the tremendous growth in the availability and distribution of children's programming is presented, the economics of the children's video marketplace are briefly…

  7. Video Self-Modeling

    Science.gov (United States)

    Buggey, Tom; Ogle, Lindsey

    2012-01-01

    Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…

  8. Efficacy of Ultrasound-Guided Serratus Plane Block on Postoperative Quality of Recovery and Analgesia After Video-Assisted Thoracic Surgery: A Randomized, Triple-Blind, Placebo-Controlled Study.

    Science.gov (United States)

    Kim, Do-Hyeong; Oh, Young Jun; Lee, Jin Gu; Ha, Donghun; Chang, Young Jin; Kwak, Hyun Jeong

    2018-04-01

    The optimal regional technique for analgesia and improved quality of recovery after video-assisted thoracic surgery (a procedure associated with considerable postoperative pain) has not been established. The main objective in this study was to compare quality of recovery in patients undergoing serratus plane block (SPB) with either ropivacaine or normal saline on the first postoperative day. Secondary outcomes were analgesic outcomes, including postoperative pain intensity and opioid consumption. Ninety patients undergoing video-assisted thoracic surgery were randomized to receive ultrasound-guided SPB with 0.4 mL/kg of either 0.375% ropivacaine (SPB group) or normal saline (control group) after anesthetic induction. The primary outcome was the 40-item Quality of Recovery (QoR-40) score at 24 hours after surgery. The QoR-40 questionnaire was completed by patients the day before surgery and on postoperative days 1 and 2. Pain scores, opioid consumption, and adverse events were assessed for 2 days postoperatively. Eighty-five patients completed the study: 42 in the SPB group and 43 in the control group. The global QoR-40 scores on both postoperative days 1 and 2 were significantly higher in the SPB group than in the control group (estimated mean difference 8.5, 97.5% confidence interval [CI], 2.1-15.0, and P = .003; 8.5, 97.5% CI, 2.0-15.1, and P = .004, respectively). The overall mean difference between the SPB and control groups was 8.5 (95% CI, 3.3-13.8; P = .002). Pain scores at rest and opioid consumption were significantly lower up to 6 hours after surgery in the SPB group than in the control group. Cumulative opioid consumption was significantly lower up to 24 hours postoperatively in the SPB group. Single-injection SPB with ropivacaine enhanced the quality of recovery for 2 days postoperatively and improved postoperative analgesia during the early postoperative period in patients undergoing video-assisted thoracic surgery.

  9. The Danish Testicular Cancer database.

    Science.gov (United States)

    Daugaard, Gedske; Kier, Maria Gry Gundgaard; Bandak, Mikkel; Mortensen, Mette Saksø; Larsson, Heidi; Søgaard, Mette; Toft, Birgitte Groenkaer; Engvad, Birte; Agerbæk, Mads; Holm, Niels Vilstrup; Lauritsen, Jakob

    2016-01-01

    The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC) in Denmark, that is, by identifying risk factors for relapse, toxicity related to treatment, and focusing on late effects. All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data collection has been performed from 1984 to 2007 and from 2013 onward, respectively. The retrospective DaTeCa database contains detailed information with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function, lung function, etc. A questionnaire related to late effects has been conducted, which includes questions regarding social relationships, life situation, general health status, family background, diseases, symptoms, use of medication, marital status, psychosocial issues, fertility, and sexuality. TC survivors alive on October 2014 were invited to fill in this questionnaire including 160 validated questions. Collection of questionnaires is still ongoing. A biobank including blood/sputum samples for future genetic analyses has been established. Both samples related to DaTeCa and DMCG DaTeCa database are included. The prospective DMCG DaTeCa database includes variables regarding histology, stage, prognostic group, and treatment. The DMCG DaTeCa database has existed since 2013 and is a young clinical database. It is necessary to extend the data collection in the prospective database in order to answer quality-related questions. Data from the retrospective database will be added to the prospective data. This will result in a large and very comprehensive database for future studies on TC patients.

  10. Enhance Video Film using Retnix method

    Science.gov (United States)

    Awad, Rasha; Al-Zuky, Ali A.; Al-Saleh, Anwar H.; Mohamad, Haidar J.

    2018-05-01

    An enhancement technique is used to improve the quality of the studied video. Statistics such as the mean and standard deviation are used as criteria within this paper, applied to each video clip, which is divided into 80 images. The studied filming environments have different light intensities (315, 566, and 644 Lux). These different environments approximate the conditions of outdoor filming. The outputs of the suggested algorithm are compared with the results before applying it. The method is applied in two ways: first, to the full video clip, to obtain the enhanced film; second, to every individual image, after which the enhanced images are compiled into the enhanced film. This paper shows that the enhancement technique yields a good-quality video film according to the statistical criteria, and its use is recommended in different applications.
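
    As a rough illustration of using mean and standard deviation as enhancement criteria, the sketch below linearly remaps a frame toward target statistics; the target values and the mapping are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def normalize_frame(frame, target_mean=128.0, target_std=48.0):
    """Linearly remap a frame so its mean/std approach the targets, then
    clip to the 8-bit range. Targets are illustrative, not the paper's."""
    f = frame.astype(float)
    std = f.std() or 1.0  # guard against a perfectly flat frame
    out = (f - f.mean()) * (target_std / std) + target_mean
    return np.clip(out, 0.0, 255.0)

# A dark, low-contrast toy "frame"
dim = np.full((4, 4), 30.0)
dim[0, 0] = 60.0
enh = normalize_frame(dim)
```

    Applying this per frame (the paper's second mode) keeps each image individually balanced; applying one mapping to the whole clip (the first mode) avoids flicker between frames at the cost of per-frame optimality.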

  11. Development and Validation of a Preprocedural Risk Score to Predict Access Site Complications After Peripheral Vascular Interventions Based on the Vascular Quality Initiative Database

    Directory of Open Access Journals (Sweden)

    Daniel Ortiz

    2016-01-01

    Full Text Available Purpose: Access site complications following peripheral vascular intervention (PVI are associated with prolonged hospitalization and increased mortality. Prediction of access site complication risk may optimize PVI care; however, there is no tool designed for this. We aimed to create a clinical scoring tool to stratify patients according to their risk of developing access site complications after PVI. Methods: The Society for Vascular Surgery’s Vascular Quality Initiative database yielded 27,997 patients who had undergone PVI at 131 North American centers. Clinically and statistically significant preprocedural risk factors associated with in-hospital, post-PVI access site complications were included in a multivariate logistic regression model, with access site complications as the outcome variable. A predictive model was developed with a random sample of 19,683 (70% PVI procedures and validated in 8,314 (30%. Results: Access site complications occurred in 939 (3.4% patients. The risk tool predictors are female gender, age > 70 years, white race, bedridden ambulatory status, insulin-treated diabetes mellitus, prior minor amputation, procedural indication of claudication, and nonfemoral arterial access site (model c-statistic = 0.638. Of these predictors, insulin-treated diabetes mellitus and prior minor amputation were protective of access site complications. The discriminatory power of the risk model was confirmed by the validation dataset (c-statistic = 0.6139. Higher risk scores correlated with increased frequency of access site complications: 1.9% for low risk, 3.4% for moderate risk and 5.1% for high risk. Conclusions: The proposed clinical risk score based on eight preprocedural characteristics is a tool to stratify patients at risk for post-PVI access site complications. The risk score may assist physicians in identifying patients at risk for access site complications and selection of patients who may benefit from bleeding avoidance
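
    A preprocedural score like the one described is typically applied bedside as an additive point model over the listed predictors. The sketch below shows that mechanic only; the point values and band cutoffs are purely hypothetical stand-ins, not the published weights:

```python
# Hypothetical points loosely mirroring the abstract's predictors
# (real weights come from the published model, not from here)
POINTS = {
    "female": 1, "age_over_70": 1, "white": 1, "bedridden": 2,
    "claudication": 1, "nonfemoral_access": 2,
    # the abstract reports these two factors as protective, hence negative
    "insulin_diabetes": -1, "prior_minor_amputation": -1,
}

def risk_score(patient):
    """Sum points for every predictor flagged true in the patient record."""
    return sum(pts for name, pts in POINTS.items() if patient.get(name))

def risk_band(score):
    """Map a score to a low/moderate/high band (cutoffs are illustrative)."""
    if score <= 1:
        return "low"
    if score <= 3:
        return "moderate"
    return "high"

p = {"female": True, "age_over_70": True, "nonfemoral_access": True}
```

    In this toy parameterization the patient `p` scores 1 + 1 + 2 = 4 points and lands in the high band, which would flag them for bleeding-avoidance measures before PVI.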

  12. Inter-Rater Agreement of Pressure Ulcer Risk and Prevention Measures in the National Database of Nursing Quality Indicators(®) (NDNQI).

    Science.gov (United States)

    Waugh, Shirley Moore; Bergquist-Beringer, Sandra

    2016-06-01

    In this descriptive multi-site study, we examined inter-rater agreement on 11 National Database of Nursing Quality Indicators(®) (NDNQI(®) ) pressure ulcer (PrU) risk and prevention measures. One hundred twenty raters at 36 hospitals captured data from 1,637 patient records. At each hospital, agreement between the most experienced rater and each other team rater was calculated for each measure. In the ratings studied, 528 patients were rated as "at risk" for PrU and, therefore, were included in calculations of agreement for the prevention measures. Prevalence-adjusted kappa (PAK) was used to interpret inter-rater agreement because prevalence of single responses was high. The PAK values for eight measures indicated "substantial" to "near perfect" agreement between most experienced and other team raters: Skin assessment on admission (.977, 95% CI [.966-.989]), PrU risk assessment on admission (.978, 95% CI [.964-.993]), Time since last risk assessment (.790, 95% CI [.729-.852]), Risk assessment method (.997, 95% CI [.991-1.0]), Risk status (.877, 95% CI [.838-.917]), Any prevention (.856, 95% CI [.76-.943]), Skin assessment (.956, 95% CI [.904-1.0]), and Pressure-redistribution surface use (.839, 95% CI [.763-.916]). For three intervention measures, PAK values fell below the recommended value of ≥.610: Routine repositioning (.577, 95% CI [.494-.661]), Nutritional support (.500, 95% CI [.418-.581]), and Moisture management (.556, 95% CI [.469-.643]). Areas of disagreement were identified. Findings provide support for the reliability of 8 of the 11 measures. Further clarification of data collection procedures is needed to improve reliability for the less reliable measures. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
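
    The study interprets agreement with prevalence-adjusted kappa because single responses dominated. For two raters on binary items, the prevalence- and bias-adjusted form reduces to PABAK = 2·p_o − 1, where p_o is observed agreement; a minimal sketch (the study's computation over 11 measures and 36 sites is not reproduced):

```python
def pabak(ratings_a, ratings_b):
    """Prevalence- and bias-adjusted kappa for two raters on binary items:
    PABAK = 2 * observed_agreement - 1."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    agree = sum(a == b for a, b in zip(ratings_a, ratings_b))
    p_o = agree / len(ratings_a)
    return 2 * p_o - 1

# 20 patient records, raters disagree on 2 -> p_o = 0.9, PABAK = 0.8
a = [1] * 18 + [0, 0]
b = [1] * 17 + [0, 1, 0]
```

    Unlike Cohen's kappa, PABAK is not dragged down when nearly all responses fall in one category, which is exactly the high-prevalence situation the authors describe.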

  13. Database Vs Data Warehouse

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Data warehouse technology includes a set of concepts and methods that offer users useful information for decision making. The necessity to build a data warehouse arises from the necessity to improve the quality of information in the organization. The data, which proceed from different sources and have a variety of forms - both structured and unstructured - are filtered according to business rules and integrated into a single large data collection. Using informatics solutions, managers have understood that the data stored in operational systems - including databases - are an informational gold mine that must be exploited. Data warehouses have been developed to answer the increasing demands for complex analysis, which could not be properly met with operational databases. The present paper emphasizes some of the criteria that application developers can use in order to choose between a database solution and a data warehouse one.

  14. The Danish Depression Database

    DEFF Research Database (Denmark)

    Videbech, Poul Bror Hemming; Deleuran, Anette

    2016-01-01

    AIM OF DATABASE: The purpose of the Danish Depression Database (DDD) is to monitor and facilitate the improvement of the quality of the treatment of depression in Denmark. Furthermore, the DDD has been designed to facilitate research. STUDY POPULATION: Inpatients as well as outpatients...... with depression, aged above 18 years, and treated in the public psychiatric hospital system were enrolled. MAIN VARIABLES: Variables include whether the patient has been thoroughly somatically examined and has been interviewed about the psychopathology by a specialist in psychiatry. The Hamilton score as well...... as an evaluation of the risk of suicide are measured before and after treatment. Whether psychiatric aftercare has been scheduled for inpatients and the rate of rehospitalization are also registered. DESCRIPTIVE DATA: The database was launched in 2011. Every year since then ~5,500 inpatients and 7,500 outpatients...

  15. VBR video traffic models

    CERN Document Server

    Tanwir, Savera

    2014-01-01

    There has been a phenomenal growth in video applications over the past few years. An accurate traffic model of Variable Bit Rate (VBR) video is necessary for performance evaluation of a network design and for generating synthetic traffic that can be used for benchmarking a network. A large number of models for VBR video traffic have been proposed in the literature for different types of video in the past 20 years. Here, the authors have classified and surveyed these models and have also evaluated the models for H.264 AVC and MVC encoded video and discussed their findings.

  16. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  17. Intelligent video surveillance systems

    CERN Document Server

    Dufour, Jean-Yves

    2012-01-01

    Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the current millennium. Video analytics is intended to solve the problem of the incapability of exploiting video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects.The aims of this book are to highlight the operational attempts of video analytics, to identify possi

  18. Large-video-display-format conversion

    NARCIS (Netherlands)

    Haan, de G.

    2000-01-01

    High-quality video-format converters apply motion estimation and motion compensation to prevent jitter resulting from picture-rate conversion, and aliasing due to de-interlacing, in sequences with motion. Although initially considered as too expensive, high-quality conversion is now economically

  19. About subjective evaluation of adaptive video streaming

    Science.gov (United States)

    Tavakoli, Samira; Brunnström, Kjell; Garcia, Narciso

    2015-03-01

    The usage of HTTP Adaptive Streaming (HAS) technology by content providers is increasing rapidly. With the video content available in multiple qualities, HAS makes it possible to adapt the quality of the downloaded video to the current network conditions, providing smooth video playback. However, the time-varying video quality by itself introduces a new type of impairment. The quality adaptation can be done in different ways. In order to find the best adaptation strategy, maximizing users' perceptual quality, it is necessary to investigate the subjective perception of adaptation-related impairments. However, the novelty of these impairments and their comparatively long duration make most standardized assessment methodologies less suited for studying HAS degradations. Furthermore, in traditional testing methodologies, the quality of the video in audiovisual services is often evaluated separately and not in the presence of audio. Nevertheless, jointly evaluating the audio and the video within a subjective test is a relatively under-explored research field. In this work, we address the research question of determining the appropriate assessment methodology to evaluate sequences with time-varying quality due to adaptation. This was done by studying the influence of different adaptation-related parameters through two subjective experiments using a methodology developed to evaluate long test sequences. In order to study the impact of the presence of audio on quality assessment by the test subjects, one of the experiments was done in the presence of audio stimuli. The experimental results were subsequently compared with another experiment using the standardized single-stimulus Absolute Category Rating (ACR) methodology.
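
    ACR ratings such as those mentioned above are conventionally summarized as a mean opinion score (MOS) with a confidence interval. A stdlib-only sketch using a normal approximation (ITU-T practice uses the Student-t quantile; the votes below are illustrative):

```python
import statistics

def mos_with_ci(scores, z=1.96):
    """Mean opinion score and an approximate 95% CI over per-subject votes
    (normal approximation; ITU-T recommendations use the t quantile)."""
    mos = statistics.fmean(scores)
    half = z * statistics.stdev(scores) / len(scores) ** 0.5
    return mos, (mos - half, mos + half)

ratings = [4, 5, 3, 4, 4, 5, 3, 4]  # illustrative 5-point ACR votes
mos, (lo, hi) = mos_with_ci(ratings)
```

    Comparing adaptation strategies then amounts to checking whether their MOS confidence intervals separate, which is why the choice of assessment methodology for long, time-varying sequences matters.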

  20. Flip Video for Dummies

    CERN Document Server

    Hutsko, Joe

    2010-01-01

    The full-color guide to shooting great video with the Flip Video camera. The inexpensive Flip Video camera is currently one of the hottest must-have gadgets. It's portable and connects easily to any computer to transfer video you shoot onto your PC or Mac. Although the Flip Video camera comes with a quick-start guide, it lacks a how-to manual, and this full-color book fills that void! Packed with full-color screen shots throughout, Flip Video For Dummies shows you how to shoot the best possible footage in a variety of situations. You'll learn how to transfer video to your computer and then edi

  1. Real Time Face Quality Assessment for Face Log Generation

    DEFF Research Database (Denmark)

    Kamal, Nasrollahi; Moeslund, Thomas B.

    2009-01-01

    Summarizing a long surveillance video to just a few best quality face images of each subject, a face-log, is of great importance in surveillance systems. Face quality assessment is the back-bone for face log generation and improving the quality assessment makes the face logs more reliable....... Developing a real time face quality assessment system using the most important facial features and employing it for face logs generation are the concerns of this paper. Extensive tests using four databases are carried out to validate the usability of the system....

  2. The flux database concerted action

    International Nuclear Information System (INIS)

    Mitchell, N.G.; Donnelly, C.E.

    1999-01-01

    This paper summarizes the background to the UIR action on the development of a flux database for radionuclide transfer in soil-plant systems. The action is discussed in terms of the objectives, the deliverables and the progress achieved so far by the flux database working group. The paper describes the background to the current initiative and outlines specific features of the database and supporting documentation. Particular emphasis is placed on the proforma used for data entry, on the database help file and on the approach adopted to indicate data quality. Refs. 3 (author)

  3. Diseño, Construcción y Evaluación de una Pauta de Observación de Videos para Evaluar Calidad del Desempeño Docente Design, Construction, and Assessment of a Guideline for the Observation of Videos to Evaluate Quality of Teacher's Performance

    Directory of Open Access Journals (Sweden)

    Neva Milicic

    2008-11-01

    Full Text Available This article presents the design, construction, and metric evaluation of a guideline for the observation of school classrooms, whose objective is to evaluate the quality of teachers' performance. Ninety-two videos of teachers from 4th to 8th grade, previously evaluated in Chile by the National System of Evaluation of Professional Teaching Performance (Docentemás), were observed. The instrument shows appropriate levels of reliability and allows for an assessment complementary to the one performed by Docentemás, as it integrates additional and innovative variables into the evaluation of teaching performance. The guideline provides teachers with knowledge of the dimensions relevant to achieving quality learning, making it possible for them to evaluate themselves and to visualize aspects of their pedagogical practices that they could develop and/or improve.

  4. The effects of video games on laparoscopic simulator skills.

    Science.gov (United States)

    Jalink, Maarten B; Goris, Jetse; Heineman, Erik; Pierie, Jean-Pierre E N; ten Cate Hoedemaker, Henk O

    2014-07-01

Recently, there has been a growth in studies supporting the hypothesis that video games have positive effects on basic laparoscopic skills. This review discusses all studies directly related to these effects. A search of the PubMed and EMBASE databases was performed using synonymous terms for video games and laparoscopy. All available articles concerning video games and their effects on skills on any laparoscopic simulator (box trainer, virtual reality, and animal models) were selected. Video game experience has been related to higher baseline laparoscopic skills in different studies. There is currently, however, no standardized method for assessing video game experience, making it difficult to compare these studies. Several controlled experiments have nevertheless shown that video games can not only be used to improve basic laparoscopic skills in surgical novices, but also serve as a warm-up before laparoscopic surgery. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. The Danish Cardiac Rehabilitation Database

    DEFF Research Database (Denmark)

    Zwisler, Ann-Dorthe; Rossau, Henriette Knold; Nakano, Anne

    2016-01-01

AIM OF DATABASE: The Danish Cardiac Rehabilitation Database (DHRD) aims to improve the quality of cardiac rehabilitation (CR) to the benefit of patients with coronary heart disease (CHD). STUDY POPULATION: Hospitalized patients with CHD with stenosis on coronary angiography treated with percutaneous coronary intervention, coronary artery bypass grafting, or medication alone. Reporting is mandatory for all hospitals in Denmark delivering CR. The database was initially implemented in 2013 and was fully running from August 14, 2015, thus comprising data at a patient level from the latter date … hospitals annually, with 75% receiving one or more outpatient rehabilitation services by 2015. The database has not yet been running for a full year, which explains the use of approximations. CONCLUSION: The DHRD is an online, national quality improvement database on CR, aimed at patients with CHD…

  6. Despeckle filtering for ultrasound imaging and video II selected applications

    CERN Document Server

    Loizou, Christos P

    2015-01-01

In ultrasound imaging and video, visual perception is hindered by multiplicative speckle noise, which degrades image quality. Noise reduction is therefore essential, either for improving the quality of visual observation or as a pre-processing step for further automated analysis, such as image/video segmentation, texture analysis, and encoding in ultrasound imaging and video. The goal of the first book (volume 1 of 2) was to introduce the problem of speckle in ultrasound images and video, as well as the theoretical background, algorithmic steps, and the MATLAB™ code for the following group of despeckle filters:
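As a hedged illustration of one classic despeckle filter in this family (not code from the book; function names are mine), the Lee filter pulls each pixel toward its local mean with a weight derived from local statistics, so flat regions are smoothed hard while edges and texture are preserved:

```python
import numpy as np

def _box_mean(a, win):
    """Local mean over a win x win window, computed via an integral image."""
    pad = win // 2
    ap = np.pad(a, pad, mode="reflect")
    c = np.cumsum(np.cumsum(ap, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    s = c[win:, win:] - c[:-win, win:] - c[win:, :-win] + c[:-win, :-win]
    return s / (win * win)

def lee_despeckle(img, win=7, noise_var=None):
    """Lee-type adaptive filter for multiplicative speckle noise.

    weight -> 0 in flat regions (output ~ local mean, strong smoothing),
    weight -> 1 in textured/edge regions (output ~ original pixel).
    """
    img = np.asarray(img, dtype=np.float64)
    mean = _box_mean(img, win)
    var = _box_mean(img * img, win) - mean * mean    # local variance
    if noise_var is None:
        noise_var = float(np.mean(var))              # crude global noise estimate
    weight = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0.0, 1.0)
    return mean + weight * (img - mean)
```

On a uniform region corrupted by multiplicative noise, the filtered output has markedly lower variance than the input while keeping the same overall brightness.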

  7. REAL-TIME VIDEO SCALING BASED ON CONVOLUTION NEURAL NETWORK ARCHITECTURE

    OpenAIRE

    S Safinaz; A V Ravi Kumar

    2017-01-01

In recent years, video super-resolution techniques have become a mandatory requirement for obtaining high-resolution videos. Many super-resolution techniques have been researched, but video super-resolution, or scaling, remains a vital challenge. In this paper, we present a real-time video scaling method based on a convolutional neural network architecture to eliminate blurriness in images and video frames and to provide better reconstruction quality while scaling large datasets from lower resolution frames t...

  8. Effective deep learning training for single-image super-resolution in endomicroscopy exploiting video-registration-based reconstruction.

    Science.gov (United States)

    Ravì, Daniele; Szczotka, Agnieszka Barbara; Shakir, Dzhoshkun Ismail; Pereira, Stephen P; Vercauteren, Tom

    2018-06-01

Probe-based confocal laser endomicroscopy (pCLE) is a recent imaging modality that allows in vivo optical biopsies to be performed. The design of pCLE hardware, and its reliance on an optical fibre bundle, fundamentally limits the image quality: a few tens of thousands of fibres, each acting as the equivalent of a single-pixel detector, are assembled into a single fibre bundle. Video registration techniques can be used to estimate high-resolution (HR) images by exploiting the temporal information contained in a sequence of low-resolution (LR) images. However, the alignment of LR frames, required for the fusion, is computationally demanding and prone to artefacts. In this work, we propose a novel synthetic data generation approach to train exemplar-based Deep Neural Networks (DNNs). HR pCLE images with enhanced quality are recovered by models trained on pairs of estimated HR images (generated by the video registration algorithm) and realistic synthetic LR images. The performance of three different state-of-the-art DNN techniques was analysed on a Smart Atlas database of 8806 images from 238 pCLE video sequences. The results were validated through an extensive image quality assessment that takes into account different quality scores, including a Mean Opinion Score (MOS). Results indicate that the proposed solution produces an effective improvement in the quality of the reconstructed image. The proposed training strategy and associated DNNs allow us to perform convincing super-resolution of pCLE images.
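The training-pair idea can be sketched as follows. This is a hedged illustration, not the paper's pipeline: block-averaging stands in for fibre-bundle sampling, and the noise model and function names are assumptions.

```python
import numpy as np

def synth_lr(hr, factor=4, noise_sigma=2.0, rng=None):
    """Synthesize a realistic LR input from an estimated HR image.

    Block averaging mimics each fibre acting as a single-pixel detector;
    Gaussian noise is a crude stand-in for acquisition noise.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = hr.shape
    h, w = h - h % factor, w - w % factor
    hr = hr[:h, :w].astype(np.float64)
    lr = hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    lr = lr + rng.normal(0.0, noise_sigma, lr.shape)
    return np.clip(lr, 0, 255)

def make_training_pairs(hr_images, factor=4):
    """(LR, HR) pairs for training an exemplar-based SR network."""
    return [(synth_lr(hr, factor), hr) for hr in hr_images]
```

Each HR image here would itself be an output of the video-registration fusion described in the abstract; the network then learns the mapping from synthetic LR back to HR.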

  9. The Danish Smoking Cessation Database

    DEFF Research Database (Denmark)

    Rasmussen, Mette; Tønnesen, Hanne

    2016-01-01

Background: The Danish Smoking Cessation Database (SCDB) was established in 2001 as the first national healthcare register within the field of health promotion. Aim of the database: The aim of the SCDB is to document and evaluate smoking cessation (SC) interventions to assess and improve their quality. The database was also designed to function as a basis for register-based research projects. Study population: The population includes smokers in Denmark who have been receiving a face-to-face SC intervention offered by an SC clinic affiliated with the SCDB. SC clinics can be any organisation … ‐free. The database is increasingly used in register-based research.

  10. Stackfile Database

    Science.gov (United States)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves on the efficiency and analysis capabilities of existing database software, with greater flexibility and better documentation. It offers flexibility in the type of data that can be stored, and efficient retrieval across either the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. The package was developed with a focus on handling the requirements of repeat-track altimetry missions such as TOPEX and Jason; it was, however, designed to work with a wide variety of satellite measurement data (e.g., the Gravity Recovery And Climate Experiment, GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.

  11. Dimensioning Method for Conversational Video Applications in Wireless Convergent Networks

    Directory of Open Access Journals (Sweden)

    Raquel Perez Leal

    2007-12-01

Full Text Available New convergent services are becoming possible thanks to the expansion of IP networks, the availability of innovative advanced coding formats such as H.264, which reduce network bandwidth requirements while providing good video quality, and the rapid growth in the supply of dual-mode WiFi cellular terminals. This paper provides, first, a comprehensive overview of the subject, as several technologies are involved: the medium access protocol of IEEE 802.11, the H.264 advanced video coding standard, and conversational application characterization and recommendations. Second, the paper presents a new and simple dimensioning model for conversational video over wireless LAN. The WLAN is analyzed from the perspectives of optimal network throughput and video quality. The maximum number of simultaneous users derived from throughput is limited by the collisions taking place in the shared medium under the statistical contention protocol, while video quality is conditioned by the packet loss that the contention protocol introduces. Both approaches are analyzed within the scope of the advanced video codecs used in conversational video over IP, leading to the conclusion that dimensioning based on network throughput alone is not enough to ensure a satisfactory user experience: video quality also has to be taken into account. Finally, the proposed model has been applied to a real office scenario.
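The paper's central point, that capacity alone over-estimates how many calls a WLAN can carry, can be illustrated with a toy calculation. All numbers and the linear loss model below are illustrative assumptions, not the paper's model:

```python
def max_users_by_throughput(effective_wlan_mbps, per_call_mbps):
    """Upper bound from raw capacity alone."""
    return int(effective_wlan_mbps // per_call_mbps)

def max_users_by_quality(base_loss, loss_growth_per_user, max_tolerable_loss):
    """Collisions in the contention protocol grow with the number of users;
    cap users so packet loss stays below what the codec can conceal."""
    n = 0
    while base_loss + loss_growth_per_user * (n + 1) <= max_tolerable_loss:
        n += 1
    return n

def dimension(effective_wlan_mbps=20.0, per_call_mbps=0.384,
              base_loss=0.002, loss_growth_per_user=0.004,
              max_tolerable_loss=0.02):
    by_tput = max_users_by_throughput(effective_wlan_mbps, per_call_mbps)
    by_qual = max_users_by_quality(base_loss, loss_growth_per_user,
                                   max_tolerable_loss)
    # The binding constraint is often quality, not throughput.
    return min(by_tput, by_qual)
```

With these example figures, throughput would admit dozens of 384 kbit/s calls, but the packet-loss constraint caps the cell at a handful of simultaneous users, which is exactly the gap the paper's model is built to expose.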

  12. Dimensioning Method for Conversational Video Applications in Wireless Convergent Networks

    Directory of Open Access Journals (Sweden)

    Alonso JoséI

    2008-01-01

Full Text Available New convergent services are becoming possible thanks to the expansion of IP networks, the availability of innovative advanced coding formats such as H.264, which reduce network bandwidth requirements while providing good video quality, and the rapid growth in the supply of dual-mode WiFi cellular terminals. This paper provides, first, a comprehensive overview of the subject, as several technologies are involved: the medium access protocol of IEEE 802.11, the H.264 advanced video coding standard, and conversational application characterization and recommendations. Second, the paper presents a new and simple dimensioning model for conversational video over wireless LAN. The WLAN is analyzed from the perspectives of optimal network throughput and video quality. The maximum number of simultaneous users derived from throughput is limited by the collisions taking place in the shared medium under the statistical contention protocol, while video quality is conditioned by the packet loss that the contention protocol introduces. Both approaches are analyzed within the scope of the advanced video codecs used in conversational video over IP, leading to the conclusion that dimensioning based on network throughput alone is not enough to ensure a satisfactory user experience: video quality also has to be taken into account. Finally, the proposed model has been applied to a real office scenario.

  13. Segment scheduling method for reducing 360° video streaming latency

    Science.gov (United States)

    Gudumasu, Srinivas; Asbun, Eduardo; He, Yong; Ye, Yan

    2017-09-01

360° video is an emerging format in the media industry, enabled by the growing availability of virtual reality devices. It provides the viewer a new sense of presence and immersion. Compared to conventional rectilinear video (2D or 3D), 360° video poses a new and difficult set of engineering challenges for video processing and delivery. Enabling a comfortable and immersive user experience requires very high video quality and very low latency, while the large video file size makes delivering 360° video at scale challenging. Conventionally, 360° video represented in equirectangular or other projection formats can be encoded as a single standards-compliant bitstream using existing video codecs such as H.264/AVC or H.265/HEVC. Such a method usually needs very high bandwidth to provide an immersive user experience, and at the client side much of that bandwidth, and of the computational power used to decode the video, is wasted because the user only watches a small portion (i.e., the viewport) of the entire picture. Viewport-dependent 360° video processing and delivery approaches spend more bandwidth on the viewport than on non-viewport regions and are therefore able to reduce the overall transmission bandwidth. This paper proposes a dual-buffer segment scheduling algorithm for viewport-adaptive streaming methods to reduce latency when switching between high-quality viewports in 360° video streaming. The approach decouples the scheduling of viewport segments and non-viewport segments to ensure that the viewport segment requested matches the latest user head orientation. A base-layer buffer stores all lower-quality segments, and a viewport buffer stores high-quality viewport segments corresponding to the most recent head orientation. The scheduling scheme determines the viewport requesting time based on the buffer status and the head orientation. This paper also discusses how to deploy the proposed scheduling design for various viewport-adaptive video
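The dual-buffer decoupling can be sketched as a tiny scheduler. This is a hedged illustration of the idea only; class and method names, buffer depths, and the single-viewport-id model are my assumptions, not the paper's design:

```python
class DualBufferScheduler:
    """Keep a deep base-layer buffer of low-quality whole-sphere segments,
    and a shallow viewport buffer of high-quality tiles that tracks the
    latest head orientation; decide per tick which segment to request."""

    def __init__(self, base_target=8, viewport_target=2):
        self.base_target = base_target          # deep: survives stalls
        self.viewport_target = viewport_target  # shallow: tracks head motion
        self.base_buffer = []                   # segment indices
        self.viewport_buffer = []               # (segment_index, viewport_id)

    def next_request(self, next_index, head_viewport):
        # 1. Base layer first: playback must never stall.
        if len(self.base_buffer) < self.base_target:
            return ("base", next_index)
        # 2. Drop queued high-quality tiles that no longer match the head
        #    pose, so requests always reflect the latest orientation.
        self.viewport_buffer = [(i, v) for (i, v) in self.viewport_buffer
                                if v == head_viewport]
        # 3. Top up the shallow viewport buffer for the current orientation.
        if len(self.viewport_buffer) < self.viewport_target:
            return ("viewport", next_index, head_viewport)
        return None  # both buffers satisfied; idle this tick

    def on_downloaded(self, request):
        if request[0] == "base":
            self.base_buffer.append(request[1])
        else:
            self.viewport_buffer.append((request[1], request[2]))
```

Keeping the viewport buffer shallow is what reduces switching latency: after a head turn, at most `viewport_target` stale high-quality segments are discarded before requests match the new orientation.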

  14. Video Toroid Cavity Imager

    Energy Technology Data Exchange (ETDEWEB)

    Gerald, Rex E. II; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging to provide high-resolution spectral and visual information on the molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  15. Digital Video in Research

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2012-01-01

Is video becoming “the new black” in academia, and if so, what are the challenges? The integration of video in research methodology (for collection and analysis) is well known, but the use of “academic video” for dissemination is relatively new (Eriksson and Sørensen). The focus of this paper is academic… In the video, I appear (along with other researchers) and two Danish film directors, with excerpts from their film. My challenges included how to edit the academic video and organize the collaborative effort. I consider video editing a semiotic, transformative process of “reassembling” voices. In the discussion, I review academic video in terms of relevance and implications for research practice. The theoretical background is social constructivist, combining social semiotics (Kress, van Leeuwen, McCloud), visual anthropology (Banks, Pink) and dialogic theory (Bakhtin). The Bakhtinian notion of “voices…

  16. Reflections on academic video

    Directory of Open Access Journals (Sweden)

    Thommy Eriksson

    2012-11-01

    Full Text Available As academics we study, research and teach audiovisual media, yet rarely disseminate and mediate through it. Today, developments in production technologies have enabled academic researchers to create videos and mediate audiovisually. In academia it is taken for granted that everyone can write a text. Is it now time to assume that everyone can make a video essay? Using the online journal of academic videos Audiovisual Thinking and the videos published in it as a case study, this article seeks to reflect on the emergence and legacy of academic audiovisual dissemination. Anchoring academic video and audiovisual dissemination of knowledge in two critical traditions, documentary theory and semiotics, we will argue that academic video is in fact already present in a variety of academic disciplines, and that academic audiovisual essays are bringing trends and developments that have long been part of academic discourse to their logical conclusion.

  17. An analysis from the Quality Outcomes Database, Part 1. Disability, quality of life, and pain outcomes following lumbar spine surgery: predicting likely individual patient outcomes for shared decision-making.

    Science.gov (United States)

    McGirt, Matthew J; Bydon, Mohamad; Archer, Kristin R; Devin, Clinton J; Chotai, Silky; Parker, Scott L; Nian, Hui; Harrell, Frank E; Speroff, Theodore; Dittus, Robert S; Philips, Sharon E; Shaffrey, Christopher I; Foley, Kevin T; Asher, Anthony L

    2017-10-01

OBJECTIVE Quality and outcomes registry platforms lie at the center of many emerging evidence-driven reform models. Specifically, clinical registry data are progressively informing health care decision-making. In this analysis, the authors used data from a national prospective outcomes registry (the Quality Outcomes Database) to develop a predictive model for 12-month postoperative pain, disability, and quality of life (QOL) in patients undergoing elective lumbar spine surgery. METHODS Included in this analysis were 7618 patients who had completed 12 months of follow-up. The authors prospectively assessed baseline and 12-month patient-reported outcomes (PROs) via telephone interviews. The PROs assessed were those ascertained using the Oswestry Disability Index (ODI), EQ-5D, and numeric rating scale (NRS) for back pain (BP) and leg pain (LP). Variables analyzed for the predictive model included age, gender, body mass index, race, education level, history of prior surgery, smoking status, comorbid conditions, American Society of Anesthesiologists (ASA) score, symptom duration, indication for surgery, number of levels surgically treated, history of fusion surgery, surgical approach, receipt of workers' compensation, liability insurance, insurance status, and ambulatory ability. To create a predictive model, each 12-month PRO was treated as an ordinal dependent variable and a separate proportional-odds ordinal logistic regression model was fitted for each PRO. RESULTS There was a significant improvement in all PROs (p …). The most important predictors of disability, QOL, and pain outcomes following lumbar spine surgery were employment status, baseline NRS-BP scores, psychological distress, baseline ODI scores, level of education, workers' compensation status, symptom duration, race, baseline NRS-LP scores, ASA score, age, predominant symptom, smoking status, and insurance status. The prediction discrimination of the 4 separate novel predictive models was good, with a c-index of 0.69 for ODI and 0.69 for EQ-5

  18. Near Real-Time Automatic Data Quality Controls for the AERONET Version 3 Database: An Introduction to the New Level 1.5V Aerosol Optical Depth Data Product

    Science.gov (United States)

    Giles, D. M.; Holben, B. N.; Smirnov, A.; Eck, T. F.; Slutsker, I.; Sorokin, M. G.; Espenak, F.; Schafer, J.; Sinyuk, A.

    2015-12-01

The Aerosol Robotic Network (AERONET) has provided a database of aerosol optical depth (AOD) measured by surface-based Sun/sky radiometers for over 20 years. AERONET provides unscreened (Level 1.0) and automatically cloud-cleared (Level 1.5) AOD in near real-time (NRT), while manually inspected, quality-assured (Level 2.0) AOD becomes available after instrument field deployment (Smirnov et al., 2000). The need for quality-controlled NRT aerosol data has grown increasingly important. Applications of AERONET NRT data include satellite evaluation (e.g., MODIS, VIIRS, MISR, OMI), data synergism (e.g., MPLNET), verification of aerosol forecast models and reanalyses (e.g., GOCART, ICAP, NAAPS, MERRA), input to meteorological models (e.g., NCEP, ECMWF), and field campaign support (e.g., KORUS-AQ, ORACLES). In response to user needs for quality-controlled NRT data sets, the new Version 3 (V3) Level 1.5V product was developed with quality controls similar to those applied by hand to the Version 2 (V2) Level 2.0 data set. The AERONET cloud-screened (Level 1.5) NRT AOD database can be significantly impacted by data anomalies. The most significant anomalies include AOD diurnal dependence due to contamination or obstruction of the sensor head windows; anomalous AOD spectral dependence due to problems with filter degradation, instrument gains, or non-linear changes in calibration; and abnormal changes in temperature-sensitive wavelengths (e.g., 1020 nm) in response to anomalous sensor head temperatures. Other, less common AOD anomalies result from loose filters, uncorrected clock shifts, connection and electronic issues, and various solar eclipse episodes. Automatic quality control algorithms are applied to the new V3 Level 1.5 database to remove NRT AOD anomalies and produce the new AERONET V3 Level 1.5V AOD product. Results of the quality control algorithms are presented, and the V3 Level 1.5V AOD database is compared to the V2 Level 2.0 AOD database.
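Two of the anomaly classes above lend themselves to simple automated screens. The sketch below is illustrative only: the thresholds are assumptions for demonstration, not AERONET's actual Level 1.5V criteria, and the function names are mine.

```python
import numpy as np

def angstrom_exponent(aod_440, aod_870):
    """Spectral slope of AOD between 440 and 870 nm; filter degradation
    and calibration faults push it outside physically plausible bounds."""
    return -np.log(aod_440 / aod_870) / np.log(440.0 / 870.0)

def flag_spectral_anomaly(aod_440, aod_870, lo=-0.5, hi=3.0):
    """Flag samples whose Angstrom exponent is implausible (bounds assumed)."""
    ae = angstrom_exponent(aod_440, aod_870)
    return (ae < lo) | (ae > hi)

def flag_diurnal_anomaly(hours, aod, max_abs_slope=0.02):
    """A strong linear trend of AOD with time of day can indicate a
    contaminated or obstructed sensor-head window (threshold assumed)."""
    slope = np.polyfit(hours, aod, 1)[0]
    return abs(slope) > max_abs_slope
```

Real NRT screening would combine many such checks (temperature response, clock shifts, eclipse windows) and act per instrument, but the pattern, compute a physical diagnostic and compare against plausibility bounds, is the same.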

  19. The Important Elements of a Science Video

    Science.gov (United States)

    Harned, D. A.; Moorman, M.; McMahon, G.

    2012-12-01

New technologies have revolutionized the use of video as a means of communication. Films have become easier to create and to distribute. Video is omnipresent in our culture and supplements or even replaces writing in many applications. How can scientists and educators best use video to communicate scientific results? Video podcasts are being used in addition to journal, print, and online publications to communicate the relevance of scientific findings of the U.S. Geological Survey's (USGS) National Water-Quality Assessment (NAWQA) program to general audiences such as resource managers, educational groups, public officials, and the general public. In an effort to improve the production of science videos, a survey was developed to provide insight into effective science communication with video. Viewers of USGS podcast videos were surveyed using Likert response scaling to identify the important elements of science videos. The survey covered 120 scientists and educators attending the 2010 and 2011 Fall Meetings of the American Geophysical Union and the 2012 meeting of the National Monitoring Council. The median age of respondents was 44 years, with an education level of a Bachelor's degree or higher. Respondents reported that their primary sources for watching science videos were YouTube and science websites. Video length was the single most important element associated with reaching the greatest number of viewers. The surveys indicated a median length of 5 minutes as appropriate for a web video, with 5-7 minutes the 25th-75th percentiles. An illustration of the effect of length: a 5-minute and a 20-minute version of a USGS film on the effect of urbanization on water quality were made available on the same website; the short film has been downloaded 3 times more frequently than the longer version. The survey also showed that the most important elements to include in a science film are style elements including strong visuals, an engaging story, and a simple message, and

  20. How Does Neighborhood Quality Moderate the Association Between Online Video Game Play and Depression? A Population-Level Analysis of Korean Students.

    Science.gov (United States)

    Kim, Harris Hyun-Soo; Ahn, Sun Joo Grace

    2016-10-01

    The main objective of our study is to assess the relationship between playing online video games and mental wellbeing of adolescents based on a nationally representative sample. Data come from the Korean Children and Youth Panel Survey (KCYPS), a government-funded multiyear research project. Through a secondary analysis of W2 and W3 of data collected in 2011 and 2012, we examine the extent to which time spent playing online games is related to depression, as measured by a battery of items modeled after the abridged version of Center for Epidemiologic Studies Depression Scale Revised (CESD-R). For proper temporal ordering, the outcome variable is drawn from the latter wave (W3), whereas all time-lagged covariates are taken from the earlier wave (W2). Multilevel regression models show that more game playing is associated with greater depression. Findings also indicate that, net of individual-level variables (e.g., gender, health, family background), living in a community with more divorced families adds to adolescent depression. Finally, a cross-level interaction is observed: the positive association between game playing and depression is more pronounced in an area characterized by a lower aggregate divorce rate.

  1. Integrating Usability Evaluation into Model-Driven Video Game Development

    OpenAIRE

    Fernandez , Adrian; Insfran , Emilio; Abrahão , Silvia; Carsí , José ,; Montero , Emanuel

    2012-01-01

Part 3: Short Papers; International audience; The increasing complexity of video game development highlights the need for design and evaluation methods that enhance quality and reduce time and cost. In this context, Model-Driven Development approaches seem very promising, since a video game can be obtained by transforming platform-independent models into platform-specific models that can in turn be transformed into code. Although this approach is starting to be used for video game de...

  2. Extending Database Integration Technology

    National Research Council Canada - National Science Library

    Buneman, Peter

    1999-01-01

    Formal approaches to the semantics of databases and database languages can have immediate and practical consequences in extending database integration technologies to include a vastly greater range...

  3. Gradual cut detection using low-level vision for digital video

    Science.gov (United States)

    Lee, Jae-Hyun; Choi, Yeun-Sung; Jang, Ok-bae

    1996-09-01

Digital video computing and organization is one of the important issues in multimedia systems, signal compression, and databases. Video should be segmented into shots to be used for identification and indexing. This approach requires a suitable method for automatically locating cut points in order to separate the shots in a video. Automatic cut detection to isolate shots has received considerable attention due to many practical applications: video databases, browsing, authoring systems, retrieval, and movies. Previous studies were based on a set of difference mechanisms that measure the content changes between video frames, but they could not detect special effects such as dissolve, wipe, fade-in, fade-out, and structured flashing. In this paper, a new cut detection method for gradual transitions based on computer vision techniques is proposed. Experimental results from applying it to commercial video are then presented and evaluated.
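For context, the difference-mechanism baseline that this paper improves on can be sketched in a few lines: a hard cut is declared when the histogram distance between consecutive frames exceeds a threshold. This is a generic illustration of that baseline (names and threshold are mine), not the paper's gradual-transition method, which such a detector misses by design.

```python
import numpy as np

def gray_hist(frame, bins=64):
    """Normalized grayscale histogram of one frame (2-D uint8 array)."""
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

def detect_hard_cuts(frames, threshold=0.5):
    """Return indices i where a cut occurs between frame i-1 and frame i,
    using the L1 distance between consecutive frame histograms."""
    cuts = []
    prev = gray_hist(frames[0])
    for i in range(1, len(frames)):
        cur = gray_hist(frames[i])
        if np.abs(cur - prev).sum() > threshold:   # large content change
            cuts.append(i)
        prev = cur
    return cuts
```

A dissolve or fade spreads the same total change over many small inter-frame differences, so no single difference crosses the threshold; detecting those gradual transitions is exactly the gap the proposed low-level-vision method addresses.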

  4. The Danish Testicular Cancer database

    Directory of Open Access Journals (Sweden)

    Daugaard G

    2016-10-01

Full Text Available Gedske Daugaard, Maria Gry Gundgaard Kier, Mikkel Bandak, Mette Saksø Mortensen, Heidi Larsson, Mette Søgaard, Birgitte Groenkaer Toft, Birte Engvad, Mads Agerbæk, Niels Vilstrup Holm, Jakob Lauritsen (Departments of Oncology and Pathology, Copenhagen University Hospital, Rigshospitalet, Copenhagen; Departments of Clinical Epidemiology and Oncology, Aarhus University Hospital, Aarhus; Departments of Pathology and Oncology, Odense University Hospital, Odense, Denmark) Aim: The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC) in Denmark, that is, by identifying risk factors for relapse and toxicity related to treatment, and by focusing on late effects. Study population: All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data collection has been performed from 1984 to 2007 and from 2013 onward, respectively. Main variables and descriptive data: The retrospective DaTeCa database contains detailed information, with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function, lung function, etc. A questionnaire related to late effects has been conducted, which includes questions regarding social relationships, life situation, general health status, family background, diseases, symptoms, use of medication, marital status, psychosocial issues, fertility, and sexuality. TC survivors alive in October 2014 were invited to fill in this questionnaire, which includes 160 validated questions

  5. Development of an emergency medical video multiplexing transport system. Aiming at the nation wide prehospital care on ambulance.

    Science.gov (United States)

    Nagatuma, Hideaki

    2003-04-01

The Emergency Medical Video Multiplexing Transport System (EMTS) is designed to support prehospital care by delivering high-quality live video streams of patients in an ambulance to emergency doctors in a remote hospital via satellite communications. Its key feature is that EMTS divides a patient's live video scene into four pieces and transports the four video streams over four separate network channels. By multiplexing four video streams, EMTS is able to transport high-quality video through low-data-rate networks such as satellite communications and cellular phone networks. To transport live video streams continuously, EMTS adopts the Real-time Transport Protocol/Real-time Control Protocol as its network protocol, and video stream data are compressed in the Moving Picture Experts Group 4 (MPEG-4) format. Because EMTS recombines the four video streams by checking video frame numbers, it uses a refresh packet that initializes the server's frame numbers to keep the four streams synchronized.
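The core split-and-reassemble idea can be shown on a single frame. This is a hedged, frame-level sketch only (function names are mine); the real system streams the four MPEG-4 sub-streams over RTP/RTCP and aligns them by frame number.

```python
import numpy as np

def split_frame(frame):
    """Divide a grayscale frame (H x W, both even) into 4 quadrant
    sub-streams, one per network channel."""
    h, w = frame.shape[0] // 2, frame.shape[1] // 2
    return [frame[:h, :w], frame[:h, w:],    # top-left, top-right
            frame[h:, :w], frame[h:, w:]]    # bottom-left, bottom-right

def merge_frame(quadrants):
    """Inverse of split_frame: recombine the four synchronized streams
    (in practice, only after their frame numbers have been matched)."""
    top = np.hstack([quadrants[0], quadrants[1]])
    bottom = np.hstack([quadrants[2], quadrants[3]])
    return np.vstack([top, bottom])
```

Each quadrant stream needs only a quarter of the full-frame bitrate, which is what lets the aggregate video fit through four low-rate satellite or cellular channels.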

  6. Video steganography based on bit-plane decomposition of wavelet-transformed video

    Science.gov (United States)

    Noda, Hideki; Furuta, Tomofumi; Niimi, Michiharu; Kawaguchi, Eiji

    2004-06-01

    This paper presents a steganography method using lossy compressed video which provides a natural way to send a large amount of secret data. The proposed method is based on wavelet compression for video data and bit-plane complexity segmentation (BPCS) steganography. BPCS steganography makes use of bit-plane decomposition and the characteristics of the human vision system, where noise-like regions in bit-planes of a dummy image are replaced with secret data without deteriorating image quality. In wavelet-based video compression methods such as 3-D set partitioning in hierarchical trees (SPIHT) algorithm and Motion-JPEG2000, wavelet coefficients in discrete wavelet transformed video are quantized into a bit-plane structure and therefore BPCS steganography can be applied in the wavelet domain. 3-D SPIHT-BPCS steganography and Motion-JPEG2000-BPCS steganography are presented and tested, which are the integration of 3-D SPIHT video coding and BPCS steganography, and that of Motion-JPEG2000 and BPCS, respectively. Experimental results show that 3-D SPIHT-BPCS is superior to Motion-JPEG2000-BPCS with regard to embedding performance. In 3-D SPIHT-BPCS steganography, embedding rates of around 28% of the compressed video size are achieved for twelve bit representation of wavelet coefficients with no noticeable degradation in video quality.
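The BPCS building blocks, a complexity measure over a bit-plane block and conditional replacement, can be sketched as follows. This is an illustration of the general BPCS idea, not the paper's 3-D SPIHT or Motion-JPEG2000 integration; the threshold 0.3 is a value commonly used in the BPCS literature, applied here for demonstration.

```python
import numpy as np

def complexity(block):
    """block: 2-D array of 0/1 bits. Returns the fraction of horizontally
    and vertically adjacent bit pairs that differ (1.0 = checkerboard)."""
    h = np.abs(np.diff(block, axis=1)).sum()
    v = np.abs(np.diff(block, axis=0)).sum()
    rows, cols = block.shape
    max_changes = rows * (cols - 1) + cols * (rows - 1)
    return (h + v) / max_changes

def embed_if_noisy(block, secret_bits, alpha=0.3):
    """Replace a noise-like 8x8 bit-plane block with 64 secret bits;
    structured (low-complexity) blocks are left untouched to avoid
    visible degradation. Returns (block, embedded?)."""
    if complexity(block) < alpha:
        return block, False
    return secret_bits.reshape(block.shape), True
```

Because the human visual system cannot distinguish one noise-like pattern from another, swapping high-complexity blocks for secret data leaves the image (or, here, the wavelet-coefficient bit-planes) perceptually unchanged.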

  7. Toward enhancing the distributed video coder under a multiview video codec framework

    Science.gov (United States)

    Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua

    2016-11-01

    The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance the side information (SI) and improve MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high-quality SI frame for better DVC-reconstructed images; (2) the block transform coefficient properties, i.e., DCs and ACs, were exploited to design a priority rate control for the turbo code, such that DVC decoding can be carried out with the fewest parity bits. The proposed COMPETE method demonstrated lower time complexity while delivering better reconstructed video quality. Simulations show that COMPETE reduces the time complexity of MVME by a factor of 1.29 to 2.56 compared with previous hybrid MVME methods, while the peak signal-to-noise ratios (PSNRs) of the decoded video improve by 0.2 to 3.5 dB compared with H.264/AVC intracoding.
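    The DC-before-AC priority idea in point (2) can be illustrated with a toy coefficient ordering: coefficients are streamed DC first, then ACs in zigzag order, so a decoder that stops early (i.e., after requesting few parity bits) still keeps the perceptually dominant terms. The 4x4 block size and zigzag table below are assumptions for the sketch, not the paper's codec.

```python
# Toy illustration of DC-first coefficient priority. A decoder that stops
# after `keep` symbols retains the low-frequency terms that matter most.
# The 4x4 block and zigzag table are illustrative assumptions.

ZIGZAG_4x4 = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2),
              (2, 1), (3, 0), (3, 1), (2, 2), (1, 3), (2, 3), (3, 2), (3, 3)]

def prioritized_stream(block):
    """List coefficients in decoding priority: DC first, then zigzag ACs."""
    return [block[y][x] for y, x in ZIGZAG_4x4]

def truncate_reconstruct(block, keep):
    """Rebuild a block from only the first `keep` priority coefficients,
    zero-filling the rest (what an early-stopped decoder would see)."""
    out = [[0] * 4 for _ in range(4)]
    for (y, x), c in zip(ZIGZAG_4x4[:keep], prioritized_stream(block)):
        out[y][x] = c
    return out
```

    With `keep=1` only the DC survives, which already carries the block's average brightness; each further coefficient refines detail, mirroring how the priority rate control spends parity bits where they help most.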

  8. Authority Control and Linked Bibliographic Databases.

    Science.gov (United States)

    Clack, Doris H.

    1988-01-01

    Explores issues related to bibliographic database authority control, including the nature of standards, quality control, library cooperation, centralized and decentralized databases and authority control systems, and economic considerations. The implications of authority control for linking large scale databases are discussed. (18 references)…

  9. Postoperative pain and quality of life after lobectomy via video-assisted thoracoscopic surgery or anterolateral thoracotomy for early stage lung cancer

    DEFF Research Database (Denmark)

    Bendixen, Morten; Jørgensen, Ole Dan; Kronborg, Christian

    2016-01-01

    (1:1) to lobectomy via four-port VATS or anterolateral thoracotomy. After surgery, we applied identical surgical dressings to ensure masking of patients and staff. Postoperative pain was measured with a numeric rating scale (NRS) six times per day during hospital stay and once at 2, 4, 8, 12, 26......, and 52 weeks, and self-reported quality of life was assessed with the EuroQol 5 Dimensions (EQ5D) and the European Organisation for Research and Treatment of Cancer (EORTC) 30 item Quality of Life Questionnaire (QLQ-C30) during hospital stay and 2, 4, 8, 12, 26, and 52 weeks after discharge. The primary...... died during the follow-up period (three in the VATS group and six in the thoracotomy group). INTERPRETATION: VATS is associated with less postoperative pain and better quality of life than is anterolateral thoracotomy for the first year after surgery, suggesting that VATS should be the preferred...

  10. Web-based tools for data analysis and quality assurance on a life-history trait database of plants of Northwest Europe

    NARCIS (Netherlands)

    Stadler, Michael; Ahlers, Dirk; Bekker, Rene M.; Finke, Jens; Kunzmann, Dierk; Sonnenschein, Michael

    2006-01-01

    Most data mining techniques have rarely been used in ecology. To address the specific needs of scientists analysing data from a plant trait database developed during the LEDA project, a web-based data mining tool has been developed. This paper presents the DIONE data miner and the project it has…

  11. Using a Materials Database System as the Backbone for a Certified Quality System (AS/NZS ISO 9001:1994) for a Distance Education Centre.

    Science.gov (United States)

    Hughes, Norm

    The Distance Education Center (DEC) of the University of Southern Queensland (Australia) has developed a unique materials database system which is used to monitor pre-production, design and development, production and post-production planning, scheduling, and distribution of all types of materials including courses offered only on the Internet. In…

  12. Odense Pharmacoepidemiological Database (OPED)

    DEFF Research Database (Denmark)

    Hallas, Jesper; Poulsen, Maja Hellfritzsch; Hansen, Morten Rix

    2017-01-01

    The Odense University Pharmacoepidemiological Database (OPED) is a prescription database established in 1990 by the University of Southern Denmark, covering reimbursed prescriptions from the county of Funen in Denmark and the region of Southern Denmark (1.2 million inhabitants). It is still active...... and thereby has more than 25 years of continuous coverage. In this MiniReview, we review its history, content, quality, coverage, governance and some of its uses. OPED's data include the Danish Civil Registration Number (CPR), which enables unambiguous linkage with virtually all other health......-related registers in Denmark. Among its research uses, we review record-linkage studies of drug effects, advanced drug utilization studies, some examples of method development and use of OPED as a sampling frame to recruit patients for field studies or clinical trials. With the advent of other, more comprehensive......

  13. The Video Generation.

    Science.gov (United States)

    Provenzo, Eugene F., Jr.

    1992-01-01

    Video games are neither neutral nor harmless but represent very specific social and symbolic constructs. Research on the social content of today's video games reveals that sex bias and gender stereotyping are widely evident throughout the Nintendo games. Violence and aggression also pervade the great majority of the games. (MLF)

  14. SECRETS OF SONG VIDEO

    Directory of Open Access Journals (Sweden)

    Chernyshov Alexander V.

    2014-04-01

    Full Text Available The article focuses on the origins of the song video as a TV and Internet genre. It also considers the problems of creating screen images in relation to a song's musical form and lyrics, in connection with the relevant principles of accent and phrase-based video editing and filming techniques, as well as with additional frames and sound elements.

  15. Reviews in instructional video

    NARCIS (Netherlands)

    van der Meij, Hans

    2017-01-01

    This study investigates the effectiveness of a video tutorial for software training whose construction was based on a combination of insights from multimedia learning and Demonstration-Based Training. In the videos, a model of task performance was enhanced with instructional features that were…

  16. Videos - The National Guard

    Science.gov (United States)

    Featured Videos: On Every Front (2:17, "Always Ready, Always There"); National Guard Bureau Diversity and Inclusion (1:04); The ChalleNGe Ep. 5 [Graduation] (3:51)

  17. Processing Decoded Video for LCD-LED Backlight Display

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan

    The quality of digital images and video signals on visual media such as TV screens and LCD displays is affected by two main factors: the display technology and the compression standard. Accurate knowledge about the characteristics of the display and the video signal can be utilized to develop advanced...... on local LED-LCD backlight. Second, removing digital video codec artifacts such as blocking and ringing by post-processing algorithms. A novel algorithm based on image features, with an optimal balance between visual quality and power consumption, was developed. In addition, to remove flickering...... algorithms for signal (image or video) enhancement. One particular application of such algorithms is the case of LCDs with a dynamic local backlight. The thesis addressed two main problems: first, designing algorithms that improve the visual quality of the perceived image and video and reduce power consumption......
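    The quality-versus-power trade-off of a local LED-LCD backlight can be illustrated with a toy dimming scheme: each backlight zone's LED is driven only as bright as the zone's brightest pixel, and the LCD values are boosted to compensate. The zone size and the max-based LED level are assumptions for illustration, not the thesis's actual algorithm.

```python
# Toy local backlight dimming: LEDs in dark zones run at low duty cycles
# (saving power) while LCD transmittance is boosted so the perceived value
# (lcd * led) approximates the original image. Illustrative only.

def dim_backlight(image, zone):
    """Return per-zone LED duty cycles and compensated LCD values (0..255)."""
    h, w = len(image), len(image[0])
    leds = {}
    lcd = [row[:] for row in image]
    for zy in range(0, h, zone):
        for zx in range(0, w, zone):
            pixels = [image[y][x]
                      for y in range(zy, min(zy + zone, h))
                      for x in range(zx, min(zx + zone, w))]
            led = max(pixels) / 255 or 1e-6  # avoid division by zero
            leds[(zy // zone, zx // zone)] = led
            for y in range(zy, min(zy + zone, h)):
                for x in range(zx, min(zx + zone, w)):
                    # boost LCD transmittance so perceived ~= lcd * led
                    lcd[y][x] = min(255, round(image[y][x] / led))
    return leds, lcd

# A half-dark frame: the left zone's LED runs at ~20% duty, the right at ~78%.
img = [[50] * 4 + [200] * 4 for _ in range(4)]
levels, panel = dim_backlight(img, 4)
assert round(panel[0][0] * levels[(0, 0)]) == 50  # perceived value preserved
```

    Clipping at 255 is where fidelity can be lost when a zone mixes very dark and very bright pixels, which is the balance the thesis's algorithms aim to optimize.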

  18. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... Patient Webcasts / Rheumatoid Arthritis Educational Video Series. This series of five videos ... member of our patient care team. Managing Your Arthritis; Managing Chronic Pain and Depression ...

  19. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... NEI YouTube Videos: Amblyopia. Embedded video for NEI YouTube Videos: Amblyopia ...
