WorldWideScience

Sample records for video projection system

  1. Development of a large-screen high-definition laser video projection system

    Science.gov (United States)

    Clynick, Tony J.

    1991-08-01

    A prototype laser video projector which uses electronic, optical, and mechanical means to project a television picture is described. With the primary goal of commercial viability, the price/performance ratio of the chosen means is critical. The fundamental requirement has been to achieve high-brightness, high-definition images of at least movie-theater size, at a cost comparable with other existing large-screen video projection technologies, while having the opportunity of developing and exploiting the unique properties of the laser-projected image, such as its infinite depth of field. Two argon lasers are used in combination with a dye laser to achieve a range of colors which, despite not being identical to those of a CRT, prove to be subjectively acceptable. Acousto-optic modulation in combination with a rotary polygon scanner, digital video line stores, novel specialized electro-optics, and a galvanometric frame scanner form the basis of the projection technique, achieving a 30 MHz video bandwidth, high-definition scan rates (1125/60 and 1250/50), high contrast ratio, and good optical efficiency. Auditorium projection of HDTV pictures wider than 20 meters is possible. Applications including 360-degree projection and 3-D video provide further scope for exploitation of the HD laser video projector.

  2. Video documentation: 'The Pannonian Ozone Project'

    International Nuclear Information System (INIS)

    Loibl, W.; Cabela, E.; Mayer, H. F.; Schmidt, M.

    1998-07-01

    The goal of the project was the production of a video film documenting the Pannonian Ozone Project (POP). The main part of the video describes the POP model, consisting of the meteorology, emissions, and chemistry modules developed during the POP project. The model considers the European emission patterns of ozone precursors and the actual wind fields. It calculates ozone build-up and depletion within air parcels due to emissions and the weather situation along trajectory routes. Actual ozone concentrations are calculated during model runs simulating the photochemical processes within air parcels moving along 4-day trajectories before reaching the Vienna region. The model computations were validated during extensive ground- and aircraft-based measurements of ozone precursors and ozone concentration within the POP study area. Scenario computations were used to determine how much ozone can be reduced in north-eastern Austria by emission control measures. The video lasts 12:20 minutes and consists of computer animations and live video scenes, presenting the ozone problem in general, the POP model, and the model results. The video was produced in co-operation between the Austrian Research Center Seibersdorf - Department of Environmental Planning (ARCS) and Joanneum Research - Institute of Information Systems (JR). ARCS was responsible for the idea, concept, storyboard and text, while JR was responsible for computer animation and general video production. The speaker text was written with scientific advice from the POP project partners: Institute of Meteorology and Physics, University of Agricultural Sciences, Vienna; Environment Agency Austria - Air Quality Department; Austrian Research Center Seibersdorf - Environmental Planning Department/System Research Division. The film was produced in German and English versions. (author)

  3. Video-Voice Project (Zambia) | IDRC - International Development ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Video-Voice Project (Zambia). The Zambian health care system has been negatively affected by globalization and faces severe resource constraints. The government has adopted a health reform that emphasizes public participation. This approach requires an informed citizenry, however, at a time when the country is facing ...

  4. Low-complexity camera digital signal imaging for video document projection system

    Science.gov (United States)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
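
    As an illustration of the kind of processing chain this record describes, the sketch below implements two of the listed DSP stages, gray-world white balance and a global auto-gain, in Python with NumPy. The input frame, the gray-world rule, and the target luminance are illustrative assumptions, not the authors' Verilog implementation.

        import numpy as np

        def gray_world_white_balance(rgb):
            """Scale each channel so its mean matches the overall mean (gray-world assumption)."""
            means = rgb.reshape(-1, 3).mean(axis=0)         # per-channel means
            gains = means.mean() / np.maximum(means, 1e-6)  # avoid division by zero
            return np.clip(rgb * gains, 0.0, 1.0)

        def auto_gain(rgb, target=0.5):
            """Apply one global gain so the mean luminance reaches the target level."""
            return np.clip(rgb * (target / max(rgb.mean(), 1e-6)), 0.0, 1.0)

        if __name__ == "__main__":
            frame = np.random.rand(480, 640, 3) * 0.3       # synthetic, deliberately dark frame
            out = auto_gain(gray_world_white_balance(frame))
            print("mean before:", round(float(frame.mean()), 3),
                  "mean after:", round(float(out.mean()), 3))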

  5. Integrative, Interdisciplinary Learning in Bermuda Through Video Projects

    Science.gov (United States)

    Fox, R. J.; Connaughton, M.

    2017-12-01

    Understanding an ecosystem and how humans impact it requires a multidisciplinary perspective and immersive, experiential learning is an exceptional way to achieve understanding. In summer 2017 we took 18 students to the Bermuda Institute of Ocean Sciences (BIOS) as part of a Washington College two-week, four-credit summer field course. We took a multi-disciplinary approach in choosing the curriculum. We focused on the ecology of the islands and surrounding coral reefs as well as the environmental impacts humans are having on the islands. Additionally, we included geology and both local and natural history. Our teaching was supplemented by the BIOS staff and local tour guides. The student learning was integrated and reinforced through student-led video projects. Groups of three students were tasked with creating a 5-7 minute video appropriate for a public audience. We selected video topics based upon locations we would visit in the first week and topics were randomly assigned. The project intention was for the students to critically analyze and evaluate an area of Bermuda that is a worthwhile tourist destination. Students presented why a tourist should visit a locale, the area's ecological distinctiveness and complexity, the impact humans are having, and ways tourists can foster stewardship of that locale. These projects required students to learn how to make and edit videos, collaborate with peers, communicate a narrative to the public, integrate multi-disciplinary topics for a clear, whole-system perspective, observe the environment from a critical viewpoint, and interview local experts. The students produced the videos within the two-week period, and we viewed the videos as a group on the last day. The students worked hard, were proud of their final products, and produced excellent videos. They enjoyed the process, which provided them opportunities to collaborate, show individual strengths, be creative, and work independently of the instructors.

  6. Applying the systems engineering approach to video over IP projects : workshop.

    Science.gov (United States)

    2011-12-01

    In 2009, the Texas Transportation Institute produced for the Texas Department of Transportation a document called Video over IP Design Guidebook. This report summarizes an implementation of that project in the form of a workshop. The workshop was...

  7. Discontinuity minimization for omnidirectional video projections

    Science.gov (United States)

    Alshina, Elena; Zakharchenko, Vladyslav

    2017-09-01

    Advances in display technologies, both for head-mounted devices and television panels, demand a resolution increase beyond 4K for the source signal in virtual reality video streaming applications. This poses a problem of content delivery through bandwidth-limited distribution networks. Considering the fact that the source signal covers the entire surrounding space, investigation revealed that compression efficiency may fluctuate by 40% on average depending on the origin selected at the conversion stage from 3D space to the 2D projection. Based on this knowledge, an origin selection algorithm for video compression applications has been proposed. Using a discontinuity entropy minimization function, the projection origin rotation may be defined to provide optimal compression results. The outcome of this research may be applied across various video compression solutions for omnidirectional content.
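
    A minimal sketch of the origin-selection idea, assuming equirectangular frames and using the mean absolute difference across the wrap-around seam as a crude stand-in for the paper's discontinuity entropy function; the candidate step size and the metric are assumptions, not the authors' method.

        import numpy as np

        def seam_discontinuity(frame, shift):
            """Mean absolute difference across the left/right wrap-around seam
            after shifting the equirectangular frame horizontally by `shift` pixels."""
            rolled = np.roll(frame, shift, axis=1)
            return float(np.abs(rolled[:, 0].astype(float) - rolled[:, -1].astype(float)).mean())

        def best_origin_shift(frame, step_deg=10):
            """Try candidate yaw rotations of the projection origin and return the
            pixel shift with the smallest seam discontinuity."""
            width = frame.shape[1]
            shifts = [int(d / 360.0 * width) for d in range(0, 360, step_deg)]
            scores = [seam_discontinuity(frame, s) for s in shifts]
            return shifts[int(np.argmin(scores))], min(scores)

        if __name__ == "__main__":
            demo = np.random.randint(0, 256, (180, 360), dtype=np.uint8)
            print(best_origin_shift(demo))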

  8. Smart Video Communication for Social Groups - The Vconect Project

    NARCIS (Netherlands)

    M. Ursu; P. Stollenmayer; D. Williams; P. Torres; P.S. Cesar Garcia (Pablo Santiago); N. Farber; E. Geelhoed

    2014-01-01

    This article introduces the Vconect project. Vconect (Video Communications for Networked Communities) is a collaborative European research and development project dealing with high-quality enriched video as a medium for mass communication within social communities. The technical ...

  9. Writing Assignments in Disguise: Lessons Learned Using Video Projects in the Classroom

    Science.gov (United States)

    Wade, P.; Courtney, A.

    2012-12-01

    This study describes the instructional approach of using student-created video documentaries as projects in an undergraduate non-science majors' Energy Perspectives science course. Four years of teaching this course provided many reflective teaching moments from which we have enhanced our instructional approach to teaching students how to construct a quality Ken Burns-style science video. Fundamental to a good video documentary is the story told via a narrative, which involves significant writing, editing and rewriting. Many students primarily associate a video documentary with visual imagery and do not realize the importance of writing in the production of the video. Required components of the student-created video include: 1) select a topic, 2) conduct research, 3) write an outline, 4) write a narrative, 5) construct a project storyboard, 6) shoot or acquire video and photos (from legal sources), 7) record the narrative, 8) construct the video documentary, 9) edit and 10) finalize the project. Two knowledge survey instruments (administered pre- and post-course) were used for assessment purposes. One survey focused on the skills necessary to research and produce video documentaries and the second survey assessed students' content knowledge acquired from each documentary. This talk will focus on the components necessary for video documentaries and the instructional lessons learned over the years. Additionally, results from both surveys and student reflections on the video project will be shared.

  10. Interactive video audio system: communication server for INDECT portal

    Science.gov (United States)

    Mikulec, Martin; Voznak, Miroslav; Safarik, Jakub; Partila, Pavol; Rozhon, Jan; Mehic, Miralem

    2014-05-01

    The paper deals with the presentation of the IVAS system within the EU FP7 INDECT project. The INDECT project aims at developing tools for enhancing the security of citizens and protecting the confidentiality of recorded and stored information. It is a part of the Seventh Framework Programme of the European Union. We participate in the INDECT portal and the Interactive Video Audio System (IVAS). This IVAS system provides a communication gateway between police officers working in the dispatching centre and police officers in the field. The officers in the dispatching centre can obtain information about all online police officers in the field, command them via text messages, voice or video calls, and manage multimedia files from CCTV cameras or other sources that may be of interest to officers in the field. The police officers in the field are equipped with smartphones or tablets. Besides common communication, they can receive pictures or videos sent by the commander and respond to the command via text or multimedia messages captured by their devices. Our IVAS system is unique because we are developing it according to the special requirements of the Police of the Czech Republic. The IVAS communication system is designed to use modern Voice over Internet Protocol (VoIP) services. The whole solution is based on open source software, including the Linux and Android operating systems. The technical details of our solution are presented in the paper.

  11. Performance of RGB laser-based projection for video walls

    Science.gov (United States)

    Hickl, Peter

    2018-02-01

    The laser-phosphor concept is currently the common approach in most applications for introducing lasers as a projection light source. However, this concept has significant disadvantages for rear-projection video walls. Therefore, Barco has developed an RGB laser design for use in the control-room market with tailor-made performance.

  12. Unattended video surveillance systems for international safeguards

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1979-01-01

    The use of unattended video surveillance systems places some unique requirements on the systems and their hardware. The systems have the traditional requirements of video imaging, video storage, and video playback, but also have some special requirements such as tamper safing. The technology available to meet these requirements and how it is being applied to unattended video surveillance systems are discussed in this paper.

  13. Effect of 3D animation videos over 2D video projections in periodontal health education among dental students.

    Science.gov (United States)

    Dhulipalla, Ravindranath; Marella, Yamuna; Katuri, Kishore Kumar; Nagamani, Penupothu; Talada, Kishore; Kakarlapudi, Anusha

    2015-01-01

    There is limited evidence about the distinct effect of 3D oral health education videos over conventional two-dimensional projections in improving oral health knowledge. This randomized controlled trial was done to test the effect of 3D oral health educational videos among first-year dental students. Eighty first-year dental students were enrolled and divided into two groups (test and control). In the test group, 3D animations and, in the control group, regular 2D video projections pertaining to periodontal anatomy, etiology, presenting conditions, preventive measures and treatment of periodontal problems were shown. The effect of 3D animation was evaluated using a questionnaire consisting of 10 multiple-choice questions given to all participants at baseline, immediately after, and 1 month after the intervention. Clinical parameters such as Plaque Index (PI), Gingival Bleeding Index (GBI), and Oral Hygiene Index Simplified (OHI-S) were measured at baseline and at 1-month follow-up. A significant difference in the post-intervention knowledge scores was found between the groups as assessed by an unpaired t-test, indicating that 3D animation videos are more effective than 2D videos in periodontal disease education and knowledge recall. The results also suggest that 3D animation provides better visual comprehension for students and greater health care outcomes.
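
    The group comparison described above is an unpaired (independent-samples) t-test on post-intervention knowledge scores. A minimal sketch of that analysis in Python with SciPy, using fabricated placeholder scores rather than the study data:

        from scipy import stats

        scores_3d = [8, 9, 7, 9, 8, 10, 9, 8]   # hypothetical post-test scores, 3D group
        scores_2d = [6, 7, 7, 6, 8, 7, 6, 7]    # hypothetical post-test scores, 2D group

        t_stat, p_value = stats.ttest_ind(scores_3d, scores_2d)  # unpaired t-test
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}")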

  14. Video systems for alarm assessment

    International Nuclear Information System (INIS)

    Greenwoll, D.A.; Matter, J.C.; Ebel, P.E.

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs.

  15. Video systems for alarm assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greenwoll, D.A.; Matter, J.C. (Sandia National Labs., Albuquerque, NM (United States)); Ebel, P.E. (BE, Inc., Barnwell, SC (United States))

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs.

  16. Crew Resource Management (CRM) video storytelling project: a team-based learning activity

    Directory of Open Access Journals (Sweden)

    Ma, Maggie Jiao

    2011-01-01

    This Crew Resource Management (CRM) video storytelling project asks students to work in a team (4-5 people per team) to create (write and produce) a video story. The story should demonstrate lacking and ill practices of CRM knowledge and skills, or positive skills used to create a successful scenario in aviation (e.g., flight training, commercial aviation, airport management). The activity is composed of two parts: (1) creating a video story of CRM in aviation, and (2) delivering a group presentation. Each team creates a 5-8 minute long video clip of its story. The story must be originally created by the team to educate pilot and/or aviation management students on good practices of CRM in aviation. Accidents and incidents can be used as a reference to inspire ideas; however, this project is not to re-create any previous CRM accidents/incidents. The video story needs to be self-contained and address two or more aspects of CRM specified in the Federal Aviation Administration's Advisory Circular 120-51. The presentation must include the use of PowerPoint or similar software and additional multimedia visual aids. The presentation itself will last no more than 17 minutes in length, including the actual video story (each group has an additional 3 minutes to set up prior to the presentation). During the presentation following the video, each team will discuss the CRM problems (or invite the audience to identify CRM problems) and explain what CRM practices were performed and should have been performed. This presentation also should describe how each team worked together in order to complete this project (i.e., good and bad CRM practiced).

  17. A digital video tracking system

    Science.gov (United States)

    Giles, M. K.

    1980-01-01

    The Real-Time Videotheodolite (RTV) was developed in connection with the requirement to replace film as a recording medium to obtain the real-time location of an object in the field-of-view (FOV) of a long focal length theodolite. Design philosophy called for a system capable of discriminatory judgment in identifying the object to be tracked with 60 independent observations per second, capable of locating the center of mass of the object projection on the image plane within about 2% of the FOV in rapidly changing background/foreground situations, and able to generate a predicted observation angle for the next observation. A description is given of a number of subsystems of the RTV, taking into account the processor configuration, the video processor, the projection processor, the tracker processor, the control processor, and the optics interface and imaging subsystem.

  18. Intelligent video surveillance systems and technology

    CERN Document Server

    Ma, Yunqian

    2009-01-01

    From the streets of London to subway stations in New York City, hundreds of thousands of surveillance cameras ubiquitously collect hundreds of thousands of videos, often running 24/7. How can such vast volumes of video data be stored, analyzed, indexed, and searched? How can advanced video analysis and systems autonomously recognize people and detect targeted activities in real time? Collating and presenting the latest information, Intelligent Video Surveillance: Systems and Technology explores these issues, from fundamental principles to algorithmic design and system implementation. An Integrated ...

  19. International video project on natural analogues

    International Nuclear Information System (INIS)

    Guentensperger, Marcel

    1993-01-01

    A natural analogue can be defined as a natural process which has occurred in the past and is studied in order to test predictions about the future evolution of similar processes. In recent years, natural analogues have been used increasingly to test the mathematical models required for repository performance assessment. Analogues are, however, also of considerable use in public relations, as they allow many of the principles involved in demonstrating repository safety to be illustrated in a clear manner using natural systems with which man is familiar. The international Natural Analogue Working Group (NAWG), organised under the auspices of the CEC, has recognised that such PR applications are of considerable importance and should be supported from a technical level. At the NAWG meeting in Pitlochry, Scotland (June 1990), it was recommended that the possibilities for making a video film on this topic be investigated, and Nagra was requested to take the lead role in setting up such a project.

  20. Secure Video Surveillance System (SVSS) for unannounced safeguards inspections

    International Nuclear Information System (INIS)

    Galdoz, Erwin G.; Pinkalla, Mark

    2010-01-01

    The Secure Video Surveillance System (SVSS) is a collaborative effort between the U.S. Department of Energy (DOE), Sandia National Laboratories (SNL), and the Brazilian-Argentine Agency for Accounting and Control of Nuclear Materials (ABACC). The joint project addresses specific requirements of redundant surveillance systems installed in two South American nuclear facilities as a tool to support unannounced inspections conducted by ABACC and the International Atomic Energy Agency (IAEA). The surveillance covers the critical time (as much as a few hours) between the notification of an inspection and the access of inspectors to the location in the facility where surveillance equipment is installed. ABACC and the IAEA currently use the EURATOM Multiple Optical Surveillance System (EMOSS). This outdated system is no longer available or supported by the manufacturer. The current EMOSS system has met the project objective; however, the lack of available replacement parts and system support has made this system unsustainable and has increased the risk of an inoperable system. A new system that utilizes current technology and is maintainable is required to replace the aging EMOSS system. ABACC intends to replace one of the existing ABACC EMOSS systems with the Secure Video Surveillance System. SVSS utilizes commercial off-the-shelf (COTS) technologies for all individual components. Sandia National Laboratories supported the system design for SVSS to meet Safeguards requirements, i.e. tamper indication, data authentication, etc. The SVSS consists of two video surveillance cameras linked securely to a data collection unit. The collection unit is capable of retaining historical surveillance data for at least three hours with picture intervals as short as 1 sec. Images in .jpg format are available to inspectors using various software review tools. SNL has delivered two SVSS systems for test and evaluation at the ABACC Safeguards Laboratory. An additional 'prototype' system remains ...

  1. Using Social Media for Research Dissemination: The Digital Research Video Project

    Directory of Open Access Journals (Sweden)

    Suzanne Pilaar Birch

    2013-09-01

    This article discusses the outcomes of the Digital Research Video Project, which was part of the larger Social Media Knowledge Exchange program at the Centre for Research in the Arts, Social Sciences, and Humanities (CRASSH) at the University of Cambridge and funded by the Arts & Humanities Research Council (UK). The project was founded on the premise that open access publication of research, while important, does not necessarily make research accessible. Often, PhD students and post-doctoral scholars lack the skills needed to communicate their research to a broader audience. The goal of the project was, first, to provide communication training to early career researchers (achieved through a workshop held in autumn 2012) and, second, to create illustrated videos highlighting projects by early career researchers that would help them engage with their work using multimedia and share their results with a larger audience. This article considers the methods of dissemination and impact of the project.

  2. Intelligent Model for Video Surveillance Security System

    Directory of Open Access Journals (Sweden)

    J. Vidhya

    2013-12-01

    A video surveillance system senses and tracks threatening events in the real-time environment. It guards against security threats with the help of visual devices that gather video information, such as CCTV and IP (Internet Protocol) cameras. Video surveillance systems have become key to addressing problems in public security. They are mostly deployed on IP-based networks, so all the security threats that exist for IP-based applications may also threaten the reliability of video surveillance. As a result, cybercrime, illegal video access, mishandling of videos, and so on may increase. Hence, in this paper an intelligent model is proposed to secure the video surveillance system, ensuring safety and providing secured access to video.

  3. The Short Life and Ignominious Death of ALA Video and Special Projects.

    Science.gov (United States)

    Handman, Gary

    1991-01-01

    Discussion of videocassettes in our culture and the function of video collections in libraries focuses on the creation and demise of a unit sponsored by the American Library Association, the ALA Video and Special Projects. The unit's role is discussed and funding decisions that led to its demise are explained. (LRW)

  4. State of the art in video system performance

    Science.gov (United States)

    Lewis, Michael J.

    1990-01-01

    The closed-circuit television (CCTV) system onboard the Space Shuttle comprises cameras, a video signal switching and routing unit (VSU), and the Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of state-of-the-art solid-state cameras and image sensors, video recording systems, data transmission devices, and data storage systems is shown graphically against users' requirements.

  5. Crew Resource Management (CRM) video storytelling project: a team-based learning activity

    OpenAIRE

    Ma, Maggie Jiao; Denando, John

    2011-01-01

    This Crew Resource Management (CRM) video storytelling project asks students to work in a team (4-5 people per team) to create (write and produce) a video story. The story should demonstrate lacking and ill practices of CRM knowledge and skills, or positive skills used to create a successful scenario in aviation (e. g. , flight training, commercial aviation, airport management). The activity is composed of two parts: (1) creating a video story of CRM in aviation, and (2) delivering a group pr...

  6. Enhancement system of nighttime infrared video image and visible video image

    Science.gov (United States)

    Wang, Yue; Piao, Yan

    2016-11-01

    Visibility of nighttime video imagery has great significance for military and medical areas, but nighttime video images have such poor quality that the target and background cannot be recognized. We therefore enhance nighttime video by fusing the infrared video image with the visible video image. According to the characteristics of infrared and visible images, we propose an improved SIFT algorithm and an αβ-weighted algorithm to fuse heterologous nighttime images. A transfer matrix deduced from the improved SIFT algorithm rapidly registers the heterologous nighttime images, and the αβ-weighted algorithm can be applied in any scene. In the video image fusion system, the transfer matrix is used to register every frame and the αβ-weighted method is then used to fuse every frame, which meets the real-time requirement of video. The fused video not only retains the clear target information of the infrared video image, but also retains the detail and color information of the visible video image, and the fused video plays fluently.
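
    The registration-then-fusion pipeline can be approximated with standard OpenCV calls. The sketch below uses plain SIFT with a RANSAC homography and a fixed-weight blend as stand-ins for the paper's improved SIFT and αβ-weighted algorithms, and assumes both frames are single-channel images of the same size.

        import cv2
        import numpy as np

        def register_and_fuse(visible, infrared, alpha=0.6, beta=0.4):
            """Register the infrared frame onto the visible frame (SIFT + homography),
            then blend the two frames with fixed weights."""
            sift = cv2.SIFT_create()
            kp_v, des_v = sift.detectAndCompute(visible, None)
            kp_i, des_i = sift.detectAndCompute(infrared, None)

            matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
            matches = sorted(matcher.match(des_i, des_v), key=lambda m: m.distance)[:50]

            src = np.float32([kp_i[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp_v[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

            warped_ir = cv2.warpPerspective(infrared, H, (visible.shape[1], visible.shape[0]))
            return cv2.addWeighted(visible, alpha, warped_ir, beta, 0)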

  7. Cobra: A content-based video retrieval system

    NARCIS (Netherlands)

    Petkovic, M.; Jonker, W.; Jensen, C.S.; Jeffery, K.G.; Pokorny, J.; Saltenis, S.; Bertino, E.; Böhm, K.; Jarke, M.

    2002-01-01

    An increasing number of large publicly available video libraries results in a demand for techniques that can manipulate the video data based on content. In this paper, we present a content-based video retrieval system called Cobra. The system supports automatic extraction and retrieval of high-level

  8. VIDEO BLOGGING AS AN INNOVATIVE FORM OF THE PROJECT ACTIVITY IN FOREIGN LANGUAGE TEACHING TO JOURNALISM STUDENTS

    Directory of Open Access Journals (Sweden)

    M. V. Petrova

    2018-01-01

    Introduction. The appearance of new formats and ways of presenting information inevitably affects the educational process and leads to the necessity of revising the paradigm of pedagogical attitudes and the tools of teaching activity, which in turn generates a number of methodological and didactic problems to be solved. The relevance of the research topic stems from the current spread of video blogging as an information activity tool that affects the educational environment. There is a steady development of video blogging (a special kind of blog where the emphasis is on video information) as a new channel of communication in the educational services market, and of its use as a separate form of extracurricular project activity within the framework of mastering one or another academic discipline. In conditions of a deficiency of classroom hours and an increase in independent student work, project-based education is becoming an increasingly demanded type of training. Currently, interdisciplinary projects are being widely disseminated in higher education; these projects are aimed at vocational guidance in a foreign language, and they meet the requirements of the new communication reality and the needs of modern educational systems. The aim of the publication is to consider video blogging as an innovative form of project-oriented learning of a foreign language and to characterize the features of creating and implementing media content within the framework of a foreign language training course. Methodology and research methods. In the course of the research, theoretical scientific methods such as analysis, synthesis, concretization and generalization, as well as hypothetical-deductive and design methods, were applied. Results and scientific novelty. For the first time the article deals with the structure of video blogging as project work and as a form of professionally oriented foreign language teaching; the article also formulates basic ...

  9. Maximizing Resource Utilization in Video Streaming Systems

    Science.gov (United States)

    Alsmirat, Mohammad Abdullah

    2013-01-01

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wireless networks. Because of the resource-demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor in increasing the scalability and decreasing the cost of the system. Resources to…

  10. 78 FR 11988 - Open Video Systems

    Science.gov (United States)

    2013-02-21

    ... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 76 [CS Docket No. 96-46, FCC 96-334] Open Video Systems AGENCY: Federal Communications Commission. ACTION: Final rule; announcement of effective date... 43160, August 21, 1996. The final rules modified rules and policies concerning Open Video Systems. DATES...

  11. Negotiation for Strategic Video Games

    OpenAIRE

    Afiouni, Einar Nour; Øvrelid, Leif Julian

    2013-01-01

    This project aims to examine the possibilities of using game theoretic concepts and multi-agent systems in modern video games with real time demands. We have implemented a multi-issue negotiation system for the strategic video game Civilization IV, evaluating different negotiation techniques with a focus on the use of opponent modeling to improve negotiation results.

  12. Application of Video Recognition Technology in Landslide Monitoring System

    Directory of Open Access Journals (Sweden)

    Qingjia Meng

    2018-01-01

    Video recognition technology is applied to a landslide emergency remote monitoring system, and the trajectories of the landslide are identified by this system in this paper. The geological disaster monitoring system combines the analysis of landslide monitoring data with video recognition technology. The landslide video monitoring system transmits video image information, time stamps, network signal strength, and power-supply status to the server over the 4G network. The data are comprehensively analysed through the remote man-machine interface, and the front-end video surveillance system is activated either when a threshold is reached or under manual control. The system uses intelligent recognition to identify the target landslide in the video. The algorithm is embedded in the intelligent analysis module, and each video frame is identified, detected, analysed, filtered, and morphologically processed. An algorithm based on artificial intelligence and pattern recognition is used to mark the target landslide in the video frame and confirm whether the landslide is behaving normally. The landslide video monitoring system realizes remote monitoring and control from the mobile side, and provides a quick and easy monitoring technology.
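
    The per-frame identify/detect/filter/morphology loop mentioned above corresponds, in its simplest generic form, to frame differencing followed by morphological cleanup. The sketch below is such a generic version; the threshold, kernel size, and minimum area are assumptions, not the authors' algorithm.

        import cv2

        def detect_motion_regions(video_path, min_area=500):
            """Yield bounding boxes of moving regions per frame: frame differencing,
            thresholding, and morphological opening/closing to remove speckle."""
            cap = cv2.VideoCapture(video_path)
            ok, prev = cap.read()
            if not ok:
                return
            prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                _, mask = cv2.threshold(cv2.absdiff(gray, prev), 25, 255, cv2.THRESH_BINARY)
                mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle
                mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
                contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
                yield [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
                prev = gray
            cap.release()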

  13. Image processing of integrated video image obtained with a charged-particle imaging video monitor system

    International Nuclear Information System (INIS)

    Iida, Takao; Nakajima, Takehiro

    1988-01-01

    A new type of charged-particle imaging video monitor system was constructed for video imaging of the distributions of alpha-emitting and low-energy beta-emitting nuclides. The system can display not only the scintillation image due to radiation on the video monitor but also the integrated video image, which gradually becomes clearer, on another video monitor. The distortion of the image is about 5% and the spatial resolution is about 2 line pairs (lp) per mm. The integrated image is transferred to a personal computer and image processing is performed qualitatively and quantitatively. (author)
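
    The "integrated video image" that gradually becomes clearer is, in effect, a running accumulation of scintillation frames. A minimal sketch of such frame integration, assuming 8-bit grayscale frames from an arbitrary capture source; the normalization choice is illustrative only.

        import numpy as np

        class FrameIntegrator:
            """Accumulate successive frames so that sparse scintillation events
            build up into an increasingly clear integrated image."""
            def __init__(self, shape):
                self.accum = np.zeros(shape, dtype=np.float64)

            def add(self, frame):
                self.accum += frame.astype(np.float64)
                peak = self.accum.max()
                display = self.accum / peak * 255.0 if peak > 0 else self.accum
                return display.astype(np.uint8)   # 8-bit image for the second monitor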

  14. Dedicated data recording video system for Spacelab experiments

    Science.gov (United States)

    Fukuda, Toshiyuki; Tanaka, Shoji; Fujiwara, Shinji; Onozuka, Kuniharu

    1984-04-01

    A feasibility study of a video tape recorder (VTR) modification to add data recording capability was conducted. This is an on-board system to support Spacelab experiments as a dedicated video system and a dedicated data recording system that operates independently of the normal operation of the Orbiter, Spacelab, and the other experiments. It continuously records the video image signals together with the acquired data, status, and operator's voice on one cassette video tape. Items such as the crew's actions, animals' behavior, microscopic views, and melting materials in a furnace are recorded. It is therefore expected that experimenters can easily and conveniently analyze the synchronized video, voice, and data signals in their post-flight analysis.

  15. An integrated circuit/packet switched video conferencing system

    Energy Technology Data Exchange (ETDEWEB)

    Kippenhan Junior, H.A.; Lidinsky, W.P.; Roediger, G.A. [Fermi National Accelerator Lab., Batavia, IL (United States). HEP Network Resource Center; Waits, T.A. [Rutgers Univ., Piscataway, NJ (United States). Dept. of Physics and Astronomy

    1996-07-01

    The HEP Network Resource Center (HEPNRC) at Fermilab and the Collider Detector Facility (CDF) collaboration have evolved a flexible, cost-effective, widely accessible video conferencing system for use by high energy physics collaborations and others wishing to use video conferencing. No current systems seemed to fully meet the needs of high energy physics collaborations. However, two classes of video conferencing technology, circuit-switched and packet-switched, if integrated, might encompass most of HEP's needs. It was also realized that, even with this integration, some additional functions were needed and some of the existing functions were not always wanted. HEPNRC, with the help of members of the CDF collaboration, set out to develop such an integrated system using as many existing subsystems and components as possible. This system is called VUPAC (Video conferencing Using Packets and Circuits). This paper begins with brief descriptions of the circuit-switched and packet-switched video conferencing systems. Following this, issues and limitations of these systems are considered. Next the VUPAC system is described. Integration is accomplished primarily by a circuit/packet video conferencing interface. Augmentation is centered in another subsystem called MSB (Multiport MultiSession Bridge). Finally, there is a discussion of the future work needed in the evolution of this system. (author)

  16. An integrated circuit/packet switched video conferencing system

    International Nuclear Information System (INIS)

    Kippenhan Junior, H.A.; Lidinsky, W.P.; Roediger, G.A.; Waits, T.A.

    1996-01-01

    The HEP Network Resource Center (HEPNRC) at Fermilab and the Collider Detector Facility (CDF) collaboration have evolved a flexible, cost-effective, widely accessible video conferencing system for use by high energy physics collaborations and others wishing to use video conferencing. No current systems seemed to fully meet the needs of high energy physics collaborations. However, two classes of video conferencing technology, circuit-switched and packet-switched, if integrated, might encompass most of HEP's needs. It was also realized that, even with this integration, some additional functions were needed and some of the existing functions were not always wanted. HEPNRC, with the help of members of the CDF collaboration, set out to develop such an integrated system using as many existing subsystems and components as possible. This system is called VUPAC (Video conferencing Using Packets and Circuits). This paper begins with brief descriptions of the circuit-switched and packet-switched video conferencing systems. Following this, issues and limitations of these systems are considered. Next the VUPAC system is described. Integration is accomplished primarily by a circuit/packet video conferencing interface. Augmentation is centered in another subsystem called MSB (Multiport MultiSession Bridge). Finally, there is a discussion of the future work needed in the evolution of this system. (author)

  17. Modular integrated video system (MIVS) review station

    International Nuclear Information System (INIS)

    Garcia, M.L.

    1988-01-01

    An unattended video surveillance unit, the Modular Integrated Video System (MIVS), has been developed by Sandia National Laboratories for International Safeguards use. An important support element of this system is a semi-automatic Review Station. Four component modules, including an 8 mm video tape recorder, a 4-inch video monitor, a power supply, and control electronics utilizing a liquid crystal display (LCD), are mounted in a suitcase for portability. The unit communicates through the interactive, menu-driven LCD and may be operated on facility power throughout the world. During surveillance, the MIVS records video information at specified time intervals, while also inserting consecutive scene numbers and tamper event information. Using either of two available modes of operation, the Review Station reads the inserted information, counts the number of missed scenes and/or tamper events encountered on the tapes, and reports this to the user on the LCD. At the end of a review session, the system will summarize the results of the review, stop the recorder, and advise the user of the completion of the review. In addition, the Review Station will check for any video loss on the tape.

  18. Encrypted IP video communication system

    Science.gov (United States)

    Bogdan, Apetrechioaie; Luminiţa, Mateescu

    2010-11-01

    Digital video transmission is a permanent subject of development, research and improvement. This field of research has an exponentially growing market in civil, surveillance, security and military applications. Many solutions (FPGA, ASIC, DSP) have been used for this purpose. The paper presents the implementation of an encrypted, IP-based video communication system having a competitive performance/cost ratio.

  19. Application of robust face recognition in video surveillance systems

    Science.gov (United States)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that utilizes face recognition as the search indexing feature. As applications of video cameras have increased greatly in recent years, face recognition makes a perfect fit for searching for targeted individuals within the vast amount of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance video cameras record videos without fixed postures for the subject, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA) and reconstructs the human faces with the available information. Experimental results show that the system has very high efficiency in processing real-life videos and is very robust to various kinds of face occlusion. Hence it can relieve human reviewers from sitting in front of the monitors and greatly enhances efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping to solve real cases.
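
    The occlusion-handling idea, reconstructing the hidden part of a face from a learned subspace, can be sketched with ordinary PCA as a stand-in for the paper's fuzzy PCA (FPCA). Faces are assumed to be vectorized to a fixed length and the occlusion mask to be known; none of this reproduces the authors' implementation.

        import numpy as np
        from sklearn.decomposition import PCA

        def train_face_model(face_rows, n_components=50):
            """Fit a PCA subspace on vectorized, unoccluded training faces
            (one face per row)."""
            return PCA(n_components=n_components).fit(face_rows)

        def reconstruct_occluded(face, mask, model):
            """Fill occluded pixels (mask == 0) with the PCA mean, project the face
            into the subspace, and reconstruct it from the low-dimensional code."""
            filled = np.where(mask > 0, face, model.mean_)
            code = model.transform(filled.reshape(1, -1))
            return model.inverse_transform(code).reshape(face.shape)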

  20. Video-based Mobile Mapping System Using Smartphones

    Science.gov (United States)

    Al-Hamad, A.; Moussa, A.; El-Sheimy, N.

    2014-11-01

    The last two decades have witnessed a huge growth in the demand for geo-spatial data. This demand has encouraged researchers around the world to develop new algorithms and design new mapping systems in order to obtain reliable sources for geo-spatial data. Mobile Mapping Systems (MMS) are one of the main sources of mapping and Geographic Information Systems (GIS) data. MMS integrate various remote sensing sensors, such as cameras and LiDAR, along with navigation sensors to provide the 3D coordinates of points of interest from a moving platform (e.g. cars, airplanes, etc.). Although MMS can provide an accurate mapping solution for different GIS applications, the cost of these systems is not affordable for many users, and only large-scale companies and institutions can benefit from MMS. The main objective of this paper is to propose a new low-cost MMS with reasonable accuracy using the sensors available in smartphones and the smartphone video camera. Using the smartphone video camera, instead of capturing individual images, makes the system easier to use by non-professional users, since the system will automatically extract the highly overlapping frames out of the video without user intervention. Results of the proposed system are presented, which demonstrate the effect of the number of images used on the mapping solution. In addition, the accuracy of the mapping results obtained from capturing a video is compared to the results obtained from using separately captured images instead of video.
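
    The automatic extraction of "highly overlapping frames" can be approximated by keeping a new keyframe whenever feature overlap with the previous keyframe drops below a threshold. The sketch below uses ORB features and a match-count threshold purely as assumed stand-ins for the authors' selection criterion.

        import cv2

        def select_keyframes(video_path, min_matches=200):
            """Keep a frame as a keyframe when its ORB feature overlap with the
            previously kept frame falls below `min_matches`."""
            orb = cv2.ORB_create(nfeatures=1000)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            cap = cv2.VideoCapture(video_path)
            keyframes, last_des = [], None
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                _, des = orb.detectAndCompute(gray, None)
                if des is None:
                    continue
                if last_des is None or len(matcher.match(des, last_des)) < min_matches:
                    keyframes.append(frame)
                    last_des = des
            cap.release()
            return keyframes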

  1. Telemetry and Communication IP Video Player

    Science.gov (United States)

    OFarrell, Zachary L.

    2011-01-01

    Aegis Video Player is the name of the video over IP system for the Telemetry and Communications group of the Launch Services Program. Aegis' purpose is to display video streamed over a network connection to be viewed during launches. To accomplish this task, a VLC ActiveX plug-in was used in C# to provide the basic capabilities of video streaming. The program was then customized to be used during launches. The VLC plug-in can be configured programmatically to display a single stream, but for this project multiple streams needed to be accessed. To accomplish this, an easy to use, informative menu system was added to the program to enable users to quickly switch between videos. Other features were added to make the player more useful, such as watching multiple videos and watching a video in full screen.

  2. Video game training and the reward system

    OpenAIRE

    Lorenz, R.; Gleich, T.; Gallinat, J.; Kühn, S.

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors ...

  3. Acceptance/operational test procedure 241-AN-107 Video Camera System

    International Nuclear Information System (INIS)

    Pedersen, L.T.

    1994-01-01

    This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer, will be set up and performance tested in accordance with original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to assure acceptable resolution of the camera imaging components utilizing the camera system lights.

  4. Web-based remote video monitoring system implemented using Java technology

    Science.gov (United States)

    Li, Xiaoming

    2012-04-01

    An HTTP-based video transmission system has been built upon a p2p (peer-to-peer) network structure utilizing Java technologies. This makes video monitoring available to any host connected to the World Wide Web in any manner, including hosts behind firewalls or in isolated sub-networks. To achieve this, a video source peer has been developed, together with a client video playback peer. The video source peer can respond to video stream requests over the HTTP protocol. An HTTP-based pipe communication model is developed to speed up the transmission of the video stream data, which has been encoded into fragments using the JPEG codec. To make the system feasible for conveying video streams between arbitrary peers on the web, an HTTP-based relay peer is implemented as well. This video monitoring system has been applied in a tele-robotic system as visual feedback to the operator.
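
    A minimal stand-in for the video source peer described above is an HTTP server that streams JPEG-encoded frames as a multipart/x-mixed-replace (MJPEG) response. The relay peer, the pipe communication model, and the p2p structure of the original system are not reproduced here, and the camera index and port are assumptions.

        import cv2
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class MJPEGHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # Answer every GET with an endless multipart stream of JPEG frames.
                self.send_response(200)
                self.send_header("Content-Type", "multipart/x-mixed-replace; boundary=frame")
                self.end_headers()
                cap = cv2.VideoCapture(0)          # local camera as the video source
                try:
                    while True:
                        ok, frame = cap.read()
                        if not ok:
                            break
                        ok, jpg = cv2.imencode(".jpg", frame)
                        if not ok:
                            continue
                        self.wfile.write(b"--frame\r\nContent-Type: image/jpeg\r\n\r\n")
                        self.wfile.write(jpg.tobytes())
                        self.wfile.write(b"\r\n")
                finally:
                    cap.release()

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8080), MJPEGHandler).serve_forever()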

  5. Authentication for Propulsion Test Streaming Video

    Data.gov (United States)

    National Aeronautics and Space Administration — A streaming video system was developed and implemented at SSC to support various propulsion projects at SSC. These projects included J-2X and AJ-26 rocket engine...

  6. The use of student-driven video projects as an educational and outreach tool

    Science.gov (United States)

    Bamzai, A.; Farrell, W.; Klemm, T.

    2014-12-01

    With recent technological advances, the barriers to filmmaking have been lowered, and it is now possible to record and edit video footage with a smartphone or a handheld camera and free software. Students accustomed to documenting their every-day experiences for multimedia-rich social networking sites feel excited and creatively inspired when asked to take on ownership of more complex video projects. With a small amount of guidance on shooting primary and secondary footage and an overview of basic interview skills, students are self-motivated to identify the learning themes with which they resonate most strongly and record their footage in a way that is true to their own experience. The South Central Climate Science Center (SC-CSC) is one of eight regional centers formed by the U.S. Department of the Interior in order to provide decision makers with the science, tools, and information they need to address the impacts of climate variability and change on their areas of responsibility. An important component of this mission is to innovate in the areas of translational science and science communication. This presentation will highlight how the SC-CSC used student-driven video projects to document our Early Career Researcher Workshop and our Undergraduate Internship for Underrepresented Minorities. These projects equipped the students with critical thinking and project management skills, while also providing a finished product that the SC-CSC can use for future outreach purposes.

  7. Video processing project

    CSIR Research Space (South Africa)

    Globisch, R

    2009-03-01

    Video processing source code for algorithms and tools used in software media pipelines (e.g. image scalers, colour converters, etc.). The currently available source code is written in C++ with its associated libraries and DirectShow filters.

  8. Video - Real Rights: Decentralization and Women in South Asia ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Material from the videos are based on research projects lead by the Centre for Development Studies, ForestAction, Rural Support Programmes, Society for Promoting Participative Eco-System Management, and UNATI – Organization for Development Education. Photography and video are by Jason Taylor. Videos are in ...

  9. The implementation of Project-Based Learning in courses Audio Video to Improve Employability Skills

    Science.gov (United States)

    Sulistiyo, Edy; Kustono, Djoko; Purnomo; Sutaji, Eddy

    2018-04-01

    This paper presents project-based learning (PjBL) in the Audio Video course of the Electrical Engineering study programme at Universitas Negeri Surabaya, which consists of two parts, namely the design of an audio-video prototype and project-based assessment activities tailored to 21st-century skills in the form of employability skills. The purpose of this learning innovation is to apply in lab work what is obtained in the theory classes. The PjBL aims to motivate students by centering teaching on problems in accordance with the world of work. The steps of the learning include: determine the fundamental questions, design, develop a schedule, monitor the learners and their progress, test the results, evaluate the experience, project assessment, and product assessment. The results of the research showed the following levels of mastery: the ability to design tasks (78.6%), technical planning (39.3%), creativity (42.9%), innovation (46.4%), problem-solving skills (57.1%), communication skills (75%), oral expression (75%), searching for and understanding information (64.3%), collaborative work skills (71.4%), and classroom conduct (78.6%). In conclusion, instructors have to reflect and make improvements in the aspects with a level of skill mastery below 60%, both in the application of project-based learning and in the Audio Video course.

  10. Secured web-based video repository for multicenter studies.

    Science.gov (United States)

    Yan, Ling; Hicks, Matt; Winslow, Korey; Comella, Cynthia; Ludlow, Christy; Jinnah, H A; Rosen, Ami R; Wright, Laura; Galpern, Wendy R; Perlmutter, Joel S

    2015-04-01

    We developed a novel secured web-based dystonia video repository for the Dystonia Coalition, part of the Rare Disease Clinical Research network funded by the Office of Rare Diseases Research and the National Institute of Neurological Disorders and Stroke. A critical component of phenotypic data collection for all projects of the Dystonia Coalition includes a standardized video of each participant. We now describe our method for collecting, serving and securing these videos that is widely applicable to other studies. Each recruiting site uploads standardized videos to a centralized secured server for processing to permit website posting. The streaming technology used to view the videos from the website does not allow downloading of video files. With appropriate institutional review board approval and agreement with the hosting institution, users can search and view selected videos on the website using customizable, permissions-based access that maintains security yet facilitates research and quality control. This approach provides a convenient platform for researchers across institutions to evaluate and analyze shared video data. We have applied this methodology for quality control, confirmation of diagnoses, validation of rating scales, and implementation of new research projects. We believe our system can be a model for similar projects that require access to common video resources. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Design and Utilization of Video Surveillance System for Project Construction in Guangle Freeway

    Institute of Scientific and Technical Information of China (English)

    李前程

    2012-01-01

    The means of building up a video surveillance system for project construction is described in this paper in terms of network structure, choice of surveillance technique, system software, etc., which enables large-scale networked application of the video surveillance system both at the project managing company and at dispersed monitoring points. In actual application, it effectively solves problems such as network transmission and the compatibility and openness of the camera system.

  12. Video Inspired the Radio Star: Interdisciplinary Projects for Media Arts and Music

    Science.gov (United States)

    Giebelhausen, Robin

    2017-01-01

    Interdisciplinary arts education in music has often included connective lines toward drama, dance, and visual arts. This article will suggest five different projects that could be used to link music to video in order to develop media arts and music interdisciplinary connections.

  13. Noise aliasing in interline-video-based fluoroscopy systems

    International Nuclear Information System (INIS)

    Lai, H.; Cunningham, I.A.

    2002-01-01

    Video-based imaging systems for continuous (nonpulsed) x-ray fluoroscopy use a variety of video formats. Conventional video-camera systems may operate in either interlaced or progressive-scan modes, and CCD systems may operate in interline- or frame-transfer modes. A theoretical model of the image noise power spectrum corresponding to these formats is described. It is shown that with respect to frame-transfer or progressive-readout modes, interline or interlaced cameras operating in a frame-integration mode will result in a spectral shift of 25% of the total image noise power from low spatial frequencies to high. In a field-integration mode, noise power is doubled with most of the increase occurring at high spatial frequencies. The differences are due primarily to the effect of noise aliasing. In interline or interlaced formats, alternate lines are obtained with each video field resulting in a vertical sampling frequency for noise that is one half of the physical sampling frequency. The extent of noise aliasing is modified by differences in the statistical correlations between video fields in the different modes. The theoretical model is validated with experiments using an x-ray image intensifier and CCD-camera system. It is shown that different video modes affect the shape of the noise-power spectrum and therefore the detective quantum efficiency. While the effect on observer performance is not addressed, it is concluded that in order to minimize image noise at the critical mid-to-high spatial frequencies for a specified x-ray exposure, fluoroscopic systems should use only frame-transfer (CCD camera) or progressive-scan (conventional video) formats
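    For readers unfamiliar with the sampling argument behind this abstract, the generic one-dimensional folding relation below (a textbook illustration, not the authors' detailed camera model) shows why reading only alternate lines per field halves the vertical Nyquist frequency for noise and folds the upper part of the noise power spectrum back into the measured band.

```latex
% Generic aliasing relation, assuming a line pitch \Delta y and a pre-sampling
% noise power spectrum NPS(f) band-limited to the full-frame Nyquist frequency.
\[
  f_{\mathrm{Nyq}}^{\mathrm{frame}} = \frac{1}{2\,\Delta y},
  \qquad
  f_{\mathrm{Nyq}}^{\mathrm{field}} = \frac{1}{4\,\Delta y}
\]
% Noise above the field Nyquist frequency folds back into the measured band:
\[
  \mathrm{NPS}_{\mathrm{field}}(f) \;=\; \mathrm{NPS}(f)
  \;+\; \mathrm{NPS}\!\left(\frac{1}{2\,\Delta y} - f\right),
  \qquad 0 \le f \le \frac{1}{4\,\Delta y}.
\]
```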

  14. Interactive Videos Enhance Learning about Socio-Ecological Systems

    Science.gov (United States)

    Smithwick, Erica; Baxter, Emily; Kim, Kyung; Edel-Malizia, Stephanie; Rocco, Stevie; Blackstock, Dean

    2018-01-01

    Two forms of interactive video were assessed in an online course focused on conservation. The hypothesis was that interactive video enhances student perceptions about learning and improves mental models of social-ecological systems. Results showed that students reported greater learning and attitudes toward the subject following interactive video.…

  15. Video game training and the reward system.

    Science.gov (United States)

    Lorenz, Robert C; Gleich, Tobias; Gallinat, Jürgen; Kühn, Simone

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the VS in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.

  16. Staff acceptance of video monitoring for coordination: a video system to support perioperative situation awareness.

    Science.gov (United States)

    Kim, Young Ju; Xiao, Yan; Hu, Peter; Dutton, Richard

    2009-08-01

    To understand staff acceptance of a remote video monitoring system for operating room (OR) coordination. Improved real-time remote visual access to OR may enhance situational awareness but also raises privacy concerns for patients and staff. Survey. A system was implemented in a six-room surgical suite to display OR monitoring video at an access restricted control desk area. Image quality was manipulated to improve staff acceptance. Two months after installation, interviews and a survey were conducted on staff acceptance of video monitoring. About half of all OR personnel responded (n = 63). Overall levels of concerns were low, with 53% rated no concerns and 42% little concern. Top two reported uses of the video were to see if cases are finished and to see if a room is ready. Viewing the video monitoring system as useful did not reduce levels of concern. Staff in supervisory positions perceived less concern about the system's impact on privacy than did those supervised (p < 0.03). Concerns for patient privacy correlated with concerns for staff privacy and performance monitoring. Technical means such as manipulating image quality helped staff acceptance. Manipulation of image quality resulted overall acceptance of monitoring video, with residual levels of concerns. OR nurses may express staff privacy concern in the form of concerns over patient privacy. This study provided suggestions for technological and implementation strategies of video monitoring for coordination use in OR. Deployment of communication technology and integration of clinical information will likely raise concerns over staff privacy and performance monitoring. The potential gain of increased information access may be offset by negative impact of a sense of loss of autonomy.

  17. New Management Tools – From Video Management Systems to Business Decision Systems

    Directory of Open Access Journals (Sweden)

    Emilian Cristian IRIMESCU

    2015-06-01

    Full Text Available In recent decades, management has been characterized by the increased use of Business Decision Systems, also called Decision Support Systems. Moreover, systems that until now were used in a traditional way for simple activities (like security) have migrated to the decision-making area of management. One example is Video Management Systems from the physical security field. This article underlines how Video Management Systems have evolved into Business Decision Systems, what the advantages of their use are, and what the trends in this industry are. The article also analyzes whether Video Management Systems are currently true Business Decision Systems or whether some functions are still missing to rank them at this level.

  18. Unattended digital video surveillance: A system prototype for EURATOM safeguards

    International Nuclear Information System (INIS)

    Chare, P.; Goerten, J.; Wagner, H.; Rodriguez, C.; Brown, J.E.

    1994-01-01

    Ever increasing capabilities in video and computer technology have changed the face of video surveillance. From yesterday's film and analog video tape-based systems, we now emerge into the digital era with surveillance systems capable of digital image processing, image analysis, decision control logic, and random data access features -- all of which provide greater versatility with the potential for increased effectiveness in video surveillance. Digital systems also offer other advantages such as the ability to ''compress'' data, providing increased storage capacities and the potential for allowing longer surveillance periods. Remote surveillance and system-to-system communications are also benefits that can be derived from digital surveillance systems. All of these features are extremely important in today's climate of increasing safeguards activity and decreasing budgets. Los Alamos National Laboratory's Safeguards Systems Group and the EURATOM Safeguards Directorate have teamed to design and implement a prototype surveillance system that will take advantage of the versatility of digital video for facility surveillance and data review. In this paper we familiarize you with system components and features and report on progress in developmental areas such as image compression and region-of-interest processing.

  19. Video Game Training and the Reward System

    Directory of Open Access Journals (Sweden)

    Robert C. Lorenz

    2015-02-01

    Full Text Available Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual towards playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the ventral striatum in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.

  20. Video game training and the reward system

    Science.gov (United States)

    Lorenz, Robert C.; Gleich, Tobias; Gallinat, Jürgen; Kühn, Simone

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the VS in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training. PMID:25698962

  1. Using Student Learning and Development Outcomes to Evaluate a First-Year Undergraduate Group Video Project

    Science.gov (United States)

    Jensen, Murray; Mattheis, Allison; Johnson, Brady

    2012-01-01

    Students in an interdisciplinary undergraduate introductory course were required to complete a group video project focused on nutrition and healthy eating. A mixed-methods approach to data collection involved observing and rating video footage of group work sessions and individual and focus group interviews. These data were analyzed and used to evaluate the effectiveness of the assignment in light of two student learning outcomes and two student development outcomes at the University of Minnesota. Positive results support the continued inclusion of the project within the course, and recommend the assignment to other programs as a viable means of promoting both content learning and affective behavioral objectives. PMID:22383619

  2. Three-dimensional optical reconstruction of vocal fold kinematics using high-speed video with a laser projection system

    Science.gov (United States)

    Luegmair, Georg; Mehta, Daryush D.; Kobler, James B.; Döllinger, Michael

    2015-01-01

    Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds at a high accuracy to allow for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry. PMID:26087485

  3. Energy Systems Integration Facility Videos

    Science.gov (United States)

    Videos from NREL's Energy Systems Integration Facility (ESIF) include: NREL + SolarCity: Maximizing Solar Power on Electrical Grids; Redefining What's Possible for Renewable Energy: Grid Integration; Robot-Powered Reliability Testing at NREL's ESIF Microgrid.

  4. HDR video synthesis for vision systems in dynamic scenes

    Science.gov (United States)

    Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried

    2016-09-01

    High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose a HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
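    A minimal sketch of the fusion step described in the abstract is given below, assuming the differently exposed frames have already been aligned and motion-masked; the hat-shaped weighting, exposure times, and synthetic frames are illustrative assumptions, not the authors' exact choices.

```python
# Sketch of weighted radiance-map fusion (assumes frames are already aligned
# and ghost/motion pixels have been masked out upstream).
import numpy as np

def hat_weight(img8):
    """Triangular weight: largest for mid-gray pixels, small near 0 and 255."""
    return 1.0 - np.abs(img8.astype(np.float64) - 127.5) / 127.5

def fuse_hdr(frames, exposure_times, eps=1e-6):
    """frames: list of aligned uint8 images; exposure_times: seconds (assumed)."""
    num = np.zeros(frames[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(frames, exposure_times):
        w = hat_weight(img)
        radiance = img.astype(np.float64) / t   # linear sensor response assumed
        num += w * radiance
        den += w
    return num / (den + eps)                    # per-pixel HDR radiance estimate

# Synthetic stand-ins for two aligned exposures of the same scene.
short = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
long_ = np.clip(short.astype(np.int32) * 4, 0, 255).astype(np.uint8)
hdr = fuse_hdr([short, long_], exposure_times=[0.005, 0.02])
print(hdr.shape, float(hdr.max()))
```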

  5. Fish4Knowledge collecting and analyzing massive coral reef fish video data

    CERN Document Server

    Chen-Burger, Yun-Heh; Giordano, Daniela; Hardman, Lynda; Lin, Fang-Pang

    2016-01-01

    This book gives a start-to-finish overview of the whole Fish4Knowledge project, in 18 short chapters, each describing one aspect of the project. The Fish4Knowledge project explored the possibilities of big video data, in this case from undersea video. Recording and analyzing 90 thousand hours of video from ten camera locations, the project gives a 3 year view of fish abundance in several tropical coral reefs off the coast of Taiwan. The research system built a remote recording network, over 100 Tb of storage, supercomputer processing, video target detection and tracking, fish species recognition and analysis, a large SQL database to record the results and an efficient retrieval mechanism. Novel user interface mechanisms were developed to provide easy access for marine ecologists, who wanted to explore the dataset. The book is a useful resource for system builders, as it gives an overview of the many new methods that were created to build the Fish4Knowledge system in a manner that also allows readers to see ho...

  6. System design description for the LDUA common video end effector system (CVEE)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The Common Video End Effector System (CVEE), system 62-60, was designed by the Idaho National Engineering Laboratory (INEL) to provide the control interface of the various video end effectors used on the LDUA. The CVEE system consists of a Support Chassis which contains the input and output Opto-22 modules, relays, and power supplies and the Power Chassis which contains the bipolar supply and other power supplies. The combination of the Support Chassis and the Power Chassis make up the CVEE system. The CVEE system is rack mounted in the At Tank Instrument Enclosure (ATIE). Once connected it is controlled using the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center

  7. Effect Through Broadcasting System Access Point For Video Transmission

    Directory of Open Access Journals (Sweden)

    Leni Marlina

    2015-08-01

    Full Text Available Most universities already implement wired and wireless networks that are used to access integrated information systems and the Internet. It is therefore important to study the effect of broadcasting instructional video through access points in a university setting. At every university, the computer network behind the access points still relies on cabling: wired networks require cables to connect computers and carry data between them, while wireless networks connect computers through radio waves. This research assesses how a WLAN access point performs when instructional video is broadcast from a server to clients as a means of learning. The study aims to show how to build a wireless network using an access point, and how to set up a computer server with supporting software that acts as a video server, broadcasting instructional video to clients via the access point.

  8. FPGA Implementation of Video Transmission System Based on LTE

    Directory of Open Access Journals (Sweden)

    Lu Yan

    2015-01-01

    Full Text Available In order to support high-definition video transmission, an implementation of video transmission system based on Long Term Evolution is designed. This system is developed on Xilinx Virtex-6 FPGA ML605 Evaluation Board. The paper elaborates the features of baseband link designed in Xilinx ISE and protocol stack designed in Xilinx SDK, and introduces the process of setting up hardware and software platform in Xilinx XPS. According to test, this system consumes less hardware resource and is able to transmit bidirectional video clearly and stably.

  9. Web Audio/Video Streaming Tool

    Science.gov (United States)

    Guruvadoo, Eranna K.

    2003-01-01

    In order to promote NASA-wide educational outreach program to educate and inform the public of space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more contents to the web by streaming audio/video files. This project proposes a high level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, the prototype of a web-based tool is designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled users interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database while the assets reside on separate repository. The prototype tool is designed using ColdFusion 5.0.

  10. A content-based news video retrieval system: NVRS

    Science.gov (United States)

    Liu, Huayong; He, Tingting

    2009-10-01

    This paper focuses on TV news programs and designs a content-based news video browsing and retrieval system, NVRS, which makes it convenient for users to quickly browse and retrieve news videos by category, such as politics, finance, entertainment, etc. Combining audiovisual features and caption text information, the system automatically segments a complete news program into separate news stories. NVRS supports keyword-based news story retrieval and category-based news story browsing, and generates a key-frame-based video abstract for each story. Experiments show that the story segmentation method is effective and the retrieval is also efficient.

  11. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  12. Intelligent video surveillance systems

    CERN Document Server

    Dufour, Jean-Yves

    2012-01-01

    Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the current millennium. Video analytics is intended to solve the problem of the incapability of exploiting video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects.The aims of this book are to highlight the operational attempts of video analytics, to identify possi

  13. Video - Real Rights: Decentralization and Women in South Asia ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2010-10-20

    Oct 20, 2010 ... Material from the videos are based on research projects lead by the Centre for Development Studies, ForestAction, Rural Support Programmes, Society for Promoting Participative Eco-System Management, and UNATI – Organization for Development Education. Photography and video are by Jason Taylor.

  14. Video copy protection and detection framework (VPD) for e-learning systems

    Science.gov (United States)

    ZandI, Babak; Doustarmoghaddam, Danial; Pour, Mahsa R.

    2013-03-01

    This article reviews and compares copyright protection techniques for digital video files, which can be categorized as content-based and digital-watermarking copy detection. We then describe how to protect a digital video by using a dedicated video data hiding method and algorithm, and how to detect the copyright of a file. Based on a discussion of the directions of video copy detection technology, and drawing on our own research results, we put forward a new video protection and copy detection approach for plagiarism prevention in e-learning systems using video data hiding. Finally, we introduce a framework for video protection and detection in e-learning systems (the VPD framework).

  15. The modular integrated video system (MIVS)

    International Nuclear Information System (INIS)

    Schneider, S.L.; Sonnier, C.S.

    1987-01-01

    The Modular Integrated Video System (MIVS) is being developed for the International Atomic Energy Agency (IAEA) for use in facilities where mains power is available and the separation of the Camera and Recording Control Unit is desirable. The system is being developed under the US Program for Technical Assistance to the IAEA Safeguards (POTAS). The MIVS is designed to be a user-friendly system, allowing operation with minimal effort and training. The system software, through the use of a Liquid Crystal Display (LCD) and four soft keys, leads the inspector through the setup procedures to accomplish the intended surveillance or maintenance task. Review of surveillance data is accomplished with the use of a Portable Review Station. This Review Station will aid the inspector in the review process and determine the number of missed video scenes during a surveillance period

  16. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Directory of Open Access Journals (Sweden)

    Chang Yao-Jen

    2002-01-01

    Full Text Available This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extracting and tracking foreground video objects from the video captured with an active camera. Chroma-key insertion is used to facilitate video objects extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.

  17. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Science.gov (United States)

    Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting

    2002-12-01

    This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extracting and tracking foreground video objects from the video captured with an active camera. Chroma-key insertion is used to facilitate video objects extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
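    The chroma-key step mentioned in the two records above can be sketched as a simple colour mask, as below; the green-screen hue range, file name, and morphological clean-up are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of chroma-key foreground extraction (hue range and file name are
# assumptions; the papers' mosaic-based tracking is not reproduced here).
import cv2
import numpy as np

def chroma_key_foreground(frame_bgr, lower=(35, 60, 60), upper=(85, 255, 255)):
    """Return (mask, foreground); mask is 255 wherever the backdrop colour is absent."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backdrop = cv2.inRange(hsv, np.array(lower, np.uint8), np.array(upper, np.uint8))
    mask = cv2.bitwise_not(backdrop)
    kernel = np.ones((5, 5), np.uint8)                 # clean specks and small holes
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask, cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)

frame = cv2.imread("conference_frame.png")             # placeholder file name
if frame is not None:
    mask, foreground = chroma_key_foreground(frame)
```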

  18. A modular projection autostereoscopic system for stereo cinema

    Science.gov (United States)

    Elkhov, Victor A.; Kondratiev, Nikolai V.; Ovechkis, Yuri N.; Pautova, Larisa V.

    2009-02-01

    The lenticular raster system for glasses-free 3D movie presentation designed by NIKFI was demonstrated commercially in Moscow in the 1940s. The essential shortcoming of this method was the narrow individual viewing zone, since only two images on the film were used. To solve this problem, we propose a digital video projection system with a modular design. The total number of pixels forming the stereo image is increased by using more than one projector. The modular projection autostereoscopic system for showing 3D movies includes a diffuser screen; a lenticular plate located in front of the screen; a projection system consisting of several projectors; and a block that creates the parallax panoramogram fragments. By means of this block, the parallax panoramogram is broken into fragments whose number corresponds to the number of projectors. To make the large lenticular screen, rectangular fragments of inclined raster were joined into a uniform sheet. To obtain the required focal distance of the screen lenses, we used an immersion liquid, an aqueous solution of glycerin. The immersion also substantially decreases the visibility of the fragment joints. An experimental prototype of the modular projection autostereoscopic system was created to validate the proposed system.

  19. A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery

    Science.gov (United States)

    Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.

    2012-02-01

    Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)], can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in realtime, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates mean re-projection accuracy (0.7+/-0.3) pixels and mean target registration error of (2.3+/-1.5) mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.

  20. Video Conference System that Keeps Mutual Eye Contact Among Participants

    Directory of Open Access Journals (Sweden)

    Masahiko Yahagi

    2011-10-01

    Full Text Available A novel video conference system is developed. Suppose that three people A, B, and C attend the video conference; the proposed system enables eye contact between every pair. Furthermore, when B and C chat, A feels as if B and C were facing each other (eye contact appears to be maintained between B and C). In the case of a three-way video conference, each participant's video setup is composed of a half mirror, two video cameras, and two monitors. Each participant watches the other participants' images reflected by the half mirror, and the cameras are set behind the half mirror. Since each participant's image (face) and the camera position are adjusted to lie in the same direction, eye contact is maintained and conversation becomes very natural compared with conventional video conference systems, where participants' eyes do not point toward the other participant. When three participants sit at the vertices of an equilateral triangle, eye contact is maintained even in the situation mentioned above (eye contact between B and C from the viewpoint of A). Eye contact can be maintained not only for two or three participants but for any number of participants, as long as they sit at the vertices of a regular polygon.

  1. Specialized video systems for use in underground storage tanks

    International Nuclear Information System (INIS)

    Heckendom, F.M.; Robinson, C.W.; Anderson, E.K.; Pardini, A.F.

    1994-01-01

    The Robotics Development Groups at the Savannah River Site and the Hanford site have developed remote video and photography systems for deployment in underground radioactive waste storage tanks at Department of Energy (DOE) sites as a part of the Office of Technology Development (OTD) program within DOE. Figure 1 shows the remote video/photography systems in a typical underground storage tank environment. Viewing and documenting the tank interiors and their associated annular spaces is an extremely valuable tool in characterizing their condition and contents and in controlling their remediation. Several specialized video/photography systems and robotic End Effectors have been fabricated that provide remote viewing and lighting. All are remotely deployable into and from the tank, and all viewing functions are remotely operated. Positioning all control components away from the facility prevents the potential for personnel exposure to radiation and contamination. Overview video systems, both monaural and stereo versions, include a camera, zoom lens, camera positioner, vertical deployment system, and positional feedback. Each independent video package can be inserted through a 100 mm (4 in.) diameter opening. A special attribute of these packages is their design to never get larger than the entry hole during operation and to be fully retrievable. The End Effector systems will be deployed on the large robotic Light Duty Utility Arm (LDUA) being developed by other portions of the OTD-DOE programs. The systems implement a multi-functional ''over the coax'' design that uses a single coaxial cable for all data and control signals over the more than 900 foot cable (or fiber optic) link

  2. Video auto stitching in multicamera surveillance system

    Science.gov (United States)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of video stitching automatically in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaic in large scale monitoring application. In this work, we formulate video stitching as a multi-image registration and blending problem, and not all cameras are needed to be calibrated except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then camera pose is estimated and refined. Homography matrix is employed to calculate overlapping pixels and finally implement boundary resample algorithm to blend images. The result of simulation demonstrates the efficiency of our method.
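    A minimal sketch of the registration-and-blending idea described above follows; ORB features are substituted for SURF (SURF ships only with OpenCV's contrib build), the file names are placeholders, and the simple overwrite blend stands in for the paper's boundary resampling step.

```python
# Sketch of pairwise stitching: feature matching, homography estimation with
# RANSAC, and warping. ORB replaces SURF; file names are placeholders.
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]

    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # maps img_b -> img_a

    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (2 * w, h))   # crude canvas size
    canvas[:h, :w] = img_a                               # naive overwrite blend
    return canvas

img_a, img_b = cv2.imread("cam_left.jpg"), cv2.imread("cam_right.jpg")
if img_a is not None and img_b is not None:
    panorama = stitch_pair(img_a, img_b)
```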

  3. Optimization of video capturing and tone mapping in video camera systems

    NARCIS (Netherlands)

    Cvetkovic, S.D.

    2011-01-01

    Image enhancement techniques are widely employed in many areas of professional and consumer imaging, machine vision and computational imaging. Image enhancement techniques used in surveillance video cameras are complex systems involving controllable lenses, sensors and advanced signal processing. In

  4. Video Content Search System for Better Students Engagement in the Learning Process

    Directory of Open Access Journals (Sweden)

    Alanoud Alotaibi

    2014-12-01

    Full Text Available As a component of the e-learning educational process, content plays an essential role. Increasingly, the video-recorded lectures in e-learning systems are becoming more important to learners. In most cases, a single video-recorded lecture contains more than one topic or sub-topic. Therefore, to enable learners to find the desired topic and reduce learning time, e-learning systems need to provide a search capability for searching within the video content. This can be accomplished by enabling learners to identify the video or portion that contains a keyword they are looking for. This research aims to develop a Video Content Search (VCS) system to facilitate searching within educational videos and their contents. A preliminary experiment was conducted on a selected university course. All students needed a system that avoids the time-wasting problem of watching long videos with no significant benefit. The statistics showed that the number of learners increased during the experiment. Future work will include studying the impact of the VCS system on students' performance and satisfaction.

  5. Hybrid compression of video with graphics in DTV communication systems

    OpenAIRE

    Schaar, van der, M.; With, de, P.H.N.

    2000-01-01

    Advanced broadcast manipulation of TV sequences and enhanced user interfaces for TV systems have resulted in an increased amount of pre- and post-editing of video sequences, where graphical information is inserted. However, in the current broadcasting chain, there are no provisions for enabling an efficient transmission/storage of these mixed video and graphics signals and, at this emerging stage of DTV systems, introducing new standards is not desired. Nevertheless, in the professional video...

  6. Facial expression system on video using widrow hoff

    Science.gov (United States)

    Jannah, M.; Zarlis, M.; Mawengkang, H.

    2018-03-01

    Facial expression recognition is an interesting research area. It relates human feelings to computer applications such as human-computer interaction, data compression, facial animation, and face detection from video. The purpose of this research is to create a facial expression system that captures images from a video camera. The system uses the Widrow-Hoff learning method for training and testing images with an Adaptive Linear Neuron (ADALINE) approach. System performance is evaluated by two parameters, detection rate and false positive rate. The system accuracy depends on good technique and on the face positions that are trained and tested.

  7. A Secure and Robust Object-Based Video Authentication System

    Directory of Open Access Journals (Sweden)

    He Dajun

    2004-01-01

    Full Text Available An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling while securely preventing malicious object modifications. The proposed solution can be further incorporated into public key infrastructure (PKI).
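    A heavily simplified sketch of the content-based signature idea is shown below; block-occupancy features stand in for the ART coefficients and coarse quantisation stands in for the error tolerance that the ECC provides in the real system, so this illustrates only the feature-then-hash pipeline, not the paper's scheme.

```python
# Illustrative feature-then-hash pipeline; block-occupancy features and coarse
# quantisation are stand-ins for the paper's ART coefficients and ECC.
import hashlib
import numpy as np

def coarse_features(mask, grid=8):
    """Down-sample a binary object mask to a grid x grid block-occupancy map."""
    h, w = mask.shape
    bh, bw = h // grid, w // grid
    crop = mask[:grid * bh, :grid * bw].astype(np.float64)
    return crop.reshape(grid, bh, grid, bw).mean(axis=(1, 3)).ravel()

def robust_signature(mask, step=0.25):
    """Quantise coarsely (crude stand-in for ECC tolerance), then hash."""
    q = np.round(coarse_features(mask) / step).astype(np.int32)
    return hashlib.sha256(q.tobytes()).hexdigest()

toy_object = np.zeros((240, 320), dtype=np.uint8)
toy_object[60:180, 100:220] = 1                    # synthetic video object mask
print(robust_signature(toy_object))
```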

  8. A Retrieval Optimized Surveillance Video Storage System for Campus Application Scenarios

    Directory of Open Access Journals (Sweden)

    Shengcheng Ma

    2018-01-01

    Full Text Available This paper investigates and analyzes the characteristics of video data and puts forward a campus surveillance video storage system with the university campus as the specific application environment. To address the challenge that content-based video retrieval response times are too long, a key-frame index subsystem is designed. The key frames of a video reflect its main content; extracted from the video, they are associated with the metadata information to establish the storage index. The key-frame index is used in lookup operations while querying. This method greatly reduces the amount of video data that must be read and effectively improves query efficiency. Building on this, we model the storage system with a stochastic Petri net (SPN) and verify the improvement in query performance by quantitative analysis.
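    The key-frame indexing idea can be sketched as below: frames that differ strongly from the previous key frame are kept and stored with their timestamps; the frame-difference criterion, threshold, and in-memory dictionary are illustrative stand-ins for the paper's extraction method and storage backend.

```python
# Sketch of key-frame extraction and a toy index keyed by video id.
import cv2
import numpy as np

def extract_key_frames(path, diff_threshold=30.0):
    """Keep frames whose mean absolute difference to the last key frame is large."""
    cap = cv2.VideoCapture(path)
    key_frames, last_gray = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(cv2.resize(frame, (160, 120)), cv2.COLOR_BGR2GRAY)
        if last_gray is None or np.mean(cv2.absdiff(gray, last_gray)) > diff_threshold:
            key_frames.append((cap.get(cv2.CAP_PROP_POS_MSEC), frame))
            last_gray = gray
    cap.release()
    return key_frames

# Toy in-memory index: video id -> list of (timestamp_ms, key frame).
video_id = "cam42_2018-01-03.mp4"                  # placeholder name
index = {video_id: extract_key_frames(video_id)}
```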

  9. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    International Nuclear Information System (INIS)

    Werry, S.M.

    1995-01-01

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and 101-SY in tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure.

  10. Real-time geo-referenced video mosaicking with the MATISSE system

    DEFF Research Database (Denmark)

    Vincent, Anne-Gaelle; Pessel, Nathalie; Borgetto, Manon

    This paper presents the MATISSE system: Mosaicking Advanced Technologies Integrated in a Single Software Environment. This system aims at producing in-line and off-line geo-referenced video mosaics of seabed given a video input and navigation data. It is based upon several techniques of image...

  11. Review of Interactive Video--Romanian Project Proposal

    Science.gov (United States)

    Onita, Mihai; Petan, Sorin; Vasiu, Radu

    2016-01-01

    In the recent years, the globalization and massification of video education offer involved more and more eLearning scenarios within universities. This article refers to interactive video and proposes an overview of it. We analyze the background information, regarding the eLearning campus used in virtual universities around the world, the MOOC…

  12. Take-home video for adult literacy

    Science.gov (United States)

    Yule, Valerie

    1996-01-01

    In the past, it has not been possible to "teach oneself to read" at home, because learners could not read the books to teach them. Videos and interactive compact discs have changed that situation and challenge current assumptions of the pedagogy of literacy. This article describes an experimental adult literacy project using video technology. The language used is English, but the basic concepts apply to any alphabetic or syllabic writing system. A half-hour cartoon video can help adults and adolescents with learning difficulties. Computer-animated cartoon graphics are attractive to look at, and simplify complex material in a clear, lively way. This video technique is also proving useful for distance learners, children, and learners of English as a second language. Methods and principles are to be extended using interactive compact discs.

  13. A video wireless capsule endoscopy system powered wirelessly: design, analysis and experiment

    International Nuclear Information System (INIS)

    Pan, Guobing; Chen, Jiaoliao; Xin, Wenhui; Yan, Guozheng

    2011-01-01

    Wireless capsule endoscopy (WCE), as a relatively new technology, has brought about a revolution in the diagnosis of gastrointestinal (GI) tract diseases. However, the existing WCE systems are not widely applied in clinic because of the low frame rate and low image resolution. A video WCE system based on a wireless power supply is developed in this paper. This WCE system consists of a video capsule endoscope (CE), a wireless power transmission device, a receiving box and an image processing station. Powered wirelessly, the video CE has the abilities of imaging the GI tract and transmitting the images wirelessly at a frame rate of 30 frames per second (f/s). A mathematical prototype was built to analyze the power transmission system, and some experiments were performed to test the capability of energy transferring. The results showed that the wireless electric power supply system had the ability to transfer more than 136 mW power, which was enough for the working of a video CE. In in vitro experiments, the video CE produced clear images of the small intestine of a pig with the resolution of 320 × 240, and transmitted NTSC format video outside the body. Because of the wireless power supply, the video WCE system with high frame rate and high resolution becomes feasible, and provides a novel solution for the diagnosis of the GI tract in clinic

  14. Guide to Synchronization of Video Systems to IRIG Timing

    Science.gov (United States)

    1992-07-01

    Guide to Synchronization of Video Systems to IRIG Timing. Optical Systems Group, Range Commanders Council, White Sands Missile Range, NM 88002-5110, RCC Document 456-92. This document addresses the broad field of video synchronization to IRIG timing, with emphasis on color synchronization.

  15. A Miniaturized Video System for Monitoring Drosophila Behavior

    Science.gov (United States)

    Bhattacharya, Sharmila; Inan, Omer; Kovacs, Gregory; Etemadi, Mozziyar; Sanchez, Max; Marcu, Oana

    2011-01-01

    Long-term spaceflight may induce a variety of harmful effects in astronauts, resulting in altered motor and cognitive behavior. The stresses experienced by humans in space - most significantly weightlessness (microgravity) and cosmic radiation - are difficult to accurately simulate on Earth. In fact, prolonged and concomitant exposure to microgravity and cosmic radiation can only be studied in space. Behavioral studies in space have focused on model organisms, including Drosophila melanogaster. Drosophila is often used due to its short life span and generational cycle, small size, and ease of maintenance. Additionally, the well-characterized genetics of Drosophila behavior on Earth can be applied to the analysis of results from spaceflights, provided that the behavior in space is accurately recorded. In 2001, the BioExplorer project introduced a low-cost option for researchers: the small satellite. While this approach enabled multiple inexpensive launches of biological experiments, it also imposed stringent restrictions on the monitoring systems in terms of size, mass, data bandwidth, and power consumption. Suggested parameters for size are on the order of 100 mm3 and 1 kg mass for the entire payload. For Drosophila behavioral studies, these engineering requirements are not met by commercially available systems. One system that does meet many requirements for behavioral studies in space is the actimeter. Actimeters use infrared light gates to track the number of times a fly crosses a boundary within a small container (3x3x40 mm). Unfortunately, the apparatus needed to monitor several flies at once would be larger than the capacity of the small satellite. A system is presented, which expands on the actimeter approach to achieve a highly compact, low-power, ultra-low bandwidth solution for simultaneous monitoring of the behavior of multiple flies in space. This also provides a simple, inexpensive alternative to the current systems for monitoring Drosophila

  16. Learning Science Through Digital Video: Views on Watching and Creating Videos

    Science.gov (United States)

    Wade, P.; Courtney, A. R.

    2013-12-01

    In science, the use of digital video to document phenomena, experiments and demonstrations has rapidly increased during the last decade. The use of digital video for science education also has become common with the wide availability of video over the internet. However, as with using any technology as a teaching tool, some questions should be asked: What science is being learned from watching a YouTube clip of a volcanic eruption or an informational video on hydroelectric power generation? What are student preferences (e.g. multimedia versus traditional mode of delivery) with regard to their learning? This study describes 1) the efficacy of watching digital video in the science classroom to enhance student learning, 2) student preferences of instruction with regard to multimedia versus traditional delivery modes, and 3) the use of creating digital video as a project-based educational strategy to enhance learning. Undergraduate non-science majors were the primary focus group in this study. Students were asked to view video segments and respond to a survey focused on what they learned from the segments. Additionally, they were asked about their preference for instruction (e.g. text only, lecture-PowerPoint style delivery, or multimedia-video). A majority of students indicated that well-made video, accompanied with scientific explanations or demonstration of the phenomena was most useful and preferred over text-only or lecture instruction for learning scientific information while video-only delivery with little or no explanation was deemed not very useful in learning science concepts. The use of student generated video projects as learning vehicles for the creators and other class members as viewers also will be discussed.

  17. ALGORITHM OF PLACEMENT OF VIDEO SURVEILLANCE CAMERAS AND ITS SOFTWARE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

    Full Text Available Comprehensive distributed safety, control, and monitoring systems applied by companies and organizations of different ownership structure play a substantial role in the present-day society. Video surveillance elements that ensure image processing and decision making in automated or automatic modes are the essential components of new systems. This paper covers the modeling of video surveillance systems installed in buildings, and the algorithm, or pattern, of video camera placement with due account for nearly all characteristics of buildings, detection and recognition facilities, and cameras themselves. This algorithm will be subsequently implemented as a user application. The project contemplates a comprehensive approach to the automatic placement of cameras that take account of their mutual positioning and compatibility of tasks. The project objective is to develop the principal elements of the algorithm of recognition of a moving object to be detected by several cameras. The image obtained by different cameras will be processed. Parameters of motion are to be identified to develop a table of possible options of routes. The implementation of the recognition algorithm represents an independent research project to be covered by a different article. This project consists in the assessment of the degree of complexity of an algorithm of camera placement designated for identification of cases of inaccurate algorithm implementation, as well as in the formulation of supplementary requirements and input data by means of intercrossing sectors covered by neighbouring cameras. The project also contemplates identification of potential problems in the course of development of a physical security and monitoring system at the stage of the project design development and testing. The camera placement algorithm has been implemented as a software application that has already been pilot tested on buildings and inside premises that have irregular dimensions. The

  18. A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos.

    Science.gov (United States)

    Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian

    2016-04-01

    Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. Results: The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable

  19. User-assisted video segmentation system for visual communication

    Science.gov (United States)

    Wu, Zhengping; Chen, Chun

    2002-01-01

    Video segmentation plays an important role for efficient storage and transmission in visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we divide the video segmentation problem into three separate phases: user-assisted feature point selection, automatic tracking of the feature points, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems and allows a higher level of flexibility of the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, plus a point insertion process to provide the feature points for the next frame's tracking.
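    The second phase, automatic tracking of the selected feature points, can be sketched with pyramidal Lucas-Kanade optical flow as below; the window size and pyramid depth are illustrative, and the user-assisted selection and contour-formation phases are not shown.

```python
# Sketch of phase two: pyramidal Lucas-Kanade tracking of selected points.
import cv2
import numpy as np

def track_points(prev_bgr, next_bgr, points):
    """points: Nx2 float32 array of feature locations in the previous frame."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    p0 = points.reshape(-1, 1, 2).astype(np.float32)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None,
                                                winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return p1.reshape(-1, 2)[ok], ok               # surviving points + validity mask
```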

  20. The ASDEX upgrade digital video processing system for real-time machine protection

    Energy Technology Data Exchange (ETDEWEB)

    Drube, Reinhard, E-mail: reinhard.drube@ipp.mpg.de [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany); Neu, Gregor [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany); Cole, Richard H.; Lüddecke, Klaus [Unlimited Computer Systems GmbH, Seeshaupterstr. 15, 82393 Iffeldorf (Germany); Lunt, Tilmann; Herrmann, Albrecht [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany)

    2013-11-15

    Highlights: • We present the Real-Time Video diagnostic system of ASDEX Upgrade. • We show the implemented image processing algorithms for machine protection. • The way to achieve a robust operating multi-threading Real-Time system is described. -- Abstract: This paper describes the design, implementation, and operation of the Video Real-Time (VRT) diagnostic system of the ASDEX Upgrade plasma experiment and its integration with the ASDEX Upgrade Discharge Control System (DCS). Hot spots produced by heating systems that erroneously or accidentally hit the vessel walls, or by objects in the vessel reaching into the outer plasma boundary, show up as bright areas in the videos during and after the event. A system that prevents damage to the machine by allowing intervention in a running discharge of the experiment was proposed and implemented. The VRT was implemented on a multi-core real-time Linux system. Up to 16 analog video channels (color and b/w) are acquired and multiple regions of interest (ROI) are processed on each video frame. Detected critical states can be used to initiate appropriate reactions – e.g. gracefully terminate the discharge. The system has been in routine operation since 2007.
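
    The protection logic described above essentially amounts to checking the bright area inside predefined regions of interest against critical thresholds. A minimal sketch of such a check in Python/NumPy; the function name, threshold values and ROI format are assumptions for illustration, not the VRT code:

        import numpy as np

        def check_rois(frame, rois, intensity_threshold=220, area_threshold=50):
            # frame: 2D array of pixel intensities; rois: list of (x, y, w, h) rectangles.
            # Returns the ROIs whose bright ("hot spot") area exceeds a critical size;
            # a non-empty result could be mapped to a DCS reaction such as a controlled
            # termination of the discharge.
            alarms = []
            for (x, y, w, h) in rois:
                window = frame[y:y + h, x:x + w]
                hot_pixels = np.count_nonzero(window > intensity_threshold)
                if hot_pixels > area_threshold:
                    alarms.append((x, y, w, h))
            return alarms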

  1. Operation quality assessment model for video conference system

    Science.gov (United States)

    Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian

    2018-01-01

    Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business and operation-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system over a time period and provides managers with optimization advice. The simulation results show that the proposed evaluation model offers fast convergence and high prediction accuracy in contrast with the regularized BP neural network alone, and its generalization ability is superior to the LM-BP neural network and the Bayesian BP neural network.

  2. Practical system for generating digital mixed reality video holograms.

    Science.gov (United States)

    Song, Joongseok; Kim, Changseob; Park, Hanhoon; Park, Jong-Il

    2016-07-10

    We propose a practical system that can effectively mix the depth data of real and virtual objects by using a Z buffer and can quickly generate digital mixed reality video holograms by using multiple graphics processing units (GPUs). In an experiment, we verify that real objects and virtual objects can be merged naturally at free viewing angles, and that the occlusion problem is handled well. Furthermore, we demonstrate that the proposed system can generate mixed reality video holograms at 7.6 frames per second. Finally, the system performance is further assessed through users' subjective evaluations.
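
    The Z-buffer mixing of real and virtual content reduces, per pixel, to keeping whichever object is closer to the camera before hologram synthesis. A schematic NumPy sketch under that reading; array names and the compositing rule are assumptions, not the authors' GPU implementation:

        import numpy as np

        def z_buffer_mix(real_rgb, real_depth, virtual_rgb, virtual_depth):
            # Keep the closer of the real and virtual surfaces at every pixel, so that
            # occlusions between real and virtual objects are resolved consistently.
            closer_real = real_depth <= virtual_depth          # True where the real object wins
            mixed_rgb = np.where(closer_real[..., None], real_rgb, virtual_rgb)
            mixed_depth = np.minimum(real_depth, virtual_depth)
            return mixed_rgb, mixed_depth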

  3. FPGA-Based Real-Time Motion Detection for Automated Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2016-03-01

    Full Text Available Design of automated video surveillance systems is one of the most demanding tasks in the computer vision community because of their ability to automatically select frames of interest in incoming video streams based on motion detection. This research paper focuses on the real-time hardware implementation of a motion detection algorithm for such vision-based automated surveillance systems. A dedicated VLSI architecture has been proposed and designed for a clustering-based motion detection scheme. The working prototype of a complete standalone automated video surveillance system, including the input camera interface, the designed motion detection VLSI architecture, and the output display interface, with real-time relevant motion detection capabilities, has been implemented on a Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The prototyped system robustly detects relevant motion in real time in live PAL (720 × 576) resolution video streams coming directly from the camera.

  4. Video integrated measurement system. [Diagnostic display devices

    Energy Technology Data Exchange (ETDEWEB)

    Spector, B.; Eilbert, L.; Finando, S.; Fukuda, F.

    1982-06-01

    A Video Integrated Measurement (VIM) System is described which incorporates the use of various noninvasive diagnostic procedures (moire contourography, electromyography, posturometry, infrared thermography, etc.), used individually or in combination, for the evaluation of neuromusculoskeletal and other disorders and their management with biofeedback and other therapeutic procedures. The system provides for measuring individual diagnostic and therapeutic modes, or multiple modes by split screen superimposition, of real time (actual) images of the patient and idealized (ideal-normal) models on a video monitor, along with analog and digital data, graphics, color, and other transduced symbolic information. It is concluded that this system provides an innovative and efficient method by which the therapist and patient can interact in biofeedback training/learning processes and holds considerable promise for more effective measurement and treatment of a wide variety of physical and behavioral disorders.

  5. Utilization of KSC Present Broadband Communications Data System for Digital Video Services

    Science.gov (United States)

    Andrawis, Alfred S.

    2002-01-01

    This report covers a feasibility study of utilizing the present KSC broadband communications data system (BCDS) for digital video services. Digital video services include compressed digital TV delivery and video-on-demand. Furthermore, the study examines the possibility of providing interactive video-on-demand to desktop personal computers via the KSC computer network.

  6. Video Podcasts

    DEFF Research Database (Denmark)

    Nortvig, Anne Mette; Sørensen, Birgitte Holm

    2016-01-01

    This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through...... YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively...

  7. Risk analysis of a video-surveillance system

    NARCIS (Netherlands)

    Rothkrantz, L.; Lefter, I.

    2011-01-01

    The paper describes a surveillance system of cameras installed at lamppost of a military area. The surveillance system has been designed to detect unwanted visitors or suspicious behaviors. The area is composed of streets, building blocks and surrounded by gates and water. The video recordings are

  8. A novel video recommendation system based on efficient retrieval of human actions

    Science.gov (United States)

    Ramezani, Mohsen; Yaghmaee, Farzin

    2016-09-01

    In recent years, the fast growth of online video sharing has raised new issues, such as helping users find what they need in an efficient way. Hence, Recommender Systems (RSs) are used to find the users' most favored items. Finding these items relies on item or user similarities. However, many factors, such as sparsity and cold-start users, affect recommendation quality. In some systems, attached tags are used for searching items (e.g. videos) as personalized recommendation. Differing views and incomplete or inaccurate tags can weaken the performance of these systems. Advances in computer vision techniques can help improve RSs. To this end, content-based search can be used for finding items (here, videos are considered). In such systems, a video is taken from the user in order to find and recommend a list of videos most similar to the query. Because most videos involve humans, we present a novel low-complexity, scalable method to recommend videos based on a model of the action they contain. This method draws on human action retrieval approaches. For modeling human actions, interest points are extracted from each action and their motion information is used to compute the action representation. Moreover, a fuzzy dissimilarity measure is presented to compare videos for ranking them. The experimental results on the HMDB, UCFYT, UCF sport and KTH datasets illustrate that, in most cases, the proposed method reaches better results than commonly used methods.
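
    One plausible reading of the pipeline — motion descriptors built from interest-point displacements plus a fuzzy dissimilarity for ranking — is sketched below; the orientation-histogram representation and the min/max form of the dissimilarity are assumptions for illustration, not the authors' exact model:

        import numpy as np

        def motion_histogram(flow_vectors, bins=8):
            # Quantize interest-point motion directions into a normalized orientation histogram.
            angles = np.arctan2(flow_vectors[:, 1], flow_vectors[:, 0])
            hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
            return hist / max(hist.sum(), 1)

        def fuzzy_dissimilarity(h1, h2):
            # A fuzzy-set style dissimilarity: one minus the ratio of the elementwise
            # minima (intersection) to the elementwise maxima (union) of the two histograms.
            return 1.0 - np.minimum(h1, h2).sum() / max(np.maximum(h1, h2).sum(), 1e-9)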

  9. Video Surveillance using a Multi-Camera Tracking and Fusion System

    OpenAIRE

    Zhang , Zhong; Scanlon , Andrew; Yin , Weihong; Yu , Li; Venetianer , Péter L.

    2008-01-01

    International audience; Usage of intelligent video surveillance (IVS) systems is spreading rapidly. These systems are being utilized in a wide range of applications. In most cases, even in multi-camera installations, the video is processed independently in each feed. This paper describes a system that fuses tracking information from multiple cameras, thus vastly expanding its capabilities. The fusion relies on all cameras being calibrated to a site map, while the individual sensors remain lar...

  10. A video authentication technique

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1987-01-01

    Unattended video surveillance systems are particularly vulnerable to the substitution of false video images into the cable that connects the camera to the video recorder. New technology has made it practical to insert a solid state video memory into the video cable, freeze a video image from the camera, and hold this image as long as desired. Various techniques, such as line supervision and sync detection, have been used to detect video cable tampering. The video authentication technique described in this paper uses the actual video image from the camera as the basis for detecting any image substitution made during the transmission of the video image to the recorder. The technique, designed for unattended video systems, can be used for any video transmission system where a two-way digital data link can be established. The technique uses similar microprocessor circuitry at the video camera and at the video recorder to select sample points in the video image for comparison. The gray scale value of these points is compared at the recorder controller and if the values agree within limits, the image is authenticated. If a significantly different image was substituted, the comparison would fail at a number of points and the video image would not be authenticated. The video authentication system can run as a stand-alone system or at the request of another system
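
    The essence of the scheme — both ends deriving the same sample points and comparing gray levels within a tolerance — can be sketched as follows; the seed-based point selection, the tolerance and the 95% agreement rule are illustrative assumptions, not the published design:

        import numpy as np

        def authenticate_frame(camera_frame, recorder_frame, shared_seed, n_points=64, tolerance=8):
            # Camera and recorder controllers derive the same pseudo-random sample points
            # from a shared seed and compare the gray values found at those points.
            rng = np.random.default_rng(shared_seed)
            h, w = camera_frame.shape
            ys = rng.integers(0, h, n_points)
            xs = rng.integers(0, w, n_points)
            diffs = np.abs(camera_frame[ys, xs].astype(int) - recorder_frame[ys, xs].astype(int))
            agree = np.count_nonzero(diffs <= tolerance)
            return agree >= int(0.95 * n_points)   # True: image authenticated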

  11. Evaluation of video detection systems, volume 1 : effects of configuration changes in the performance of video detection systems.

    Science.gov (United States)

    2009-10-01

    The effects of modifying the configuration of three video detection (VD) systems (Iteris, Autoscope, and Peek) : are evaluated in daytime and nighttime conditions. Four types of errors were used: false, missed, stuck-on, and : dropped calls. The thre...

  12. A practical implementation of free viewpoint video system for soccer games

    Science.gov (United States)

    Suenaga, Ryo; Suzuki, Kazuyoshi; Tezuka, Tomoyuki; Panahpour Tehrani, Mehrdad; Takahashi, Keita; Fujii, Toshiaki

    2015-03-01

    In this paper, we present a free viewpoint video generation system with billboard representation for soccer games. Free viewpoint video generation is a technology that enables users to watch 3-D objects from their desired viewpoints. Practical implementation of free viewpoint video for sports events is in high demand. However, a commercially acceptable system has not yet been developed. The main obstacles are insufficient user-end quality of the synthesized images and highly complex procedures that sometimes require manual operations. In this work, we aim to develop a commercially acceptable free viewpoint video system with a billboard representation. A supposed scenario is that soccer games during the day can be broadcast in 3-D, even in the evening of the same day. Our work is still ongoing. However, we have already developed several techniques to support our goal. First, we captured an actual soccer game at an official stadium, where we used 20 full-HD professional cameras. Second, we have implemented several tools for free viewpoint video generation, as follows. In order to facilitate free viewpoint video generation, all cameras should be calibrated. We calibrated all cameras using checkerboard images and feature points on the field (cross points of the soccer field lines). We extract each player region from the captured images manually. The background region is estimated automatically by observing chrominance changes of each pixel in the temporal domain. Additionally, we have developed a user interface for visualizing free viewpoint video generation using a graphics library (OpenGL), which is suitable not only for commercial TV sets but also for devices such as smartphones. However, the practical system has not yet been completed and our study is still ongoing.
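
    Checkerboard-based calibration of the kind mentioned above is a standard procedure; a compressed sketch of what such a tool typically does with OpenCV (the function name, pattern size and square size are assumed values, and this is not the authors' code):

        import cv2
        import numpy as np

        def calibrate_from_checkerboards(gray_images, pattern_size=(9, 6), square_size=0.025):
            # Build the 3-D coordinates of the checkerboard corners (square_size in metres).
            objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
            objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size
            obj_points, img_points = [], []
            for gray in gray_images:
                found, corners = cv2.findChessboardCorners(gray, pattern_size)
                if found:
                    obj_points.append(objp)
                    img_points.append(corners)
            # Estimate the intrinsic matrix K and distortion coefficients from all detections.
            rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
                obj_points, img_points, gray_images[0].shape[::-1], None, None)
            return K, dist, rms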

  13. QLab 3 show control projects for live performances & installations

    CERN Document Server

    Hopgood, Jeromy

    2013-01-01

    Used from Broadway to Britain's West End, QLab software is the tool of choice for many of the world's most prominent sound, projection, and integrated media designers. QLab 3 Show Control: Projects for Live Performances & Installations is a project-based book on QLab software covering sound, video, and show control. With information on both sound and video system basics and the more advanced functions of QLab such as MIDI show control, new OSC capabilities, networking, video effects, and microphone integration, each chapter's specific projects will allow you to learn the software's capabilitie

  14. VIDEO INFOGRAPHICS FOR SUSTAINABLE DEVELOPMENT (ON THE EXAMPLE OF THE VGTRK PROJECT «RUSSIA IN FIGURES»

    Directory of Open Access Journals (Sweden)

    M. V. Gribok

    2016-01-01

    Full Text Available The dissemination and popularization of knowledge about the country and the world are important tasks of modern society. Without their systematic solution, movement towards sustainable development is impossible. The government's educational outreach to the country's population is carried out mainly through the mass media – primarily television, which, according to polls, is the main source of information and knowledge for 88% of Russians. In order to form objective public perceptions of the country and the world, the state TV channel «Russia 24» created the project «Russia in figures» («World in figures»). This project has existed since 2009. It broadcasts short informational videos with a duration of 60 seconds between news reports, presenting relevant statistical information on various topics: the population of Russia and the world, economy, employment, natural resources, transport, tourism, etc. The objectives of this research are the analysis of the video infographics (animated information graphics) of the project «Russia in figures» («World in figures») from the standpoint of sustainable development, as well as the identification of features of perception and visualization of geographical data in animated infographics, using this project as an example.

  15. A video imaging system and related control hardware for nuclear safeguards surveillance applications

    International Nuclear Information System (INIS)

    Whichello, J.V.

    1987-03-01

    A novel video surveillance system has been developed for safeguards applications in nuclear installations. The hardware was tested at a small experimental enrichment facility located at the Lucas Heights Research Laboratories. The system uses digital video techniques to store, encode and transmit still television pictures over the public telephone network to a receiver located in the Australian Safeguards Office at Kings Cross, Sydney. A decoded, reconstructed picture is then obtained using a second video frame store. A computer-controlled video cassette recorder is used automatically to archive the surveillance pictures. The design of the surveillance system is described with examples of its operation

  16. Representing with Light. Video Projection Mapping for Cultural Heritage

    Science.gov (United States)

    Barbiani, C.; Guerra, F.; Pasini, T.; Visonà, M.

    2018-05-01

    In this paper, we describe a cross-disciplinary process that uses photogrammetric surveys as a precise basis for video projection mapping techniques. Beginning with a solid basis that uses geoinformatics technologies, such as laser scanning and photogrammetric survey, the method sets, as a first step, the physical and geometrical acquisition of the object. Precision and accuracy are the basics that allow the analysis of the artwork, at both small and large scale, to evaluate details and correspondences. Testing contents at different scales of the object, using 3D printed replicas or real architectures, is the second step of the investigation. The core of the process is the use of the collinearity equations within an interactive system such as Max 7, a visual programming language for music and multimedia, so that operators can perform fast image correction directly inside the interactive software. Interactivity also gives the opportunity to easily configure a set of actions that let spectators directly change and control the animation content. The paper goes through the different phases of the research, analysing the results and the progress through a series of events on real architecture and experiments on 3D printed models to test the level of involvement of the audience and the flexibility of the system in terms of content. The idea of using the collinearity equations inside the software Max 7 was developed for the M.Arch final thesis by Massimo Visonà and Tommaso Pasini of the University of Venice (IUAV) in collaboration with the Digital Exhibit Postgraduate Master Course (MDE Iuav).
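
    For reference, the collinearity equations mentioned above relate an object point (X, Y, Z) to its image coordinates (x, y) through the interior orientation (principal point x_0, y_0 and principal distance c) and the exterior orientation (projection centre X_0, Y_0, Z_0 and rotation matrix elements r_ij). In the standard photogrammetric form (the exact formulation used inside Max 7 may differ):

        x = x_0 - c\,\frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}
                          {r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}, \qquad
        y = y_0 - c\,\frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}
                          {r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}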

  17. Realization on the interactive remote video conference system based on multi-Agent

    Directory of Open Access Journals (Sweden)

    Zheng Yan

    2016-01-01

    Full Text Available To let people at different places participate in the same conference and speak and discuss freely, an interactive remote video conferencing system is designed and realized based on multi-Agent collaboration. FEC (forward error correction) and tree P2P technology are first used to build a live conference structure to transfer audio and video data; a branch conference node can then apply to become the interactive focus in order to speak and discuss; the introduction of multi-Agent collaboration technology improves the system's robustness. The experiments showed that, under normal network conditions, the system can support 350 branch conference nodes simultaneously for live broadcasting. The audio and video quality is smooth. The system can therefore support large-scale remote video conferences.

  18. Linking Video and Text via Representations of Narrative

    OpenAIRE

    Salway, Andrew; Graham, Mike; Tomadaki, Eleftheria; Xu, Yan

    2003-01-01

    The ongoing TIWO project is investigating the synthesis of language technologies, like information extraction and corpus-based text analysis, video data modeling and knowledge representation. The aim is to develop a computational account of how video and text can be integrated by representations of narrative in multimedia systems. The multimedia domain is that of film and audio description – an emerging text type that is produced specifically to be informative about the events and objects dep...

  19. Remote stereoscopic video play platform for naked eyes based on the Android system

    Science.gov (United States)

    Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng

    2014-11-01

    As people's quality of life has improved significantly, traditional 2D video technology cannot meet the growing desire for better video quality, which has led to the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To achieve this, we set up a remote stereoscopic video play platform. The platform consists of a server and clients. The server is used for transmission of different formats of video, and the client is responsible for receiving remote video for subsequent decoding and pixel restructuring. We utilize and improve Live555 as the video transmission server. Live555 is a cross-platform open source project which provides solutions for streaming media, such as the RTSP protocol, and supports transmission of multiple video formats. At the receiving end, we use our laboratory's own player. This Android player, which has all the basic functions of an ordinary player and can play normal 2D video, is the basic structure for redevelopment. RTSP is also implemented in this structure for communication. In order to achieve stereoscopic display, we perform pixel rearrangement in the player's decoding part. The decoding part is native code called through the JNI interface, so that video frames can be extracted more efficiently. The video formats that we process are left-right, top-bottom and nine-grid. In the design and development, a number of key technologies from Android application development have been employed, including wireless transmission, pixel restructuring and JNI calls. By employing these key technologies, the design has been completed. After some updates and optimizations, the video player can play remote 3D video well anytime and anywhere and meets users' requirements.
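
    For a left-right (side-by-side) source, the pixel restructuring step amounts to interleaving columns of the two half views in the layout expected by an autostereoscopic panel. A simplified NumPy sketch of the idea; the real player performs this in native code via JNI, and the actual subpixel mapping is panel specific:

        import numpy as np

        def interleave_side_by_side(frame):
            # frame: H x W x 3 array, left view in the left half, right view in the right half.
            # Returns a column-interleaved frame, a common layout for naked-eye 3D panels.
            h, w, _ = frame.shape
            half = w // 2
            left, right = frame[:, :half], frame[:, half:half * 2]
            out = np.empty((h, half * 2, 3), dtype=frame.dtype)
            out[:, 0::2] = left
            out[:, 1::2] = right
            return out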

  20. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    Science.gov (United States)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

    In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE1394) and which can basically compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised to playback, display, and process video flows in an efficient way for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and an IP network. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology and under the control of a TCP-based command network (e.g. for bandwidth occupation control). We report some results here and show the potential use of such a flexible system in third-generation video surveillance systems. We illustrate the interest of the system in a real case study, which is indoor surveillance.

  1. Influence of video compression on the measurement error of the television system

    Science.gov (United States)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require a very large memory capacity. Finding an encoding method with an optimal quality/volume ratio is one of the most pressing problems, due to the urgent need to transfer large amounts of video over various networks. The technology of digital TV signal compression reduces the amount of data used to represent the video stream. Video compression effectively reduces the stream required for transmission and storage. It is important to take into account the uncertainties caused by compression of the video signal when television measuring systems are used. There are many digital compression methods. The aim of the proposed work is to study the influence of video compression on the measurement error in television systems. The measurement error of the object parameter is the main characteristic of television measuring systems. Accuracy characterizes the difference between the measured value and the actual parameter value. Errors caused by the optical system are one source of error in television system measurements. The method of processing the received video signal is also a source of error. The presence of errors leads to large distortions in the case of compression with a constant data stream rate, and increases the amount of data required to transmit or record an image frame in the case of constant quality. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of the television image. This redundancy is caused by the strong correlation between the elements of the image. It is possible to convert an array of image samples into a matrix of coefficients that are not correlated with each other, if one can find a corresponding orthogonal transformation. It is possible to apply entropy coding to these uncorrelated coefficients and achieve a reduction in the digital stream. One can select a transformation such that most of the matrix coefficients will be almost zero for typical images. Excluding these zero coefficients also
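
    In practice, the orthogonal transformation alluded to is usually the two-dimensional discrete cosine transform used by most broadcast codecs; for an N x N block of samples f(i, j) the coefficients are (standard DCT-II definition, given here for reference rather than taken from the paper):

        F(u,v) = \frac{2}{N}\,C(u)\,C(v)\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}
                 f(i,j)\,\cos\frac{(2i+1)u\pi}{2N}\,\cos\frac{(2j+1)v\pi}{2N},
        \qquad C(k)=\begin{cases}1/\sqrt{2}, & k=0\\ 1, & k>0\end{cases}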

  2. High-speed holographic correlation system for video identification on the internet

    Science.gov (United States)

    Watanabe, Eriko; Ikeda, Kanami; Kodate, Kashiko

    2013-12-01

    Automatic video identification is important for indexing, search purposes, and removing illegal material on the Internet. By combining a high-speed correlation engine and web-scanning technology, we developed the Fast Recognition Correlation system (FReCs), a video identification system for the Internet. FReCs is an application that searches through a number of websites with user-generated content (UGC) and detects video content that violates copyright law. In this paper, we describe the FReCs configuration and an approach to investigating UGC websites using FReCs. The paper also illustrates the combination of FReCs with an optical correlation system, which is capable of easily replacing a digital authorization server in FReCs with optical correlation.

  3. Video-based real-time on-street parking occupancy detection system

    Science.gov (United States)

    Bulan, Orhan; Loce, Robert P.; Wu, Wencheng; Wang, YaoRong; Bernal, Edgar A.; Fan, Zhigang

    2013-10-01

    Urban parking management is receiving significant attention due to its potential to reduce traffic congestion, fuel consumption, and emissions. Real-time parking occupancy detection is a critical component of on-street parking management systems, where occupancy information is relayed to drivers via smart phone apps, radio, Internet, on-road signs, or global positioning system auxiliary signals. Video-based parking occupancy detection systems can provide a cost-effective solution to the sensing task while providing additional functionality for traffic law enforcement and surveillance. We present a video-based on-street parking occupancy detection system that can operate in real time. Our system accounts for the inherent challenges that exist in on-street parking settings, including illumination changes, rain, shadows, occlusions, and camera motion. Our method utilizes several components from video processing and computer vision for motion detection, background subtraction, and vehicle detection. We also present three traffic law enforcement applications: parking angle violation detection, parking boundary violation detection, and exclusion zone violation detection, which can be integrated into the parking occupancy cameras as a value-added option. Our experimental results show that the proposed parking occupancy detection method performs in real-time at 5 frames/s and achieves better than 90% detection accuracy across several days of videos captured in a busy street block under various weather conditions such as sunny, cloudy, and rainy, among others.
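
    The motion-detection and background-subtraction components mentioned above can be approximated with a standard mixture-of-Gaussians background model. A minimal OpenCV sketch of per-stall occupancy estimation; the function name, parameter values, ROI handling and the 0.3 occupancy threshold are assumptions, not the published system:

        import cv2
        import numpy as np

        def occupancy_ratios(video_path, parking_rois):
            # Yield, for each frame, the fraction of every parking ROI covered by
            # foreground (non-background) pixels; e.g. a ratio above ~0.3 could be
            # interpreted as "stall occupied".
            subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                            detectShadows=True)
            cap = cv2.VideoCapture(video_path)
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                fg = subtractor.apply(frame)
                _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels (127)
                yield [np.count_nonzero(fg[y:y + h, x:x + w]) / float(w * h)
                       for (x, y, w, h) in parking_rois]
            cap.release()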

  4. Security training with interactive laser-video-disk technology

    International Nuclear Information System (INIS)

    Wilson, D.

    1988-01-01

    DOE, through its contractor EG and G Energy Measurements, Inc., has developed a state-of-the-art interactive-video system for use at the Department of Energy's Central Training Academy. Called the Security Training and Evaluation Shooting System (STRESS), the computer-driven decision shooting system employs the latest in laser-video-disk technology. STRESS is designed to provide realistic and stressful training for security inspectors employed by the DOE and its contractors. The system uses wide-screen video projection, sophisticated scenario-branching technology, and customized video scenarios especially designed for the DOE. Firing a weapon that has been modified to shoot ''laser bullets'' and wearing a special vest that detects ''hits,'' the security inspector encounters adversaries on the wide screen who can shoot or be shot by the inspector in scenarios that demand fast decisions. Based on those decisions, the computer provides instantaneous branching to different scenes, giving the inspector confrontational training with the realism and variability of real life

  5. VME Switch for CERN's PS Analog Video System

    CERN Document Server

    Acebes, I; Heinze, W; Lewis, J; Serrano, J

    2003-01-01

    Analog video signal switching is used in CERN's Proton Synchrotron (PS) complex to route the video signals coming from Beam Diagnostics systems to the Meyrin Control Room (MCR). Traditionally, this has been done with custom electromechanical relay-based cards controlled serially via CAMAC crates. In order to improve the robustness and maintainability of the system, while keeping it analog to preserve the low latency, a VME card based on Analog Devices' AD8116 analog matrix chip has been developed. Video signals go into the front panel and exit the switch through the P2 connector of the VME backplane. The module is a 16 input, 32 output matrix. Larger matrices can be built using more modules and bussing their outputs together, thanks to the high impedance feature of the AD8116. Another VME module takes the selected signals from the P2 connector and performs automatic gain to send them at nominal output level through its front panel. This paper discusses both designs and presents experimental test results.

  6. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  7. Use of Video-Projected Structured Clinical Examination (ViPSCE) instead of the traditional oral (Viva) examination in the assessment of final year medical students.

    Science.gov (United States)

    El Shallaly, Gamal; Ali, Eltayeb

    2004-03-01

    Assessment of medical students using the traditional oral (viva) system has been marred by being highly subjective, non-structured, and biased. The use of the objective structured clinical examination (OSCE) would circumvent these disadvantages. The OSCE is, however, costly and time-consuming, particularly if used for assessment of large numbers of students. The need for another form of examination that enjoys the advantages of the OSCE while avoiding its disadvantages in the face of limited resources has been the inspiration behind this innovative approach. (1) To identify the characteristics of the new Video-Projected Structured Clinical Examination (ViPSCE). (2) To compare the acceptability of ViPSCE and OSCE by students and tutors. (3) To compare the time-effectiveness of ViPSCE and OSCE. We used a slide video projection to assess the surgical knowledge, problem solving and management abilities of 112 final year medical students at Alazhari University, Khartoum, Sudan. Students completed evaluation forms at the end of the examination. The administration of the ViPSCE was smooth and straightforward. Feedback from the students showed that they preferred the ViPSCE to both the traditional oral (viva) examination and the OSCE. The examination time was 2 hours using video projection, compared to the 6 hours that it used to take a class of 112 students to complete a classical OSCE. The ViPSCE is a better replacement for the traditional oral exam. It is much less time-consuming than the traditional OSCE.

  8. Exterior field evaluation of new generation video motion detection systems

    International Nuclear Information System (INIS)

    Malone, T.P.

    1988-01-01

    Recent advancements in video motion detection (VMD) system design and technology have resulted in several new commercial VMD systems. Considerable interest in the new VMD systems has been generated because the systems are advertised to work effectively in exterior applications. Previous VMD systems, when used in an exterior environment, tended to have very high nuisance alarm rates due to weather conditions, wildlife activity and lighting variations. The new VMD systems advertise more advanced processing of the incoming video signal which is aimed at rejecting exterior environmental nuisance alarm sources while maintaining a high detection capability. This paper discusses the results of field testing, in an exterior environment, of two new VMD systems

  9. Sound for digital video

    CERN Document Server

    Holman, Tomlinson

    2013-01-01

    Achieve professional quality sound on a limited budget! Harness all-new, Hollywood-style audio techniques to bring your independent film and video productions to the next level. In Sound for Digital Video, Second Edition, industry experts Tomlinson Holman and Arthur Baum give you the tools and knowledge to apply recent advances in audio capture, video recording, editing workflow, and mixing to your own film or video with stunning results. This fresh edition is chock-full of techniques, tricks, and workflow secrets that you can apply to your own projects from preproduction

  10. Medical students' perceptions of video-linked lectures and video-streaming

    Directory of Open Access Journals (Sweden)

    Karen Mattick

    2010-12-01

    Full Text Available Video-linked lectures allow healthcare students across multiple sites, and between university and hospital bases, to come together for the purposes of shared teaching. Recording and streaming video-linked lectures allows students to view them at a later date and provides an additional resource to support student learning. As part of a UK Higher Education Academy-funded Pathfinder project, this study explored medical students' perceptions of video-linked lectures and video-streaming, and their impact on learning. The methodology involved semi-structured interviews with 20 undergraduate medical students across four sites and five year groups. Several key themes emerged from the analysis. Students generally preferred live lectures at the home site and saw interaction between sites as a major challenge. Students reported that their attendance at live lectures was not affected by the availability of streamed lectures and tended to be influenced more by the topic and speaker than the technical arrangements. These findings will inform other educators interested in employing similar video technologies in their teaching. Keywords: video-linked lecture; video-streaming; student perceptions; decisionmaking; cross-campus teaching.

  11. Specialized video systems for use in waste tanks

    International Nuclear Information System (INIS)

    Anderson, E.K.; Robinson, C.W.; Heckendorn, F.M.

    1992-01-01

    The Robotics Development Group at the Savannah River Site is developing a remote video system for use in underground radioactive waste storage tanks at the Savannah River Site, as part of its site support role. Viewing of the tank interiors and their associated annular spaces is an extremely valuable tool in assessing their condition and controlling their operation. Several specialized video systems have been built that provide remote viewing and lighting, including remotely controlled tank entry and exit. Positioning all control components away from the facility prevents potential personnel exposure to radiation and contamination. The SRS waste tanks are nominal 4.5 million liter (1.3 million gallon) underground tanks used to store liquid high-level radioactive waste generated by the site, awaiting final disposal. The typical waste tank (Figure 1) is of flattened shape (i.e. wider than high). The tanks sit in a dry secondary containment pan. The annular space between the tank wall and the secondary containment wall is continuously monitored for liquid intrusion and periodically inspected and documented. The latter was historically accomplished with remote still photography. The video system includes a camera, zoom lens, camera positioner, and vertical deployment mechanism. The assembly enters through a 125 mm (5 in) diameter opening. A special attribute of the systems is that they never become larger than the entry hole during camera aiming and can always be retrieved. The latest systems are easily deployable to a remote setup point and can extend down vertically 15 meters (50 ft). The systems are expected to be a valuable asset to tank operations

  12. Status, recent developments and perspective of TINE-powered video system, release 3

    International Nuclear Information System (INIS)

    Weisse, S.; Melkumyan, D.; Duval, P.

    2012-01-01

    Experience has shown that imaging software and hardware installations at accelerator facilities need to be changed, adapted and updated on a semi-permanent basis. On this premise the component-based core architecture of Video System 3 was founded. In design and implementation, emphasis was, is, and will be put on flexibility, performance, low latency, modularity, interoperability, use of open source, ease of use as well as reuse, good documentation and multi-platform capability. In the past year, a milestone was reached as Video System 3 entered production level at PITZ, Hasylab and PETRA III. Since then, the development path has been more strongly influenced by production-level experience and customer feedback. In this contribution, we describe the current status, layout, recent developments and perspective of the Video System. Focus will be put on the integration of recording and playback of video sequences into Archive/DAQ, a standalone installation of the Video System on a notebook, as well as experiences running on Windows 7 64-bit. In addition, new client-side multi-platform GUI/application developments using Java are about to hit the surface. Last but not least it must be mentioned that although the implementation of Release 3 is integrated into the TINE control system, it is modular enough that integration into other control systems can be considered. (authors)

  13. ESVD: An Integrated Energy Scalable Framework for Low-Power Video Decoding Systems

    Directory of Open Access Journals (Sweden)

    Wen Ji

    2010-01-01

    Full Text Available Video applications on mobile wireless devices are a challenging task due to the limited capacity of batteries. The complex functionality of video decoding imposes high resource requirements. Thus, power-efficient control has become a more critical design concern as devices integrate complex video processing techniques. Previous works on power-efficient control in video decoding systems often aim at low-complexity design, do not explicitly consider the scalable impact of subfunctions in the decoding process, and seldom consider the relationship with the features of the compressed video data. This paper is dedicated to developing an energy-scalable video decoding (ESVD) strategy for energy-limited mobile terminals. First, ESVD can dynamically adapt to variable energy resources thanks to a device-aware technique. Second, ESVD combines decoder control with the decoded data by classifying the data into different partition profiles according to its characteristics. Third, it introduces utility-theoretical analysis into the resource allocation process, so as to maximize resource utilization. Finally, it adapts to different energy budgets and generates scalable video decoding output on energy-limited systems. Experimental results demonstrate the efficiency of the proposed approach.

  14. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    Science.gov (United States)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

    Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect the features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All this processing takes a great deal of CPU resources, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft; the videos come from infrared (IR) and electro-optical (EO) cameras. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.
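
    The per-frame pipeline that the GPU accelerates — SIFT matching, homography estimation and warping — looks roughly like the following sequential CPU sketch with OpenCV; the function name, the Lowe ratio of 0.75, the RANSAC threshold and the crude blending rule are assumptions, not the paper's implementation:

        import cv2
        import numpy as np

        def stitch_pair(mosaic, new_frame):
            # Register new_frame onto the current mosaic with SIFT features and a RANSAC homography.
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(mosaic, None)
            kp2, des2 = sift.detectAndCompute(new_frame, None)
            matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des2, des1, k=2)
            good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe ratio test
            src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            warped = cv2.warpPerspective(new_frame, H, (mosaic.shape[1], mosaic.shape[0]))
            return np.where(warped > 0, warped, mosaic)   # crude blend: prefer the newest pixels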

  15. Development of the video streaming system for the radiation safety training

    International Nuclear Information System (INIS)

    Uemura, Jitsuya

    2005-01-01

    Radiation workers have to receive radiation safety training every year. It is very hard for them to receive the training given the limited opportunities for training sessions. We therefore developed a new training system using video streaming and opened a web page for the training on our homepage. Every worker can receive the video lecture at any time and at any place using his PC via the Internet. After watching the video, the worker takes a completion examination. If he passes the examination, he is registered as a radiation worker in the database system for radiation control. (author)

  16. Portable digital video surveillance system for monitoring flower-visiting bumblebees

    Directory of Open Access Journals (Sweden)

    Thorsdatter Orvedal Aase, Anne Lene

    2011-08-01

    Full Text Available In this study we used a portable event-triggered video surveillance system for monitoring flower-visiting bumblebees. The system consists of a mini digital video recorder (mini-DVR) with a video motion detection (VMD) sensor which detects changes in the image captured by the camera; an intruding insect triggers the recording immediately. The sensitivity and the detection area are adjustable, which may prevent unwanted recordings. To the best of our knowledge this is the first study using a VMD sensor to monitor flower-visiting insects. Observation of flower-visiting insects has traditionally been carried out by direct observation, which is time demanding, or by continuous video monitoring, which demands a great effort in reviewing the material. A total of 98.5 monitoring hours were conducted. For the mini-DVR with VMD, a total of 35 min were spent reviewing the recordings to locate 75 pollinators, which means ca. 0.35 min of reviewing per monitoring hour. Most pollinators in the order Hymenoptera were identified to species or group level, some were only classified to family (Apidae) or genus (Bombus). The use of the video monitoring system described in the present paper could result in more efficient data sampling and reveal new knowledge in pollination ecology (e.g. species identification and pollinating behaviour).

  17. Toward 3D-IPTV: design and implementation of a stereoscopic and multiple-perspective video streaming system

    Science.gov (United States)

    Petrovic, Goran; Farin, Dirk; de With, Peter H. N.

    2008-02-01

    3D-Video systems allow a user to perceive depth in the viewed scene and to display the scene from arbitrary viewpoints interactively and on-demand. This paper presents a prototype implementation of a 3D-video streaming system using an IP network. The architecture of our streaming system is layered, where each information layer conveys a single coded video signal or coded scene-description data. We demonstrate the benefits of a layered architecture with two examples: (a) stereoscopic video streaming, (b) monoscopic video streaming with remote multiple-perspective rendering. Our implementation experiments confirm that prototyping 3D-video streaming systems is possible with today's software and hardware. Furthermore, our current operational prototype demonstrates that highly heterogeneous clients can coexist in the system, ranging from auto-stereoscopic 3D displays to resource-constrained mobile devices.

  18. Mass-storage management for distributed image/video archives

    Science.gov (United States)

    Franchi, Santina; Guarda, Roberto; Prampolini, Franco

    1993-04-01

    The realization of an image/video database requires a specific design for both the database structures and mass storage management. This issue was addressed in the project for the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog image/video coding techniques with their related parameters, and the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server. Because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management. They allow cataloging devices and modifying device status and device network location. The medium level manages image/video files on a physical basis. It manages file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to meet delivery/visualization requirements and to reduce archiving costs.

  19. A remote educational system in medicine using digital video.

    Science.gov (United States)

    Hahm, Joon Soo; Lee, Hang Lak; Kim, Sun Il; Shimizu, Shuji; Choi, Ho Soon; Ko, Yong; Lee, Kyeong Geun; Kim, Tae Eun; Yun, Ji Won; Park, Yong Jin; Naoki, Nakashima; Koji, Okamura

    2007-03-01

    Telemedicine has opened the door to a wide range of learning experiences and simultaneous feedback to doctors and students at various remote locations. However, there are limitations, such as the lack of approved international standards of ethics. The aim of our study was to establish a telemedical education system through the development of high-quality images, using a digital transfer system on a high-speed network. Using telemedicine, surgical images can be sent not only to domestic areas but also abroad, and opinions regarding surgical procedures can be exchanged between the operating room and a remote place. The Asia Pacific Information Infrastructure (APII) link, a submarine cable between Busan and Fukuoka, was used to connect Korea with Japan, and the Korea Advanced Research Network (KOREN) was used to connect Busan with Seoul. Teleconferencing and video streaming between Hanyang University Hospital in Seoul and Kyushu University Hospital in Japan were realized using the Digital Video Transfer System (DVTS) over an IPv4 network. Four endoscopic surgeries were successfully transmitted between Seoul and Kyushu, while concomitant teleconferences took place between the two throughout the operations. A sufficient bandwidth of 60 Mbps could be maintained for two-line transmissions. The transmitted video image had no frame loss at a rate of 30 images per second. The sound was also clear, and the time delay was less than 0.3 sec. Our experience has demonstrated the feasibility of domestic and international telemedicine. We have established an international medical network with high-quality video transmission over Internet protocol, which is easy to perform, reliable, and economical. Our network system may become a promising tool for worldwide telemedical communication in the future.

  20. Replicas Strategy and Cache Optimization of Video Surveillance Systems Based on Cloud Storage

    Directory of Open Access Journals (Sweden)

    Rongheng Li

    2018-04-01

    Full Text Available With the rapid development of video surveillance technology, and especially the popularity of cloud-based video surveillance applications, video data has begun to grow explosively. However, in cloud-based video surveillance systems, replicas occupy a large amount of storage space, and the slow response to video playback constrains the performance of the system. In this paper, considering the characteristics of video data comprehensively, we propose a dynamic redundant replicas mechanism based on security levels that can dynamically adjust the number of replicas. Based on the location correlation between cameras, this paper also proposes a data cache strategy to improve the response speed of data reading. Experiments illustrate that: (1) our dynamic redundant replicas mechanism can save storage space while ensuring data security; (2) the cache mechanism can predict the playback behaviors of users in advance and improve the response speed of data reading according to the location and time correlation of the front-end cameras; and (3) in terms of cloud-based video surveillance, our proposed approaches significantly outperform existing methods.

  1. Generic Film Forms for Dynamic Virtual Video Synthesis

    NARCIS (Netherlands)

    C.A. Lindley

    1999-01-01

    textabstractThe FRAMES project within the RDN CRC (Cooperative Research Centre for Research Data Networks) is developing an experimental environment for video content-based retrieval and dynamic virtual video synthesis from archives of video data. The FRAMES research prototype is a video synthesis

  2. Pilot Project: analysis, development and projection

    OpenAIRE

    Tapia Abril, Verónica Emilia; Chérrez Rodas, Karina; García Pesántez, Gabriela Rosana; Maldonado Marchán, María Elisa; Bustamante Montesdeoca, José Luis

    2014-01-01

    Since the introduction of ICT into architecture and teaching, educational pedagogies have seen their learning paradigms change. Institutes of higher education have joined this movement and have undergone a process of change by implementing multimedia elements in their subjects. Through the pilot project, educational videos have been developed that aim to meet the highest standards for educational videos described by Van Dam. The project expects to generate educational videos for different depa...

  3. Integrating IPix immersive video surveillance with unattended and remote monitoring (UNARM) systems

    International Nuclear Information System (INIS)

    Michel, K.D.; Klosterbuer, S.F.; Langner, D.C.

    2004-01-01

    Commercially available IPix cameras and software are being researched as a means by which an inspector can be virtually immersed into a nuclear facility. A single IPix camera can provide 360 by 180 degree views with full pan-tilt-zoom capability, and with no moving parts on the camera mount. Immersive video technology can be merged into the current Unattended and Remote Monitoring (UNARM) system, thereby providing an integrated system of monitoring capabilities that tie together radiation, video, isotopic analysis, Global Positioning System (GPS), etc. The integration of the immersive video capability with other monitoring methods already in place provides a significantly enhanced situational awareness to the International Atomic Energy Agency (IAEA) inspectors.

  4. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  5. On-line video image processing system for real-time neutron radiography

    Energy Technology Data Exchange (ETDEWEB)

    Fujine, S; Yoneda, K; Kanda, K [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.

    1983-09-15

    The neutron radiography system installed at the E-2 experimental hole of the KUR (Kyoto University Reactor) has been used for some NDT applications in the nuclear field. The on-line video image processing system of this facility is introduced in this paper. A 0.5 mm resolution in images was obtained by using a super high quality TV camera developed for X-radiography viewing an NE-426 neutron-sensitive scintillator. The image of the NE-426 on a CRT can be observed directly and visually, thus many test samples can be sequentially observed when necessary for industrial purposes. The video image signals from the TV camera are digitized, with a 33 ms delay, through a video A/D converter (ADC) and can be stored in the image buffer (32 KB DRAM) of a microcomputer (Z-80) system. The digitized pictures are taken with 16 levels of gray scale and resolved to 240 x 256 picture elements (pixels) on a monochrome CRT, with the capability also to display 16 distinct colors on an RGB video display. The direct image of this system could be satisfactory for penetrating the side plates to test MTR type reactor fuels and for the investigation of moving objects.

  6. Engineering task plan for flammable gas atmosphere mobile color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1995-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and testing of the mobile video camera systems. The color video camera systems will be used to observe and record the activities within the vapor space of a tank on a limited exposure basis. The units will be fully mobile and designed for operation in the single-shell flammable gas producing tanks. The objective of this task is to provide two mobile camera systems for use in flammable gas producing single-shell tanks (SSTs) for the Flammable Gas Tank Safety Program. The camera systems will provide observation, video recording, and monitoring of the activities that occur in the vapor space of the tanks in which they are deployed. The camera systems will be designed to be totally mobile, capable of deployment up to 6.1 meters into a 4 inch (minimum) riser.

  7. Virtual Video Prototyping for Healthcare Systems

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Bossen, Claus; Lykke-Olesen, Andreas

    2002-01-01

    Virtual studio technology enables the mixing of physical and digital 3D objects and thus expands the way of representing design ideas in terms of virtual video prototypes, which offers new possibilities for designers by combining elements of prototypes, mock-ups, scenarios, and conventional video. In this article we report our initial experience in the domain of pervasive healthcare with producing virtual video prototypes and using them in a design workshop. Our experience has been predominantly favourable. The production of a virtual video prototype forces the designers to decide very concrete design issues, since one cannot avoid paying attention to the physical, real-world constraints and to details in the usage-interaction between users and technology. From the users' perspective, during our evaluation of the virtual video prototype, we experienced how it enabled users to relate...

  8. Cost-Effective Video Filtering Solution for Real-Time Vision Systems

    Directory of Open Access Journals (Sweden)

    Karl Martin

    2005-08-01

    This paper presents an efficient video filtering scheme and its implementation in a field-programmable logic device (FPLD). Since the proposed nonlinear, spatiotemporal filtering scheme is based on order statistics, its efficient implementation benefits from a bit-serial realization. The utilization of both the spatial and temporal correlation characteristics of the processed video significantly increases the computational demands on this solution, and thus, implementation becomes a significant challenge. Simulation studies reported in this paper indicate that the proposed pipelined bit-serial FPLD filtering solution can achieve speeds of up to 97.6 Mpixels/s and consumes 1700 to 2700 logic cells for the speed-optimized and area-optimized versions, respectively. Thus, the filter area represents only 6.6 to 10.5% of the Altera STRATIX EP1S25 device available on the Altera Stratix DSP evaluation board, which has been used to implement a prototype of the entire real-time vision system. As such, the proposed adaptive video filtering scheme is both practical and attractive for real-time machine vision and surveillance systems as well as conventional video and multimedia applications.
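
    As an illustration of the order-statistics filtering idea behind this design (not the paper's bit-serial FPLD implementation), the following minimal sketch applies a 3x3x3 spatiotemporal median to a short grayscale clip with NumPy; the neighbourhood size and edge padding are assumptions.

    ```python
    import numpy as np

    def spatiotemporal_median(frames: np.ndarray) -> np.ndarray:
        """frames: (T, H, W) uint8 video clip. Returns a clip of the same shape in which
        each pixel is replaced by the median of its 3x3x3 spatiotemporal neighbourhood."""
        pad = np.pad(frames, 1, mode="edge")              # pad time, rows and columns by 1
        T, H, W = frames.shape
        stack = []
        for dt in range(3):                               # gather the 27 shifted copies
            for dy in range(3):
                for dx in range(3):
                    stack.append(pad[dt:dt + T, dy:dy + H, dx:dx + W])
        return np.median(np.stack(stack), axis=0).astype(frames.dtype)

    # usage sketch: filtered = spatiotemporal_median(noisy_clip)  # noisy_clip: (T, H, W) uint8
    ```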

  9. An Emerging Learning Design for Student-Generated "iVideos"

    Science.gov (United States)

    Kearney, Matthew; Jones, Glynis; Roberts, Lynn

    2012-01-01

    This paper describes an emerging learning design for a popular genre of learner-generated video projects: "Ideas Videos" or "iVideos." These advocacy-style videos are short, two-minute, digital videos designed "to evoke powerful experiences about educative ideas" (Wong, Mishra, Koehler & Siebenthal, 2007, p1). We…

  10. Real-Time Projection-Based Augmented Reality System for Dynamic Objects in the Performing Arts

    Directory of Open Access Journals (Sweden)

    Jaewoon Lee

    2015-02-01

    Full Text Available This paper describes the case study of applying projection-based augmented reality, especially for dynamic objects in live performing shows, such as plays, dancing, or musicals. Our study aims to project imagery correctly inside the silhouettes of flexible objects, in other words, live actors or the surface of actor’s costumes; the silhouette transforms its own shape frequently. To realize this work, we implemented a special projection system based on the real-time masking technique, that is to say real-time projection-based augmented reality system for dynamic objects in performing arts. We installed the sets on a stage for live performance, and rehearsed particular scenes of a musical. In live performance, using projection-based augmented reality technology enhances technical and theatrical aspects which were not possible with existing video projection techniques. The projected images on the surfaces of actor’s costume could not only express the particular scene of a performance more effectively, but also lead the audience to an extraordinary visual experience.
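
    A minimal sketch of the real-time masking step described above: the projector frame shows content only inside the actor's silhouette and stays dark elsewhere. The silhouette mask is assumed to come from some tracking sensor (the record does not say which), and the function name is illustrative.

    ```python
    import numpy as np

    def mask_projection(content: np.ndarray, silhouette: np.ndarray) -> np.ndarray:
        """content: (H, W, 3) uint8 image to be projected.
        silhouette: (H, W) mask, non-zero inside the actor's outline.
        Returns the projector frame: content inside the silhouette, black (no light) outside."""
        mask = (silhouette > 0)[..., None]        # (H, W, 1) boolean, broadcast over channels
        return np.where(mask, content, 0).astype(np.uint8)
    ```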

  11. Remote Video Supervision in Adapted Physical Education

    Science.gov (United States)

    Kelly, Luke; Bishop, Jason

    2013-01-01

    Supervision for beginning adapted physical education (APE) teachers and inservice general physical education teachers who are learning to work with students with disabilities poses a number of challenges. The purpose of this article is to describe a project aimed at developing a remote video system that could be used by a university supervisor to…

  12. Inexpensive remote video surveillance system with microcomputer and solar cells

    International Nuclear Information System (INIS)

    Guevara Betancourt, Edder

    2013-01-01

    A low-cost prototype for remote video surveillance is developed on a Raspberry Pi (RPI) board. Additionally, the theoretical basis for energy independence is developed through solar cells and a battery bank. Some existing commercial monitoring systems and their components are studied and analyzed: cameras, communication devices (WiFi and 3G), free software packages for video surveillance, control mechanisms, and the theory of stand-alone photovoltaic systems. A series of steps is followed to implement the module and to install, configure and test each of the hardware and software elements that make it up, exploring the feasibility of adding intelligence to the system with the chosen software. Events generated by motion detection can be viewed, archived and extracted in a simple, intuitive way. Implementing the surveillance module with a microcomputer and motion detection software (Zoneminder) has proven to be an option with a lot of potential, since the platform for monitoring and recording data provides all the tools needed for robust and secure surveillance. (author)

  13. Real-time video streaming system for LHD experiment using IP multicast

    International Nuclear Information System (INIS)

    Emoto, Masahiko; Yamamoto, Takashi; Yoshida, Masanobu; Nagayama, Yoshio; Hasegawa, Makoto

    2009-01-01

    In order to accomplish smooth cooperative research, remote participation plays an important role. For this purpose, the authors have been developing various applications for remote participation in the LHD (Large Helical Device) experiments, such as a Web interface for visualization of acquired data. The video streaming system is one of them. It is useful for grasping the status of the ongoing experiment remotely, so we provide the video images displayed in the control room to remote users. However, usual streaming servers cannot send video images without delay. The delay changes depending on how the images are sent, but even a little delay might become critical if the researchers use the images to adjust the diagnostic devices. One of the main causes of delay is the procedure of compressing and decompressing the images. Furthermore, commonly used video compression methods are lossy; they remove less important information to reduce the size. However, lossy images cannot be used for physical analysis because the original information is lost. Therefore, video images for remote participation should be sent without compression in order to minimize the delay and to supply high quality images suitable for physical analysis. However, sending uncompressed video images requires large network bandwidth. For example, sending 5 frames of 16-bit color SXGA images per second requires about 100 Mbps. Furthermore, the video images must be sent to several remote sites simultaneously. It is hard for a server PC to handle such a large data volume. To cope with this problem, the authors adopted IP multicast to send video images to several remote sites at once. Because IP multicast packets are sent only to the networks on which clients want the data, the load on the server does not depend on the number of clients and the network load is reduced. In this paper, the authors discuss the feasibility of a high bandwidth video streaming system using IP multicast. (author)
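
    A minimal sketch, in Python, of multicasting raw frame data over UDP as described above; the group address, port, payload size and header layout are placeholders, and a production system would also need receiver-side reassembly and loss handling.

    ```python
    import socket
    import struct

    GROUP, PORT = "239.1.1.1", 5004       # placeholder multicast group and port
    MAX_PAYLOAD = 1400                     # keep datagrams under a typical Ethernet MTU

    def send_frame(sock: socket.socket, frame_bytes: bytes, frame_id: int) -> None:
        """Split one raw frame into numbered UDP datagrams and multicast them."""
        total = (len(frame_bytes) + MAX_PAYLOAD - 1) // MAX_PAYLOAD
        for i in range(total):
            chunk = frame_bytes[i * MAX_PAYLOAD:(i + 1) * MAX_PAYLOAD]
            header = struct.pack("!IHH", frame_id, i, total)   # frame id, chunk index, chunk count
            sock.sendto(header + chunk, (GROUP, PORT))

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)   # limit hop count
    # usage sketch: send_frame(sock, raw_sxga_frame_bytes, frame_id=0)
    ```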

  14. Data compression systems for home-use digital video recording

    NARCIS (Netherlands)

    With, de P.H.N.; Breeuwer, M.; van Grinsven, P.A.M.

    1992-01-01

    The authors focus on image data compression techniques for digital recording. Image coding for storage equipment covers a large variety of systems because the applications differ considerably in nature. Video coding systems suitable for digital TV and HDTV recording and digital electronic still

  15. The design of red-blue 3D video fusion system based on DM642

    Science.gov (United States)

    Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao

    2016-10-01

    To address the uncertainty of traditional 3D video capturing, including camera focal lengths and the distance and angle parameters between two cameras, a red-blue 3D video fusion system based on the DM642 hardware processing platform is designed with parallel optical axes. In view of the brightness reduction of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, together with a luminance component processing method based on the YCbCr color space. The BIOS real-time operating system is used to improve real-time performance. The video processing circuit built around the DM642 enhances the brightness of the images, converts the video signals from YCbCr to RGB, extracts the R component from one camera and, synchronously, the G and B components from the other, and finally outputs the fused 3D images. Real-time adjustments such as translation and scaling of the two color components are realized through serial communication between the VC software and BIOS. By adding the red and blue components in this way, the system reduces the loss of the chrominance components and keeps the picture color saturation above 95% of the original. An optimized enhancement algorithm that reduces the amount of data fused during video processing shortens the fusion time and improves the viewing effect. Experimental results show that the system can capture images at near distance, output red-blue 3D video, and present a pleasant experience to audiences wearing red-blue glasses.
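
    A software sketch of the red-blue fusion step (the paper implements it on the DM642 in hardware): the R component is taken from the left camera and the G, B components from the right camera after both frames are in RGB, with a small brightness gain standing in for the luminance-enhancement stage. The gain value and channel ordering are assumptions.

    ```python
    import numpy as np

    def fuse_red_blue(left_rgb: np.ndarray, right_rgb: np.ndarray, gain: float = 1.1) -> np.ndarray:
        """left_rgb, right_rgb: (H, W, 3) uint8 frames from the two parallel cameras.
        Takes the R channel from the left view and the G, B channels from the right view,
        applies a small brightness gain, and returns the red-blue 3D output frame."""
        out = np.empty_like(left_rgb)
        out[..., 0] = left_rgb[..., 0]        # red component from the left camera
        out[..., 1:] = right_rgb[..., 1:]     # green and blue components from the right camera
        return np.clip(out.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    ```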

  16. Ranking Highlights in Personal Videos by Analyzing Edited Videos.

    Science.gov (United States)

    Sun, Min; Farhadi, Ali; Chen, Tseng-Hung; Seitz, Steve

    2016-11-01

    We present a fully automatic system for ranking domain-specific highlights in unconstrained personal videos by analyzing online edited videos. A novel latent linear ranking model is proposed to handle noisy training data harvested online. Specifically, given a targeted domain such as "surfing," our system mines the YouTube database to find pairs of raw and their corresponding edited videos. Leveraging the assumption that an edited video is more likely to contain highlights than the trimmed parts of the raw video, we obtain pair-wise ranking constraints to train our model. The learning task is challenging due to the amount of noise and variation in the mined data. Hence, a latent loss function is incorporated to mitigate the issues caused by the noise. We efficiently learn the latent model on a large number of videos (about 870 min in total) using a novel EM-like procedure. Our latent ranking model outperforms its classification counterpart and is fairly competitive compared with a fully supervised ranking system that requires labels from Amazon Mechanical Turk. We further show that a state-of-the-art audio feature, mel-frequency cepstral coefficients (MFCC), is inferior to a state-of-the-art visual feature. By combining both audio-visual features, we obtain the best performance in dog activity, surfing, skating, and viral video domains. Finally, we show that impressive highlights can be detected without additional human supervision for seven domains (i.e., skating, surfing, skiing, gymnastics, parkour, dog activity, and viral video) in unconstrained personal videos.
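
    The pair-wise constraint that edited segments should score higher than trimmed segments can be written as a hinge loss over a linear scoring model, roughly as sketched below; this omits the authors' latent variables and EM-like training and is only meant to show the ranking constraint itself.

    ```python
    import numpy as np

    def pairwise_hinge_loss(w: np.ndarray, pos: np.ndarray, neg: np.ndarray, margin: float = 1.0):
        """pos, neg: (N, D) features of segments kept in the edited video (pos) and segments
        trimmed away (neg). Returns the hinge loss and its gradient for a linear scoring model
        s(x) = w.x that enforces s(pos) >= s(neg) + margin for each pair."""
        diff = margin - (pos - neg) @ w          # (N,) margin violations
        active = diff > 0                        # pairs that violate the constraint
        loss = np.sum(diff[active])
        grad = -np.sum((pos - neg)[active], axis=0)
        return loss, grad
    ```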

  17. Localization of cask and plug remote handling system in ITER using multiple video cameras

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, João, E-mail: jftferreira@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Vale, Alberto [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Ribeiro, Isabel [Laboratório de Robótica e Sistemas em Engenharia e Ciência - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2013-10-15

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a scale 1:25 of the divertor level of Tokamak building.

  18. Localization of cask and plug remote handling system in ITER using multiple video cameras

    International Nuclear Information System (INIS)

    Ferreira, João; Vale, Alberto; Ribeiro, Isabel

    2013-01-01

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a scale 1:25 of the divertor level of Tokamak building

  19. Research of real-time video processing system based on 6678 multi-core DSP

    Science.gov (United States)

    Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang

    2017-10-01

    In the information age, the rapid development of intelligent video processing and its increasingly complex algorithms pose a powerful challenge to processor performance. In this article, an FPGA + TMS320C6678 architecture integrates image defogging, image fusion, image stabilization and image enhancement into an organic whole, with good real-time behavior and superior performance. It overcomes the defects of traditional video processing systems, whose functions are simple and whose products are single-purpose, and addresses video applications such as security monitoring and video surveillance, so that video monitoring can be used to full effect and enterprise economic benefits are improved.

  20. LIDAR-INCORPORATED TRAFFIC SIGN DETECTION FROM VIDEO LOG IMAGES OF MOBILE MAPPING SYSTEM

    Directory of Open Access Journals (Sweden)

    Y. Li

    2016-06-01

    A Mobile Mapping System (MMS) simultaneously collects Lidar points and video log images of a scene with a laser profiler and a digital camera. Besides the textural details of the video log images, it also captures the 3D geometric shape of the point cloud. It is widely used by transportation agencies to survey street views and roadside transportation infrastructure such as traffic signs and guardrails. Although much literature on traffic sign detection is available, it focuses on either the Lidar or the imagery data of traffic signs alone. Based on the well-calibrated extrinsic parameters of the MMS, 3D Lidar points are, for the first time, incorporated into 2D video log images to enhance the detection of traffic signs both physically and visually. Based on the local elevation, the 3D pavement area is first located. Within a certain distance and height of the pavement, points belonging to overhead and roadside traffic signs can be obtained according to the traffic sign setup specifications of different transportation agencies. The 3D candidate planes of traffic signs are then fitted using RANSAC plane-fitting of those points. By projecting the candidate planes onto the image, Regions of Interest (ROIs) of traffic signs are found physically with the geometric constraints between laser profiling and camera imaging. Random forest learning of the visual color and shape features of traffic signs is adopted to validate the sign ROIs from the video log images. The sequential occurrence of a traffic sign among consecutive video log images is defined by the geometric constraint of the imaging geometry and the GPS movement. Candidate ROIs are predicted in this temporal context to double-check the salient traffic signs among video log images. The proposed algorithm is tested on a diverse set of scenarios on the interstate highway G-4 near Beijing, China under varying lighting conditions and occlusions. Experimental results show the proposed algorithm enhances the...
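
    A sketch of the RANSAC plane-fitting step used to propose candidate sign planes from the roadside Lidar points; the iteration count and inlier tolerance are assumptions.

    ```python
    import numpy as np

    def ransac_plane(points: np.ndarray, iters: int = 200, tol: float = 0.05):
        """points: (N, 3) Lidar points. Returns (normal, d, inlier_mask) for the best plane
        n.x + d = 0 found by RANSAC, with inlier distance tolerance `tol` in metres."""
        rng = np.random.default_rng(0)
        best = (None, None, np.zeros(len(points), dtype=bool))
        for _ in range(iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            if np.linalg.norm(n) < 1e-9:
                continue                          # degenerate (collinear) sample, resample
            n = n / np.linalg.norm(n)
            d = -n @ p0
            inliers = np.abs(points @ n + d) < tol
            if inliers.sum() > best[2].sum():
                best = (n, d, inliers)
        return best
    ```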

  1. Development and application of remote video monitoring system for combine harvester based on embedded Linux

    Science.gov (United States)

    Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui

    2017-01-01

    Combine harvesters usually work in sparsely populated areas with a harsh environment. In order to achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux is developed. The system uses USB cameras to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab and cutting table. The JPEG image compression standard is used to compress the video data, and the monitoring picture is then transferred to a remote monitoring center over the network for long-range monitoring and management. The paper first describes the motivation for the system, then briefly introduces the realization of the hardware and software, and then details the configuration and compilation of the embedded Linux operating system and the compilation and porting of the video server program. Finally, the equipment was installed and commissioned on a combine harvester and the system was tested. In the experimental tests, the remote video monitoring system for the combine harvester achieved 30 fps at a resolution of 800x600, and the response delay over the public network was about 40 ms.
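
    A minimal sketch of the capture-compress-transmit loop described above, using OpenCV for JPEG compression and a plain TCP socket; the monitoring-centre address, port, JPEG quality and the length-prefixed framing are assumptions rather than the paper's protocol.

    ```python
    import socket
    import struct

    import cv2

    HOST, PORT = "192.168.1.100", 9000    # placeholder monitoring-centre address

    def stream_camera(device: int = 0, quality: int = 70) -> None:
        """Capture frames from a USB camera, JPEG-compress them and send each one over TCP
        as a 4-byte length prefix followed by the JPEG payload."""
        cap = cv2.VideoCapture(device)
        sock = socket.create_connection((HOST, PORT))
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, quality])
                if ok:
                    sock.sendall(struct.pack("!I", len(buf)) + buf.tobytes())
        finally:
            cap.release()
            sock.close()
    ```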

  2. A Complexity-Aware Video Adaptation Mechanism for Live Streaming Systems

    Directory of Open Access Journals (Sweden)

    Chen Homer H

    2007-01-01

    The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide a fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to digital home, surveillance, IPTV, and online games.

  3. A Complexity-Aware Video Adaptation Mechanism for Live Streaming Systems

    Science.gov (United States)

    Lu, Meng-Ting; Yao, Jason J.; Chen, Homer H.

    2007-12-01

    The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide a fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to digital home, surveillance, IPTV, and online games.

  4. [Telemedicine with digital video transport system].

    Science.gov (United States)

    Hahm, Joon Soo; Shimizu, Shuji; Nakashima, Naoki; Byun, Tae Jun; Lee, Hang Lak; Choi, Ho Soon; Ko, Yong; Lee, Kyeong Geun; Kim, Sun Il; Kim, Tae Eun; Yun, Jiwon; Park, Yong Jin

    2004-06-01

    The growth of technology based on the internet protocol has affected informatics and automatic control in the medical field. The aim of this study was to establish a telemedical educational system by developing high quality image transfer using DVTS (digital video transport system) over a high-speed internet network. Using telemedicine, we were able to send surgical images not only domestically but also internationally. Moreover, we could discuss the course of surgical procedures between the operating room and the seminar room. The Korea-Japan cable network (KJCN) runs under the sea between Busan and Fukuoka. The Korea advanced research network (KOREN) was used to connect Busan and Seoul. To link images between Hanyang University Hospital in Seoul and Kyushu University Hospital in Japan, we set up a teleconference system and a recorded image-streaming system with DVTS over an IPv4 network. Two operative cases were transmitted successfully. We could keep a sufficient bandwidth of 60 Mbps for two-line transmission. The transmitted moving images showed no frame loss at a rate of 30 frames per second. The sound was also clear and the time delay was less than 0.3 sec. Our study has demonstrated the feasibility of domestic and international telemedicine. We have established an international medical network with high-quality video transmission over the internet protocol. It is easy to perform, reliable, and also economical. Thus, it will be a promising tool in remote medicine for worldwide telemedical communication in the future.

  5. Creating engagement with old research videos

    DEFF Research Database (Denmark)

    Caglio, Agnese; Buur, Jacob

    User-centred design projects that utilize ethnographic research tend to produce hours and hours of contextual video footage that seldom gets used again once the project is complete. The richness of such research video could, however, make it attractive for other project teams or researchers as a source of inspiration or knowledge of a particular context or user group -- if it were practically feasible to engage with the material later on. In this paper we explore the potential of using old research footage to stimulate reflection, conversations and creativity by presenting it on pervasive screens to colleague designers and researchers. The setup we designed included large and small screens placed in a social space of a research environment, the communal kitchen. Through screenings of ten different 'old' research videos accompanied by various prompt questions and activities we built...

  6. Detection of goal events in soccer videos

    Science.gov (United States)

    Kim, Hyoung-Gook; Roeber, Steffen; Samour, Amjad; Sikora, Thomas

    2005-01-01

    In this paper, we present an automatic extraction of goal events in soccer videos by using audio track features alone, without relying on expensive-to-compute video track features. The extracted goal events can be used for high-level indexing and selective browsing of soccer videos. The detection of soccer video highlights using audio contents comprises three steps: 1) extraction of audio features from a video sequence, 2) detection of highlight event candidates based on the information provided by the feature extraction methods and the Hidden Markov Model (HMM), 3) goal event selection to finally determine the video intervals to be included in the summary. For this purpose we compared the performance of the well-known Mel-scale Frequency Cepstral Coefficients (MFCC) feature extraction method against the MPEG-7 Audio Spectrum Projection (ASP) feature extraction method based on three different decomposition methods, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-Negative Matrix Factorization (NMF). To evaluate our system we collected five soccer game videos from various sources. In total we have seven hours of soccer games consisting of eight gigabytes of data. One of the five soccer games is used as the training data (e.g., announcers' excited speech, audience ambient speech noise, audience clapping, environmental sounds). Our goal event detection results are encouraging.
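
    A sketch of the first stage, extracting MFCC vectors from a match soundtrack, here using the librosa library; the sampling rate and window settings are assumptions, and the HMM-based event-candidate detection built on top of these features is not shown.

    ```python
    import librosa
    import numpy as np

    def extract_mfcc(audio_path: str, sr: int = 16000, n_mfcc: int = 13) -> np.ndarray:
        """Load the soundtrack of a match and return an (n_frames, n_mfcc) matrix of MFCC
        vectors that an HMM-based highlight detector could be trained on."""
        y, sr = librosa.load(audio_path, sr=sr, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                    n_fft=512, hop_length=256)   # ~32 ms windows, ~16 ms hop
        return mfcc.T
    ```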

  7. A review of video security training and assessment-systems and their applications

    International Nuclear Information System (INIS)

    Cellucci, J.; Hall, R.J.

    1991-01-01

    This paper reports that during the last 10 years computer-aided video data collection and playback systems have been used as nuclear facility security training and assessment tools with varying degrees of success. These mobile systems have been used by trained security personnel for response force training, vulnerability assessment, force-on-force exercises and crisis management. Typically, synchronous recordings from multiple video cameras, communications audio, and digital sensor inputs are played back to the exercise participants and then edited for training and briefing. Factors that influence user acceptance include: frequency of use, the demands placed on security personnel, fear of punishment, user training requirements and equipment cost. The introduction of S-VHS video and new software for scenario planning, video editing and data reduction should bring about a wider range of security applications and supply the opportunity for significant cost sharing with other user groups.

  8. Illustrating Geology With Customized Video in Introductory Geoscience Courses

    Science.gov (United States)

    Magloughlin, J. F.

    2008-12-01

    For the past several years, I have been creating short videos for use in large-enrollment introductory physical geology classes. The motivation for this project included 1) lack of appropriate depth in existing videos, 2) engagement of non-science students, 3) student indifference to traditional textbooks, 4) a desire to share the visual splendor of geology through virtual field trips, and 5) a desire to meld photography, animation, narration, and videography in self-contained experiences. These (HD) videos are information-intensive but short, allowing a focus on relatively narrow topics from numerous subdisciplines, incorporation into lectures to help create variety while minimally interrupting flow and holding students' attention, and manageable file sizes. Nearly all involve one or more field locations, including sites throughout the western and central continental U.S., as well as Hawaii, Italy, New Zealand, and Scotland. The limited scope of the project and motivations mentioned preclude a comprehensive treatment of geology. Instead, videos address geologic processes, locations, features, and interactions with humans. The videos have been made available via DVD and on-line streaming. Such a project requires an array of video and audio equipment and software, a broad knowledge of geology, very good computing power, adequate time, creativity, a substantial travel budget, liability insurance, elucidation of the separation (or non-separation) between such a project and other responsibilities, and, preferably but not essentially, the support of one's supervisor or academic unit. Involving students in such projects entails risks, but involving necessary technical expertise is virtually unavoidable. In my own courses, some videos are used in class and/or made available on-line as simply another aspect of the educational experience. Student response has been overwhelmingly positive, particularly when expectations of students regarding the content of the videos is made

  9. Video Quality Prediction Models Based on Video Content Dynamics for H.264 Video over UMTS Networks

    Directory of Open Access Journals (Sweden)

    Asiya Khan

    2010-01-01

    The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264 encoded video for all content types, combining parameters in both the physical and application layers over Universal Mobile Telecommunication System (UMTS) networks. In order to characterize the Quality of Service (QoS) level, a learning model based on the Adaptive Neural Fuzzy Inference System (ANFIS) and a second model based on non-linear regression analysis are proposed to predict video quality in terms of the Mean Opinion Score (MOS). The objective of the paper is two-fold: first, to find the impact of QoS parameters on end-to-end video quality for H.264 encoded video; second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks by considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both models are trained with a combination of physical and application layer parameters and validated with an unseen dataset. Preliminary results show that good prediction accuracy was obtained from both models. The work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.
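
    A sketch of the 2-state Markov (Gilbert-type) radio-link loss model mentioned above, simulating per-packet loss with a Good state and a Bad state; the transition probabilities are placeholders.

    ```python
    import random

    def simulate_markov_loss(n_packets: int, p_gb: float = 0.05, p_bg: float = 0.4):
        """2-state Markov loss model: packets arrive in the Good state and are lost in the Bad
        state. p_gb = P(Good -> Bad), p_bg = P(Bad -> Good).
        Returns a list of booleans, True meaning the packet was lost."""
        lost, bad = [], False
        for _ in range(n_packets):
            bad = (random.random() < p_gb) if not bad else (random.random() >= p_bg)
            lost.append(bad)
        return lost

    # the long-run loss rate approaches p_gb / (p_gb + p_bg), about 11% for the defaults above
    ```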

  10. Mobiele video voor bedrijfscommunicatie

    NARCIS (Netherlands)

    Niamut, O.A.; Weerdt, C.A. van der; Havekes, A.

    2009-01-01

    The Penta Mobilé project ran from June to November 2009 and aimed to map out the possibilities of mobile video for business communication applications. This research was carried out together with five ('Penta') parties: Business Tales, Condor Digital, European Communication Projects

  11. Critical Assessment of Video Production in Teacher Education: Can Video Production Foster Community-Engaged Scholarship?

    Science.gov (United States)

    Yang, Kyung-Hwa

    2014-01-01

    In the theoretical framework of production pedagogy, I reflect on a video production project conducted in a teacher education program and discuss the potential of video production to foster community-engaged scholarship among pre-service teachers. While the importance of engaging learners in creating media has been emphasized, studies show little…

  12. Fostering science communication and outreach through video production in Dartmouth's IGERT Polar Environmental Change graduate program

    Science.gov (United States)

    Hammond Wagner, C. R.; McDavid, L. A.; Virginia, R. A.

    2013-12-01

    Dartmouth's NSF-supported IGERT Polar Environmental Change graduate program has focused on using video media to foster interdisciplinary thinking and to improve student skills in science communication and public outreach. Researchers, educators, and funding organizations alike recognize the value of video media for making research results more accessible and relevant to diverse audiences and across cultures. We present an affordable equipment set and the basic video training needed as well as available Dartmouth institutional support systems for students to produce outreach videos on climate change and its associated impacts on people. We highlight and discuss the successes and challenges of producing three types of video products created by graduate and undergraduate students affiliated with the Dartmouth IGERT. The video projects created include 1) graduate student profile videos, 2) a series of short student-created educational videos for Greenlandic high school students, and 3) an outreach video about women in science based on the experiences of women students conducting research during the IGERT field seminar at Summit Station and Kangerlussuaq, Greenland. The 'Science in Greenland--It's a Girl Thing' video was featured on The New York Times Dot Earth blog and the Huffington Post Green blog among others and received international recognition. While producing these videos, students 1) identified an audience and created story lines, 2) worked in front of and behind the camera, 3) utilized low-cost digital editing applications, and 4) shared the videos on multiple platforms from social media to live presentations. The three video projects were designed to reach different audiences, and presented unique challenges for content presentation and dissemination. Based on student and faculty assessment, we conclude that the video projects improved student science communication skills and increased public knowledge of polar science and the effects of climate change.

  13. Enabling MPEG-2 video playback in embedded systems through improved data cache efficiency

    Science.gov (United States)

    Soderquist, Peter; Leeser, Miriam E.

    1999-01-01

    Digital video decoding, enabled by the MPEG-2 Video standard, is an important future application for embedded systems, particularly PDAs and other information appliances. Many such systems require portability and wireless communication capabilities, and thus face severe limitations in size and power consumption. This places a premium on integration and efficiency, and favors software solutions for video functionality over specialized hardware. The processors in most embedded systems currently lack the computational power needed to perform video decoding, but a related and equally important problem is the required data bandwidth, and the need to cost-effectively ensure adequate data supply. MPEG data sets are very large, and generate significant amounts of excess memory traffic for standard data caches, up to 100 times the amount required for decoding. Meanwhile, cost and power limitations restrict cache sizes in embedded systems. Some systems, including many media processors, eliminate caches in favor of memories under direct, painstaking software control in the manner of digital signal processors. Yet MPEG data has locality which caches can exploit if properly optimized, providing fast, flexible, and automatic data supply. We propose a set of enhancements which target the specific needs of the heterogeneous data types within the MPEG decoder working set. These optimizations significantly improve the efficiency of small caches, reducing cache-memory traffic by almost 70 percent, and can make an enhanced 4 KB cache perform better than a standard 1 MB cache. This performance improvement can enable high-resolution, full frame rate video playback in cheaper, smaller systems than would otherwise be possible.

  14. Development and setting of a time-lapse video camera system for the Antarctic lake observation

    Directory of Open Access Journals (Sweden)

    Sakae Kudoh

    2010-11-01

    A submersible video camera system was manufactured to record images of the growth of aquatic vegetation in Antarctic lakes for one year. The system consisted of a video camera, a programmable controller unit, a lens-cleaning wiper with a submersible motor, LED lights, and a lithium ion battery unit. A change of video camera (to a High Vision System) and modification of the lens-cleaning wiper allowed higher sensitivity and clearer recorded images compared to the previous submersible video system, without increasing the power consumption. The system was set on the lake floor of Lake Naga Ike (a tentative name) in Skarvsnes on the Soya Coast during the summer activity of the 51st Japanese Antarctic Research Expedition. Interval recording of underwater images for one year was started by our diving operation.

  15. Developing Agent-Oriented Video Surveillance System through Agent-Oriented Methodology (AOM

    Directory of Open Access Journals (Sweden)

    Cheah Wai Shiang

    2016-12-01

    Agent-oriented methodology (AOM) is a comprehensive and unified agent methodology for agent-oriented software development. Although AOM is claimed to be able to cope with complex system development, the extent to which this is true has not yet been determined. Therefore, it is vital to conduct an investigation to validate this methodology. This paper presents the adoption of AOM in developing an agent-oriented video surveillance system (VSS). An intruder handling scenario is designed and implemented through AOM. AOM provides an alternative method to engineer a distributed security system in a systematic manner. It presents the security system from a holistic view, provides a better conceptualization of an agent-oriented security system, and supports rapid prototyping as well as simulation of the video surveillance system.

  16. A Client-Server System for Ubiquitous Video Service

    Directory of Open Access Journals (Sweden)

    Ronit Nossenson

    2012-12-01

    In this work we introduce a simple client-server system architecture and algorithms for ubiquitous live video and VOD service support. The main features of the system are: efficient usage of network resources, emphasis on user personalization, and ease of implementation. The system supports many continuous service requirements such as QoS provision, user mobility between networks and between different communication devices, and simultaneous usage of a device by a number of users.

  17. Video micro analysis in music therapy research

    DEFF Research Database (Denmark)

    Holck, Ulla; Oldfield, Amelia; Plahl, Christine

    2004-01-01

    Three music therapy researchers from three different countries who have recently completed their PhD theses will each briefly discuss the role of video analysis in their investigations. All three of these research projects have involved music therapy work with children, some of whom were on the a... and qualitative approaches to data collection. In addition, participants will be encouraged to reflect on what types of knowledge can be gained from video analyses and to explore the general relevance of video analysis in music therapy research.

  18. A System based on Adaptive Background Subtraction Approach for Moving Object Detection and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Bahadır KARASULU

    2013-04-01

    Video surveillance systems are based on the video and image processing research areas within computer science. Video processing covers various methods used to track the changes in a scene in a given video. Nowadays, video processing is one of the important areas of computer science. Two-dimensional videos are used for various segmentation, object detection and tracking processes that appear in multimedia content-based indexing, information retrieval, visual and distributed cross-camera surveillance systems, people tracking, traffic tracking and similar applications. The background subtraction (BS) approach is a frequently used method for moving object detection and tracking, and similar methods exist in the literature. In this research study, a more efficient method is proposed as an addition to the existing ones. Based on a model produced using adaptive background subtraction (ABS), the software of an object detection and tracking system is implemented. The performance of the developed system is tested in experiments with related video datasets. The experimental results and a discussion are given in the study.
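
    A minimal sketch of the adaptive background subtraction idea: a running-average background model updated every frame, with foreground pixels obtained by thresholding the difference. The learning rate and threshold are assumptions and this is not the paper's exact model.

    ```python
    import numpy as np

    class AdaptiveBackgroundSubtractor:
        """Running-average background model: B <- (1 - alpha) * B + alpha * frame, with the
        foreground mask given by |frame - B| > threshold."""

        def __init__(self, alpha: float = 0.02, threshold: float = 25.0):
            self.alpha, self.threshold, self.background = alpha, threshold, None

        def apply(self, frame_gray: np.ndarray) -> np.ndarray:
            f = frame_gray.astype(np.float32)
            if self.background is None:
                self.background = f.copy()                 # bootstrap with the first frame
            mask = np.abs(f - self.background) > self.threshold
            # adapt the background model toward the current frame
            self.background = (1 - self.alpha) * self.background + self.alpha * f
            return (mask * 255).astype(np.uint8)

    # usage sketch: fg = AdaptiveBackgroundSubtractor().apply(gray_frame)
    ```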

  19. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology in the study of game audio and identifies significant aesthetic differences between film sound and sound in video game projects. It is an attempt to determine techniques of art analysis for approaches to the study of video games, including the aesthetics of their sounds. The article offers a range of research methods, considering video game scoring as a contemporary creative practice.

  20. Rate control scheme for consistent video quality in scalable video codec.

    Science.gov (United States)

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since scalable video codec provides various scalabilities to adapt the bitstream for the channel conditions and terminal types, scalable codec is one of the useful codecs for wired or wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation degrades the visual perception significantly. It is important to efficiently use the target bits in order to maintain a consistent video quality or achieve a small distortion variation throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes have been proposed to control video quality in the H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame.

  1. A pilot project in distance education: nurse practitioner students' experience of personal video capture technology as an assessment method of clinical skills.

    Science.gov (United States)

    Strand, Haakan; Fox-Young, Stephanie; Long, Phil; Bogossian, Fiona

    2013-03-01

    This paper reports on a pilot project aimed at exploring postgraduate distance students' experiences using personal video capture technology to complete competency assessments in physical examination. A pre-intervention survey gathered demographic data from nurse practitioner students (n=31) and measured their information communication technology fluency. Subsequently, thirteen (13) students were allocated a hand-held video camera to use in their clinical setting. Those participating in the trial completed a post-intervention survey and further data were gathered using semi-structured interviews. Data were analysed by descriptive statistics and deductive content analysis, and the Unified Theory of Acceptance and Use of Technology (Venkatesh et al., 2003) was used to guide the project. Uptake of the intervention was high (93%) as students recognised the potential benefit. Students were video recorded while performing physical examinations. They described high levels of stress and some anxiety, which decreased rapidly once assessment was underway. Barriers experienced were in the areas of facilitating conditions (of a technical character, e.g. upload of files) and social influence (e.g. local ethical approval). Students valued the opportunity to reflect on their recorded performance with their clinical mentors and by themselves. This project highlights the demands and difficulties of introducing technology to support work-based learning. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Measurement and protocol for evaluating video and still stabilization systems

    Science.gov (United States)

    Cormier, Etienne; Cao, Frédéric; Guichard, Frédéric; Viard, Clément

    2013-01-01

    This article presents a system and a protocol to characterize image stabilization systems for both still images and videos. It uses a six-axis platform, three axes being used for camera rotation and three for camera positioning. The platform is programmable and can reproduce complex motions that have typically been recorded by a gyroscope mounted on different types of cameras in different use cases. The measurement uses a single chart for still images and videos, the texture dead leaves chart. Although the proposed implementation of the protocol uses a motion platform, the measurement itself does not rely on any specific hardware. For still images, a modulation transfer function is measured in different directions and is weighted by a contrast sensitivity function (simulating the accuracy of the human visual system) to obtain an acutance. The sharpness improvement due to the image stabilization system is a good measurement of performance, as recommended by a CIPA standard draft. For video, four markers on the chart are detected with sub-pixel accuracy to determine a homographic deformation between the current frame and a reference position. This model describes the apparent global motion well: translations, but also rotations about the optical axis and distortion due to the electronic rolling shutter equipping most CMOS sensors. The protocol is applied to all types of cameras such as DSCs, DSLRs and smartphones.
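
    Given the four detected chart markers, the frame-to-reference homography can be recovered with OpenCV as sketched below; marker detection and sub-pixel refinement are not shown, and the function name is illustrative.

    ```python
    import cv2
    import numpy as np

    def frame_motion(ref_pts: np.ndarray, cur_pts: np.ndarray) -> np.ndarray:
        """ref_pts, cur_pts: (4, 2) float32 arrays holding the chart's four marker positions
        in the reference frame and in the current frame. Returns the 3x3 homography mapping
        current-frame coordinates back to the reference frame, capturing residual translation,
        rotation and rolling-shutter skew."""
        H, _ = cv2.findHomography(cur_pts.astype(np.float32), ref_pts.astype(np.float32))
        return H
    ```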

  3. Breast Ultrasound Examination with Video Monitor System: A Satisfaction Survey among Patients

    International Nuclear Information System (INIS)

    Ryu, Jung Kyu; Kim, Hyun Cheol; Yang, Dal Mo

    2010-01-01

    The purpose of this study is to assess patients' satisfaction with a newly established video-monitor system and the associated basic items for performing breast ultrasound exams, by conducting a survey among the patients. 349 patients who had undergone a breast ultrasound examination once during the 3 months after the monitor system had been introduced were invited to take the survey. The questionnaire was composed of 8 questions, 4 of which were about basic items such as age, gender and the reason for taking the breast ultrasound exam, their preference for the gender of the examiner, and the desired length of time for the examination. The other 4 questions were about their satisfaction with the video monitor. The patients were divided into two groups according to the purpose of the exam, which was either screening or diagnostic. The results were compared between these 2 groups. Satisfaction with the video monitor system was assessed using a scoring system ranging from 1 to 5. Of the total patients, the screening group was composed of 124 patients and the diagnostic group of 225. The reasons why the patients in the diagnostic group wanted to take the examinations varied. The question about the preferred gender of the examiner showed that 81.5% in the screening group and 79.1% in the diagnostic group preferred a woman doctor. The suitable length of time for the breast ultrasound examination was 5 to 10 minutes or 10 to 15 minutes for about 70% of the patients. The mean satisfaction score for the video monitor system was as high as 3.95 points. The proportion of patients in each group who gave more than 3 points for their satisfaction with the monitor system was 88.7% and 94.2%, respectively. Our study showed that patients preferred 5-15 minutes for the length of the examination and a female examiner. We also confirmed high patient satisfaction with the video monitor system.

  4. Breast Ultrasound Examination with Video Monitor System: A Satisfaction Survey among Patients

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Jung Kyu; Kim, Hyun Cheol; Yang, Dal Mo [East-West Neo Medical Center, Kyung-Hee University, Seoul (Korea, Republic of)]

    2010-03-15

    The purpose of this study is to assess patients' satisfaction with a newly established video-monitor system and the associated basic items for performing breast ultrasound exams, by conducting a survey among the patients. 349 patients who had undergone a breast ultrasound examination once during the 3 months after the monitor system had been introduced were invited to take the survey. The questionnaire was composed of 8 questions, 4 of which were about basic items such as age, gender and the reason for taking the breast ultrasound exam, their preference for the gender of the examiner, and the desired length of time for the examination. The other 4 questions were about their satisfaction with the video monitor. The patients were divided into two groups according to the purpose of the exam, which was either screening or diagnostic. The results were compared between these 2 groups. Satisfaction with the video monitor system was assessed using a scoring system ranging from 1 to 5. Of the total patients, the screening group was composed of 124 patients and the diagnostic group of 225. The reasons why the patients in the diagnostic group wanted to take the examinations varied. The question about the preferred gender of the examiner showed that 81.5% in the screening group and 79.1% in the diagnostic group preferred a woman doctor. The suitable length of time for the breast ultrasound examination was 5 to 10 minutes or 10 to 15 minutes for about 70% of the patients. The mean satisfaction score for the video monitor system was as high as 3.95 points. The proportion of patients in each group who gave more than 3 points for their satisfaction with the monitor system was 88.7% and 94.2%, respectively. Our study showed that patients preferred 5-15 minutes for the length of the examination and a female examiner. We also confirmed high patient satisfaction with the video monitor system.

  5. Applied learning-based color tone mapping for face recognition in video surveillance system

    Science.gov (United States)

    Yew, Chuu Tian; Suandi, Shahrel Azmin

    2012-04-01

    In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. This technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and to remap the color or intensity of the input image so that its color or intensity statistics match those of the training dataset. It is well known that differences in commercial surveillance camera models and the signal processing chipsets used by different manufacturers cause the color and intensity of the images to differ from one another, thus creating additional challenges for face recognition in video surveillance systems. Using Multi-Class Support Vector Machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared to the results of using a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of video surveillance systems for face recognition.
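
    One common way to realize the "learn statistics, then remap" step is simple per-channel mean/standard-deviation matching, sketched below; this is an illustrative stand-in and not necessarily the authors' exact mapping.

    ```python
    import numpy as np

    def match_color_stats(image: np.ndarray, target_mean: np.ndarray, target_std: np.ndarray) -> np.ndarray:
        """image: (H, W, C) uint8 surveillance frame. target_mean, target_std: per-channel
        statistics (length C) learned from the photorealistic training images.
        Remaps each channel so its mean and standard deviation match the training statistics."""
        img = image.astype(np.float32)
        flat = img.reshape(-1, img.shape[-1])
        mean, std = flat.mean(axis=0), flat.std(axis=0) + 1e-6
        out = (img - mean) / std * target_std + target_mean
        return np.clip(out, 0, 255).astype(np.uint8)
    ```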

  6. Virtual Video Prototyping of Pervasive Healthcare Systems

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Bossen, Claus; Madsen, Kim Halskov

    2002-01-01

    Virtual studio technology enables the mixing of physical and digital 3D objects and thus expands the way of representing design ideas in terms of virtual video prototypes, which offers new possibilities for designers by combining elements of prototypes, mock-ups, scenarios, and conventional video....... In this article we report our initial experience in the domain of pervasive healthcare with producing virtual video prototypes and using them in a design workshop. Our experience has been predominantly favourable. The production of a virtual video prototype forces the designers to decide very concrete design...... issues, since one cannot avoid paying attention to the physical, real-world constraints and to details in the usage-interaction between users and technology. From the users' perspective, during our evaluation of the virtual video prototype, we experienced how it enabled users to relate...

  7. CS Seminar Videos

    OpenAIRE

    Ong, Derek; Tona, Glen; Gibb, Kyle; Parbadia, Sivani

    2013-01-01

    Main site for our project can be found at this URL: http://vtechworks.lib.vt.edu/handle/10919/19036. From here you can find videos of all the CS seminars and distinguished lectures given this semester. Each video has its own abstract and description. The files attached in this section are a final report in both raw Word Document and archival PDF formats and a presentation in both raw Powerpoint and archival PDF formats. Computer Science seminars are very educational and interesting as...

  8. 75 FR 75186 - Interview Room Video System Standard Special Technical Committee Request for Proposals for...

    Science.gov (United States)

    2010-12-02

    ... DEPARTMENT OF JUSTICE Office of Justice Programs [OJP (NIJ) Docket No. 1534] Interview Room Video System Standard Special Technical Committee Request for Proposals for Certification and Testing Expertise... Interview Room Video System Standard and corresponding certification program requirements. This work is...

  9. PERANCANGAN VIDEO PANDUAN FITNES SEBAGAI MEDIA PEMBELAJARAN (Design of a Fitness Guide Video as a Learning Medium)

    Directory of Open Access Journals (Sweden)

    Rizkysari Meimaharani

    2013-06-01

    Full Text Available ABSTRACT Designing a beginner-level fitness exercise tutorial video as a learning and promotion medium for Life Gym was intended to provide guidelines for good movement during fitness training sessions for beginners, because the video will be distributed free of charge to new members when they sign up. The video tutorial editing process requires adequate software and hardware for smooth production. The results also depend on the ability of the producers, both in general knowledge and especially in directing, editing and creativity, and on the capability of the hardware, software and computer technology. The advantage of the video guide is that it allows members to understand good and correct movement and to avoid unwanted injury. Not only are movement guides presented in this video project, but members are also given guidance on diet and proper nutrition so that training targets can be easily achieved. Video editing technology offers an agency a convenient way to educate the public through learning videos, and the video also serves as a medium for promoting a service or an agency related to the theme of the video.

  10. 47 CFR 63.02 - Exemptions for extensions of lines and for systems for the delivery of video programming.

    Science.gov (United States)

    2010-10-01

    ... systems for the delivery of video programming. 63.02 Section 63.02 Telecommunication FEDERAL... systems for the delivery of video programming. (a) Any common carrier is exempt from the requirements of... with respect to the establishment or operation of a system for the delivery of video programming. [64...

  11. Baited remote underwater video system (BRUVs) survey of ...

    African Journals Online (AJOL)

    This is the first baited remote underwater video system (BRUVs) survey of the relative abundance, diversity and seasonal distribution of chondrichthyans in False Bay. Nineteen species from 11 families were recorded across 185 sites at between 4 and 49 m depth. Diversity was greatest in summer, on reefs and in shallow ...

  12. On the definition of adapted audio/video profiles for high-quality video calling services over LTE/4G

    Science.gov (United States)

    Ndiaye, Maty; Quinquis, Catherine; Larabi, Mohamed Chaker; Le Lay, Gwenael; Saadane, Hakim; Perrine, Clency

    2014-01-01

    During the last decade, the important advances and widespread availability of mobile technology (operating systems, GPUs, terminal resolution and so on) have encouraged a fast development of voice and video services like video-calling. While multimedia services have largely grown on mobile devices, the resulting increase in data consumption is leading to the saturation of mobile networks. In order to provide data at high bit-rates and maintain performance as close as possible to traditional networks, the 3GPP (The 3rd Generation Partnership Project) worked on a high-performance standard for mobile networks called Long Term Evolution (LTE). In this paper, we aim at expressing recommendations related to audio and video media profiles (selection of audio and video codecs, bit-rates, frame-rates, audio and video formats) for a typical video-calling service held over LTE/4G mobile networks. These profiles are defined according to the targeted devices (smartphones, tablets), so as to ensure the best possible quality of experience (QoE). The obtained results indicate that for the CIF format (352 x 288 pixels), which is usually used for smartphones, the VP8 codec provides better image quality than the H.264 codec at low bitrates (from 128 to 384 kbps). However, for sequences with high motion, H.264 in slow mode is preferred. Regarding audio, better results are globally achieved using wideband codecs, which offer good quality, except for the Opus codec at 12.2 kbps.
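
    The findings above can be read as a simple profile-selection rule. The sketch below encodes that reading for illustration; the codec names follow the abstract, but the thresholds and the function itself are assumptions rather than the authors' published recommendations.

```python
def recommend_video_codec(format_name: str, bitrate_kbps: int, high_motion: bool) -> str:
    """Illustrative profile selection following the trends reported in the
    abstract: VP8 for low-bitrate CIF content, H.264 (slow preset) when the
    sequence contains high motion. Thresholds are assumptions only."""
    if high_motion:
        return "H.264 (slow preset)"
    if format_name.upper() == "CIF" and 128 <= bitrate_kbps <= 384:
        return "VP8"
    return "H.264"

print(recommend_video_codec("CIF", 256, high_motion=False))  # -> VP8
print(recommend_video_codec("CIF", 256, high_motion=True))   # -> H.264 (slow preset)
```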

  13. Three-dimensional (3-D) video systems: bi-channel or single-channel optics?

    Science.gov (United States)

    van Bergen, P; Kunert, W; Buess, G F

    1999-11-01

    This paper presents the results of a comparison between two different three-dimensional (3-D) video systems, one with single-channel optics, the other with bi-channel optics. The latter integrates two lens systems, each transferring one half of the stereoscopic image; the former uses only one lens system, similar to a two-dimensional (2-D) endoscope, which transfers the complete stereoscopic picture. In our training centre for minimally invasive surgery, surgeons were involved in basic and advanced laparoscopic courses using both a 2-D system and the two 3-D video systems. They completed analog scale questionnaires in order to record a subjective impression of the relative convenience of operating in 2-D and 3-D vision, and to identify perceived deficiencies in the 3-D system. As an objective test, different experimental tasks were developed, in order to measure performance times and to count pre-defined errors made while using the two 3-D video systems and the 2-D system. Using the bi-channel optical system, the surgeon has a heightened spatial perception, and can work faster and more safely than with a single-channel system. However, single-channel optics allow the use of an angulated endoscope, and the free rotation of the optics relative to the camera, which is necessary for some operative applications.

  14. A Simple FSPN Model of P2P Live Video Streaming System

    OpenAIRE

    Kotevski, Zoran; Mitrevski, Pece

    2011-01-01

    Peer-to-Peer (P2P) live streaming is a relatively new paradigm that aims at streaming live video to a large number of clients at low cost. Many such applications already exist in the market but, prior to creating such a system, it is necessary to analyze its performance via a representative model that can provide good insight into the system's behavior. Modeling and performance analysis of P2P live video streaming systems is a challenging task which requires addressing many properties and issues of P2P s...

  15. Manageable and Extensible Video Streaming Systems for On-Line Monitoring of Remote Laboratory Experiments

    Directory of Open Access Journals (Sweden)

    Jian-Wei Lin

    2009-08-01

    Full Text Available To enable clients to view real-time video of the instruments involved in a remote experiment, two real-time video streaming systems are devised. One is for remote experiments whose instruments are located at a single geographic site, and the other is for those whose instruments are scattered over different places. By running concurrent streaming processes at a server, multiple instruments can be monitored simultaneously by different clients. The proposed systems possess excellent extensibility; that is, new digital cameras for instruments can easily be added without modifying any software. They are also well-manageable, meaning that an administrator can conveniently adjust the quality of the real-time video depending on system load and visual requirements. Finally, the CPU utilization and bandwidth consumption of the systems have been evaluated to verify the effectiveness of the proposed solutions.
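
    A rough sketch of the "one concurrent streaming process per instrument camera" idea, using Python multiprocessing for illustration; the camera names and the capture/sending calls are hypothetical placeholders, not part of the described system.

```python
import multiprocessing as mp
import time

def stream_camera(camera_id: str, quality: int) -> None:
    """Hypothetical per-camera streaming worker: grabs frames from one
    instrument camera and pushes them to connected clients. The capture
    and network code is omitted; the loop only illustrates running one
    concurrent process per instrument."""
    while True:
        # frame = capture_frame(camera_id)             # assumed capture call
        # send_to_clients(camera_id, frame, quality)   # assumed sender
        time.sleep(1.0 / 15)  # e.g. ~15 fps, adjustable by the administrator

if __name__ == "__main__":
    cameras = ["oscilloscope-cam", "spectrometer-cam"]  # hypothetical names
    workers = [mp.Process(target=stream_camera, args=(cam, 75), daemon=True)
               for cam in cameras]
    for w in workers:
        w.start()
    time.sleep(5)  # run briefly for demonstration, then exit
```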

  16. Detection of Visual Events in Underwater Video Using a Neuromorphic Saliency-based Attention System

    Science.gov (United States)

    Edgington, D. R.; Walther, D.; Cline, D. E.; Sherlock, R.; Salamy, K. A.; Wilson, A.; Koch, C.

    2003-12-01

    The Monterey Bay Aquarium Research Institute (MBARI) uses high-resolution video equipment on remotely operated vehicles (ROV) to obtain quantitative data on the distribution and abundance of oceanic animals. High-quality video data supplants the traditional approach of assessing the kinds and numbers of animals in the oceanic water column through towing collection nets behind ships. Tow nets are limited in spatial resolution, and often destroy abundant gelatinous animals resulting in species undersampling. Video camera-based quantitative video transects (QVT) are taken through the ocean midwater, from 50m to 4000m, and provide high-resolution data at the scale of the individual animals and their natural aggregation patterns. However, the current manual method of analyzing QVT video by trained scientists is labor intensive and poses a serious limitation to the amount of information that can be analyzed from ROV dives. Presented here is an automated system for detecting marine animals (events) visible in the videos. Automated detection is difficult due to the low contrast of many translucent animals and due to debris ("marine snow") cluttering the scene. Video frames are processed with an artificial intelligence attention selection algorithm that has proven a robust means of target detection in a variety of natural terrestrial scenes. The candidate locations identified by the attention selection module are tracked across video frames using linear Kalman filters. Typically, the occurrence of visible animals in the video footage is sparse in space and time. A notion of "boring" video frames is developed by detecting whether or not there is an interesting candidate object for an animal present in a particular sequence of underwater video -- video frames that do not contain any "interesting" events. If objects can be tracked successfully over several frames, they are stored as potentially "interesting" events. Based on low-level properties, interesting events are
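
    The tracking step described above, in which candidate locations are followed across frames with linear Kalman filters, can be illustrated with a minimal constant-velocity filter; this is a generic sketch rather than MBARI's implementation, and the noise parameters are assumptions.

```python
import numpy as np

class CandidateTracker:
    """Minimal constant-velocity Kalman filter for tracking one salient
    candidate location (x, y) across video frames."""

    def __init__(self, x, y, dt=1.0):
        self.state = np.array([x, y, 0.0, 0.0])          # position and velocity
        self.P = np.eye(4) * 10.0                        # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)   # motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # position-only measurement
        self.Q = np.eye(4) * 0.01                        # process noise (assumed)
        self.R = np.eye(2) * 1.0                         # measurement noise (assumed)

    def predict(self):
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]

    def update(self, measurement):
        z = np.asarray(measurement, dtype=float)
        innovation = z - self.H @ self.state
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.state = self.state + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.state[:2]

# Example: track a detection drifting to the right across a few frames.
tracker = CandidateTracker(100, 200)
for frame_idx, detection in enumerate([(102, 201), (105, 199), (108, 202)]):
    tracker.predict()
    print(frame_idx, tracker.update(detection))
```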

  17. Gait Analysis by Multi Video Sequence Analysis

    DEFF Research Database (Denmark)

    Jensen, Karsten; Juhl, Jens

    2009-01-01

    The project presented in this article aims to develop software so that close-range photogrammetry with sufficient accuracy can be used to point out the most frequent foot malpositions and monitor the effect of the traditional treatment. The project is carried out as a cooperation between...... and the calcaneus angle during gait. In the introductory phase of the project the task has been to select, purchase and set up hardware, select and purchase software concerning video streaming, and to develop special software concerning automated registration of the position of the foot during gait by Multi Video...

  18. Joint Optimization in UMTS-Based Video Transmission

    Directory of Open Access Journals (Sweden)

    Attila Zsiros

    2007-01-01

    Full Text Available A software platform is presented, which was developed to enable demonstration and capacity testing. The platform simulates a jointly optimized wireless video transmission. The development took place within the framework of the IST-PHOENIX project and is based on the system optimization model of the project. One of the constitutive parts of the model, the wireless network segment, is replaced by a detailed, standard UTRA network simulation module. This paper consists of (1) a brief description of the project's simulation chain, (2) a brief description of the UTRAN system, and (3) the integration of the two segments. The role of the UTRAN part in the joint optimization is described, together with the configuration and control of this element. Finally, some simulation results are shown. In the conclusion, we show how our simulation results translate into real-world performance gains.

  19. Video diaries on social media: Creating online communities for geoscience research and education

    Science.gov (United States)

    Tong, V.

    2013-12-01

    Making video clips is an engaging way to learn and teach geoscience. As smartphones become increasingly common, it is relatively straightforward for students to produce 'video diaries' by recording their research and learning experience over the course of a science module. Instead of keeping the video diaries for themselves, students may use the social media such as Facebook for sharing their experience and thoughts. There are some potential benefits to link video diaries and social media in pedagogical contexts. For example, online comments on video clips offer useful feedback and learning materials to the students. Students also have the opportunity to engage in geoscience outreach by producing authentic scientific contents at the same time. A video diary project was conducted to test the pedagogical potential of using video diaries on social media in the context of geoscience outreach, undergraduate research and teaching. This project formed part of a problem-based learning module in field geophysics at an archaeological site in the UK. The project involved i) the students posting video clips about their research and problem-based learning in the field on a daily basis; and ii) the lecturer building an online outreach community with partner institutions. In this contribution, I will discuss the implementation of the project and critically evaluate the pedagogical potential of video diaries on social media. My discussion will focus on the following: 1) Effectiveness of video diaries on social media; 2) Student-centered approach of producing geoscience video diaries as part of their research and problem-based learning; 3) Learning, teaching and assessment based on video clips and related commentaries posted on Facebook; and 4) Challenges in creating and promoting online communities for geoscience outreach through the use of video diaries. I will compare the outcomes from this study with those from other pedagogical projects with video clips on geoscience, and

  20. Interactive Video, The Next Step

    Science.gov (United States)

    Strong, L. R.; Wold-Brennon, R.; Cooper, S. K.; Brinkhuis, D.

    2012-12-01

    Video has the ingredients to reach us emotionally - with amazing images, enthusiastic interviews, music, and video game-like animations-- and it's emotion that motivates us to learn more about our new interest. However, watching video is usually passive. New web-based technology is expanding and enhancing the video experience, creating opportunities to use video with more direct interaction. This talk will look at an Education and Outreach team's experience producing video-centric curriculum using innovative interactive media tools from TED-Ed and FlixMaster. The Consortium for Ocean Leadership's Deep Earth Academy has partnered with the Center for Dark Energy Biosphere Investigations (C-DEBI) to send educators and a video producer aboard three deep sea research expeditions to the Juan de Fuca plate to install and service sub-seafloor observatories. This collaboration between teachers, students, scientists and media producers has proved a productive confluence, providing new ways of understanding both ground-breaking science and the process of science itself - by experimenting with new ways to use multimedia during ocean-going expeditions and developing curriculum and other projects post-cruise.

  1. A Novel System for Supporting Autism Diagnosis Using Home Videos: Iterative Development and Evaluation of System Design.

    Science.gov (United States)

    Nazneen, Nazneen; Rozga, Agata; Smith, Christopher J; Oberleitner, Ron; Abowd, Gregory D; Arriaga, Rosa I

    2015-06-17

    Observing behavior in the natural environment is valuable to obtain an accurate and comprehensive assessment of a child's behavior, but in practice it is limited to in-clinic observation. Research shows significant time lag between when parents first become concerned and when the child is finally diagnosed with autism. This lag can delay early interventions that have been shown to improve developmental outcomes. To develop and evaluate the design of an asynchronous system that allows parents to easily collect clinically valid in-home videos of their child's behavior and supports diagnosticians in completing diagnostic assessment of autism. First, interviews were conducted with 11 clinicians and 6 families to solicit feedback from stakeholders about the system concept. Next, the system was iteratively designed, informed by experiences of families using it in a controlled home-like experimental setting and a participatory design process involving domain experts. Finally, in-field evaluation of the system design was conducted with 5 families of children (4 with previous autism diagnosis and 1 child typically developing) and 3 diagnosticians. For each family, 2 diagnosticians, blind to the child's previous diagnostic status, independently completed an autism diagnosis via our system. We compared the outcome of the assessment between the 2 diagnosticians, and between each diagnostician and the child's previous diagnostic status. The system that resulted through the iterative design process includes (1) NODA smartCapture, a mobile phone-based application for parents to record prescribed video evidence at home; and (2) NODA Connect, a Web portal for diagnosticians to direct in-home video collection, access developmental history, and conduct an assessment by linking evidence of behaviors tagged in the videos to the Diagnostic and Statistical Manual of Mental Disorders criteria. Applying clinical judgment, the diagnostician concludes a diagnostic outcome. During field

  2. Rapid prototyping of an automated video surveillance system: a hardware-software co-design approach

    Science.gov (United States)

    Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.; Ives, Robert W.

    2011-06-01

    FPGA devices with embedded DSP and memory blocks and high-speed interfaces are ideal for real-time video processing applications. In this work, a hardware-software co-design approach is proposed to effectively utilize FPGA features for a prototype of an automated video surveillance system. Time-critical steps of the video surveillance algorithm are designed and implemented in the FPGA's logic elements to maximize parallel processing. Other non-time-critical tasks are achieved by executing a high-level language program on an embedded Nios-II processor. Pre-tested and verified video and interface functions from a standard video framework are utilized to significantly reduce development and verification time. Custom and parallel processing modules are integrated into the video processing chain via Altera's Avalon Streaming video protocol. Other data control interfaces are achieved by connecting hardware controllers to the Nios-II processor using Altera's Avalon Memory-Mapped protocol.

  3. Toy Trucks in Video Analysis

    DEFF Research Database (Denmark)

    Buur, Jacob; Nakamura, Nanami; Larsen, Rainer Rye

    2015-01-01

    discovered that using scale-models like toy trucks has a strongly encouraging effect on developers/designers to collaboratively make sense of field videos. In our analysis of such scale-model sessions, we found some quite fundamental patterns of how participants utilise objects; the participants build shared......Video fieldstudies of people who could be potential users is widespread in design projects. How to analyse such video is, however, often challenging, as it is time consuming and requires a trained eye to unlock experiential knowledge in people’s practices. In our work with industrialists, we have...... narratives by moving the objects around, they name them to handle the complexity, they experience what happens in the video through their hands, and they use the video together with objects to create alternative narratives, and thus alternative solutions to the problems they observe. In this paper we claim...

  4. "In Our Own Words": Creating Videos as Teaching and Learning Tools

    Directory of Open Access Journals (Sweden)

    Norda Majekodunmi

    2012-11-01

    Full Text Available Online videos, particularly those on YouTube, have proliferated on the internet; watching them has become part of our everyday activity. While libraries have often harnessed the power of videos to create their own promotional and informational videos, few have created their own teaching and learning tools beyond screencasting videos. In the summer of 2010, the authors, two librarians at York University, decided to work on a video project which culminated in a series of instructional videos entitled “Learning: In Our Own Words.” The purpose of the video project was twofold: to trace the “real” experience of incoming students and their development of academic literacies skills (research, writing and learning throughout their first year, and to create videos that librarians and other instructors could use as instructional tools to engage students in critical thinking and discussion. This paper outlines the authors’ experience filming the videos, creating a teaching guide, and screening the videos in the classroom. Lessons learned during this initiative are discussed in the hope that more libraries will develop videos as teaching and learning tools.

  5. A highly sensitive underwater video system for use in turbid aquaculture ponds.

    Science.gov (United States)

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C

    2016-08-24

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds' benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system's high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.

  6. Digital cinema video compression

    Science.gov (United States)

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  7. Objective analysis of image quality of video image capture systems

    Science.gov (United States)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire patterns. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using them. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give
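
    The alternating one-pixel-line target, and the failure mode of a system "blurring it to an average shade of gray", can be illustrated with a small synthetic test: generate the pattern and measure the modulation of the captured result. This is a hedged sketch of the measurement idea, not the Optimast software.

```python
import numpy as np

def make_line_pattern(width=256, height=64, strip=10):
    """Build a test image: black and white 'equilibration' strips, each
    `strip` pixels wide, followed by alternating 1-pixel black/white
    columns, a rough analogue of the slew-rate target in the abstract."""
    img = np.zeros((height, width), dtype=np.uint8)
    img[:, strip:2 * strip] = 255                       # white equilibration strip
    cols = np.arange(2 * strip, width)
    img[:, cols[(cols - 2 * strip) % 2 == 1]] = 255     # 1-px alternating lines
    return img

def modulation(captured, strip=10):
    """Michelson contrast of the alternating-line region; a value near 1
    means the lines are resolved, near 0 means they were blurred to gray."""
    region = captured[:, 2 * strip:].astype(float)
    odd, even = region[:, 1::2].mean(), region[:, 0::2].mean()
    return abs(odd - even) / (odd + even + 1e-8)

pattern = make_line_pattern()
print(modulation(pattern))   # ~1.0 for a perfect capture chain
```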

  8. Design of a system based on DSP and FPGA for video recording and replaying

    Science.gov (United States)

    Kang, Yan; Wang, Heng

    2013-08-01

    This paper presents a video recording and replaying system built on a Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA). The system achieves encoding, recording, decoding and replaying of Video Graphics Array (VGA) signals displayed on a monitor during the navigation of airplanes and ships. In this architecture, the DSP is the main processor and handles the large amount of complicated calculation required for digital signal processing; the FPGA is a coprocessor for preprocessing video signals and implementing logic control in the system. In the hardware design, the Peripheral Device Transfer (PDT) function of the External Memory Interface (EMIF) is utilized to implement a seamless interface among the DSP, the synchronous dynamic RAM (SDRAM) and the First-In-First-Out (FIFO) buffer. This transfer mode avoids a data-transfer bottleneck and simplifies the circuit between the DSP and its peripheral chips. The DSP's EMIF and two level-matching chips are used to implement the Advanced Technology Attachment (ATA) protocol on the physical layer of the interface to an Integrated Drive Electronics (IDE) hard disk, which provides high-speed data access without relying on a computer. The main functions of the logic on the FPGA are described, and screenshots of the behavioral simulation are provided in this paper. In the DSP program design, Enhanced Direct Memory Access (EDMA) channels are used to transfer data between the FIFO and the SDRAM without CPU intervention, exploiting the CPU's computing performance and saving processing time. JPEG2000 is implemented to obtain high fidelity in video recording and replaying. Ways and means of achieving high code performance are briefly presented. The data processing capability of the system is satisfactory, and the smoothness of the replayed video is acceptable. By virtue of its design flexibility and reliable operation, the system based on DSP and FPGA

  9. Learning About Energy Resources Through Student Created Video Documentaries in the University Science Classroom

    Science.gov (United States)

    Wade, P.; Courtney, A.

    2010-12-01

    Students enrolled in an undergraduate non-science majors' Energy Perspectives course created 10-15 minute video documentaries on topics related to Energy Resources and the Environment. Video project topics included wave, biodiesel, clean coal, hydro, solar and “off-the-grid” energy technologies. No student had any prior experience with creating video projects. Students had Liberal Arts academic backgrounds that included Anthropology, Theater Arts, International Studies, English and Early Childhood Education. Students were required to: 1) select a topic, 2) conduct research, 3) write a narrative, 4) construct a project storyboard, 5) shoot or acquire video and photos (from legal sources), 6) record the narrative, and 7) construct the video documentary. This study describes the instructional approach of using student-created video documentaries as projects in an undergraduate non-science majors' science course. Two knowledge survey instruments were used for assessment purposes. Each instrument was administered Pre-, Mid- and Post course. One survey focused on the skills necessary to research and produce video documentaries. Results showed students acquired enhanced technology skills especially with regard to research techniques, writing skills and video editing. The second survey assessed students' content knowledge acquired from each documentary. Results indicated students increased their content knowledge of energy resource topics. Students reported very favorable evaluations concerning their experience with creating “Ken Burns” video project documentaries.

  10. Evaluation of the educational value of YouTube videos about physical examination of the cardiovascular and respiratory systems.

    Science.gov (United States)

    Azer, Samy A; Algrain, Hala A; AlKhelaif, Rana A; AlEshaiwi, Sarah M

    2013-11-13

    A number of studies have evaluated the educational contents of videos on YouTube. However, little analysis has been done on videos about physical examination. This study aimed to analyze YouTube videos about physical examination of the cardiovascular and respiratory systems. It was hypothesized that the educational standards of videos on YouTube would vary significantly. During the period from November 2, 2011 to December 2, 2011, YouTube was searched by three assessors for videos covering the clinical examination of the cardiovascular and respiratory systems. For each video, the following information was collected: title, authors, duration, number of viewers, and total number of days on YouTube. Using criteria comprising content, technical authority, and pedagogy parameters, videos were rated independently by three assessors and grouped into educationally useful and non-useful videos. A total of 1920 videos were screened. Only relevant videos covering the examination of adults in the English language were identified (n=56). Of these, 20 were found to be relevant to cardiovascular examinations and 36 to respiratory examinations. Further analysis revealed that 9 provided useful information on cardiovascular examinations and 7 on respiratory examinations: scoring mean 14.9 (SD 0.33) and mean 15.0 (SD 0.00), respectively. The other videos, 11 covering cardiovascular and 29 on respiratory examinations, were not useful educationally, scoring mean 11.1 (SD 1.08) and mean 11.2 (SD 1.29), respectively. The differences between these two categories were significant (P.86. A small number of videos about physical examination of the cardiovascular and respiratory systems were identified as educationally useful; these videos can be used by medical students for independent learning and by clinical teachers as learning resources. The scoring system utilized by this study is simple, easy to apply, and could be used by other researchers on similar topics.

  11. Candid camera : video surveillance system can help protect assets

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, L.

    2009-11-15

    By combining closed-circuit cameras with sophisticated video analytics to create video sensors for use in remote areas, Calgary-based IntelliView Technologies Inc.'s explosion-proof video surveillance system can help the oil and gas sector monitor its assets. This article discussed the benefits, features, and applications of IntelliView's technology. Some of the benefits include a reduced need for on-site security and operating personnel and its patented analytics product known as the SmrtDVR, where the camera's images are stored. The technology can be used in temperatures as cold as minus 50 degrees Celsius and as high as 50 degrees Celsius. The product was commercialized in 2006 when it was used by Nexen Inc. It was concluded that false alarms set off by natural occurrences such as rain, snow, glare and shadows were a huge problem with analytics in the past, but that problem has been solved by IntelliView, which has its own source code, and re-programmed code. 1 fig.

  12. View Synthesis for Advanced 3D Video Systems

    Directory of Open Access Journals (Sweden)

    2009-02-01

    Full Text Available Interest in 3D video applications and systems is growing rapidly and the technology is maturing. It is expected that multiview autostereoscopic displays will play an important role in home user environments, since they support multiuser 3D sensation and motion parallax impression. The tremendous data rate cannot be handled efficiently by representation and coding formats such as MVC or MPEG-C Part 3. Multiview video plus depth (MVD) is a new format that efficiently supports such advanced 3DV systems, but it requires high-quality intermediate view synthesis. For this, a new approach is presented that separates unreliable image regions along depth discontinuities from reliable image regions, which are treated separately and fused into the final interpolated view. In contrast to previous layered approaches, our algorithm uses two boundary layers and one reliable layer, performs image-based 3D warping only, and was implemented generically, that is, it does not necessarily rely on 3D graphics support. Furthermore, different hole-filling and filtering methods are added to provide high-quality intermediate views. As a result, high-quality intermediate views for an existing 9-view auto-stereoscopic display as well as other stereo- and multiscopic displays are presented, which prove the suitability of our approach for advanced 3DV systems.

  13. View Synthesis for Advanced 3D Video Systems

    Directory of Open Access Journals (Sweden)

    Müller Karsten

    2008-01-01

    Full Text Available Interest in 3D video applications and systems is growing rapidly and the technology is maturing. It is expected that multiview autostereoscopic displays will play an important role in home user environments, since they support multiuser 3D sensation and motion parallax impression. The tremendous data rate cannot be handled efficiently by representation and coding formats such as MVC or MPEG-C Part 3. Multiview video plus depth (MVD) is a new format that efficiently supports such advanced 3DV systems, but it requires high-quality intermediate view synthesis. For this, a new approach is presented that separates unreliable image regions along depth discontinuities from reliable image regions, which are treated separately and fused into the final interpolated view. In contrast to previous layered approaches, our algorithm uses two boundary layers and one reliable layer, performs image-based 3D warping only, and was implemented generically, that is, it does not necessarily rely on 3D graphics support. Furthermore, different hole-filling and filtering methods are added to provide high-quality intermediate views. As a result, high-quality intermediate views for an existing 9-view auto-stereoscopic display as well as other stereo- and multiscopic displays are presented, which prove the suitability of our approach for advanced 3DV systems.
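
    The core warping step that both records describe, projecting pixels into the virtual view using per-pixel depth and then filling holes, can be sketched with a simple horizontal-disparity warp; this toy version ignores the paper's boundary-layer separation and is only meant to illustrate depth-image-based rendering.

```python
import numpy as np

def warp_view(color, depth, baseline_px=8.0):
    """Very simplified depth-image-based rendering: shift each pixel
    horizontally by a disparity proportional to its normalized depth,
    then fill holes with the nearest valid pixel on the same row."""
    h, w, _ = color.shape
    out = np.zeros_like(color)
    filled = np.zeros((h, w), dtype=bool)
    disparity = (baseline_px * depth / 255.0).astype(int)   # near pixels shift more
    for y in range(h):
        for x in range(w):
            xt = x - disparity[y, x]
            if 0 <= xt < w:
                out[y, xt] = color[y, x]
                filled[y, xt] = True
        last = None
        for x in range(w):              # naive hole filling along the row
            if filled[y, x]:
                last = out[y, x]
            elif last is not None:
                out[y, x] = last
    return out

# Example with random data standing in for a rectified color + depth pair.
rng = np.random.default_rng(0)
color = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)
depth = rng.integers(0, 256, (120, 160), dtype=np.uint8)
virtual = warp_view(color, depth)
```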

  14. Video monitoring system for enriched uranium casting furnaces

    International Nuclear Information System (INIS)

    Turner, P.C.

    1978-03-01

    A closed-circuit television (CCTV) system was developed to upgrade the remote-viewing capability on two oralloy (highly enriched uranium) casting furnaces in the Y-12 Plant. A silicon vidicon CCTV camera with a remotely controlled lens and infrared filtering was provided to yield a good-quality video presentation of the furnace crucible as the oralloy material is heated from 25 to 1300 °C. Existing tube-type CCTV monochrome monitors were replaced with solid-state monitors to increase the system reliability

  15. Video-Based Big Data Analytics in Cyberlearning

    Science.gov (United States)

    Wang, Shuangbao; Kelly, William

    2017-01-01

    In this paper, we present a novel system, inVideo, for video data analytics, and its use in transforming linear videos into interactive learning objects. InVideo is able to analyze video content automatically without the need for initial viewing by a human. Using a highly efficient video indexing engine we developed, the system is able to analyze…

  16. BLUECOM+ project: Connecting Humans and Systems at Ocean Remote Areas using Cost-effective Broadband Communications field

    Science.gov (United States)

    Brito, Pedro; Terrinha, Pedro; Magalhães, Vitor; Santos, Joana; Duarte, Débora; Campos, Rui

    2017-04-01

    launched from the second vessel and connected to the first to have Internet access. The tests were performed at increasing distances up to a maximum distance of 45km from the land station and the first hop, and up to 10km between the two Helikites. The main results achieved were: • Single-hop data rates in excess of 1Mbit/s up to 45km; • Two-hop data rates in excess of 500kbit/s up to 55km; • Video conference with land at 42km offshore without a glitch; • Real-time upload of data collected by an autonomous vehicle offshore to the cloud. A 3rd cruise will be done this year to test video streaming to shore of sea bottom images acquired from the ship with a drop down video system. This will include the integration of the BLUECOM+ network with the drop down video system, in order to demonstrate real-time underwater video transmission offshore. Acknowledgements: This work was developed as part of the BLUECOM+ project (PT02_Aviso4_0005) funded by the EEA Grants and Norway Grants.

  17. The Annie Jump Cannon Video Project at the Harvard-Smithsonian Center for Astrophysics.

    Science.gov (United States)

    Lupfer, C.; Welther, B. L.; Griswold, A.

    1993-05-01

    The heart of this poster paper is the screening of the new 25-minute educational video, "Annie and the Stars of Many Colors." It explores the life and work of Annie Jump Cannon through the eyes of sixth-grade students. A production of the Science Media Group at the CfA, the video was created to interest and inspire girls and minorities, in particular, to continue their study of history and physical science in high school. Recent studies show that science teachers are successfully using videotapes in the classroom to supplement traditional methods of teaching. Other reports show that capable girls and minority students tend to drop science in high school. Our goal, then, was to create a video to stimulate the curiosity and natural interest in science of these younger students. With the help of the Public Affairs Office at the CfA, we arranged to visit local schools to talk to sixth-grade science teachers and their students about the video project. Boys and girls were both eager to participate in it. By lottery, we chose a dozen youngsters of multi-cultural backgrounds to attend a three-day workshop, during which we videotaped them discovering facts about Cannon's childhood and career. Barbara Welther, historian and principal investigator, took the group to the Harvard University Archives to look at some Cannon memorabilia. To learn about spectra, each student assembled a spectroscope from a kit and observed solar lines. CfA astronomers then led the group in various activities to explore the types of stellar spectra that Cannon classified and published in The Henry Draper Catalogue 75 years ago and that astronomers still study today. "Annie and the Stars of Many Colors" shows young people actively engaged in the process of discovery and offers teachers a novel tool to stimulate discussion of topics in science, history, women's studies, and careers. It is intended for use in schools, libraries, museums, planetariums, as well as for personal interest. For more

  18. Videos Designed to Watch but Audience Required Telling stories is a cliché for best practice in videos. Frontier Scientists, a NSF project titled Science in Alaska: using Multimedia to Support Science Education stressed story but faced audience limitations. FS describes project's story process, reach results, and hypothesizes better scenarios.

    Science.gov (United States)

    O'Connell, E. A.

    2016-12-01

    Telling stories is a cliché for best practice in science videos. It's upheld as a method to capture audience attention in many fields. Findings from neurobiology research show character-driven stories cause the release of the neurochemical oxytocin in the brain. Oxytocin motivates cooperation with others and enhances a sense of empathy, in particular the ability to experience others' emotions. Developing character tension- as in our video design showcasing scientists along with their work- holds the viewers' attention, promotes recall of story, and has the potential to clearly broadcast the feelings and behaviors of the scientists. The brain chemical change should help answer the questions: Why should a viewer care about this science? How does it improve the world, or our lives? Is just a story-driven video the solution to science outreach? Answer: Not in our multi-media world. Frontier Scientists (FS) discovered in its three year National Science Foundation project titled 'Science in Alaska: using Multimedia to Support Science Education': the storied video is only part of the effort. Although FS created from scratch and drove a multimedia national campaign throughout the project, major reach was not achieved. Despite FS' dedicated web site, YouTube channel, weekly blog, monthly press release, Facebook and G+ pages, Twitter activity, contact with scientists' institutions, and TV broadcast, monthly activity on the web site seemed to plateau at about 3000 visitors to the FS website per month. Several factors hampered the effort: Inadequate funding for social media limited the ability of FS to get the word to untapped markets: those whose interest might be sparked by ad campaigns but who do not actively explore unfamiliar agencies' science education content. However, when institutions took advantage of promoting their scientists through the FS videos we saw an uptick in video views and the participating scientists were often contacted for additional stories or were

  19. Performance of a video-image-subtraction-based patient positioning system

    International Nuclear Information System (INIS)

    Milliken, Barrett D.; Rubin, Steven J.; Hamilton, Russell J.; Johnson, L. Scott; Chen, George T.Y.

    1997-01-01

    Purpose: We have developed and tested an interactive video system that utilizes image subtraction techniques to enable high precision patient repositioning using surface features. We report quantitative measurements of system performance characteristics. Methods and Materials: Video images can provide a high precision, low cost measure of patient position. Image subtraction techniques enable one to incorporate detailed information contained in the image of a carefully verified reference position into real-time images. We have developed a system using video cameras providing orthogonal images of the treatment setup. The images are acquired, processed and viewed using an inexpensive frame grabber and a PC. The subtraction images provide the interactive guidance needed to quickly and accurately place a patient in the same position for each treatment session. We describe the design and implementation of our system, and its quantitative performance, using images both to measure changes in position, and to achieve accurate setup reproducibility. Results: Under clinical conditions (60 cm field of view, 3.6 m object distance), the position of static, high contrast objects could be measured with a resolution of 0.04 mm (rms) in each of two dimensions. The two-dimensional position could be reproduced using the real-time image display with a resolution of 0.15 mm (rms). Two-dimensional measurement resolution of the head of a patient undergoing treatment for head and neck cancer was 0.1 mm (rms), using a lateral view, measuring the variation in position of the nose and the ear over the course of a single radiation treatment. Three-dimensional repositioning accuracy of the head of a healthy volunteer using orthogonal camera views was less than 0.7 mm (systematic error) with an rms variation of 1.2 mm. Setup adjustments based on the video images were typically performed within a few minutes. The higher precision achieved using the system to measure objects than to reposition
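
    A toy illustration of the subtraction principle: compare a stored reference image of the verified setup with the live frame and quantify the residual difference. The threshold and the score are arbitrary choices for the sketch, not parameters of the reported system.

```python
import numpy as np

def subtraction_overlay(reference, live, threshold=25):
    """Compare the stored reference image of the verified setup with the
    live camera frame and return a mask of pixels that differ, plus a
    misalignment score. A real system displays this difference
    interactively; here we only compute it."""
    diff = np.abs(reference.astype(int) - live.astype(int))
    mask = diff > threshold                     # pixels where the subject moved
    score = mask.mean()                         # fraction of changed pixels
    return mask, score

# Example: a synthetic 'patient' square shifted by a few pixels.
ref = np.zeros((100, 100), dtype=np.uint8); ref[40:60, 40:60] = 200
live = np.zeros_like(ref);                  live[43:63, 42:62] = 200
mask, score = subtraction_overlay(ref, live)
print(f"misaligned fraction: {score:.3f}")   # approaches 0 as the setup matches
```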

  20. Video dosimetry: evaluation of X-radiation dose by video fluoroscopic image

    International Nuclear Information System (INIS)

    Nova, Joao Luiz Leocadio da; Lopes, Ricardo Tadeu

    1996-01-01

    A new methodology to evaluate the entrance surface dose on patients under radiodiagnosis is presented. A phantom is used in video fluoroscopic procedures in an on-line video signal system. The images are obtained from a Siemens Polymat 50 and are digitized. The results show that the entrance surface dose can be obtained in real time from video imaging

  1. A portable wireless power transmission system for video capsule endoscopes.

    Science.gov (United States)

    Shi, Yu; Yan, Guozheng; Zhu, Bingquan; Liu, Gang

    2015-01-01

    Wireless power transmission (WPT) technology can solve the energy shortage problem of the video capsule endoscope (VCE) powered by button batteries, but the fixed platform has limited its clinical application. This paper presents a portable WPT system for VCE. Besides portability, power transfer efficiency and stability are considered the main indexes in the optimization design of the system, which covers the transmitting coil structure, the portable control box, the operating frequency, and the magnetic core and winding of the receiving coil. Based on the above principles, the relevant parameters are measured, compared and chosen. Finally, through experiments on the platform, the methods are tested and evaluated. In the gastrointestinal tract of a small pig, the VCE is supplied with sufficient energy by the WPT system, and the energy conversion efficiency is 2.8%. The video obtained is clear, with a resolution of 320×240 and a frame rate of 30 frames per second. The experiments verify the feasibility of the design scheme, and further improvement directions are discussed.

  2. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    Science.gov (United States)

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make easy and efficient video searches. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is not any other application developed as the one presented in this paper. PMID:22438753

  3. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    Directory of Open Access Journals (Sweden)

    Alvaro Suarez

    2012-02-01

    Full Text Available Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make easy and efficient video searches. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is not any other application developed as the one presented in this paper.

  4. Architecture and protocol of a semantic system designed for video tagging with sensor data in mobile devices.

    Science.gov (United States)

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make easy and efficient video searches. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is not any other application developed as the one presented in this paper.
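
    The tagging idea common to the three records above can be sketched as attaching the current sensor readings to each captured frame index and serializing the result for upload alongside the video; the field names and the stub sensor reader are assumptions, not the schema of the described system.

```python
import json, time
from dataclasses import dataclass, asdict

@dataclass
class FrameTag:
    """One semantic tag attached to a video frame; field names are
    illustrative, not the authors' actual schema."""
    frame_index: int
    timestamp: float
    latitude: float
    longitude: float
    temperature_c: float

def tag_stream(sensor_reader, fps=30, seconds=2):
    """Attach the current sensor readings to each frame index, producing a
    JSON-serializable list that could be uploaded with the video and later
    used for semantic search."""
    tags = []
    for i in range(int(fps * seconds)):
        lat, lon, temp = sensor_reader()          # assumed sensor access
        tags.append(FrameTag(i, time.time(), lat, lon, temp))
    return [asdict(t) for t in tags]

# Example with a stub reader standing in for the phone's sensors.
fake_sensors = lambda: (27.76, -15.57, 24.5)
print(json.dumps(tag_stream(fake_sensors, fps=2, seconds=1), indent=2))
```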

  5. IndigoVision IP video keeps watch over remote gas facilities in Amazon rainforest

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    2010-07-15

    In Brazil, IndigoVision's complete IP video security technology is being used to remotely monitor automated gas facilities in the Amazon rainforest. Twelve compounds containing millions of dollars of process automation, telemetry, and telecom equipment are spread across many thousands of miles of forest and centrally monitored in Rio de Janeiro using Control Center, the company's Security Management software. The security surveillance project uses a hybrid IP network comprising satellite, fibre optic, and wireless links. In addition to advanced compression technology and bandwidth tuning tools, the IP video system uses Activity Controlled Framerate (ACF), which controls the frame rate of the camera video stream based on the amount of motion in a scene. In the absence of activity, the video is streamed at a minimum framerate, but the moment activity is detected the framerate jumps to the configured maximum. This significantly reduces the amount of bandwidth needed. At each remote facility, fixed analog cameras are connected to transmitter nodules that convert the feed to high-quality digital video for transmission over the IP network. The system also integrates alarms with video surveillance. PIR intruder detectors are connected to the system via digital inputs on the transmitters. Advanced alarm-handling features in the Control Center software process the PIR detector alarms and alert operators to potential intrusions. This improves operator efficiency and incident response. 1 fig.
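
    The Activity Controlled Framerate behaviour described above can be illustrated with a simple frame-differencing rule: stream at a minimum rate while the scene is quiet and jump to the maximum rate when motion is detected. The rates and threshold below are assumptions, not the vendor's values.

```python
import numpy as np

def choose_framerate(prev_frame, frame, min_fps=1, max_fps=25, threshold=12.0):
    """Sketch of an activity-controlled framerate rule: minimum rate for a
    static scene, maximum rate once inter-frame motion exceeds a threshold."""
    motion = np.abs(frame.astype(float) - prev_frame.astype(float)).mean()
    return max_fps if motion > threshold else min_fps

rng = np.random.default_rng(1)
static = rng.integers(0, 256, (240, 320), dtype=np.uint8)
moving = np.roll(static, 40, axis=1)              # simulate large motion
print(choose_framerate(static, static.copy()))    # -> 1 (quiet scene)
print(choose_framerate(static, moving))           # -> 25 (activity detected)
```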

  6. Video control system for a drilling in furniture workpiece

    Science.gov (United States)

    Khmelev, V. L.; Satarov, R. N.; Zavyalova, K. V.

    2018-05-01

    During the last 5 years, Russian industry has been undergoing robotization, and scientific groups have therefore been given new tasks. One of these new tasks is machine vision systems, which should solve the problem of automatic quality control. Systems of this type cost several thousand dollars each, a price that is prohibitive for regional small businesses. In this article, we describe the principle and algorithm of a cheap video control system that uses web cameras and a notebook or desktop computer as the computing unit.
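
    As an illustration of what such a low-cost webcam check might look like, the sketch below detects circular drill holes in a captured frame with an off-the-shelf OpenCV circle detector; the article's own algorithm is not given here, so this is only a plausible stand-in with assumed parameters.

```python
import cv2
import numpy as np

def count_drill_holes(frame_bgr, expected=4):
    """Illustrative check of a drilled furniture workpiece: detect circles
    in the frame and compare their count with the expected number of holes.
    Returns the detected circles and a pass/fail flag."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                       # suppress sensor noise
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=100, param2=30, minRadius=5, maxRadius=40)
    n = 0 if circles is None else circles.shape[1]
    return circles, n == expected

# Frames could come from a cheap web camera, e.g.:
# cap = cv2.VideoCapture(0); ok, frame = cap.read()
# circles, passed = count_drill_holes(frame, expected=4)
```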

  7. The everyday lives of video game developers: Experimentally understanding underlying systems/structures

    Directory of Open Access Journals (Sweden)

    Casey O'Donnell

    2009-03-01

    Full Text Available This essay examines how tensions between work and play for video game developers shape the worlds they create. The worlds of game developers, whose daily activity is linked to larger systems of experimentation and technoscientific practice, provide insights that transcend video game development work. The essay draws on ethnographic material from over 3 years of fieldwork with video game developers in the United States and India. It develops the notion of creative collaborative practice based on work in the fields of science and technology studies, game studies, and media studies. The importance of, the desire for, or the drive to understand underlying systems and structures has become fundamental to creative collaborative practice. I argue that the daily activity of game development embodies skills fundamental to creative collaborative practice and that these capabilities represent fundamental aspects of critical thought. Simultaneously, numerous interests have begun to intervene in ways that endanger these foundations of creative collaborative practice.

  8. Video library for video imaging detection at intersection stop lines.

    Science.gov (United States)

    2010-04-01

    The objective of this activity was to record video that could be used for controlled evaluation of video image vehicle detection system (VIVDS) products and software upgrades to existing products based on a list of conditions that might be diffic...

  9. Digitized video subject positioning and surveillance system for PET

    International Nuclear Information System (INIS)

    Picard, Y.; Thompson, C.J.

    1995-01-01

    Head motion is a significant contribution to the degradation of image quality of Positron Emission Tomography (PET) studies. Images from different studies must also be realigned digitally to be correlated when the subject position has changed. These constraints could be eliminated if the subject's head position could be monitored accurately. The authors have developed a video camera-based surveillance system to monitor the head position and motion of subjects undergoing PET studies. The system consists of two CCD (charge-coupled device) cameras placed orthogonally such that both face and profile views of the subject's head are displayed side by side on an RGB video monitor. Digitized images overlay the live images in contrasting colors on the monitor. Such a system can be used to (1) position the subject in the field of view (FOV) by displaying the position of the scanner's slices on the monitor along with the current subject position, (2) monitor head motion and alert the operator of any motion during the study and (3) reposition the subject accurately for subsequent studies by displaying the previous position along with the current position in a contrasting color

  10. Video Game Literacy - Exploring new paradigms and new educational activities

    Directory of Open Access Journals (Sweden)

    Damiano Felini

    2010-12-01

    Full Text Available Literacy is a complex concept of relevance for both traditional and most recent educational theories. Today, concepts of media literacy are being discussed widely. In this article a simple theoretical model and an action-research project are presented. The research project focuses on a training course aiming at the development and strengthening of critical thinking and communicative skills of young people by way of making use of video games. Practical aspects of how to produce a video game with teens and conceptual aspects towards a "video game literacy" are discussed.

  11. Robust Watermarking of Video Streams

    Directory of Open Access Journals (Sweden)

    T. Polyák

    2006-01-01

    Full Text Available In the past few years there has been an explosion in the use of digital video data. Many people have personal computers at home, and with the help of the Internet users can easily share video files on their computer. This makes possible the unauthorized use of digital media, and without adequate protection systems the authors and distributors have no means to prevent it. Digital watermarking techniques can help these systems to be more effective by embedding secret data right into the video stream. This makes minor changes in the frames of the video, but these changes are almost imperceptible to the human visual system. The embedded information can involve copyright data, access control etc. A robust watermark is resistant to various distortions of the video, so it cannot be removed without affecting the quality of the host medium. In this paper I propose a video watermarking scheme that fulfills the requirements of a robust watermark.
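
    The abstract does not specify the embedding algorithm; as one commonly used robust scheme (a sketch, not the author's method), a watermark bit can be embedded by quantizing a mid-frequency DCT coefficient of an 8x8 luminance block. The coefficient position and quantization step below are illustrative choices.

        import numpy as np
        import cv2

        def embed_bit(block8x8, bit, pos=(3, 2), step=12.0):
            # Embed one watermark bit by forcing the parity of a quantized
            # mid-frequency DCT coefficient (quantization index modulation).
            coeffs = cv2.dct(block8x8.astype(np.float32))
            q = int(np.round(coeffs[pos] / step))
            if q % 2 != bit:
                q += 1
            coeffs[pos] = q * step
            return cv2.idct(coeffs)

        def extract_bit(block8x8, pos=(3, 2), step=12.0):
            coeffs = cv2.dct(block8x8.astype(np.float32))
            return int(np.round(coeffs[pos] / step)) % 2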

  12. Automated Indexing and Search of Video Data in Large Collections with inVideo

    Directory of Open Access Journals (Sweden)

    Shuangbao Paul Wang

    2017-08-01

    Full Text Available In this paper, we present a novel system, inVideo, for automatically indexing and searching videos based on the keywords spoken in the audio track and the visual content of the video frames. Using the highly efficient video indexing engine we developed, inVideo is able to analyze videos using machine learning and pattern recognition without the need for initial viewing by a human. The time-stamped commenting and tagging features refine the accuracy of search results. The cloud-based implementation makes it possible to conduct elastic search, augmented search, and data analytics. Our research shows that inVideo is an efficient tool for processing and analyzing videos and for increasing interactions in video-based online learning environments. Data from a cybersecurity program with more than 500 students show that applying inVideo to current video material significantly increased student-student and student-faculty interactions across 24 sections program-wide.

  13. Designing with video focusing the user-centred design process

    CERN Document Server

    Ylirisku, Salu Pekka

    2007-01-01

    Digital video for user-centered co-design is an emerging field of design, gaining increasing interest in both industry and academia. It merges the techniques and approaches of design ethnography, participatory design, interaction analysis, scenario-based design, and usability studies. This book covers the complete user-centered design project. It illustrates in detail how digital video can be utilized throughout the design process, from early user studies to making sense of video content and envisioning the future with video scenarios to provoking change with video artifacts. The text includes

  14. Hierarchical video surveillance architecture: a chassis for video big data analytics and exploration

    Science.gov (United States)

    Ajiboye, Sola O.; Birch, Philip; Chatwin, Christopher; Young, Rupert

    2015-03-01

    There is increasing reliance on video surveillance systems for systematic derivation, analysis and interpretation of the data needed for predicting, planning, evaluating and implementing public safety. This is evident from the massive number of surveillance cameras deployed across public locations. For example, in July 2013, the British Security Industry Association (BSIA) reported that over 4 million CCTV cameras had been installed in Britain alone. The BSIA also reveals that only 1.5% of these are state owned. In this paper, we propose a framework that allows access to data from privately owned cameras, with the aim of increasing the efficiency and accuracy of public safety planning, security activities, and decision support systems that are based on video integrated surveillance systems. The accuracy of results obtained from government-owned public safety infrastructure would improve greatly if privately owned surveillance systems `expose' relevant video-generated metadata events, such as triggered alerts, and also permit query of a metadata repository. Subsequently, a police officer, for example, with an appropriate level of system permission can query unified video systems across a large geographical area such as a city or a country to predict the location of an interesting entity, such as a pedestrian or a vehicle. This becomes possible with our proposed novel hierarchical architecture, the Fused Video Surveillance Architecture (FVSA). At the high level, FVSA comprises a hardware framework that is supported by a multi-layer abstraction software interface. It presents video surveillance systems as an adapted computational grid of intelligent services, which is integration-enabled to communicate with other compatible systems in the Internet of Things (IoT).

  15. An Authentic Learning Environment Based on Video Project among Arabic Learners

    Directory of Open Access Journals (Sweden)

    Azman Che Mat

    2017-05-01

    Full Text Available Role playing is among the language activities that stimulate language learners to use the language they are learning. However, a successful activity is always challenging, especially when the learners are beginners. Therefore, a special arrangement needs to be made by instructors. This article explores the use of storyboards, or ‘PCVA’, to help Arabic learners prepare for their video project based on role playing. Blended methods were used to collect data, namely surveys, interviews, and observations. The participants were degree students from the second level (TAC451) and third level (TAC501) of an Arabic course, for a total of 87 respondents. Interviews and observations were conducted during the consultation period, and the related information was documented for the purpose of the study. Descriptive analysis was used to interpret the data. The findings showed positive feedback from the learners who were involved in the experiment.

  16. On the development of new SPMN diurnal video systems for daylight fireball monitoring

    Science.gov (United States)

    Madiedo, J. M.; Trigo-Rodríguez, J. M.; Castro-Tirado, A. J.

    2008-09-01

    Daylight fireball video monitoring. High-sensitivity video devices are commonly used for the study of the activity of meteor streams during the night. These provide useful data for the determination, for instance, of radiant, orbital and photometric parameters ([1] to [7]). With this aim, during 2006 three automated video stations supported by Universidad de Huelva were set up in Andalusia within the framework of the SPanish Meteor Network (SPMN). These are endowed with 8-9 high-sensitivity wide-field video cameras that achieve a meteor limiting magnitude of about +3. These stations have increased the coverage performed by the low-scan all-sky CCD systems operated by the SPMN and, besides, achieve a time accuracy of about 0.01 s for determining the appearance of meteor and fireball events. Despite these nocturnal monitoring efforts, we realised the need to set up stations for daylight fireball detection. Such an effort was also motivated by the two recent meteorite-dropping events of Villalbeto de la Peña [8,9] and Puerto Lápice [10]. Although the Villalbeto de la Peña event was casually videotaped and photographed, no direct pictures or videos were obtained for the Puerto Lápice event. Consequently, in order to perform a continuous recording of daylight fireball events, we set up new automated systems based on CCD video cameras. However, the development of these video stations raises several issues, compared with nocturnal systems, that must be properly solved in order to achieve optimal operation. The first of these video stations, also supported by the University of Huelva, was set up in Sevilla (Andalusia) during May 2007. Of course, fireball association is unequivocal only in those cases when two or more stations record the fireball, and when, consequently, the geocentric radiant is accurately determined. With this aim, a second diurnal video station is being set up in Andalusia in the facilities of Centro Internacional de Estudios y

  17. YouTube Video Project: A "Cool" Way to Learn Communication Ethics

    Science.gov (United States)

    Lehman, Carol M.; DuFrene, Debbie D.; Lehman, Mark W.

    2010-01-01

    The millennial generation embraces new technologies as a natural way of accessing and exchanging information, staying connected, and having fun. YouTube, a video-sharing site that allows users to upload, view, and share video clips, is among the latest "cool" technologies for enjoying quick laughs, employing a wide variety of corporate activities,…

  18. Application of Project Portfolio Management

    Science.gov (United States)

    Pankowska, Malgorzata

    The main goal of the chapter is to present the application of the project portfolio management approach to support the development of e-Municipality and public administration information systems. The models of how people publish and utilize information on the web have been transformed continually. Instead of simply viewing static web pages, users publish their own content through blogs and photo- and video-sharing sites. The ICT (Information and Communication Technology) projects for municipalities analysed in this chapter cover a mixture of static web pages, e-Government information systems, and wikis. For the management of these mixtures of ICT projects, the project portfolio management approach is proposed.

  19. Image/video understanding systems based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-03-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks. The human brain has been found to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. Spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models combines learning, classification, and analogy together with higher-level model-based reasoning into a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on such principles, an Image/Video Understanding system can convert images into knowledge models and resolve uncertainty and ambiguity. This allows intelligent computer vision systems for design and manufacturing to be created.

  20. On the use of video projectors for three-dimensional scanning

    Science.gov (United States)

    Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.; Robledo-Sanchez, Carlos; Diaz-Gonzalez, Gerardo

    2017-08-01

    Structured light projection is one of the most useful methods for accurate three-dimensional scanning. Video projectors are typically used as the illumination source. However, because video projectors are not designed for structured light systems, some considerations such as gamma calibration must be taken into account. In this work, we present a simple method for gamma calibration of video projectors. First, the experimental fringe patterns are normalized. Then, the samples of the fringe patterns are sorted in ascending order. The sample sorting leads to a simple three-parameter sine curve that is fitted using the Gauss-Newton algorithm. The novelty of this method is that the sorting process removes the effect of the unknown phase. Thus, the resulting gamma calibration algorithm is significantly simplified. The feasibility of the proposed method is illustrated in a three-dimensional scanning experiment.
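
    As a rough numerical sketch of the fitting step (not the authors' code), the snippet below sorts the normalized fringe samples, which removes the unknown phase, and fits a three-parameter sine model with a damped Gauss-Newton (Levenberg-Marquardt) solver; the exact form of the model is an illustrative assumption.

        import numpy as np
        from scipy.optimize import least_squares

        def fit_sorted_samples(samples):
            # Sorting the normalized samples removes the effect of the unknown phase.
            y = np.sort(np.asarray(samples, dtype=float))
            t = np.linspace(0.0, 1.0, y.size)

            def residuals(p):
                a, b, c = p                      # three-parameter sine model (assumed form)
                return a + b * np.sin(c * t) - y

            # method="lm" is a Levenberg-Marquardt (damped Gauss-Newton) solver.
            init = [y.mean(), (y.max() - y.min()) / 2.0, np.pi]
            return least_squares(residuals, x0=init, method="lm").x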

  1. Developing Project Based Learning E-Module for the Course of Video Editing

    Directory of Open Access Journals (Sweden)

    Ketut Krisnayuni

    2017-04-01

    Full Text Available This study examined the development of an electronic module for the course of video editing and analyzed the students’ response to the e-module. A waterfall model was adopted in the development process of the e-module, consisting of five stages, namely (1) analysis, (2) design, (3) implementation, (4) evaluation, and (5) maintenance. The subjects of this study were the students of class XI at SMK Negeri 1 Sukasada. Project Based Learning was used as the basis of the e-module development as the learning model most relevant to the students’ needs and the school’s situation. The data on the students’ response to the e-module were collected through a questionnaire. The students’ response was very positive, indicated by a mean score of 94.37. It was concluded that the developed e-module was categorized as very good.

  2. Video profile monitor diagnostic system for GTA

    International Nuclear Information System (INIS)

    Sandoval, D.P.; Garcia, R.C.; Gilpatrick, J.D.; Johnson, K.F.; Shinas, M.A.; Wright, R.; Yuan, V.; Zander, M.E.

    1992-01-01

    This paper describes a video diagnostic system used to measure the beam profile and position of the Ground Test Accelerator 2.5-MeV H- ion beam as it exits the intermediate matching section. Inelastic collisions between H- ions and residual nitrogen in the vacuum chamber cause the nitrogen to fluoresce. The resulting light is captured through transport optics by an intensified CCD camera and is digitized. Real-time beam-profile images are displayed and stored for detailed analysis. Analyzed data showing resolutions for both position and profile measurements will also be presented.

  3. Improving student learning via mobile phone video content: Evidence from the BridgeIT India project

    Science.gov (United States)

    Wennersten, Matthew; Quraishy, Zubeeda Banu; Velamuri, Malathi

    2015-08-01

    Past efforts invested in computer-based education technology interventions have generated little evidence of affordable success at scale. This paper presents the results of a mobile phone-based intervention conducted in the Indian states of Andhra Pradesh and Tamil Nadu in 2012-13. The BridgeIT project provided a pool of audio-visual learning materials organised in accordance with a system of syllabi pacing charts. Teachers of Standard 5 and 6 English and Science classes were notified of the availability of new videos via text messages (SMS), which they downloaded onto their phones using an open-source application and showed, with suggested activities, to students on a TV screen using a TV-out cable. In their evaluation of this project, the authors of this paper found that the test scores of children who experienced the intervention improved by 0.36 standard deviations in English and 0.98 standard deviations in Science in Andhra Pradesh, relative to students in similar classrooms who did not experience the intervention. Differences between treatment and control schools in Tamil Nadu were less marked. The intervention was also cost-effective, relative to other computer-based interventions. Based on these results, the authors argue that it is possible to use mobile phones to produce a strong, positive and statistically significant effect on teaching and learning quality across a large number of classrooms in India at a lower cost per student than past computer-based interventions.

  4. Video personalization for usage environment

    Science.gov (United States)

    Tseng, Belle L.; Lin, Ching-Yung; Smith, John R.

    2002-07-01

    A video personalization and summarization system is designed and implemented incorporating usage environment to dynamically generate a personalized video summary. The personalization system adopts the three-tier server-middleware-client architecture in order to select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. Our semantic metadata is provided through the use of the VideoAnnEx MPEG-7 Video Annotation Tool. When the user initiates a request for content, the client communicates the MPEG-21 usage environment description along with the user query to the middleware. The middleware is powered by the personalization engine and the content adaptation engine. Our personalization engine includes the VideoSue Summarization on Usage Environment engine that selects the optimal set of desired contents according to user preferences. Afterwards, the adaptation engine performs the required transformations and compositions of the selected contents for the specific usage environment using our VideoEd Editing and Composition Tool. Finally, two personalization and summarization systems are demonstrated for the IBM Websphere Portal Server and for the pervasive PDA devices.

  5. AUTOMATIC FAST VIDEO OBJECT DETECTION AND TRACKING ON VIDEO SURVEILLANCE SYSTEM

    Directory of Open Access Journals (Sweden)

    V. Arunachalam

    2012-08-01

    Full Text Available This paper describes advanced techniques for object detection and tracking in video. Most visual surveillance systems start with motion detection. Motion detection methods attempt to locate connected regions of pixels that represent the moving objects within the scene; different approaches include frame-to-frame differencing, background subtraction and motion analysis. Motion detection can be achieved by Principal Component Analysis (PCA), after which objects are separated from the background using background subtraction. The detected objects can then be segmented. Segmentation consists of two schemes: one for spatial segmentation and the other for temporal segmentation. Tracking is then performed on the detected objects in each frame. The pixel-labelling problem can be alleviated by the MAP (Maximum a Posteriori) technique.
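
    A minimal OpenCV sketch of the frame-to-frame difference and background-subtraction steps named above is given below; the input file name and threshold are hypothetical, and the PCA-based detection and MAP labelling of the paper are not reproduced.

        import cv2

        cap = cv2.VideoCapture("surveillance.avi")          # hypothetical input video
        backsub = cv2.createBackgroundSubtractorMOG2()       # background model
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            diff = cv2.absdiff(prev_gray, gray)                          # frame-to-frame difference
            _, moving = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
            fg_mask = backsub.apply(frame)                               # background subtraction
            detected = cv2.bitwise_and(fg_mask, moving)                  # pixels flagged by both cues
            prev_gray = gray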

  6. A high precision video-electronic measuring system for use with solid state track detectors

    International Nuclear Information System (INIS)

    Schott, J.U.; Schopper, E.; Staudte, R.

    1976-01-01

    A video-electronic image analyzing system, Quantimet 720, has been modified to meet the requirements of measuring the tracks of nuclear particles in solid state track detectors, with a resulting improvement in precision and speed and the elimination of subjective influences. A microscope equipped with an automatic XY stage projects the image onto the cathode of a vidicon amplifier. Within the generated TV picture, characterized by the coordinates XY in the specimen, we determine the coordinates xy of events by setting cross lines on the screen, which corresponds to a digital accuracy of 0.1 μm at the position of the object. Automatic movement in the Z-direction can be performed by a stepping motor and measured electronically, or continuously by applying an electric voltage to a piezostrictive support of the objective. (orig.) [de

  7. Modernization projects in Santa Maria de Garona

    International Nuclear Information System (INIS)

    Marcos, R.; Alutiz, J. I.; Garcia Sanchez, M.

    2011-01-01

    This article gives an overview of the modernization guidelines of the Santa Maria de Garona Power Plant and presents the most significant projects deployed at the plant in the last decade, grouped into mechanical, electrical, instrumentation and IT projects. Three projects are explained in more detail: the replacement of one of the main transformers, the evolution from paper recorders to paperless video graphic recorders, and the new plant data information system. (Author)

  8. Hybrid compression of video with graphics in DTV communication systems

    NARCIS (Netherlands)

    Schaar, van der M.; With, de P.H.N.

    2000-01-01

    Advanced broadcast manipulation of TV sequences and enhanced user interfaces for TV systems have resulted in an increased amount of pre- and post-editing of video sequences, where graphical information is inserted. However, in the current broadcasting chain, there are no provisions for enabling an

  9. Video Retrieval Based on Text and Images

    Directory of Open Access Journals (Sweden)

    Rahmi Hidayati

    2013-01-01

    Video retrieval is used to search for a video based on a query entered by the user, either text or an image. Such a system can improve searching during video browsing and is expected to reduce video retrieval time. The purpose of this research was to design and build a software application for video retrieval based on the text and images in the video. The indexing process for text consists of tokenizing, filtering (stopword removal) and stemming; the stemming results are saved in a text index table. The indexing process for images creates a color histogram and computes the mean and standard deviation of each primary color (red, green and blue, RGB) for every image; the results of this feature extraction are stored in an image table. Video retrieval uses a text query, an image query, or both. For a text query, the system processes the query by looking it up in the text index table; if the query is found, the system displays the video information that matches it. For an image query, the system computes the feature values (the red, green and blue means and standard deviations); if these six extracted features match an entry in the image index table, the system displays the corresponding video information. When both a text query and an image query are given, the system displays the video information only if the text query and the image query refer to the same film title. Keywords: video, index, retrieval, text, image
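
    A small sketch of the image-side feature extraction described above (mean and standard deviation of the red, green and blue channels) and a simple matching rule follows; the Euclidean distance used for matching is an illustrative choice, not necessarily the paper's rule.

        import numpy as np
        import cv2

        def rgb_mean_std(image_path):
            # Per-channel mean and standard deviation (R, G, B) stored in the image index.
            bgr = cv2.imread(image_path)
            rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).reshape(-1, 3).astype(float)
            return np.concatenate([rgb.mean(axis=0), rgb.std(axis=0)])

        def match_score(query_features, indexed_features):
            # Smaller distance means a closer match between query and indexed image.
            return float(np.linalg.norm(query_features - indexed_features))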

  10. YouTube, "Drug Videos" and Drugs Education

    Science.gov (United States)

    Manning, Paul

    2013-01-01

    Aims: This article reports on findings to emerge from a project examining YouTube "drug videos" in the light of an emerging literature on the relationship between YouTube and health education. The aim of this article is to describe the variety of discourses circulated by the "drug videos" available on YouTube and to consider…

  11. Video game-based neuromuscular electrical stimulation system for calf muscle training: a case study.

    Science.gov (United States)

    Sayenko, D G; Masani, K; Milosevic, M; Robinson, M F; Vette, A H; McConville, K M V; Popovic, M R

    2011-03-01

    A video game-based training system was designed to integrate neuromuscular electrical stimulation (NMES) and visual feedback as a means to improve strength and endurance of the lower leg muscles, and to increase the range of motion (ROM) of the ankle joints. The system allowed the participants to perform isotonic concentric and isometric contractions in both the plantarflexors and dorsiflexors using NMES. In the proposed system, the contractions were performed against exterior resistance, and the angle of the ankle joints was used as the control input to the video game. To test the practicality of the proposed system, an individual with chronic complete spinal cord injury (SCI) participated in the study. The system provided a progressive overload for the trained muscles, which is a prerequisite for successful muscle training. The participant indicated that he enjoyed the video game-based training and that he would like to continue the treatment. The results show that the training resulted in a significant improvement of the strength and endurance of the paralyzed lower leg muscles, and in an increased ROM of the ankle joints. Video game-based training programs might be effective in motivating participants to train more frequently and adhere to otherwise tedious training protocols. It is expected that such training will not only improve the properties of their muscles but also decrease the severity and frequency of secondary complications that result from SCI. Copyright © 2010 IPEM. All rights reserved.

  12. Interactive Video Courseware for Graphic Communications Teachers and Students.

    Science.gov (United States)

    Sanders, Mark

    1985-01-01

    At Virginia Polytechnic Institute and State University, interactive video serves both as an instructional tool and a project for creative students in graphic communications. The package facilitates courseware development and teaches students simultaneously about microcomputer and video technology. (SK)

  13. Evaluation of Distance Education System for Adult Education Using 4 Video Transmissions

    OpenAIRE

    渡部, 和雄; 湯瀬, 裕昭; 渡邉, 貴之; 井口, 真彦; 藤田, 広一

    2004-01-01

    The authors have developed a distance education system for interactive education which can transmit 4 video streams between distant lecture rooms. In this paper, we describe the results of our experiments using the system for adult education. We propose some efficient ways to use the system for adult education.

  14. Development of P4140 video data wall projector; Video data wall projector

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, H.; Inoue, H. [Toshiba Corp., Tokyo (Japan)

    1998-12-01

    The P4140 is a 3 cathode-ray tube (CRT) video data wall projector for super video graphics array (SVGA) signals. It is used as an image display unit, providing a large screen when several sets are put together. A high-quality picture has been realized by higher resolution and improved color uniformity technology. A new convergence adjustment system has also been developed through the optimal combination of digital and analog technologies. This video data wall installation has been greatly enhanced by the automation of cubes and cube performance settings. The P4140 video data wall projector can be used for displaying not only data but video as well. (author)

  15. An overview of recent end-to-end wireless medical video telemedicine systems using 3G.

    Science.gov (United States)

    Panayides, A; Pattichis, M S; Pattichis, C S; Schizas, C N; Spanias, A; Kyriacou, E

    2010-01-01

    Advances in video compression, network technologies, and computer technologies have contributed to the rapid growth of mobile health (m-health) systems and services. Wide deployment of such systems and services is expected in the near future, and it is foreseen that they will soon be incorporated into daily clinical practice. This study focuses on describing the basic components of an end-to-end wireless medical video telemedicine system, providing a brief overview of recent advances in the field, while also highlighting future trends in the design of telemedicine systems that are diagnostically driven.

  16. Optimal use of video for teaching the practical implications of studying business information systems

    DEFF Research Database (Denmark)

    Fog, Benedikte; Ulfkjær, Jacob Kanneworff Stigsen; Schlichter, Bjarne Rerup

    that video should be introduced early during a course to prevent students’ misconceptions of working with business information systems, as well as to increase motivation and comprehension within the academic area. It is also considered important to have a trustworthy person explaining the practical......The study of business information systems has become increasingly important in the Digital Economy. However, it has been found that students have difficulties understanding the practical implications thereof, and this leads to a decrease in motivation. This study aims to investigate how to optimize... not sufficiently reflect the theoretical recommendations of using video optimally in management education. It did not comply with the video learning sequence as introduced by Marx and Frost (1998). However, it questions if the level of cognitive orientation activities can become too extensive. It finds

  17. Extracting foreground ensemble features to detect abnormal crowd behavior in intelligent video-surveillance systems

    Science.gov (United States)

    Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien

    2017-09-01

    Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.
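
    The framework above is described only in general terms; as a minimal illustration under assumed choices (OpenCV MOG2 foreground detection and three hand-picked statistics), a per-frame feature vector that could feed the downstream classifier might look like this:

        import cv2
        import numpy as np

        backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

        def foreground_features(frame):
            # Foreground ratio, blob count and largest blob area as a tiny per-frame feature vector.
            mask = cv2.medianBlur(backsub.apply(frame), 5)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            areas = [cv2.contourArea(c) for c in contours]
            return np.array([float((mask > 0).mean()), len(areas), max(areas, default=0.0)])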

  18. Video Comparator

    International Nuclear Information System (INIS)

    Rose, R.P.

    1978-01-01

    The Video Comparator is a comparative gage that uses electronic images from two sources, a standard and an unknown. Two matched video cameras are used to obtain the electronic images. The video signals are mixed and displayed on a single video receiver (CRT). The video system is manufactured by ITP of Chatsworth, CA and is a Tele-Microscope II, Model 148. One of the cameras is mounted on a toolmaker's microscope stand and produces a 250X image of a cast. The other camera is mounted on a stand and produces an image of a 250X template. The two video images are mixed in a control box provided by ITP and displayed on a CRT. The template or the cast can be moved to align the desired features. Vertical reference lines are provided on the CRT, and a feature on the cast can be aligned with a line on the CRT screen. The stage containing the casts can be moved using a Boeckleler micrometer equipped with a digital readout, and a second feature aligned with the reference line and the distance moved obtained from the digital display

  19. Engineering task plan for Tanks 241-AN-103, 104, 105 color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1994-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and installation of the video camera systems into the vapor space within tanks 241-AN-103, 104, and 105. The one camera remotely operated color video systems will be used to observe and record the activities within the vapor space. Activities may include but are not limited to core sampling, auger activities, crust layer examination, monitoring of equipment installation/removal, and any other activities. The objective of this task is to provide a single camera system in each of the tanks for the Flammable Gas Tank Safety Program

  20. SIRSALE: integrated video database management tools

    Science.gov (United States)

    Brunie, Lionel; Favory, Loic; Gelas, J. P.; Lefevre, Laurent; Mostefaoui, Ahmed; Nait-Abdesselam, F.

    2002-07-01

    Video databases became an active field of research during the last decade. The main objective in such systems is to provide users with capabilities to search, access and play back distributed stored video data in a friendly way, just as they do for traditional distributed databases. Hence, such systems need to deal with hard issues: (a) video documents generate huge volumes of data and are time sensitive (streams must be delivered at a specific bitrate), and (b) the content of video data is very hard to extract automatically and needs to be annotated by humans. To cope with these issues, many approaches have been proposed in the literature, including data models, query languages, video indexing, etc. In this paper, we present SIRSALE: a set of video database management tools that allow users to manipulate video documents and streams stored in large distributed repositories. All the proposed tools are based on generic models that can be customized for specific applications using ad-hoc adaptation modules. More precisely, SIRSALE allows users to: (a) browse video documents by structure (sequences, scenes, shots) and (b) query the video database content using a graphical tool adapted to the nature of the target video documents. This paper also presents an annotation interface that allows archivists to describe the content of video documents. All these tools are coupled to a video player integrating remote VCR functionalities and are based on active network technology. We also present how dedicated active services allow optimized transport of video streams (with Tamanoir active nodes). We then describe experiments using SIRSALE on an archive of news videos and soccer matches. The system has been demonstrated to professionals with positive feedback. Finally, we discuss open issues and present some perspectives.

  1. Development of an emergency medical video multiplexing transport system. Aiming at the nation wide prehospital care on ambulance.

    Science.gov (United States)

    Nagatuma, Hideaki

    2003-04-01

    The Emergency Medical Video Multiplexing Transport System (EMTS) is designed to support prehospital care by delivering high-quality live video streams of patients in an ambulance to emergency doctors in a remote hospital via satellite communications. Its important feature is that EMTS divides a patient's live video scene into four pieces and transports the four video streams on four separate network channels. By multiplexing four video streams, EMTS is able to transport high-quality video through networks with low data transmission rates, such as satellite communications and cellular phone networks. In order to transport live video streams continuously, EMTS adopts the Real-time Transport Protocol/Real-time Control Protocol as its network protocol, and video stream data are compressed in the Moving Picture Experts Group 4 format. Because EMTS recombines the four video streams by checking video frame numbers, it uses a refresh packet that initializes the server's frame numbers to keep the four video streams synchronized.
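
    EMTS itself is not described in implementation detail here; as a rough illustration of the idea of dividing a scene into four pieces and tagging each with a frame number for later resynchronization, a numpy sketch follows (tile names and the tagging scheme are assumptions, not the actual EMTS software).

        import numpy as np

        def split_into_quadrants(frame, frame_number):
            # Split one frame into four tiles; each tile carries the frame number
            # so the receiving side can realign the four streams.
            h, w = frame.shape[:2]
            tiles = {
                "top_left":     frame[:h // 2, :w // 2],
                "top_right":    frame[:h // 2, w // 2:],
                "bottom_left":  frame[h // 2:, :w // 2],
                "bottom_right": frame[h // 2:, w // 2:],
            }
            return [(name, frame_number, tile.copy()) for name, tile in tiles.items()]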

  2. Learning neuroendoscopy with an exoscope system (video telescopic operating monitor): Early clinical results.

    Science.gov (United States)

    Parihar, Vijay; Yadav, Y R; Kher, Yatin; Ratre, Shailendra; Sethi, Ashish; Sharma, Dhananjaya

    2016-01-01

    A steep learning curve is initially encountered in pure endoscopic procedures. The video telescopic operating monitor (VITOM), an advance in rigid-lens telescope systems, provides an alternative method for learning the basics of neuroendoscopy with the help of the familiar principles of microneurosurgery. The aim was to evaluate the clinical utility of VITOM as a learning tool for neuroendoscopy. VITOM was used in 39 cranial and spinal procedures, and its utility as a tool for minimally invasive neurosurgery and for the initial neuroendoscopy learning curve was studied. VITOM was used in 25 cranial and 14 spinal procedures. Image quality was comparable to that of the endoscope and microscope. Surgeons' comfort improved with VITOM. Frequent repositioning of the scope holder and the lack of stereopsis were initial limiting factors, which were compensated for with repeated procedures. VITOM was found useful in reducing the initial learning curve of neuroendoscopy.

  3. Video Feedforward for Rapid Learning of a Picture-Based Communication System

    Science.gov (United States)

    Smith, Jemma; Hand, Linda; Dowrick, Peter W.

    2014-01-01

    This study examined the efficacy of video self modeling (VSM) using feedforward, to teach various goals of a picture exchange communication system (PECS). The participants were two boys with autism and one man with Down syndrome. All three participants were non-verbal with no current functional system of communication; the two children had long…

  4. Video performance for high security applications

    International Nuclear Information System (INIS)

    Connell, Jack C.; Norman, Bradley C.

    2010-01-01

    The complexity of physical protection systems has increased to address modern threats to national security and emerging commercial technologies. A key element of modern physical protection systems is the data presented to the human operator used for rapid determination of the cause of an alarm, whether false (e.g., caused by an animal, debris, etc.) or real (e.g., a human adversary). Alarm assessment, the human validation of a sensor alarm, primarily relies on imaging technologies and video systems. Developing measures of effectiveness (MOE) that drive the design or evaluation of a video system or technology becomes a challenge, given the subjectivity of the application (e.g., alarm assessment). Sandia National Laboratories has conducted empirical analysis using field test data and mathematical models such as binomial distribution and Johnson target transfer functions to develop MOEs for video system technologies. Depending on the technology, the task of the security operator and the distance to the target, the Probability of Assessment (PAs) can be determined as a function of a variety of conditions or assumptions. PAs used as an MOE allows the systems engineer to conduct trade studies, make informed design decisions, or evaluate new higher-risk technologies. This paper outlines general video system design trade-offs, discusses ways video can be used to increase system performance and lists MOEs for video systems used in subjective applications such as alarm assessment.
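
    The abstract only names the binomial distribution as one of the models used; a simple, generic way to turn field-test trials into a probability-of-assessment estimate with a confidence bound (a sketch, not Sandia's actual model) is:

        from scipy.stats import beta

        def assessment_probability(successes, trials, confidence=0.95):
            # Point estimate plus a one-sided lower Clopper-Pearson bound,
            # treating each assessment trial as an independent Bernoulli outcome.
            p_hat = successes / trials
            lower = 0.0 if successes == 0 else beta.ppf(1.0 - confidence, successes, trials - successes + 1)
            return p_hat, lower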

  5. Video Dubbing Projects in the Foreign Language Curriculum

    Science.gov (United States)

    Burston, Jack

    2005-01-01

    The dubbing of muted video clips offers an excellent opportunity to develop the skills of foreign language learners at all linguistic levels. In addition to its motivational value, soundtrack dubbing provides a rich source of activities in all language skill areas: listening, reading, writing, speaking. With advanced students, it also lends itself…

  6. 76 FR 55585 - Video Description: Implementation of the Twenty-First Century Communications and Video...

    Science.gov (United States)

    2011-09-08

    ... of Video Programming Report and Order (15 F.C.C.R. 15,230 (2000)), recon. granted in part and denied... dialogue, makes video programming more accessible to individuals who are blind or visually impaired. The... networks, and multichannel video programming distributor systems (``MVPDs'') with more than 50,000...

  7. Innovative Video Diagnostic Equipment for Material Science

    Science.gov (United States)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced Wavelet algorithms, storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras, acquiring high-resolution video images up to 4 Mpixels @ 60 fps or high-frame-rate video images up to about 1000 fps @ 512x512 pixels.
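
    The flight compression algorithms are not described in the abstract; as a generic wavelet-compression illustration (PyWavelets, not the DVS software), one can decompose a grayscale frame, keep only the largest coefficients, and reconstruct:

        import numpy as np
        import pywt

        def wavelet_compress(frame_gray, keep_fraction=0.05, wavelet="db4", level=3):
            # Zero out all but the largest wavelet coefficients, then reconstruct the frame.
            coeffs = pywt.wavedec2(frame_gray.astype(float), wavelet, level=level)
            arr, slices = pywt.coeffs_to_array(coeffs)
            cutoff = np.quantile(np.abs(arr), 1.0 - keep_fraction)
            arr[np.abs(arr) < cutoff] = 0.0
            kept = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
            return pywt.waverec2(kept, wavelet)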

  8. Design of a highly integrated video acquisition module for smart video flight unit development

    Science.gov (United States)

    Lebre, V.; Gasti, W.

    2017-11-01

    CCD and APS devices are widely used in space missions as instrument sensors and/or in avionics units like star detectors/trackers. Therefore, various and numerous designs of video acquisition chains have been produced. Basically, a classical video acquisition chain consists of two main functional blocks: the Proximity Electronics (PEC), including the detector drivers, and the Analogue Processing Chain (APC) Electronics, which embeds the ADC, a master sequencer and the host interface. Nowadays, low-power technologies make it possible to improve the integration and radiometric performance of video units, to optimise their power budget, and to standardize video unit design and development. To this end, ESA has initiated a development activity through a competitive process requesting the expertise of experienced actors in the field of high-resolution electronics for Earth observation and scientific missions. THALES ALENIA SPACE has been granted this activity as prime contractor through an ESA contract called HIVAC, which stands for Highly Integrated Video Acquisition Chain. This paper presents the main objectives of the ongoing HIVAC project and focuses on the functionality and performance offered by the use of the HIVAC board, currently under development, in future optical instruments.

  9. Video in Non-Formal Education: A Bibliographical Study.

    Science.gov (United States)

    Lewis, Peter M.

    Intended to inform United Nations member states about the application of electronic recording and replaying devices in the nonformal education domain, this bibliographic study surveys the literature on video. Since the study is meant to be of particular use to decision makers in developing countries, video projects in North America and Western…

  10. Video Golf

    Science.gov (United States)

    1995-01-01

    George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.

  11. Dynamic video encryption algorithm for H.264/AVC based on a spatiotemporal chaos system.

    Science.gov (United States)

    Xu, Hui; Tong, Xiao-Jun; Zhang, Miao; Wang, Zhu; Li, Ling-Hao

    2016-06-01

    Video encryption schemes mostly employ the selective encryption method to encrypt parts of important and sensitive video information, aiming to ensure the real-time performance and encryption efficiency. The classic block cipher is not applicable to video encryption due to the high computational overhead. In this paper, we propose the encryption selection control module to encrypt video syntax elements dynamically which is controlled by the chaotic pseudorandom sequence. A novel spatiotemporal chaos system and binarization method is used to generate a key stream for encrypting the chosen syntax elements. The proposed scheme enhances the resistance against attacks through the dynamic encryption process and high-security stream cipher. Experimental results show that the proposed method exhibits high security and high efficiency with little effect on the compression ratio and time cost.
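
    The specific spatiotemporal chaos system and binarization method are not given in the abstract; as a stand-in sketch, the snippet below binarizes a plain logistic map into a keystream and XORs it over the bytes selected for encryption (the map, its parameters and the byte selection are assumptions, not the authors' scheme).

        import numpy as np

        def logistic_keystream(length, x0=0.6180339887, r=3.99):
            # Binarize a logistic-map orbit into `length` keystream bytes.
            bits = np.empty(length * 8, dtype=np.uint8)
            x = x0
            for i in range(bits.size):
                x = r * x * (1.0 - x)
                bits[i] = 1 if x > 0.5 else 0
            return np.packbits(bits)

        def encrypt_selected(syntax_bytes, x0=0.6180339887):
            # Selective encryption: XOR only the chosen syntax-element bytes with the keystream.
            # Decryption is the same operation, since XOR is its own inverse.
            data = np.frombuffer(bytes(syntax_bytes), dtype=np.uint8)
            key = logistic_keystream(data.size, x0=x0)
            return np.bitwise_xor(data, key).tobytes()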

  12. Design and Implementation of Mobile Car with Wireless Video Monitoring System Based on STC89C52

    Directory of Open Access Journals (Sweden)

    Yang Hong

    2014-05-01

    Full Text Available With the rapid development of wireless networks and image acquisition technology, wireless video transmission technology has been widely applied in various communication systems. Traditional video monitoring technology is restricted by conditions such as layout, environment, relatively large volume, and cost. In view of these problems, this paper proposes equipping a mobile car with a wireless video monitoring system. The mobile car, which provides functions such as detection, video acquisition and wireless data transmission, is developed based on an STC89C52 Micro Control Unit (MCU) and a WiFi router. Firstly, information such as images, temperature and humidity is processed by the MCU, communicated to the router, and then returned by the WiFi router to the host phone. Secondly, control information issued by the host phone is received by the WiFi router and sent to the MCU, and the MCU then issues the relevant instructions. Lastly, the wireless transmission of video images and the remote control of the car are realized. The results show that the system offers simple operation, high stability, fast response, low cost, strong flexibility and wide applicability. The system has practical value and is worth popularizing.

  13. A special broadcast of CERN's Video news

    CERN Multimedia

    2003-01-01

    A special edition of CERN's video news giving a complete update on the LHC project is to be broadcast in the Main Auditorium. After your lunch, make a small detour to the Main Auditorium, where you can see the big picture. On 14, 15 and 16 May, between 12:30 and 14:00, a special edition of CERN's video news bulletin will be broadcast in the Main Auditorium. You will have the chance to get up to date on the LHC project and its experiments. With four years to go before the first collisions in the LHC, the LHC Project Leader Lyn Evans will present a status report on the construction of the accelerator. The spokesmen of the five LHC experiments (ALICE, ATLAS, CMS, LHCb and TOTEM) will explain how the work is going and what the state of play will be in four years' time. This special video news broadcast is the result of collaboration between the CERN Audiovisual Service, the Photo Service and the External Communication section. The broadcast will begin with a brand-new programme title sequence. And just as in the real c...

  14. Project Management

    DEFF Research Database (Denmark)

    Kampf, Constance

    2009-01-01

    In this video, Associate Professor Constance Kampf talks about the importance of project management: not only as a tool in implementation, but also as a way of thinking, and as something that needs to be considered from idea conception.

  15. International remote monitoring project Argentina Nuclear Power Station Spent Fuel Transfer Remote Monitoring System

    International Nuclear Information System (INIS)

    Schneider, S.; Lucero, R.; Glidewell, D.

    1997-01-01

    The Autoridad Regulatoria Nuclear (ARN) and the United States Department of Energy (DOE) are cooperating on the development of a Remote Monitoring System for nuclear nonproliferation efforts. A Remote Monitoring System for spent fuel transfer will be installed at the Argentina Nuclear Power Station in Embalse, Argentina. The system has been designed by Sandia National Laboratories (SNL), with Los Alamos National Laboratory (LANL) and Oak Ridge National Laboratory (ORNL) providing gamma and neutron sensors. This project will test and evaluate the fundamental design and implementation of the Remote Monitoring System in its application to regional and international safeguards efficiency. This paper provides a description of the monitoring system and its functions. The Remote Monitoring System consists of gamma and neutron radiation sensors, RF systems, and video systems integrated into a coherent functioning whole. All sensor data communicate over an Echelon LonWorks Network to a single data logger. The Neumann DCM 14 video module is integrated into the Remote Monitoring System. All sensor and image data are stored on a Data Acquisition System (DAS) and archived and reviewed on a Data and Image Review Station (DIRS). Conventional phone lines are used as the telecommunications link to transmit on-site collected data and images to remote locations. The data and images are authenticated before transmission. Data review stations will be installed at ARN in Buenos Aires, Argentina, ABACC in Rio De Janeiro, IAEA Headquarters in Vienna, and Sandia National Laboratories in Albuquerque, New Mexico. 2 refs., 2 figs

  16. Commercially available video motion detectors

    International Nuclear Information System (INIS)

    1979-01-01

    A market survey of commercially available video motion detection systems was conducted by the Intrusion Detection Systems Technology Division of Sandia Laboratories. The information obtained from this survey is summarized in this report. The cutoff date for this information is May 1978. A list of commercially available video motion detection systems is appended

  17. Video Surveillance: Privacy Issues and Legal Compliance

    DEFF Research Database (Denmark)

    Mahmood Rajpoot, Qasim; Jensen, Christian D.

    2015-01-01

    Pervasive usage of video surveillance is rapidly increasing in developed countries. Continuous security threats to public safety demand use of such systems. Contemporary video surveillance systems offer advanced functionalities which threaten the privacy of those recorded in the video. There is a...

  18. A laboratory evaluation of color video monitors

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video monitors used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color video technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories established a program to evaluate the newest relevant color video equipment. This report documents the evaluation of an integral component, color monitors. It briefly discusses a critical parameter, dynamic range, details test procedures, and evaluates the results.

  19. A laboratory evaluation of color video monitors

    International Nuclear Information System (INIS)

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video monitors used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color video technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories established a program to evaluate the newest relevant color video equipment. This report documents the evaluation of an integral component, color monitors. It briefly discusses a critical parameter, dynamic range, details test procedures, and evaluates the results

  20. Video profile monitor diagnostic system for GTA

    International Nuclear Information System (INIS)

    Sandoval, D.P.; Garcia, R.C.; Gilpatrick, J.D.; Johnson, K.F.; Shinas, M.A.; Wright, R.; Yuan, V.; Zander, M.E.

    1992-01-01

    This paper describes a video diagnostic system used to measure the beam profile and position of the Ground Test Accelerator 2.5 MeV H- ion beam as it exits the intermediate matching section. Inelastic collisions between H- ions and residual nitrogen in the vacuum chamber cause the nitrogen to fluoresce. The resulting light is captured through transport optics by an intensified CCD camera and is digitized. Real-time beam profile images are displayed and stored for detailed analysis. Analyzed data showing resolutions for both position and profile measurements will also be presented. (Author) 5 refs., 7 figs

  1. Handheld CAT Video Game, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed project is to design, develop and fabricate a handheld video game console for astronauts during long space flight. This portable hardware runs...

  2. Development and application of traffic flow information collecting and analysis system based on multi-type video

    Science.gov (United States)

    Lu, Mujie; Shang, Wenjie; Ji, Xinkai; Hua, Mingzhuang; Cheng, Kuo

    2015-12-01

    Intelligent transportation systems (ITS) have become the new direction of transportation development, and traffic data, as a fundamental part of such systems, plays an increasingly crucial role. In recent years, video observation technology has been widely used for collecting traffic information. Traffic flow information contained in video data has notable advantages: it is comprehensive and can be stored for a long time. However, the collection process still suffers from problems such as low precision and high cost. Aiming at these problems, this paper proposes a traffic target detection method with broad applicability. Based on three different ways of obtaining video data (aerial photography, fixed cameras and handheld cameras), we develop intelligent analysis software that extracts macroscopic and microscopic traffic flow information from the video for use in traffic analysis and transportation planning. For road intersections the system uses a frame-difference method to extract traffic information, while for freeway sections it uses an optical-flow method to track vehicles. The system was applied in Nanjing, Jiangsu province, and the application shows that it extracts different types of traffic flow information with high accuracy, meets the needs of traffic engineering observations and has good application prospects.
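
    A minimal sketch of the frame-difference step mentioned above, written with OpenCV; the threshold, minimum blob area and input file name are illustrative assumptions rather than the authors' settings.

        # Illustrative frame-difference detector for intersection video; the
        # thresholds and the input file are assumed values, not the authors' configuration.
        import cv2
        import numpy as np

        THRESH = 25        # grey-level difference threshold (assumed)
        MIN_AREA = 400     # minimum changed-region area in pixels (assumed)

        def moving_regions(prev_gray, curr_gray):
            """Bounding boxes of regions that changed between consecutive frames."""
            diff = cv2.absdiff(prev_gray, curr_gray)
            _, mask = cv2.threshold(diff, THRESH, 255, cv2.THRESH_BINARY)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= MIN_AREA]

        cap = cv2.VideoCapture("intersection.mp4")   # hypothetical input file
        ok, frame = cap.read()
        prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            print(len(moving_regions(prev, curr)), "moving regions in this frame")
            prev = curr
        cap.release()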

  3. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    International Nuclear Information System (INIS)

    Wright, R.; Zander, M.; Brown, S.; Sandoval, D.; Gilpatrick, D.; Gibson, H.

    1992-01-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) is discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. (Author) (3 figs., 4 refs.)
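
    As a rough illustration of how a beam profile can be derived from a digitized frame of the beam-gas light, the sketch below projects background-subtracted pixel intensities onto the horizontal and vertical axes and computes centroid and RMS width; it is a simplified stand-in, not the imagetool implementation.

        # Sketch of deriving beam profiles from a camera frame by projecting
        # background-subtracted intensities onto the image axes (illustrative only).
        import numpy as np

        def beam_profiles(frame, background):
            """frame, background: 2-D arrays of equal shape (rows, cols)."""
            signal = np.clip(frame.astype(float) - background.astype(float), 0, None)
            profile_x = signal.sum(axis=0)          # horizontal profile (per column)
            profile_y = signal.sum(axis=1)          # vertical profile (per row)
            total = signal.sum()
            cx = (np.arange(signal.shape[1]) * profile_x).sum() / total
            cy = (np.arange(signal.shape[0]) * profile_y).sum() / total
            rms_x = np.sqrt(((np.arange(signal.shape[1]) - cx) ** 2 * profile_x).sum() / total)
            rms_y = np.sqrt(((np.arange(signal.shape[0]) - cy) ** 2 * profile_y).sum() / total)
            return profile_x, profile_y, (cx, cy), (rms_x, rms_y)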

  4. Video interactivo en realidad virtual inmersiva

    OpenAIRE

    Gordo Ara, Juan

    2016-01-01

    Currently, developers are creating new virtual reality applications related to the field of video games or graphics environments created by computers. This is due largely to the arrival to the consumer market of new technologies to experience these virtual reality environments. This has provoked a wide adoption of 360º videos, which can be viewed directly from smartphones. In addition, cheap adapters allow converting the phone into a virtual reality display. In this project we investigated me...

  5. Video based OER: Production, discovery, dissemination

    OpenAIRE

    Gibbs, Graham R.

    2012-01-01

    This paper reports lessons learned from a range of ESRC, HEA and Jisc funded projects. Four dimensions will be discussed: economic costs, quality, dissemination and pedagogy. Cost issues include the expense of making video, and the variety of skills and expertise required such as interviewing, scripting and editing. Quality issues are similar to those in broadcast video but not as great. However, there are specific requirements for special needs and issues around copyright and licensin...

  6. An integrated multispectral video and environmental monitoring system for the study of coastal processes and the support of beach management operations

    Science.gov (United States)

    Ghionis, George; Trygonis, Vassilis; Karydis, Antonis; Vousdoukas, Michalis; Alexandrakis, George; Drakopoulos, Panos; Amdreadis, Olympos; Psarros, Fotis; Velegrakis, Antonis; Poulos, Serafim

    2016-04-01

    Effective beach management requires environmental assessments that are based on sound science, are cost-effective and are available to beach users and managers in an accessible, timely and transparent manner. The most common problems are: 1) The available field data are scarce and of sub-optimal spatio-temporal resolution and coverage, 2) Our understanding of local beach processes needs to be improved in order to accurately model/forecast beach dynamics under a changing climate, and 3) The information provided by coastal scientists/engineers in the form of data, models and scientific interpretation is often too complicated to be of direct use by coastal managers/decision makers. A multispectral video system has been developed, consisting of one or more video cameras operating in the visible part of the spectrum, a passive near-infrared (NIR) camera, an active NIR camera system, a thermal infrared camera and a spherical video camera, coupled with innovative image processing algorithms and a telemetric system for the monitoring of coastal environmental parameters. The complete system has the capability to record, process and communicate (in quasi-real time) high frequency information on shoreline position, wave breaking zones, wave run-up, erosion hot spots along the shoreline, nearshore wave height, turbidity, underwater visibility, wind speed and direction, air and sea temperature, solar radiation, UV radiation, relative humidity, barometric pressure and rainfall. An innovative, remotely-controlled interactive visual monitoring system, based on the spherical video camera (with 360° field of view), combines the video streams from all cameras and can be used by beach managers to monitor (in real time) beach user numbers, flow activities and safety at beaches of high touristic value. The high resolution near infrared cameras permit 24-hour monitoring of beach processes, while the thermal camera provides information on beach sediment temperature and moisture, can

  7. Students' Benefit from Video with Interactive Quizzes in a First-Year Calculus Course

    DEFF Research Database (Denmark)

    Midtiby, Henrik Skov; Nørgaard, Cita; Kjær, Christopher

    2017-01-01

    The intention of this project was to study the students’ self-reported learning outcome from different formats of videos in an introductory calculus course.

  8. A semantic autonomous video surveillance system for dense camera networks in Smart Cities.

    Science.gov (United States)

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for use as an integrated safety and security solution in Smart Cities. Alarm detection is based on parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language that is easy for human operators to understand, is capable of raising enriched alarms with descriptions of what is happening in the image, and can automate reactions to them, such as alerting the appropriate emergency services using the Smart City safety network.
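
    The alarm rules referred to above operate on object and trajectory parameters before the semantic reasoning stage. A minimal, hypothetical illustration of such parameter-based rules (speed and restricted-zone checks) is sketched below; the ontology-based reasoning itself is outside the scope of this sketch, and all values are assumptions rather than the authors' configuration.

        # Hypothetical trajectory-parameter alarm rules of the kind a semantic layer
        # could reason over: flag objects that enter a restricted zone or move too fast.
        SPEED_LIMIT = 15.0                       # pixels per frame (assumed)
        RESTRICTED_ZONE = (400, 0, 640, 200)     # (x1, y1, x2, y2) in image coordinates (assumed)

        def in_zone(point, zone):
            x, y = point
            x1, y1, x2, y2 = zone
            return x1 <= x <= x2 and y1 <= y <= y2

        def raise_alarms(trajectory):
            """trajectory: list of (x, y) positions, one per frame, for one object."""
            alarms = []
            for prev, curr in zip(trajectory, trajectory[1:]):
                speed = ((curr[0] - prev[0]) ** 2 + (curr[1] - prev[1]) ** 2) ** 0.5
                if speed > SPEED_LIMIT:
                    alarms.append(("speeding", curr))
                if in_zone(curr, RESTRICTED_ZONE):
                    alarms.append(("restricted_zone", curr))
            return alarms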

  9. A Semantic Autonomous Video Surveillance System for Dense Camera Networks in Smart Cities

    Directory of Open Access Journals (Sweden)

    Antonio Sánchez-Esguevillas

    2012-08-01

    Full Text Available This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for use as an integrated safety and security solution in Smart Cities. Alarm detection is based on parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language that is easy for human operators to understand, is capable of raising enriched alarms with descriptions of what is happening in the image, and can automate reactions to them, such as alerting the appropriate emergency services using the Smart City safety network.

  10. A System to Generate SignWriting for Video Tracks Enhancing Accessibility of Deaf People

    Directory of Open Access Journals (Sweden)

    Elena Verdú

    2017-12-01

    Full Text Available Video content on the Internet has increased greatly in recent years. In spite of the efforts of different organizations and governments to increase the accessibility of websites, most multimedia content on the Internet is not accessible. This paper describes a system that contributes to making multimedia content more accessible on the Web, by automatically translating subtitles in oral language to SignWriting, a way of writing Sign Language. This system extends the functionality of a general web platform that can provide accessible web content for different needs. This platform has a core component that automatically converts any web page to a web page compliant with level AA of WAI guidelines. Around this core component, different adapters complete the conversion according to the needs of specific users. One adapter is the Deaf People Accessibility Adapter, which provides accessible web content for the Deaf, based on SignWriting. The functionality of this adapter has been extended with the video subtitle translator system. A first prototype of this system has been tested through different methods including usability and accessibility tests, and the results show that this tool can enhance the accessibility of video content available on the Web for Deaf people.

  11. Fast compressed domain motion detection in H.264 video streams for video surveillance applications

    DEFF Research Database (Denmark)

    Szczerba, Krzysztof; Forchhammer, Søren; Støttrup-Andersen, Jesper

    2009-01-01

    This paper presents a novel approach to fast motion detection in H.264/MPEG-4 advanced video coding (AVC) compressed video streams for IP video surveillance systems. The goal is to develop algorithms which may be useful in a real-life industrial perspective by facilitating the processing of large...... on motion vectors embedded in the video stream without requiring a full decoding and reconstruction of video frames. To improve the robustness to noise, a confidence measure based on temporal and spatial clues is introduced to increase the probability of correct detection. The algorithm was tested on indoor...
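
    The core idea, thresholding motion vectors that are already present in the compressed stream and requiring temporal persistence before declaring motion, can be sketched as below; the bitstream parsing step is assumed to happen elsewhere, and the thresholds are illustrative rather than taken from the paper.

        # Compressed-domain motion detection sketch: threshold motion-vector
        # magnitudes and keep only detections that persist over several frames.
        import numpy as np

        MAG_THRESH = 2.0      # minimum MV magnitude in pixels (assumed)
        PERSISTENCE = 3       # frames a block must stay active before detection (assumed)

        def motion_mask(mvs):
            """mvs: array (H_blocks, W_blocks, 2) of per-macroblock motion vectors."""
            return np.hypot(mvs[..., 0], mvs[..., 1]) > MAG_THRESH

        def detect(frames_of_mvs):
            """Yield a boolean detection map per frame, requiring temporal persistence."""
            counter = None
            for mvs in frames_of_mvs:
                active = motion_mask(mvs)
                counter = active.astype(int) if counter is None else np.where(active, counter + 1, 0)
                yield counter >= PERSISTENCE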

  12. Smart Streaming for Online Video Services

    OpenAIRE

    Chen, Liang; Zhou, Yipeng; Chiu, Dah Ming

    2013-01-01

    Bandwidth consumption is a significant concern for online video service providers. Practical video streaming systems usually use some form of HTTP streaming (progressive download) to let users download the video at a faster rate than the video bitrate. Since users may quit before viewing the complete video, however, much of the downloaded video will be "wasted". To the extent that users' departure behavior can be predicted, we develop smart streaming that can be used to improve user QoE with ...
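
    As a toy illustration of the waste described above, the expected amount of never-watched download can be computed from a predicted per-second departure probability; the simple viewing model and all numbers below are assumptions, not the authors' smart-streaming algorithm.

        # Toy model of the waste/QoE trade-off: a client that stays "ahead" of the
        # playback point by a fixed buffer loses that buffer if the user departs early.
        def expected_waste(video_len_s, bitrate_kbps, buffer_ahead_s, p_leave_per_s):
            """Expected wasted kilobits for one viewing session (assumed model)."""
            waste = 0.0
            stay = 1.0                          # probability the user is still watching
            for t in range(video_len_s):
                leave_now = stay * p_leave_per_s
                # If the user leaves at second t, the prefetched buffer (capped by the
                # remaining video) has been downloaded but will never be played.
                unwatched = min(buffer_ahead_s, video_len_s - t)
                waste += leave_now * unwatched * bitrate_kbps
                stay *= 1.0 - p_leave_per_s
            return waste

        for ahead in (5, 30, 120):
            print(ahead, "s ahead ->", round(expected_waste(600, 1000, ahead, 0.01)), "kb expected waste")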

  13. Performance Evaluations for Super-Resolution Mosaicing on UAS Surveillance Videos

    Directory of Open Access Journals (Sweden)

    Aldo Camargo

    2013-05-01

    Full Text Available Unmanned Aircraft Systems (UAS) have been widely applied for reconnaissance and surveillance by exploiting information collected from the digital imaging payload. The super-resolution (SR) mosaicing of low-resolution (LR) UAS surveillance video frames has become a critical requirement for UAS video processing and is important for further effective image understanding. In this paper we develop a novel super-resolution framework, which does not require the construction of sparse matrices. The proposed method implements image operations in the spatial domain and applies an iterated back-projection to construct super-resolution mosaics from the overlapping UAS surveillance video frames. The Steepest Descent method, the Conjugate Gradient method and the Levenberg-Marquardt algorithm are used to numerically solve the nonlinear optimization problem for estimating a super-resolution mosaic. A quantitative performance comparison in terms of computation time and visual quality of the super-resolution mosaics through the three numerical techniques is presented.
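
    A minimal sketch of the iterated back-projection idea for a single super-resolved frame is given below, assuming known integer translational offsets, a box-blur degradation model and a fixed step size; the paper's registration, mosaicing and the three numerical solvers it compares are not reproduced here.

        # Minimal iterated back-projection for super-resolution from several
        # low-resolution frames with known offsets (illustrative assumptions).
        import numpy as np
        from scipy.ndimage import shift, zoom, uniform_filter

        SCALE = 2          # SR magnification factor (assumed)
        STEP = 0.5         # back-projection step size (assumed)

        def degrade(hr, dx, dy):
            """Simulate one LR observation: shift, blur, then decimate."""
            moved = shift(hr, (dy, dx), order=1, mode="nearest")
            blurred = uniform_filter(moved, size=SCALE)
            return blurred[::SCALE, ::SCALE]

        def iterated_back_projection(lr_frames, offsets, iterations=20):
            hr = zoom(lr_frames[0].astype(float), SCALE, order=1)   # initial HR estimate
            for _ in range(iterations):
                correction = np.zeros_like(hr)
                for lr, (dx, dy) in zip(lr_frames, offsets):
                    err = lr - degrade(hr, dx, dy)           # residual in the LR domain
                    up = zoom(err, SCALE, order=1)           # back-project to the HR grid
                    correction += shift(up, (-dy, -dx), order=1, mode="nearest")
                hr += STEP * correction / len(lr_frames)
            return hr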

  14. Operational experience with a high speed video data acquisition system in Fermilab experiment E-687

    International Nuclear Information System (INIS)

    Baumbaugh, A.E.; Knickerbocker, K.L.; Baumbaugh, B.; Ruchti, R.

    1987-01-01

    Operation of a high speed, triggerable, Video Data Acquisition System (VDAS) including a hardware data compactor and a 16 megabyte First-In-First-Out buffer memory (FIFO) will be discussed. Active target imaging techniques for High Energy Physics are described and preliminary experimental data are reported. The hardware architecture for the imaging system and experiment will be discussed as well as other applications for the imaging system. The data rate for the compactor is over 30 megabytes/sec and the FIFO has been run at 100 megabytes/sec. The system can be operated at standard video rates or at any rate up to 30 million pixels/second. 7 refs., 3 figs

  15. The Simple Video Coder: A free tool for efficiently coding social video data.

    Science.gov (United States)

    Barto, Daniel; Bird, Clark W; Hamilton, Derek A; Fink, Brandi C

    2017-08-01

    Videotaping of experimental sessions is a common practice across many disciplines of psychology, ranging from clinical therapy, to developmental science, to animal research. Audio-visual data are a rich source of information that can be easily recorded; however, analysis of the recordings presents a major obstacle to project completion. Coding behavior is time-consuming and often requires ad-hoc training of a student coder. In addition, existing software is either prohibitively expensive or cumbersome, which leaves researchers with inadequate tools to quickly process video data. We offer the Simple Video Coder: free, open-source software for behavior coding that is flexible in accommodating different experimental designs, is intuitive for students to use, and produces outcome measures of event timing, frequency, and duration. Finally, the software also offers extraction tools to splice video into coded segments suitable for training future human coders or for use as input for pattern classification algorithms.

  16. IP over optical multicasting for large-scale video delivery

    Science.gov (United States)

    Jin, Yaohui; Hu, Weisheng; Sun, Weiqiang; Guo, Wei

    2007-11-01

    In IPTV systems, multicasting will play a crucial role in the delivery of high-quality video services, since it can significantly improve bandwidth efficiency. However, the scalability and the signal quality of current IPTV can barely compete with existing broadcast digital TV systems, since it is difficult to implement large-scale multicasting with end-to-end guaranteed quality of service (QoS) in a packet-switched IP network. The China 3TNet project aimed to build a high performance broadband trial network to support large-scale concurrent streaming media and interactive multimedia services. The innovative idea of 3TNet is that an automatically switched optical network (ASON) with the capability of dynamic point-to-multipoint (P2MP) connections replaces the conventional IP multicasting network in the transport core, while the edge remains an IP multicasting network. In this paper, we will introduce the network architecture and discuss challenges in such IP over Optical multicasting for video delivery.

  17. Performance Analysis of Video Transmission Using Sequential Distortion Minimization Method for Digital Video Broadcasting Terrestrial

    Directory of Open Access Journals (Sweden)

    Novita Astin

    2016-12-01

    Full Text Available This paper presents the transmission of a Digital Video Broadcasting system with streaming video at 640x480 resolution under different IQ rates and modulations. Distortion often occurs during video transmission, so the received video has poor quality. Key-frame selection algorithms are flexible with respect to changes in the video, but these methods omit the temporal information of the video sequence. To minimize distortion between the original video and the received video, we add a sequential distortion minimization algorithm. Its aim is to create a new video, better than the received one and without significant loss of content with respect to the original, corrected sequentially. The reliability of video transmission was assessed using a constellation diagram, with the best result at an IQ rate of 2 MHz and 8 QAM modulation. Video transmission was also evaluated with and without SEDIM (the Sequential Distortion Minimization Method). The experimental results showed that the average PSNR (Peak Signal-to-Noise Ratio) of video transmission using SEDIM increased from 19.855 dB to 48.386 dB and the average SSIM (Structural Similarity) increased by 10.49%. The experimental results and the comparison of the proposed method show good performance. A USRP board was used as the RF front-end at 2.2 GHz.
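
    The PSNR figures quoted above follow the standard definition for 8-bit video; a small helper for computing it per frame is sketched below (pairing frames and averaging over the sequence are left to the caller).

        # PSNR between an original and a received frame; frames are assumed to be
        # 8-bit arrays of identical size.
        import numpy as np

        def psnr(original, received):
            mse = np.mean((original.astype(float) - received.astype(float)) ** 2)
            if mse == 0:
                return float("inf")                      # identical frames
            return 10 * np.log10(255.0 ** 2 / mse)       # 255 = peak value for 8-bit video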

  18. Video Texture Synthesis Based on Flow-Like Stylization Painting

    Directory of Open Access Journals (Sweden)

    Qian Wenhua

    2014-01-01

    Full Text Available The paper presents an NP (non-photorealistic) video rendering system based on natural phenomena. It provides a simple non-photorealistic video synthesis system in which users can obtain a flow-like stylization painting and an infinite video scene. First, based on anisotropic Kuwahara filtering in conjunction with line integral convolution, the natural-phenomena video scene is rendered as a flow-like stylization painting. Second, methods of frame division and patch synthesis are used to synthesize an infinitely playing video. Using examples selected from different natural video textures, our system can generate stylized flow-like and infinite video scenes. Visual discontinuities between neighbouring frames are decreased, and the features and details of the frames are preserved. This rendering system is easy and simple to implement.

  19. A Super-resolution Reconstruction Algorithm for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Jian Shao

    2017-01-01

    Full Text Available Recent technological developments have made surveillance video a primary method of preserving public security. Many city crimes are observed in surveillance video, and the most abundant evidence collected by the police is also acquired through surveillance video sources. Because surveillance footage offers very strong support for solving criminal cases, creating effective policy and applying useful methods for retrieving additional evidence are becoming increasingly important. However, surveillance video has its failings, namely footage captured in low resolution (LR) and with poor visual quality. In this paper, we discuss the characteristics of surveillance video and combine manual feature registration, maximum a posteriori estimation and projection onto convex sets to develop a super-resolution reconstruction method which improves the quality of surveillance video. With this method we not only make optimal use of the information contained in the LR video images, but also control image edges clearly as well as the convergence of the algorithm. Finally, we suggest how to adjust the adaptability of the algorithm by analysing the prior information of the target image.

  20. Hybrid digital-analog video transmission in wireless multicast and multiple-input multiple-output system

    Science.gov (United States)

    Liu, Yu; Lin, Xiaocheng; Fan, Nianfei; Zhang, Lin

    2016-01-01

    Wireless video multicast has become one of the key technologies in wireless applications. But the main challenge of conventional wireless video multicast, i.e., the cliff effect, remains unsolved. To overcome the cliff effect, a hybrid digital-analog (HDA) video transmission framework based on SoftCast, which transmits the digital bitstream with the quantization residuals, is proposed. With an effective power allocation algorithm and appropriate parameter settings, the residual gains can be maximized; meanwhile, the digital bitstream can assure transmission of a basic video to the multicast receiver group. In the multiple-input multiple-output (MIMO) system, since nonuniform noise interference on different antennas can be regarded as the cliff effect problem, ParCast, which is a variation of SoftCast, is also applied to video transmission to solve it. The HDA scheme with corresponding power allocation algorithms is also applied to improve video performance. Simulations show that the proposed HDA scheme can overcome the cliff effect completely with the transmission of residuals. What is more, it outperforms the compared WSVC scheme by more than 2 dB when transmitting under the same bandwidth, and it can further improve performance by nearly 8 dB in MIMO when compared with the ParCast scheme.
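
    SoftCast-style analog transmission, on which the HDA framework builds, is usually described as scaling each chunk of transform coefficients in inverse proportion to the fourth root of its variance so as to minimize mean-squared error under a total power budget; the sketch below illustrates that commonly cited rule and is not the authors' power-allocation algorithm.

        # SoftCast-style power allocation sketch: gains proportional to variance^(-1/4),
        # normalized so the total transmit power matches the budget (assumed model).
        import numpy as np

        def softcast_gains(chunk_variances, total_power):
            lam = np.asarray(chunk_variances, dtype=float)
            g = lam ** -0.25
            # Normalize so that the expected transmit power sum(g^2 * lam) equals the budget.
            scale = np.sqrt(total_power / np.sum(g ** 2 * lam))
            return g * scale

        variances = [50.0, 10.0, 2.0, 0.5]     # hypothetical per-chunk variances
        gains = softcast_gains(variances, total_power=4.0)
        print(gains, np.sum(gains ** 2 * np.array(variances)))   # power check, approximately 4.0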

  1. Video Game Literacy - Exploring new paradigms and new educational activities

    OpenAIRE

    Damiano Felini

    2010-01-01

    Literacy is a complex concept of relevance for both traditional and most recent educational theories. Today, concepts of media literacy are being discussed widely. In this article a simple theoretical model and an action-research project are presented. The research project focuses on a training course aiming at the development and strengthening of critical thinking and communicative skills of young people by way of making use of video games. Practical aspects of how to produce a video game wi...

  2. TRECVid Semantic Indexing of Video: A 6-year Retrospective

    NARCIS (Netherlands)

    Awad, G.; Snoek, C.G.M.; Smeaton, A.F.; Quénot, G.

    2016-01-01

    Semantic indexing, or assigning semantic tags to video samples, is a key component for content-based access to video documents and collections. The Semantic Indexing task has been run at TRECVid from 2010 to 2015 with the support of NIST and the Quaero project. As with the previous High-Level

  3. Video motion detection for physical security applications

    International Nuclear Information System (INIS)

    Matter, J.C.

    1990-01-01

    Physical security specialists have been attracted to the concept of video motion detection for several years. Claimed potential advantages included additional benefit from existing video surveillance systems, automatic detection, improved performance compared to human observers, and cost-effectiveness. In recent years, significant advances in image-processing dedicated hardware and image analysis algorithms and software have accelerated the successful application of video motion detection systems to a variety of physical security applications. Early video motion detectors (VMDs) were useful for interior applications of volumetric sensing. Success depended on having a relatively well-controlled environment. Attempts to use these systems outdoors frequently resulted in an unacceptable number of nuisance alarms. Currently, Sandia National Laboratories (SNL) is developing several advanced systems that employ image-processing techniques for a broader set of safeguards and security applications. The Target Cueing and Tracking System (TCATS), the Video Imaging System for Detection, Tracking, and Assessment (VISDTA), the Linear Infrared Scanning Array (LISA); the Mobile Intrusion Detection and Assessment System (MIDAS), and the Visual Artificially Intelligent Surveillance (VAIS) systems are described briefly

  4. Evaluation of the Educational Value of YouTube Videos About Physical Examination of the Cardiovascular and Respiratory Systems

    OpenAIRE

    Azer, Samy A; AlGrain, Hala A; AlKhelaif, Rana A; AlEshaiwi, Sarah M

    2013-01-01

    Background A number of studies have evaluated the educational contents of videos on YouTube. However, little analysis has been done on videos about physical examination. Objective This study aimed to analyze YouTube videos about physical examination of the cardiovascular and respiratory systems. It was hypothesized that the educational standards of videos on YouTube would vary significantly. Methods During the period from November 2, 2011 to December 2, 2011, YouTube was searched by three ass...

  5. Enhancing Scalability in On-Demand Video Streaming Services for P2P Systems

    Directory of Open Access Journals (Sweden)

    R. Arockia Xavier Annie

    2012-01-01

    Full Text Available Recently, many video applications like video telephony, video conferencing, Video-on-Demand (VoD), and so forth have produced heterogeneous consumers on the Internet. In such a scenario, media servers play a vital role when a large number of concurrent requests are sent by heterogeneous users. Moreover, the server and distributed client systems participating in the Internet communication have to provide suitable resources to heterogeneous users to meet their requirements satisfactorily. The challenges in providing suitable resources are to analyze the user service pattern, bandwidth and buffer availability, nature of applications used, and Quality of Service (QoS) requirements for the heterogeneous users. Therefore, it is necessary to provide suitable techniques to handle these challenges. In this paper, we propose a framework for peer-to-peer (P2P) based VoD service in order to provide effective video streaming. It consists of four functional modules, namely, the Quality Preserving Multivariate Video Model (QPMVM) for efficient server management, a tracker for efficient peer management, heuristic-based content distribution, and a lightweight incentivized sharing mechanism. The first two of these modules are confined to a single entity of the framework while the other two are distributed across entities. Experimental results show that the proposed framework avoids overloading the server, increases the number of clients served, and does not compromise on QoS, irrespective of the fact that the expected framework is slightly reduced.

  6. Creating Video Games in a Middle School Language Arts Classroom: A Narrative Account

    Science.gov (United States)

    Oldaker, Adam

    2010-01-01

    This article describes the author's experience co-facilitating a project for which seventh-grade students designed and created original video games based on Madeleine L'Engle's "A Wrinkle in Time". The author provides an overview of recent literature on video game implementation in the classroom and explains how the project was designed and…

  7. Semantic web technologies for video surveillance metadata

    OpenAIRE

    Poppe, Chris; Martens, Gaëtan; De Potter, Pieterjan; Van de Walle, Rik

    2012-01-01

    Video surveillance systems are growing in size and complexity. Such systems typically consist of integrated modules of different vendors to cope with the increasing demands on network and storage capacity, intelligent video analytics, picture quality, and enhanced visual interfaces. Within a surveillance system, relevant information (like technical details on the video sequences, or analysis results of the monitored environment) is described using metadata standards. However, different module...

  8. Modeling 3D Unknown object by Range Finder and Video Camera ...

    African Journals Online (AJOL)

    real world); proprioceptive and exteroceptive sensors allowing the recreation of the 3D geometric database of an environment (virtual world). The virtual world is projected onto a video display terminal (VDT). Computer-generated and video ...

  9. Architectural Considerations for Highly Scalable Computing to Support On-demand Video Analytics

    Science.gov (United States)

    2017-04-19

    Fragments of the report abstract and keywords (search-result snippets): ... research were used to implement a distributed on-demand video analytics system that was prototyped for the use of forensics investigators in law enforcement. The system was tested in the wild using video files as well as a commercial Video Management System supporting more than 100 surveillance ... Keywords: on-demand video intelligence; intelligent video system; video analytics platform.

  10. Using cloud computing technologies in IP-video surveillance systems with the function of 3d-object modelling

    Directory of Open Access Journals (Sweden)

    Zhigalov Kirill

    2018-01-01

    Full Text Available This article is devoted to the integration of cloud technology functions into 3D IP video surveillance systems in order to perform further video analytics on incoming real-time data as well as on video material stored on the server in the «cloud». The main attention is given to the use of «cloud technologies» to optimize the process of recognizing the desired object by increasing the flexibility and scalability of the system and by transferring the image-processing load from the client to the cloud server, the virtual part of the system. Developing the issues considered in the article in terms of data analysis will significantly improve the effectiveness of the special tasks facing special units.

  11. Cost-Benefit Performance of Robotic Surgery Compared with Video-Assisted Thoracoscopic Surgery under the Japanese National Health Insurance System.

    Science.gov (United States)

    Kajiwara, Naohiro; Patrick Barron, James; Kato, Yasufumi; Kakihana, Masatoshi; Ohira, Tatsuo; Kawate, Norihiko; Ikeda, Norihiko

    2015-01-01

    Medical economics have a significant impact on the entire country. The explosion in surgical techniques has been accompanied by questions regarding actual improvements in outcome and cost-effectiveness, for example of the da Vinci® Surgical System (dVS) compared with conventional video-assisted thoracic surgery (VATS). The aim was to establish a medical fee system for robot-assisted thoracic surgery (RATS), a system not yet firmly established in Japan. This study examines the cost-benefit performance (CBP) of VATS and RATS, based on medical fees, under the Japanese National Health Insurance System (JNHIS) introduced in 2012. The projected (but as yet undecided) price in the JNHIS would be insufficient for institutions with fewer than 200 dVS cases per year. Only institutions which perform more than 300 dVS operations per year would obtain a positive CBP with the projected JNHIS reimbursement. Thus, under the present conditions, it is necessary to perform at least 300 dVS operations per year in each institution with a dVS system to avoid a financial deficit with current robotic surgical management. This may hopefully encourage a downward price revision of the dVS equipment by the manufacturer, which would result in a decrease in the cost per procedure.

  12. Popular video for rural development in Peru.

    Science.gov (United States)

    Calvelo Rios, J M

    1989-01-01

    Peru developed its first use of video for training and education in rural areas over a decade ago. On completion of the project in 1986, over 400,000 peasants had attended video courses lasting from 5-20 days. The courses included rural health, family planning, reforestation, agriculture, animal husbandry, housing, nutrition, and water sanitation. There were 125 course packages made and 1,260 video programs from 10-18 minutes in length. There were 780 additional video programs created on human resource development, socioeconomic diagnostics and culture. 160 specialists were trained to produce audiovisual materials and run the programs. Also, 70 trainers from other countries were trained. The results showed many used the training in practical applications. To promote rural development, two things are needed: capital and physical inputs, such as equipment, fertilizers, pesticides, etc. The video project provided peasants an additional input that would help them manage the financial and physical inputs more efficiently. Video was used because many farmers are illiterate or speak a language different from the official one. Printed guides that contained many illustrations and few words served as memory aids, and group discussions reinforced practical learning. By seeing, hearing, and doing, the training was effective. Women made up 46% of participants, which made fertility and family planning subjects easier to communicate. The production of teaching modules included field investigations, academic research, field recording, tape editing, and experimental application in the field. An agreement with the peasants was initiated before a course began to help ensure full participation and to make sure resources were available to use the knowledge gained. The courses were limited to 30 and the cost per participant was $34 per course.

  13. Project delivery system (PDS)

    CERN Document Server

    2001-01-01

    As business environments become increasingly competitive, companies seek more comprehensive solutions to the delivery of their projects. "Project Delivery System: Fourth Edition" describes the process-driven project delivery system which incorporates best practices from Total Quality, is aligned with the Project Management Institute and ISO quality standards, and is the means by which projects are consistently and efficiently planned, executed and completed to the satisfaction of clients and customers.

  14. Video Creation: A Tool for Engaging Students to Learn Science

    Science.gov (United States)

    Courtney, A. R.

    2016-12-01

    Students today process information very differently than those of previous generations. They are used to getting their news from 140-character tweets, being entertained by You-Tube videos, and Googling everything. Thus, traditional passive methods of content delivery do not work well for many of these millennials. All students, regardless of career goals, need to become scientifically literate to be able to function in a world where scientific issues are of increasing importance. Those who have had experience applying scientific reasoning to real-world problems in the classroom will be better equipped to make informed decisions in the future. The problem to be solved is how to present scientific content in a manner that fosters student learning in today's world. This presentation will describe how the appeal of technology and social communication via creation of documentary-style videos has been used to engage students to learn scientific concepts in a university non-science major course focused on energy and the environment. These video projects place control of the learning experience into the hands of the learner and provide an opportunity to develop critical thinking skills. Students discover how to locate scientifically reliable information by limiting searches to respected sources and synthesize the information through collaborative content creation to generate a "story". Video projects have a number of advantages over research paper writing. They allow students to develop collaboration skills and be creative in how they deliver the scientific content. Research projects are more effective when the audience is larger than just a teacher. Although our videos are used as peer-teaching tools in the classroom, they also are shown to a larger audience in a public forum to increase the challenge. Video will be the professional communication tool of the future. This presentation will cover the components of the video production process and instructional lessons

  15. Make your own video with ActivePresenter

    CERN Document Server

    CERN. Geneva

    2016-01-01

    A step-by-step video tutorial on how to use ActivePresenter, a screen recording tool for Windows and Mac. The installation step is not needed for CERN users, as the product is already made available. This tutorial explains how to install ActivePresenter, how to do a screen recording and edit a video using ActivePresenter, and finally how to export the end product. Tell us what you think about this or any other video in this category via e-learning.support at cern.ch. All info about the CERN rapid e-learning project is linked from http://twiki.cern.ch/ELearning

  16. A hybrid thermal video and FTIR spectrometer system for rapidly locating and characterizing gas leaks

    Science.gov (United States)

    Williams, David J.; Wadsworth, Winthrop; Salvaggio, Carl; Messinger, David W.

    2006-08-01

    Undiscovered gas leaks, known as fugitive emissions, in chemical plants and refinery operations can impact regional air quality and present a loss of product for industry. Surveying a facility for potential gas leaks can be a daunting task. Industrial leak detection and repair programs can be expensive to administer. An efficient, accurate and cost effective method for detecting and quantifying gas leaks would both save industries money by identifying production losses and improve regional air quality. Specialized thermal video systems have proven effective in rapidly locating gas leaks. These systems, however, do not have the spectral resolution for compound identification. Passive FTIR spectrometers can be used for gas compound identification, but using these systems for facility surveys is problematic due to their small field of view. A hybrid approach has been developed that utilizes the thermal video system to locate gas plumes using real time visualization of the leaks, coupled with the high spectral resolution FTIR spectrometer for compound identification and quantification. The prototype hybrid video/spectrometer system uses a Stirling-cooled thermal camera, operating in the MWIR (3-5 μm) with an additional notch filter set at around 3.4 μm, which allows for the visualization of gas compounds that absorb in this narrow spectral range, such as alkane hydrocarbons. This camera is positioned alongside a portable, high speed passive FTIR spectrometer, which has a spectral range of 2-25 μm and operates at 4 cm⁻¹ resolution. This system uses a 10 cm telescope foreoptic with an onboard blackbody for calibration. The two units are optically aligned using a turning mirror on the spectrometer's telescope with the video camera's output.

  17. Video steganography based on bit-plane decomposition of wavelet-transformed video

    Science.gov (United States)

    Noda, Hideki; Furuta, Tomofumi; Niimi, Michiharu; Kawaguchi, Eiji

    2004-06-01

    This paper presents a steganography method using lossy compressed video which provides a natural way to send a large amount of secret data. The proposed method is based on wavelet compression for video data and bit-plane complexity segmentation (BPCS) steganography. BPCS steganography makes use of bit-plane decomposition and the characteristics of the human vision system, where noise-like regions in bit-planes of a dummy image are replaced with secret data without deteriorating image quality. In wavelet-based video compression methods such as 3-D set partitioning in hierarchical trees (SPIHT) algorithm and Motion-JPEG2000, wavelet coefficients in discrete wavelet transformed video are quantized into a bit-plane structure and therefore BPCS steganography can be applied in the wavelet domain. 3-D SPIHT-BPCS steganography and Motion-JPEG2000-BPCS steganography are presented and tested, which are the integration of 3-D SPIHT video coding and BPCS steganography, and that of Motion-JPEG2000 and BPCS, respectively. Experimental results show that 3-D SPIHT-BPCS is superior to Motion-JPEG2000-BPCS with regard to embedding performance. In 3-D SPIHT-BPCS steganography, embedding rates of around 28% of the compressed video size are achieved for twelve bit representation of wavelet coefficients with no noticeable degradation in video quality.
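
    The bit-plane complexity measure at the heart of BPCS steganography can be sketched as follows: a block is considered noise-like, and therefore replaceable with secret data, when the fraction of black/white border transitions exceeds a threshold. The 8x8 block size and the 0.3 threshold are typical illustrative values rather than the paper's settings, and the conjugation step used for low-complexity secret blocks is omitted for brevity.

        # BPCS building block: border-complexity measure and conditional replacement
        # of a noise-like bit-plane block with secret bits (illustrative sketch).
        import numpy as np

        BLOCK = 8
        MAX_BORDER = 2 * BLOCK * (BLOCK - 1)     # maximum number of 0/1 transitions in a block

        def complexity(bits):
            """bits: (8, 8) array of 0/1 values from one bit-plane."""
            horiz = np.abs(np.diff(bits, axis=1)).sum()
            vert = np.abs(np.diff(bits, axis=0)).sum()
            return (horiz + vert) / MAX_BORDER

        def embed_block(bits, secret_bits, threshold=0.3):
            """Replace a noise-like block with 64 secret bits; otherwise leave it alone."""
            if complexity(bits) >= threshold:
                return np.array(secret_bits, dtype=bits.dtype).reshape(BLOCK, BLOCK), True
            return bits, False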

  18. Data Reduction and Control Software for Meteor Observing Stations Based on CCD Video Systems

    Science.gov (United States)

    Madiedo, J. M.; Trigo-Rodriguez, J. M.; Lyytinen, E.

    2011-01-01

    The SPanish Meteor Network (SPMN) is performing a continuous monitoring of meteor activity over Spain and neighbouring countries. The huge amount of data obtained by the 25 video observing stations that this network is currently operating made it necessary to develop new software packages to accomplish some tasks, such as data reduction and remote operation of autonomous systems based on high-sensitivity CCD video devices. The main characteristics of this software are described here.

  19. PVR system design of advanced video navigation reinforced with audible sound

    NARCIS (Netherlands)

    Eerenberg, O.; Aarts, R.; De With, P.N.

    2014-01-01

    This paper presents an advanced video navigation concept for Personal Video Recording (PVR), based on jointly using the primary image and a Picture-in-Picture (PiP) image, featuring combined rendering of normal-play video fragments with audio and fast-search video. The hindering loss of audio during

  20. An overview of new video techniques

    CERN Document Server

    Parker, R

    1999-01-01

    Current video transmission and distribution systems at CERN use a variety of analogue techniques which are several decades old. It will soon be necessary to replace this obsolete equipment, and the opportunity therefore exists to rationalize the diverse systems now in place. New standards for digital transmission and distribution are now emerging. This paper gives an overview of these new standards and of the underlying technology common to many of them. The paper reviews Digital Video Broadcasting (DVB), the Moving Picture Experts Group specifications (MPEG1, MPEG2, MPEG4, and MPEG7), videoconferencing standards (H.261 etc.), and packet video systems, together with predictions of the penetration of these standards into the consumer market. The digital transport mechanisms now available (IP, SDH, ATM) are also reviewed, and the implication of widespread adoption of these systems on video transmission and distribution is analysed.

  1. Semantic-based surveillance video retrieval.

    Science.gov (United States)

    Hu, Weiming; Xie, Dan; Fu, Zhouyu; Zeng, Wenrong; Maybank, Steve

    2007-04-01

    Visual surveillance produces large amounts of video data. Effective indexing and retrieval from surveillance video databases are very important. Although there are many ways to represent the content of video clips in current video retrieval algorithms, there still exists a semantic gap between users and retrieval systems. Visual surveillance systems supply a platform for investigating semantic-based video retrieval. In this paper, a semantic-based video retrieval framework for visual surveillance is proposed. A cluster-based tracking algorithm is developed to acquire motion trajectories. The trajectories are then clustered hierarchically using the spatial and temporal information, to learn activity models. A hierarchical structure of semantic indexing and retrieval of object activities, where each individual activity automatically inherits all the semantic descriptions of the activity model to which it belongs, is proposed for accessing video clips and individual objects at the semantic level. The proposed retrieval framework supports various queries including queries by keywords, multiple object queries, and queries by sketch. For multiple object queries, succession and simultaneity restrictions, together with depth and breadth first orders, are considered. For sketch-based queries, a method for matching trajectories drawn by users to spatial trajectories is proposed. The effectiveness and efficiency of our framework are tested in a crowded traffic scene.
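
    One simple way to realize the trajectory clustering mentioned above is to resample each tracked trajectory to a fixed number of points and cluster the resulting vectors hierarchically; the resampling length and Ward linkage below are assumptions for illustration, not the paper's spatio-temporal similarity measure.

        # Hierarchical clustering of resampled motion trajectories (illustrative sketch).
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        N_POINTS = 16     # points per resampled trajectory (assumed)

        def resample(traj):
            """traj: (n, 2) array of (x, y) positions; returns a flat (N_POINTS*2,) vector."""
            t_old = np.linspace(0.0, 1.0, len(traj))
            t_new = np.linspace(0.0, 1.0, N_POINTS)
            x = np.interp(t_new, t_old, traj[:, 0])
            y = np.interp(t_new, t_old, traj[:, 1])
            return np.concatenate([x, y])

        def cluster_trajectories(trajectories, n_clusters=5):
            feats = np.stack([resample(np.asarray(t, dtype=float)) for t in trajectories])
            tree = linkage(feats, method="ward")
            return fcluster(tree, t=n_clusters, criterion="maxclust")   # one cluster label per trajectory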

  2. Short Project-Based Learning with MATLAB Applications to Support the Learning of Video-Image Processing

    Science.gov (United States)

    Gil, Pablo

    2017-10-01

    University courses concerning Computer Vision and Image Processing are generally taught using a traditional methodology that is focused on the teacher rather than on the students. This approach is consequently not effective when teachers seek to attain cognitive objectives involving their students' critical thinking. This manuscript covers the development, implementation and assessment of a short project-based engineering course with MATLAB applications, taken by Bachelor's degree students in Multimedia Engineering. The principal goal of all course lectures and hands-on laboratory activities was for the students to not only acquire image-specific technical skills but also a general knowledge of data analysis so as to locate phenomena in pixel regions of images and video frames. This would hopefully enable the students to develop skills regarding the implementation of the filters, operators, methods and techniques used for image processing and computer vision software libraries. Our teaching-learning process thus permits the accomplishment of knowledge assimilation, student motivation and skill development through the use of a continuous evaluation strategy to solve practical and real problems by means of short projects designed using MATLAB applications. Project-based learning is not new. This approach has been used in STEM learning in recent decades. But there are many types of projects. The aim of the current study is to analyse the efficacy of short projects as a learning tool when compared to long projects during which the students work with more independence. This work additionally presents the impact of different types of activities, and not only short projects, on students' overall results in this subject. Moreover, a statistical study has allowed the author to suggest a link between the students' success ratio and the type of content covered and activities completed on the course. The results described in this paper show that those students who took part

  3. Focal-plane change triggered video compression for low-power vision sensor systems.

    Directory of Open Access Journals (Sweden)

    Yu M Chi

    Full Text Available Video sensors with embedded compression offer significant energy savings in transmission but incur energy losses in the complexity of the encoder. Energy efficient video compression architectures for CMOS image sensors with focal-plane change detection are presented and analyzed. The compression architectures use pixel-level computational circuits to minimize energy usage by selectively processing only pixels which generate significant temporal intensity changes. Using the temporal intensity change detection to gate the operation of a differential DCT based encoder achieves nearly identical image quality to traditional systems (4 dB decrease in PSNR) while reducing the amount of data that is processed by 67% and reducing overall power consumption by 51%. These typical energy savings, resulting from the sparsity of motion activity in the visual scene, demonstrate the utility of focal-plane change triggered compression to surveillance vision systems.
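
    The gating idea can be sketched in software as follows: an 8x8 block is transformed and quantized only when its mean temporal intensity change exceeds a threshold, otherwise it is skipped. The threshold and quantization step are assumptions, and the paper implements the equivalent logic with pixel-level focal-plane circuits rather than software.

        # Change-triggered block encoding sketch: DCT-code only blocks whose temporal
        # change exceeds a threshold; unchanged blocks are signalled as skipped.
        import numpy as np
        from scipy.fft import dctn

        BLOCK = 8
        CHANGE_THRESH = 10.0     # mean absolute difference that triggers encoding (assumed)
        QUANT = 16.0             # uniform quantization step (assumed)

        def encode_frame(curr, prev):
            """curr, prev: 2-D uint8 frames with dimensions divisible by 8."""
            tokens = []
            for r in range(0, curr.shape[0], BLOCK):
                for c in range(0, curr.shape[1], BLOCK):
                    block = curr[r:r+BLOCK, c:c+BLOCK].astype(float)
                    change = np.abs(block - prev[r:r+BLOCK, c:c+BLOCK]).mean()
                    if change < CHANGE_THRESH:
                        tokens.append(None)                       # block skipped, previous one is reused
                    else:
                        coeffs = np.round(dctn(block, norm="ortho") / QUANT)
                        tokens.append(coeffs)                     # only changed blocks are processed
            return tokens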

  4. Extracting 3d Semantic Information from Video Surveillance System Using Deep Learning

    Science.gov (United States)

    Zhang, J. S.; Cao, J.; Mao, B.; Shen, D. Q.

    2018-04-01

    At present, intelligent video analysis technology is widely used in various fields. Object tracking is one of the important parts of intelligent video surveillance, but traditional target tracking based on the pixel coordinate system of images still has some unavoidable problems: pixel-based tracking cannot reflect the real position of targets, and it is difficult to track objects across scenes. Based on an analysis of Zhengyou Zhang's camera calibration method, this paper presents a method of target tracking in the target's space coordinate system, obtained by converting the 2-D pixel coordinates of the target into 3-D coordinates. The experimental results show that our method restores the real position changes of targets well and accurately recovers the trajectory of the target in space.
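
    For a point known to lie on the ground plane, the conversion from pixel to world coordinates follows directly from the calibration: with intrinsics K and extrinsics (R, t) from a Zhang-style calibration, the plane-to-image homography is K[r1 r2 t] and can simply be inverted. The numeric calibration values below are placeholders, not the paper's data.

        # Pixel to ground-plane (Z = 0) world coordinates via the calibrated homography.
        import numpy as np

        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0,   0.0,   1.0]])          # hypothetical intrinsics
        R = np.eye(3)                                 # hypothetical rotation
        t = np.array([0.0, 0.0, 5.0])                 # hypothetical translation (metres)

        def pixel_to_ground(u, v):
            """Return (X, Y) world coordinates on the plane Z = 0 for pixel (u, v)."""
            H = K @ np.column_stack((R[:, 0], R[:, 1], t))   # plane-to-image homography
            w = np.linalg.solve(H, np.array([u, v, 1.0]))    # invert the homography
            return w[0] / w[2], w[1] / w[2]

        print(pixel_to_ground(320.0, 240.0))    # the principal point maps to roughly (0, 0)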

  5. OLIVE: Speech-Based Video Retrieval

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Gauvain, Jean-Luc; den Hartog, Jurgen; den Hartog, Jeremy; Netter, Klaus

    1999-01-01

    This paper describes the Olive project which aims to support automated indexing of video material by use of human language technologies. Olive is making use of speech recognition to automatically derive transcriptions of the sound tracks, generating time-coded linguistic elements which serve as the

  6. Making Sense of Video Analytics: Lessons Learned from Clickstream Interactions, Attitudes, and Learning Outcome in a Video-Assisted Course

    Directory of Open Access Journals (Sweden)

    Michail N. Giannakos

    2015-02-01

    Full Text Available Online video lectures have been considered an instructional medium for various pedagogic approaches, such as the flipped classroom and open online courses. In comparison to other instructional media, online video affords the opportunity for recording student clickstream patterns within a video lecture. Video analytics within lecture videos may provide insights into student learning performance and inform the improvement of video-assisted teaching tactics. Nevertheless, video analytics are not accessible to learning stakeholders, such as researchers and educators, mainly because online video platforms do not broadly share the interactions of the users with their systems. For this purpose, we have designed an open-access video analytics system for use in a video-assisted course. In this paper, we present a longitudinal study, which provides valuable insights through the lens of the collected video analytics. In particular, we found that there is a relationship between video navigation (repeated views) and the level of cognition/thinking required for a specific video segment. Our results indicated that learning performance progress was slightly improved and stabilized after the third week of the video-assisted course. We also found that attitudes regarding easiness, usability, usefulness, and acceptance of this type of course remained at the same levels throughout the course. Finally, we triangulate analytics from diverse sources, discuss them, and provide the lessons learned for further development and refinement of video-assisted courses and practices.

  7. High data-rate video broadcasting over 3G wireless systems

    NARCIS (Netherlands)

    Atici, C.; Sunay, M.O.

    2007-01-01

    In cellular environments, video broadcasting is a challenging problem in which the number of users receiving the service and the average successfully decoded video data-rate have to be intelligently optimized. When video is broadcasted using the 3G packet data standard, 1xEV-DO, the code space may

  8. A video-based system for hand-driven stop-motion animation.

    Science.gov (United States)

    Han, Xiaoguang; Fu, Hongbo; Zheng, Hanlin; Liu, Ligang; Wang, Jue

    2013-01-01

    Stop-motion is a well-established animation technique but is often laborious and requires craft skills. A new video-based system can animate the vast majority of everyday objects in stop-motion style, more flexibly and intuitively. Animators can perform and capture motions continuously instead of breaking them into increments and shooting one still picture per increment. More important, the system permits direct hand manipulation without resorting to rigs, achieving more natural object control for beginners. The system's key component is two-phase keyframe-based capturing and processing, assisted by computer vision techniques. With this system, even amateurs can generate high-quality stop-motion animations.

  9. A Semi-Automatic, Remote-Controlled Video Observation System for Transient Luminous Events

    DEFF Research Database (Denmark)

    Allin, Thomas Højgaard; Neubert, Torsten; Laursen, Steen

    2003-01-01

    In support for global ELF/VLF observations, HF measurements in France, and conjugate photometry/VLF observations in South Africa, we developed and operated a semi-automatic, remotely controlled video system for the observation of middle-atmospheric transient luminous events (TLEs). Installed...

  10. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allow, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms as well as about the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how to best relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  11. Real-Time FPGA-Based Object Tracker with Automatic Pan-Tilt Features for Smart Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2017-05-01

    Full Text Available The design of smart video surveillance systems is an active research field among the computer vision community because of their ability to perform automatic scene analysis by selecting and tracking the objects of interest. In this paper, we present the design and implementation of an FPGA-based standalone working prototype system for real-time tracking of an object of interest in live video streams for such systems. In addition to real-time tracking of the object of interest, the implemented system is also capable of providing purposive automatic camera movement (pan-tilt) in the direction determined by movement of the tracked object. The complete system, including camera interface, DDR2 external memory interface controller, designed object tracking VLSI architecture, camera movement controller and display interface, has been implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA Board. Our proposed, designed and implemented system robustly tracks the target object present in the scene in real time for standard PAL (720 × 576) resolution color video and automatically controls camera movement in the direction determined by the movement of the tracked object.
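
    The purposive pan-tilt behaviour can be summarized as steering the camera whenever the tracked object's centre drifts away from the image centre; a software sketch of that logic is given below, with field-of-view, dead-band and sign conventions as assumptions (the paper implements the equivalent logic in FPGA hardware).

        # Pan-tilt command from the tracked bounding box (illustrative assumptions).
        FRAME_W, FRAME_H = 720, 576          # PAL resolution used in the paper
        FOV_H_DEG, FOV_V_DEG = 60.0, 45.0    # hypothetical camera field of view
        DEAD_BAND = 0.1                      # ignore offsets within 10% of frame size (assumed)

        def pan_tilt_command(bbox):
            """bbox: (x, y, w, h) of the tracked object; returns (pan_deg, tilt_deg)."""
            cx = bbox[0] + bbox[2] / 2.0
            cy = bbox[1] + bbox[3] / 2.0
            dx = (cx - FRAME_W / 2.0) / FRAME_W      # normalized horizontal offset, -0.5..0.5
            dy = (cy - FRAME_H / 2.0) / FRAME_H
            pan = dx * FOV_H_DEG if abs(dx) > DEAD_BAND else 0.0
            tilt = -dy * FOV_V_DEG if abs(dy) > DEAD_BAND else 0.0   # image y grows downwards
            return pan, tilt

        print(pan_tilt_command((500, 100, 40, 60)))   # object right of centre -> positive pan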

  12. The Educational Efficacy of Distinct Information Delivery Systems in Modified Video Games

    Science.gov (United States)

    Moshirnia, Andrew; Israel, Maya

    2010-01-01

    Despite the increasing popularity of many commercial video games, this popularity is not shared by educational video games. Modified video games, however, can bridge the gap in quality between commercial and education video games by embedding educational content into popular commercial video games. This study examined how different information…

  13. Utilizing Video Games

    Science.gov (United States)

    Blaize, L.

    Almost from its birth, the computer and video gaming industry has done an admirable job of communicating the vision and attempting to convey the experience of traveling through space to millions of gamers from all cultures and demographics. This paper will propose several approaches the 100 Year Starship Study can take to use the power of interactive media to stir interest in the Starship and related projects among a global population. It will examine successful gaming franchises from the past that are relevant to the mission and consider ways in which the Starship Study could cooperate with game development studios to bring the Starship vision to those franchises and thereby to the public. The paper will examine ways in which video games can be used to crowd-source research aspects for the Study, and how video games are already considering many of the same topics that will be examined by this Study. Finally, the paper will propose some mechanisms by which the 100 Year Starship Study can establish very close ties with the gaming industry and foster cooperation in pursuit of the Study's goals.

  14. Video Bandwidth Compression System.

    Science.gov (United States)

    1980-08-01

    Decoder design excerpts (only report fragments were captured): a scaling function is located between the inverse DPCM and inverse transform stages on the decoder matrix multiplier chips. The decoder hardware comprises a bit unpacker and inverse DPCM slave sync board, inverse DPCM loop boards, an inverse transform board, a composite video output board, a display refresh memory (memory section, timing and control), and an inverse transform processor.

  15. System design description for the LDUA high resolution stereoscopic video camera system (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, was designed to be used as an end effector on the LDUA to perform surveillance and inspection activities within a waste tank. It is attached to the LDUA by means of a Tool Interface Plate (TIP), which provides a feed-through for all electrical and pneumatic utilities needed by the end effector to operate. Designed to perform up-close weld and corrosion inspection roles in UST operations, the HRSVS will support and supplement the Light Duty Utility Arm (LDUA) and provide the crucial inspection tasks needed to ascertain waste tank condition.

  16. Evaluation of a video-based head motion tracking system for dedicated brain PET

    Science.gov (United States)

    Anishchenko, S.; Beylin, D.; Stepanov, P.; Stepanov, A.; Weinberg, I. N.; Schaeffer, S.; Zavarzin, V.; Shaposhnikov, D.; Smith, M. F.

    2015-03-01

    Unintentional head motion during Positron Emission Tomography (PET) data acquisition can degrade PET image quality and lead to artifacts. Poor patient compliance, head tremor, and coughing are examples of movement sources. Head motion due to patient non-compliance can be an issue with the rise of amyloid brain PET in dementia patients. To preserve PET image resolution and quantitative accuracy, head motion can be tracked and corrected in the image reconstruction algorithm. While fiducial markers can be used, a contactless approach is preferable. A video-based head motion tracking system for a dedicated portable brain PET scanner was developed. Four wide-angle cameras organized in two stereo pairs are used to capture video of the patient's head during the PET data acquisition. Facial points are automatically tracked and used to determine the six-degree-of-freedom head pose as a function of time. The presented work evaluated the newly designed tracking system using a head phantom and a moving American College of Radiology (ACR) phantom. The mean video-tracking error was 0.99±0.90 mm relative to the magnetic tracking device used as ground truth. Qualitative evaluation with the ACR phantom shows the advantage of the motion tracking application. The developed system is able to perform tracking with close to millimeter accuracy and can help to preserve the resolution of brain PET images in the presence of movement.
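
    Estimating a six-degree-of-freedom pose from stereo-triangulated facial points typically reduces to fitting a rigid transform between two 3D point sets. The sketch below uses the generic Kabsch/SVD solution (not necessarily the method used in the paper) to recover rotation and translation from matched points; the point values are synthetic.

    import numpy as np

    def rigid_transform(P, Q):
        """Least-squares rotation R and translation t mapping points P onto Q
        (Kabsch/SVD method). P and Q are (N, 3) arrays of matched facial points."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # reflection guard
        R = Vt.T @ D @ U.T
        t = cQ - R @ cP
        return R, t

    # Toy check: rotate reference points by 5 degrees about z and shift 2 mm in x.
    ref = np.random.rand(12, 3) * 100.0              # triangulated facial points (mm)
    theta = np.deg2rad(5.0)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    moved = ref @ R_true.T + np.array([2.0, 0.0, 0.0])
    R, t = rigid_transform(ref, moved)
    print(np.allclose(R, R_true), np.round(t, 3))    # True [2. 0. 0.]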

  17. Integrated Project Management System description

    International Nuclear Information System (INIS)

    1987-03-01

    The Uranium Mill Tailings Remedial Action (UMTRA) Project is a Department of Energy (DOE) designated Major System Acquisition (MSA). To execute and manage the Project mission successfully and to comply with the MSA requirements, the UMTRA Project Office (''Project Office'') has implemented and operates an Integrated Project Management System (IPMS). The Project Office is assisted by the Technical Assistance Contractor's (TAC) Project Integration and Control (PIC) Group in system operation. Each participant, in turn, provides critical input to system operation and reporting requirements. The IPMS provides a uniform structured approach for integrating the work of Project participants. It serves as a tool for planning and control, workload management, performance measurement, and specialized reporting within a standardized format. This system description presents the guidance for its operation. Appendices 1 and 2 contain definitions of commonly used terms and abbreviations and acronyms, respectively. 17 figs., 5 tabs

  18. Use of Video Analysis System for Working Posture Evaluations

    Science.gov (United States)

    McKay, Timothy D.; Whitmore, Mihriban

    1994-01-01

    In a work environment, it is important to identify and quantify the relationship among work activities, working posture, and workplace design. Working posture may impact the physical comfort and well-being of individuals, as well as performance. The Posture Video Analysis Tool (PVAT) is an interactive, menu- and button-driven software prototype written in Supercard (trademark). Human factors analysts are provided with a predefined set of options typically associated with postural assessments and human performance issues. Once options have been selected, the program is used to evaluate working posture and dynamic tasks from video footage. PVAT has been used to evaluate postures from Orbiter missions, as well as from experimental testing of prototype glove box designs. PVAT can be used for video analysis in a number of industries, with little or no modification. It can contribute to various aspects of workplace design such as training, task allocations, procedural analyses, and hardware usability evaluations. The major advantage of the video analysis approach is the ability to gather data, non-intrusively, in restricted-access environments, such as emergency and operating rooms, contaminated areas, and control rooms. Video analysis also provides the opportunity to conduct preliminary evaluations of existing work areas.

  19. Researchers and teachers learning together and from each other using video-based multimodal analysis

    DEFF Research Database (Denmark)

    Davidsen, Jacob; Vanderlinde, Ruben

    2014-01-01

    This paper discusses a year-long technology integration project, during which teachers and researchers joined forces to explore children's collaborative activities through the use of touch-screens. In the research project, 16 touch-screens were integrated into teaching and learning activities in two separate classrooms; the learning and collaborative processes were captured by using video, collecting over 150 hours of footage. By using digital research technologies and a longitudinal design, the authors of the research project studied how teachers and children gradually integrated touch-screens into their teaching and learning. The paper examines the methodological usefulness of video-based multimodal analysis. Through reflection on the research project, we discuss how, by using video-based multimodal analysis, researchers and teachers can study children's touch...

  20. Operationally Efficient Propulsion System Study (OEPSS): OEPSS Video Script

    Science.gov (United States)

    Wong, George S.; Waldrop, Glen S.; Trent, Donnie (Editor)

    1992-01-01

    The OEPSS video film, along with the OEPSS Databooks, provides a data base of current launch experience that will be useful for design of future expendable and reusable launch systems. The focus is on the launch processing of propulsion systems. A brief 15-minute overview of the OEPSS study results is found at the beginning of the film. The remainder of the film discusses in more detail: current ground operations at the Kennedy Space Center; typical operations issues and problems; critical operations technologies; and efficiency of booster and space propulsion systems. The impact of system architecture on the launch site and its facility infrastructure is emphasized. Finally, a particularly valuable analytical tool, developed during the OEPSS study, that will provide for the "first time" a quantitative measure of operations efficiency for a propulsion system is described.

  1. Efficient genre-specific semantic video indexing

    NARCIS (Netherlands)

    Wu, J.; Worring, M.

    2012-01-01

    Large video collections such as YouTube contain many different video genres, while in many applications the user might be interested in one or two specific video genres only. Thus, when users are querying the system with a specific semantic concept like AnchorPerson or MovieStars, they are likely

  2. Deception Detection in Videos

    OpenAIRE

    Wu, Zhe; Singh, Bharat; Davis, Larry S.; Subrahmanian, V. S.

    2017-01-01

    We present a system for covert automated deception detection in real-life courtroom trial videos. We study the importance of different modalities like vision, audio and text for this task. On the vision side, our system uses classifiers trained on low level video features which predict human micro-expressions. We show that predictions of high-level micro-expressions can be used as features for deception prediction. Surprisingly, IDT (Improved Dense Trajectory) features which have been widely ...

  3. Image quality assessment for video stream recognition systems

    Science.gov (United States)

    Chernov, Timofey S.; Razumnuy, Nikita P.; Kozharinov, Alexander S.; Nikolaev, Dmitry P.; Arlazarov, Vladimir V.

    2018-04-01

    Recognition and machine vision systems have long been widely used in many disciplines to automate various processes of life and industry. Input images of optical recognition systems can be subjected to a large number of different distortions, especially in uncontrolled or natural shooting conditions, which leads to unpredictable results of recognition systems, making it impossible to assess their reliability. For this reason, it is necessary to perform quality control of the input data of recognition systems, which is facilitated by modern progress in the field of image quality evaluation. In this paper, we investigate the approach to designing optical recognition systems with built-in input image quality estimation modules and feedback, for which the necessary definitions are introduced and a model for describing such systems is constructed. The efficiency of this approach is illustrated by the example of solving the problem of selecting the best frames for recognition in a video stream for a system with limited resources. Experimental results are presented for the system for identity documents recognition, showing a significant increase in the accuracy and speed of the system under simulated conditions of automatic camera focusing, leading to blurring of frames.
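
    A common proxy for this kind of input-quality gating, illustrated below, is to score frames with a no-reference focus measure and pass only the sharpest ones to the recognizer. The variance-of-Laplacian measure, file name and frame budget in this Python/OpenCV sketch are assumptions, not the metric used in the paper.

    import cv2

    def sharpness(frame):
        """Focus/blur proxy: variance of the Laplacian of the grayscale frame."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def best_frames(video_path, k=3):
        """Return the k sharpest frames of a stream, e.g. as recognition candidates."""
        cap = cv2.VideoCapture(video_path)
        scored, idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            scored.append((sharpness(frame), idx, frame))
            idx += 1
        cap.release()
        scored.sort(key=lambda s: s[0], reverse=True)
        return scored[:k]

    # for score, idx, frame in best_frames("document_stream.mp4"):
    #     run_recognition(frame)   # hypothetical downstream recognizer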

  4. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R and D activities on high-speed video cameras, which have been carried out at Kinki University for more than ten years and are currently being pursued as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras by questionnaires and hearings, and (2) on the current availability of cameras of this sort by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996. The sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is going on and, hopefully, it will be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.

  5. Motion video analysis using planar parallax

    Science.gov (United States)

    Sawhney, Harpreet S.

    1994-04-01

    Motion and structure analysis in video sequences can lead to efficient descriptions of objects and their motions. Interesting events in videos can be detected using such an analysis: for instance, independent object motion when the camera itself is moving, or figure-ground segregation based on the saliency of a structure compared to its surroundings. In this paper we present a method for 3D motion and structure analysis that uses a planar surface in the environment as a reference coordinate system to describe a video sequence. The motion in the video sequence is described as the motion of the reference plane, plus the parallax motion of all the non-planar components of the scene. It is shown how this method simplifies the otherwise hard general 3D motion analysis problem. In addition, a natural coordinate system in the environment is used to describe the scene, which can simplify motion-based segmentation. This work is part of an ongoing effort in our group towards video annotation and analysis for indexing and retrieval. Results from a demonstration system being developed are presented.
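
    The plane-plus-parallax decomposition can be approximated in a few lines: estimate the dominant-plane homography between two frames and treat the residual displacement of each tracked point as parallax. The Python/OpenCV sketch below is a rough illustration of that idea (not the paper's algorithm); the file name, feature/flow parameters and the 2-pixel threshold are assumptions.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("sequence.mp4")           # placeholder input
    ok, prev = cap.read()
    ok, curr = cap.read()
    prev_g = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    curr_g = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)

    # Track sparse features between the two frames.
    p0 = cv2.goodFeaturesToTrack(prev_g, maxCorners=500, qualityLevel=0.01, minDistance=7)
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_g, curr_g, p0, None)
    p0, p1 = p0[st == 1].reshape(-1, 2), p1[st == 1].reshape(-1, 2)

    # Dominant-plane homography (RANSAC); inliers approximate the reference plane.
    H, inliers = cv2.findHomography(p0, p1, cv2.RANSAC, 3.0)

    # Planar parallax = displacement left over after warping by the plane homography.
    warped = cv2.perspectiveTransform(p0.reshape(-1, 1, 2), H).reshape(-1, 2)
    parallax = np.linalg.norm(p1 - warped, axis=1)
    off_plane = p0[parallax > 2.0]      # candidates for 3D structure or moving objects
    print(len(off_plane), "points deviate from the reference plane")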

  6. Analysis of User Requirements in Interactive 3D Video Systems

    Directory of Open Access Journals (Sweden)

    Haiyue Yuan

    2012-01-01

    Full Text Available The recent development of three dimensional (3D display technologies has resulted in a proliferation of 3D video production and broadcasting, attracting a lot of research into capture, compression and delivery of stereoscopic content. However, the predominant design practice of interactions with 3D video content has failed to address its differences and possibilities in comparison to the existing 2D video interactions. This paper presents a study of user requirements related to interaction with the stereoscopic 3D video. The study suggests that the change of view, zoom in/out, dynamic video browsing, and textual information are the most relevant interactions with stereoscopic 3D video. In addition, we identified a strong demand for object selection that resulted in a follow-up study of user preferences in 3D selection using virtual-hand and ray-casting metaphors. These results indicate that interaction modality affects users’ decision of object selection in terms of chosen location in 3D, while user attitudes do not have significant impact. Furthermore, the ray-casting-based interaction modality using Wiimote can outperform the volume-based interaction modality using mouse and keyboard for object positioning accuracy.

  7. An Evaluation of the Informedia Digital Video Library System at the Open University.

    Science.gov (United States)

    Kukulska-Hulme, Agnes; Van der Zwan, Robert; DiPaolo, Terry; Evers, Vanessa; Clarke, Sarah

    1999-01-01

    Reports on an Open University evaluation study of the Informedia Digital Video Library System developed at Carnegie Mellon University (CMU). Findings indicate that there is definite potential for using the system, provided that certain modifications can be made. Results also confirm findings of the Informedia team at CMU that the content of video…

  8. Improving education and supervision of Queensland X-ray Operators through video conference technology: A teleradiography pilot project.

    Science.gov (United States)

    Rawle, Marnie; Oliver, Tanya; Pighills, Alison; Lindsay, Daniel

    2017-12-01

    X-ray Operator (XO) supervision in Queensland is performed by radiographers at a site removed from the XO site. This has historically been performed by telephone when the XO requires immediate help, as well as post-examination through radiographer review and the provision of written feedback on images produced. This project aimed to improve image quality through the provision of real-time support of XOs by the introduction of video conference (VC) supervision. A 6-month pilot project compared image quality with and without VC supervision. VC equipment was installed in the X-ray room at two rural sites, as well as at the radiographer site, to enable visual and oral supervision. The VC unit enabled visualisation of the X-ray examination technique as it was being undertaken, as well as the images produced prior to transmission to the Picture Archiving and Communication System (PACS). Statistically significant improvements in image quality criteria measures were seen for patient positioning (P = 0.008), image quality (P < 0.001) and diagnostic value (P < 0.001) of images taken during this project. No statistically significant differences were seen during case-level assessment in the inclusion of only appropriate imaging (P = 0.06) or the inclusion of unacceptable imaging (P = 0.06); however, improvements were seen in both of these criteria. The survey revealed that 24.6% of examinations performed would normally have involved the XO contacting the radiographer for assistance, although assistance was actually provided in 88.3% of examinations. This project has demonstrated that significant improvement in image quality is achievable with VC supervision. A larger study with a control arm that did not receive direct supervision should be used to validate the findings of this study. © 2017 The Authors. Journal of Medical Radiation Sciences published by John Wiley & Sons Australia, Ltd on behalf of Australian Society of Medical Imaging and Radiation Therapy and New Zealand

  9. Considerations for Producing Media for Science Museum Exhibits: A Volcano Video Case Study

    Science.gov (United States)

    Sable, MFA, J.

    2013-12-01

    While science museums continue to expand their use of videos in exhibits, they are also seeking to add engaging content to their websites in the hope of reaching broader audiences. As a cost-effective way to do both, a project is undertaken to develop a video for a museum website that can easily be adapted for use in an exhibit. To establish goals and constraints for the video, this project explores the needs of museums and their audiences. Past literature is compared with current exhibitions in several U.S. museums. Once identified, the needs of science museums are incorporated into the content, form, and style of the two-part video "Living in Pele's Paradise." Through the story of the spectacular 1959-60 eruption of Kilauea Volcano, Hawai'i, the video shows how research and monitoring contribute to helping communities prepare for volcanic hazards. A 20-minute version of the video is produced for the web, and a 4-minute version is developed for use in a hypothetical science museum exhibit. The two versions of the video provide a cross-platform experience with multiple levels of content depth.

  10. Projective-anticipating, projective and projective-lag synchronization of chaotic systems with time-varying delays

    International Nuclear Information System (INIS)

    Feng Cunfang; Guan Wei; Wang Yinghai

    2013-01-01

    We investigate different types of projective (projective-anticipating, projective and projective-lag) synchronization in unidirectionally nonlinearly coupled time-delayed chaotic systems with variable time delays. Based on the Krasovskii–Lyapunov approach, we find both the existence and sufficient stability conditions, using a general class of time-delayed chaotic systems related to optical bistable or hybrid optical bistable devices. Our method has the advantage that it requires only one nonlinearly coupled term to achieve different types of projective synchronization in time-delayed chaotic systems with variable time delays. Compared with other existing works, our result provides an easy way to achieve projective-anticipating, projective and projective-lag synchronization. Numerical simulations of the Ikeda system are given to demonstrate the validity of the proposed method. (paper)
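
    To make the idea concrete, the sketch below numerically integrates an Ikeda-type delay system and a response system driven by a single scaled nonlinear term, so that the projective error e = y − αx obeys de/dt = −ae and decays. This is a simplified illustration of projective synchronization only; the parameter values and the particular coupling are assumptions, not the paper's exact scheme.

    import numpy as np

    # Hypothetical parameters for an Ikeda-type delay system.
    a, b = 1.0, 4.0          # decay rate and nonlinearity strength
    tau = 2.0                # feedback delay
    alpha = 2.5              # projective scaling factor
    dt = 0.001
    steps = 40000
    delay_steps = int(tau / dt)

    x = np.zeros(steps); y = np.zeros(steps)
    x[:delay_steps + 1] = 0.9      # constant initial history for the drive
    y[:delay_steps + 1] = -0.3     # different initial history for the response

    for n in range(delay_steps, steps - 1):
        x_tau = x[n - delay_steps]
        # Drive: classic Ikeda delay equation  dx/dt = -a x + b sin(x(t - tau))
        x[n + 1] = x[n] + dt * (-a * x[n] + b * np.sin(x_tau))
        # Response driven by one scaled nonlinear term alpha*b*sin(x(t - tau));
        # the error e = y - alpha*x then obeys de/dt = -a e and decays to zero.
        y[n + 1] = y[n] + dt * (-a * y[n] + alpha * b * np.sin(x_tau))

    print("final projective error |y - alpha*x|:", abs(y[-1] - alpha * x[-1]))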

  11. Web Based Project Management System

    OpenAIRE

    Aadamsoo, Anne-Mai

    2010-01-01

    To increase the efficiency of a product, many web development companies nowadays use different project management systems. A company may run a number of projects at a time and requires input from a number of individuals, or teams, for a multi-level development plan, whereby a good project management system is needed. Project management systems represent a rapidly growing technology in the IT industry. As the number of users who utilize project management applications continues to grow, w...

  12. Modification and Validation of an Automotive Data Processing Unit, Compessed Video System, and Communications Equipment

    Energy Technology Data Exchange (ETDEWEB)

    Carter, R.J.

    1997-04-01

    The primary purpose of the "modification and validation of an automotive data processing unit (DPU), compressed video system, and communications equipment" cooperative research and development agreement (CRADA) was to modify and validate both hardware and software, developed by Scientific Atlanta, Incorporated (S-A) for defense applications (e.g., rotary-wing aircraft), for the commercial-sector surface transportation domain (i.e., automobiles and trucks). S-A also furnished a state-of-the-art compressed video digital storage and retrieval system (CVDSRS) and off-the-shelf data storage and transmission equipment to support the data acquisition system for crash avoidance research (DASCAR) project conducted by Oak Ridge National Laboratory (ORNL). In turn, S-A received access to hardware and technology related to DASCAR. DASCAR was subsequently removed completely, and installation was repeated a number of times to gain an accurate idea of complete installation, operation, and removal of DASCAR. Upon satisfactory completion of the DASCAR construction and preliminary shakedown, ORNL provided NHTSA with an operational demonstration of DASCAR at their East Liberty, OH test facility. The demonstration included an on-the-road demonstration of the entire data acquisition system using NHTSA's test track. The demonstration also included a briefing containing the following: ORNL generated a plan for validating the prototype data acquisition system with regard to: removal of DASCAR from an existing vehicle, and installation and calibration in other vehicles; reliability of the sensors and systems; data collection and transmission process (data integrity); impact on the drivability of the vehicle and obtrusiveness of the system to the driver; data analysis procedures; conspicuousness of the vehicle to other drivers; and DASCAR installation and removal training and documentation. In order to identify any operational problems not captured by the systems

  13. Video-based Chinese Input System via Fingertip Tracking

    Directory of Open Access Journals (Sweden)

    Chih-Chang Yu

    2012-10-01

    Full Text Available In this paper, we propose a system to detect and track fingertips online and recognize Mandarin Phonetic Symbol (MPS for user-friendly Chinese input purposes. Using fingertips and cameras to replace pens and touch panels as input devices could reduce the cost and improve the ease-of-use and comfort of computer-human interface. In the proposed framework, particle filters with enhanced appearance models are applied for robust fingertip tracking. Afterwards, MPS combination recognition is performed on the tracked fingertip trajectories using Hidden Markov Models. In the proposed system, the fingertips of the users could be robustly tracked. Also, the challenges of entering, leaving and virtual strokes caused by video-based fingertip input can be overcome. Experimental results have shown the feasibility and effectiveness of the proposed work.
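
    As a rough illustration of the tracking stage only (not the enhanced appearance model or the HMM recognizer from the paper), the sketch below runs a basic particle filter over fingertip position hypotheses: predict with a random walk, weight by a toy skin-likelihood observation, and resample when the effective sample size collapses. The frame size, hue model and all parameter values are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 300
    particles = rng.uniform([0, 0], [640, 480], size=(N, 2))   # (x, y) hypotheses
    weights = np.ones(N) / N

    def skin_likelihood(frame_hsv, pts):
        """Toy observation model: how 'skin-like' the pixel under each particle is."""
        h = frame_hsv[pts[:, 1].astype(int) % 480, pts[:, 0].astype(int) % 640, 0]
        return np.exp(-((h.astype(float) - 15.0) ** 2) / (2 * 10.0 ** 2)) + 1e-6

    def pf_step(frame_hsv, particles, weights, motion_std=8.0):
        # 1) predict: random-walk motion model
        particles = particles + rng.normal(0, motion_std, particles.shape)
        # 2) update: re-weight by the observation likelihood
        weights = weights * skin_likelihood(frame_hsv, particles)
        weights /= weights.sum()
        # 3) resample when the effective sample size drops below half
        if 1.0 / np.sum(weights ** 2) < len(weights) / 2:
            idx = rng.choice(len(weights), size=len(weights), p=weights)
            particles = particles[idx]
            weights = np.full(len(weights), 1.0 / len(weights))
        # estimate = weighted mean fingertip position
        return particles, weights, (weights[:, None] * particles).sum(axis=0)

    # Per frame:  particles, weights, fingertip_xy = pf_step(hsv_frame, particles, weights)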

  14. E-learning for Project Management

    DEFF Research Database (Denmark)

    Kampf, Constance Elizabeth

    2011-01-01

    This is a series of online videos designed for the Project management course, in my YouTube channel. The video links are currently private for my university course. Please email me at cka@asb.dk if you are interested in viewing them. The videos total about 12 hours of lectures, and are adapted...

  15. Study on a High Compression Processing for Video-on-Demand e-learning System

    Science.gov (United States)

    Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    The authors proposed a high-quality and small-capacity lecture-video-file creating system for a distance e-learning system. Examining the features of the lecturing scene, the authors ingeniously employ two kinds of image-capturing equipment having complementary characteristics: one is a digital video camera with a low resolution and a high frame rate, and the other is a digital still camera with a high resolution and a very low frame rate. By managing the two kinds of image-capturing equipment and integrating them with image processing, we can produce course materials with greatly reduced file size: the course materials satisfy the requirements both for the temporal resolution to see the lecturer's point-indicating actions and for the high spatial resolution to read the small written letters. As a result of a comparative experiment, the e-lecture using the proposed system was confirmed to be more effective than an ordinary lecture from the viewpoint of educational effect.

  16. The Effect of Motion Artifacts on Near-Infrared Spectroscopy (NIRS Data and Proposal of a Video-NIRS System

    Directory of Open Access Journals (Sweden)

    Masayuki Satoh

    2017-11-01

    Full Text Available Aims: The aims of this study were (1) to investigate the influence of physical movement on near-infrared spectroscopy (NIRS) data, (2) to establish a video-NIRS system which simultaneously records NIRS data and the subject's movement, and (3) to measure the oxygenated hemoglobin (oxy-Hb) concentration change (Δoxy-Hb) during a word fluency (WF) task. Experiment 1: In 5 healthy volunteers, we measured the oxy-Hb and deoxygenated hemoglobin (deoxy-Hb) concentrations during 11 kinds of facial, head, and extremity movements. The probes were set in the bilateral frontal regions. The deoxy-Hb concentration was increased in 85% of the measurements. Experiment 2: Using a pillow on the backrest of the chair, we established the video-NIRS system with data acquisition and video capture software. One hundred and seventy-six elderly people performed the WF task. The deoxy-Hb concentration was decreased in 167 subjects (95%). Experiment 3: Using the video-NIRS system, we measured the Δoxy-Hb and compared it with the results of the WF task. Δoxy-Hb was significantly correlated with the number of words. Conclusion: Like the blood oxygen level-dependent imaging effect in functional MRI, the deoxy-Hb concentration will decrease if the data correctly reflect the change in neural activity. The video-NIRS system might be useful to collect NIRS data by recording the waveforms and the subject's appearance simultaneously.

  17. Video research: documenting and learning from HIV and AIDS communication strategies for social change in Ghana

    OpenAIRE

    Decosas, Heiko

    2010-01-01

    The dynamic landscape of global communications continually presents new challenges for the design and analysis of media and communication within international development projects. This Masters project uses video and web technology to document, explore and extend the role of communication in a CIDA funded HIV and AIDS stigma reduction project in Ghana, West Africa. The project includes a documentary video entitled: The Challenge of Stigma, Reflections on community education as a pathway to ch...

  18. VideoSET: Video Summary Evaluation through Text

    OpenAIRE

    Yeung, Serena; Fathi, Alireza; Fei-Fei, Li

    2014-01-01

    In this paper we present VideoSET, a method for Video Summary Evaluation through Text that can evaluate how well a video summary is able to retain the semantic information contained in its original video. We observe that semantics is most easily expressed in words, and develop a text-based approach for the evaluation. Given a video summary, a text representation of the video summary is first generated, and an NLP-based metric is then used to measure its semantic distance to ground-truth text ...

  19. The digital Emily project: achieving a photorealistic digital actor.

    Science.gov (United States)

    Alexander, Oleg; Rogers, Mike; Lambeth, William; Chiang, Jen-Yuan; Ma, Wan-Chun; Wang, Chuan-Chang; Debevec, Paul

    2010-01-01

    The Digital Emily Project uses advanced face scanning, character rigging, performance capture, and compositing to achieve one of the world's first photorealistic digital facial performances. The project scanned the geometry and reflectance of actress Emily O'Brien's face in 33 poses, showing different emotions, gaze directions, and lip formations in a light stage. These high-resolution scans, accurate to skin pores and fine wrinkles, became the basis for building a blendshape-based facial-animation rig whose expressions closely matched the scans. The blendshape rig drove displacement maps to add dynamic surface detail. A video-based facial animation system animated the face according to the performance in a reference video, and the digital face was tracked onto the video's motion and rendered under the same illumination. The result was a realistic 3D digital facial performance credited as one of the first to cross the "uncanny valley" between animated and fully human performances.

  20. The design of video and remote analysis system for gamma spectrum based on LabVIEW

    International Nuclear Information System (INIS)

    Xu Hongkun; Fang Fang; Chen Wei

    2009-01-01

    For the protection of the analyst during the measurement, as well as to allow experts to perform remote analysis, a solution combining live video with internet access and control is proposed. DirectShow technology and LabVIEW's IDT (Internet Develop Toolkit) module are used; video and analysis pages for the gamma energy spectrum are integrated and published on the Windows system by IIS (Internet Information Server). We realize gamma-spectrum analysis and remote operations over the internet. At the same time, the system has a friendly interface and is easy to put into practice. It also has some reference value for related radioactivity measurements. (authors)

  1. Resource-Constrained Low-Complexity Video Coding for Wireless Transmission

    DEFF Research Database (Denmark)

    Ukhanova, Ann

    Constrained resources like memory, power, bandwidth and delay requirements in many mobile systems pose limitations for video applications. Standard approaches for video compression and transmission do not always satisfy system requirements. In this thesis we have shown that it is possible to modify ... of video quality. We proposed a new metric for objective quality assessment that considers frame rate. As many applications deal with wireless video transmission, we performed an analysis of compression and transmission systems with a focus on the power-distortion trade-off. We proposed an approach for rate-distortion-complexity optimization of the upcoming video compression standard HEVC. We also provided a new method allowing a decrease of power consumption on mobile devices in 3G networks. Finally, we proposed low-delay and low-power approaches for video transmission over wireless personal area networks, including...

  2. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    Science.gov (United States)

    Morishima, Shigeo; Nakamura, Satoshi

    2004-12-01

    We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  3. Dual-Layer Video Encryption using RSA Algorithm

    Science.gov (United States)

    Chadha, Aman; Mallik, Sushmit; Chadha, Ankit; Johar, Ravdeep; Mani Roja, M.

    2015-04-01

    This paper proposes a video encryption algorithm using RSA and Pseudo Noise (PN) sequences, aimed at applications requiring sensitive video information transfers. The system is primarily designed to work with files encoded using the Audio Video Interleaved (AVI) codec, although it can be easily ported for use with Moving Picture Experts Group (MPEG) encoded files. The audio and video components of the source separately undergo two layers of encryption to ensure a reasonable level of security. Encryption of the video component involves applying the RSA algorithm followed by the PN-based encryption. Similarly, the audio component is first encrypted using PN and further subjected to encryption using the Discrete Cosine Transform. Combining these techniques, an efficient system, invulnerable to security breaches and attacks, with favorable values of parameters such as encryption/decryption speed, encryption/decryption ratio and visual degradation, has been put forth. For applications requiring encryption of sensitive data wherein stringent security requirements are of prime concern, the system is found to yield negligible similarities in visual perception between the original and the encrypted video sequence. For applications wherein visual similarity is not of major concern, we limit the encryption task to a single level of encryption which is accomplished by using RSA, thereby quickening the encryption process. Although some similarity between the original and encrypted video is observed in this case, it is not enough to comprehend the happenings in the video.
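
    To make the two ingredients concrete, the sketch below XORs frame bytes with an LFSR-generated pseudo-noise keystream and protects the keystream seed with textbook RSA. This is a simplified illustration, not the paper's exact AVI pipeline; the tiny key, LFSR taps and seed are toy assumptions, and a real system needs proper key sizes and padding.

    import numpy as np

    # Toy RSA key (illustrative small primes only).
    p, q = 61, 53
    n, e = p * q, 17
    d = pow(e, -1, (p - 1) * (q - 1))          # private exponent

    def pn_keystream(seed, length, taps=(16, 14, 13, 11)):
        """Pseudo-noise bytes from a 16-bit Fibonacci LFSR."""
        state, out = seed & 0xFFFF, bytearray()
        for _ in range(length):
            byte = 0
            for _ in range(8):
                bit = 0
                for t in taps:
                    bit ^= (state >> (t - 1)) & 1
                state = ((state << 1) | bit) & 0xFFFF
                byte = (byte << 1) | bit
            out.append(byte)
        return bytes(out)

    def encrypt_frame(frame_bytes, seed):
        ks = pn_keystream(seed, len(frame_bytes))
        cipher = bytes(a ^ b for a, b in zip(frame_bytes, ks))
        protected_seed = pow(seed, e, n)       # seed protected with the RSA public key
        return cipher, protected_seed

    def decrypt_frame(cipher, protected_seed):
        seed = pow(protected_seed, d, n)       # recover seed with the RSA private key
        ks = pn_keystream(seed, len(cipher))
        return bytes(a ^ b for a, b in zip(cipher, ks))

    frame = np.random.randint(0, 256, 32, dtype=np.uint8).tobytes()
    c, ps = encrypt_frame(frame, seed=0x0ACE)  # seed must stay below the RSA modulus
    assert decrypt_frame(c, ps) == frame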

  4. An e-health system for the elderly (Butler Project): a pilot study on acceptance and satisfaction.

    Science.gov (United States)

    Botella, Cristina; Etchemendy, Ernestina; Castilla, Diana; Baños, Rosa María; García-Palacios, Azucena; Quero, Soledad; Alcañiz, Mariano; Lozano, José Antonio

    2009-06-01

    The Butler Project is a technological e-health platform that uses the Internet to connect various users; it was designed to deliver health care to the elderly. The Butler platform has three levels of implementation: diagnosis (mood monitoring, alert system, management reports), therapy (training in inducing positive moods, memory work), and entertainment (e-mail, chat, video, photo albums, music, friend forums, accessibility to the Internet). The objective of this work is to describe the psychological aspects of the platform and to present data obtained from four users. Results show that after using the system, the participants increased their positive emotions and decreased their negative ones; in addition, they obtained high levels of satisfaction and experienced little difficulty in using the system.

  5. A "Journey in Feminist Theory Together": The "Doing Feminist Theory through Digital Video" Project

    Science.gov (United States)

    Hurst, Rachel Alpha Johnston

    2014-01-01

    "Doing Feminist Theory Through Digital Video" is an assignment I designed for my undergraduate feminist theory course, where students created a short digital video on a concept in feminist theory. I outline the assignment and the pedagogical and epistemological frameworks that structured the assignment (digital storytelling,…

  6. 47 CFR 76.1503 - Carriage of video programming providers on open video systems.

    Science.gov (United States)

    2010-10-01

    ... service showing that the Notice of Intent has been served on all local cable franchising authorities... video programming provider within five business days of receiving a written request from the provider...

  7. Real-time "x-ray vision" for healthcare simulation: an interactive projective overlay system to enhance intubation training and other procedural training.

    Science.gov (United States)

    Samosky, Joseph T; Baillargeon, Emma; Bregman, Russell; Brown, Andrew; Chaya, Amy; Enders, Leah; Nelson, Douglas A; Robinson, Evan; Sukits, Alison L; Weaver, Robert A

    2011-01-01

    We have developed a prototype of a real-time, interactive projective overlay (IPO) system that creates augmented reality display of a medical procedure directly on the surface of a full-body mannequin human simulator. These images approximate the appearance of both anatomic structures and instrument activity occurring within the body. The key innovation of the current work is sensing the position and motion of an actual device (such as an endotracheal tube) inserted into the mannequin and using the sensed position to control projected video images portraying the internal appearance of the same devices and relevant anatomic structures. The images are projected in correct registration onto the surface of the simulated body. As an initial practical prototype to test this technique we have developed a system permitting real-time visualization of the intra-airway position of an endotracheal tube during simulated intubation training.
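
    One simple way to picture the registration step is a planar calibration between the sensed coordinate frame and the projector image, so that a sensed tube-tip position can be drawn at the matching spot on the mannequin's surface. The Python/OpenCV sketch below uses a four-point homography for this; the calibration coordinates are placeholders, and the actual system's sensing and registration method may differ.

    import numpy as np
    import cv2

    # Four calibration correspondences: surface coordinates (mm) -> projector pixels.
    surface_pts   = np.float32([[0, 0], [300, 0], [300, 200], [0, 200]])
    projector_pts = np.float32([[152, 94], [880, 101], [871, 610], [160, 600]])

    H = cv2.getPerspectiveTransform(surface_pts, projector_pts)

    def to_projector(pt_mm):
        """Warp one sensed (x, y) surface position into projector pixel coordinates."""
        p = cv2.perspectiveTransform(np.float32([[pt_mm]]), H)
        return tuple(p[0, 0])

    print(to_projector((150.0, 100.0)))   # where to draw the tube-tip graphic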

  8. Roadside video data analysis deep learning

    CERN Document Server

    Verma, Brijesh; Stockwell, David

    2017-01-01

    This book highlights the methods and applications for roadside video data analysis, with a particular focus on the use of deep learning to solve roadside video data segmentation and classification problems. It describes system architectures and methodologies that are specifically built upon learning concepts for roadside video data processing, and offers a detailed analysis of the segmentation, feature extraction and classification processes. Lastly, it demonstrates the applications of roadside video data analysis including scene labelling, roadside vegetation classification and vegetation biomass estimation in fire risk assessment.

  9. Enumeration of Salmonids in the Okanogan Basin Using Underwater Video, Performance Period: October 2005 (Project Inception) - 31 December 2006.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Peter N.; Rayton, Michael D.; Nass, Bryan L.; Arterburn, John E.

    2007-06-01

    The Confederated Tribes of the Colville Reservation (Colville Tribes) identified the need for collecting baseline census data on the timing and abundance of adult salmonids in the Okanogan River Basin in order to determine basin and tributary-specific spawner distributions, evaluate the status and trends of natural salmonid production in the basin, document local fish populations, and augment existing fishery data. This report documents the design, installation, operation and evaluation of mainstem and tributary video systems in the Okanogan River Basin. The species-specific data collected by these fish enumeration systems are presented along with an evaluation of the operation of a facility that provides a count of fish using an automated method. Information collected by the Colville Tribes Fish & Wildlife Department, specifically the Okanogan Basin Monitoring and Evaluation Program (OBMEP), is intended to provide a relative abundance indicator for anadromous fish runs migrating past Zosel Dam and is not intended as an absolute census count. Okanogan Basin Monitoring and Evaluation Program collected fish passage data between October 2005 and December 2006. Video counting stations were deployed and data were collected at two locations in the basin: on the mainstem Okanogan River at Zosel Dam near Oroville, Washington, and on Bonaparte Creek, a tributary to the Okanogan River, in the town of Tonasket, Washington. Counts at Zosel Dam between 10 October 2005 and 28 February 2006 are considered partial, pilot year data as they were obtained from the operation of a single video array on the west bank fishway, and covered only a portion of the steelhead migration. A complete description of the apparatus and methodology can be found in 'Fish Enumeration Using Underwater Video Imagery - Operational Protocol' (Nass 2007). At Zosel Dam, totals of 57 and 481 adult Chinook salmon were observed with the video monitoring system in 2005 and 2006, respectively. Run

  10. Digital video recording and archiving in ophthalmic surgery

    Directory of Open Access Journals (Sweden)

    Raju Biju

    2006-01-01

    Full Text Available Currently most ophthalmic operating rooms are equipped with an analog video recording system [an analog charge-coupled device (CCD) camera for video grabbing and a Video Cassette Recorder for recording]. We discuss the various advantages of a digital video capture device, its archiving capabilities and our experience during the transition from analog to digital video recording and archiving. The basic terminology and concepts related to analog and digital video, along with the choice of hardware, software and formats for archiving, are discussed.

  11. Automated intelligent video surveillance system for ships

    Science.gov (United States)

    Wei, Hai; Nguyen, Hieu; Ramu, Prakash; Raju, Chaitanya; Liu, Xiaoqing; Yadegar, Jacob

    2009-05-01

    To protect naval and commercial ships from attack by terrorists and pirates, it is important to have automatic surveillance systems able to detect, identify, track and alert the crew to small watercraft that might pursue malicious intentions, while ruling out non-threat entities. Radar systems have limitations on the minimum detectable range and lack high-level classification power. In this paper, we present an innovative Automated Intelligent Video Surveillance System for Ships (AIVS3) as a vision-based solution for ship security. Capitalizing on advanced computer vision algorithms and practical machine learning methodologies, the developed AIVS3 is not only capable of efficiently and robustly detecting, classifying, and tracking various maritime targets, but is also able to fuse heterogeneous target information to interpret scene activities, associate targets with levels of threat, and issue the corresponding alerts/recommendations to the man-in-the-loop (MITL). AIVS3 has been tested in various maritime scenarios and has shown accurate and effective threat detection performance. By reducing the reliance on human eyes to monitor cluttered scenes, AIVS3 will save manpower while increasing the accuracy in detection and identification of asymmetric attacks for ship protection.

  12. Video repairing under variable illumination using cyclic motions.

    Science.gov (United States)

    Jia, Jiaya; Tai, Yu-Wing; Wu, Tai-Pang; Tang, Chi-Keung

    2006-05-01

    This paper presents a complete system capable of synthesizing a large number of pixels that are missing due to occlusion or damage in an uncalibrated input video. These missing pixels may correspond to the static background or cyclic motions of the captured scene. Our system employs user-assisted video layer segmentation, while the main processing in video repair is fully automatic. The input video is first decomposed into the color and illumination videos. The necessary temporal consistency is maintained by tensor voting in the spatio-temporal domain. Missing colors and illumination of the background are synthesized by applying image repairing. Finally, the occluded motions are inferred by spatio-temporal alignment of collected samples at multiple scales. We experimented on our system with some difficult examples with variable illumination, where the capturing camera can be stationary or in motion.

  13. System and Analysis for Low Latency Video Processing using Microservices

    OpenAIRE

    VASUKI BALASUBRAMANIAM, KARTHIKEYAN

    2017-01-01

    The evolution of big data processing and analysis has led to data-parallel frameworks such as Hadoop, MapReduce, Spark, and Hive, which are capable of analyzing large streams of data such as server logs, web transactions, and user reviews. Videos are one of the biggest sources of data and dominate the Internet traffic. Video processing on a large scale is critical and challenging as videos possess spatial and temporal features, which are not taken into account by the existing data-parallel fr...

  14. Video Sharing System Based on Wi-Fi Camera

    OpenAIRE

    Qidi Lin; Hewei Yu; Jinbin Huang; Weile Liang

    2015-01-01

    This paper introduces a video sharing platform based on WiFi, which consists of a camera, a mobile phone and a PC server. This platform can receive the wireless signal from the camera and show on the mobile phone the live video captured by the camera. In addition, it is able to send commands to the camera and control the rotation of the camera's pan-tilt holder. The platform can be applied to interactive teaching, monitoring of dangerous areas, and so on. Testing results show that the platform can share ...

  15. Validation of a new tool for automatic assessment of tremor frequency from video recordings

    Czech Academy of Sciences Publication Activity Database

    Uhríková, Z.; Šprdlík, Otakar; Hoskovcová, M.; Komárek, A.; Ulmanová, O.; Hlaváč, V.; Nugent, Ch. D.; Růžička, E.

    2011-01-01

    Vol. 198, No. 1 (2011), pp. 110-113 ISSN 0165-0270 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10750506 Keywords: tremor frequency * essential tremor * video analysis * Fourier transformation * accelerometry Subject RIV: BC - Control Systems Theory Impact factor: 1.980, year: 2011 http://library.utia.cas.cz/separaty/2011/TR/sprdlik-0359324.pdf

  16. Orbiter CCTV video signal noise analysis

    Science.gov (United States)

    Lawton, R. M.; Blanke, L. R.; Pannett, R. F.

    1977-01-01

    The amount of steady-state and transient noise which will couple to orbiter CCTV video signal wiring is predicted. The primary emphasis is on the interim system; however, some predictions are made concerning the operational system wiring in the cabin area. Noise sources considered are RF fields from on-board transmitters, precipitation static, induced lightning currents, and induced noise from adjacent wiring. The most significant source is noise coupled to video circuits from associated circuits in common connectors. Video signal crosstalk is the primary cause of steady-state interference, and mechanically switched control functions cause the largest induced transients.

  17. Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation

    OpenAIRE

    McCall, J C; Trivedi, Mohan Manubhai

    2006-01-01

    Driver-assistance systems that monitor driver intent, warn drivers of lane departures, or assist in vehicle guidance are all being actively considered. It is therefore important to take a critical look at key aspects of these systems, one of which is lane-position tracking. These driver-assistance objectives motivate the development of the novel "video-based lane estimation and tracking" (VioLET) system. The system is designed using steerable filters for robust and accurate lan...

  18. Genre-Specific Semantic Video Indexing

    NARCIS (Netherlands)

    Wu, J.; Worring, M.

    2010-01-01

    In many applications, we find large video collections from different genres where the user is often only interested in one or two specific video genres. So, when users are querying the system with a specific semantic concept, they are likely aiming at a genre-specific instantiation of this concept.

  19. Fast Aerial Video Stitching

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-10-01

    Full Text Available The highly efficient and robust stitching of aerial video captured by unmanned aerial vehicles (UAVs) is a challenging problem in the field of robot vision. Existing commercial image stitching systems have seen success with offline stitching tasks, but they cannot guarantee high-speed performance when dealing with online aerial video sequences. In this paper, we present a novel system which has the unique ability to stitch high-frame-rate aerial video at a speed of 150 frames per second (FPS). In addition, rather than using a high-speed vision platform such as FPGA or CUDA, our system runs on a normal personal computer. To achieve this, after careful comparison of the existing invariant features, we choose the FAST corner and binary descriptor for efficient feature extraction and representation, and present a spatially and temporally coherent filter to fuse the UAV motion information into the feature matching. The proposed filter can remove the majority of feature correspondence outliers and significantly increase the speed of robust feature matching by up to 20 times. To achieve a balance between robustness and efficiency, a dynamic key-frame-based stitching framework is used to reduce the accumulation of errors. Extensive experiments on challenging UAV datasets demonstrate that our approach can break through the speed limitation and generate an accurate stitched image for aerial video stitching tasks.
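
    The core stages (FAST/binary features, matching, and a frame-to-frame homography chained into a mosaic) can be sketched in a few lines of Python/OpenCV; the motion-coherence filter and key-frame logic from the paper are omitted, and the file name and parameter values are assumptions.

    import cv2
    import numpy as np

    orb = cv2.ORB_create(nfeatures=1000)          # FAST keypoints + binary descriptors
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    cap = cv2.VideoCapture("uav_sequence.mp4")    # placeholder input
    ok, prev = cap.read()
    kp_prev, des_prev = orb.detectAndCompute(prev, None)
    H_total = np.eye(3)                           # cumulative transform to mosaic frame

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        kp, des = orb.detectAndCompute(frame, None)
        matches = sorted(bf.match(des_prev, des), key=lambda m: m.distance)[:200]
        src = np.float32([kp_prev[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(dst, src, cv2.RANSAC, 4.0)   # map new frame into previous
        H_total = H_total @ H                                  # chain into the mosaic frame
        # cv2.warpPerspective(frame, H_total, mosaic_size) would paste it into the mosaic
        kp_prev, des_prev = kp, des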

  20. Performance comparison of AV1, HEVC, and JVET video codecs on 360 (spherical) video

    Science.gov (United States)

    Topiwala, Pankaj; Dai, Wei; Krishnan, Madhu; Abbas, Adeel; Doshi, Sandeep; Newman, David

    2017-09-01

    This paper compares the coding efficiency performance on 360 videos of three software codecs: (a) the AV1 video codec from the Alliance for Open Media (AOM); (b) the HEVC reference software HM; and (c) the JVET JEM reference software. Note that 360 video is especially challenging content, in that one codes at full resolution globally but typically looks locally (in a viewport), which magnifies errors. The codecs are tested in two different projection formats, ERP and RSP, to check consistency. Performance is tabulated for 1-pass encoding on two fronts: (1) objective performance based on end-to-end (E2E) metrics such as SPSNR-NN and WS-PSNR, currently developed in the JVET committee; and (2) informal subjective assessment of static viewports. Constant-quality encoding is performed with all three codecs for an unbiased comparison of the core coding tools. Our general conclusion is that under constant-quality coding, AV1 underperforms HEVC, which underperforms JVET. We also test with rate control, where AV1 currently underperforms the open-source x265 HEVC codec. Objective and visual evidence is provided.
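
    WS-PSNR for equirectangular content weights each pixel's squared error by the cosine of its latitude so that over-sampled polar rows do not dominate the score. The sketch below implements that standard ERP weighting in Python/numpy; it is a generic illustration consistent with the JVET definition, and the frame sizes here are synthetic.

    import numpy as np

    def ws_psnr_erp(ref, dist, max_val=255.0):
        """WS-PSNR for equirectangular (ERP) luma frames given as (H, W) arrays."""
        H, W = ref.shape
        rows = np.arange(H)
        w = np.cos((rows + 0.5 - H / 2.0) * np.pi / H)      # per-row latitude weight
        weights = np.tile(w[:, None], (1, W))
        err2 = (ref.astype(np.float64) - dist.astype(np.float64)) ** 2
        wmse = np.sum(weights * err2) / np.sum(weights)
        return 10.0 * np.log10(max_val ** 2 / wmse)

    # Toy check with synthetic frames:
    ref = np.random.randint(0, 256, (480, 960)).astype(np.float64)
    dist = ref + np.random.normal(0, 2.0, ref.shape)
    print(round(ws_psnr_erp(ref, dist), 2), "dB")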

  1. User-based key frame detection in social web video

    OpenAIRE

    Chorianopoulos, Konstantinos

    2012-01-01

    Video search results and suggested videos on web sites are represented with a video thumbnail, which is manually selected by the video up-loader among three randomly generated ones (e.g., YouTube). In contrast, we present a grounded user-based approach for automatically detecting interesting key-frames within a video through aggregated users' replay interactions with the video player. Previous research has focused on content-based systems that have the benefit of analyzing a video without use...

  2. Real-time embedded system for stereo video processing for multiview displays

    Science.gov (United States)

    Berretty, R.-P. M.; Riemens, A. K.; Machado, P. F.

    2007-02-01

    In video systems, the introduction of 3D video might be the next revolution after the introduction of color. Nowadays multiview auto-stereoscopic displays are entering the market. Such displays offer various views at the same time. Depending on their positions, the viewers' eyes see different images. Hence, the viewer's left eye receives a signal that is different from what the right eye gets; this gives, provided the signals have been properly processed, the impression of depth. New auto-stereoscopic products use an image-plus-depth interface. On the other hand, a growing number of 3D productions from the entertainment industry use a stereo format. In this paper, we show how to compute depth from the stereo signal to comply with the display interface format. Furthermore, we present a realisation suitable for a real-time cost-effective implementation on an embedded media processor.
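
    As a rough offline illustration of the stereo-to-depth conversion (the paper's own method targets a real-time embedded media processor), the Python/OpenCV sketch below computes a disparity map with semi-global matching on a rectified pair and converts it to depth. The file names, matcher parameters, focal length and baseline are assumptions.

    import cv2
    import numpy as np

    left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)   # rectified stereo pair
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                                 P1=8 * 5 * 5, P2=32 * 5 * 5, uniquenessRatio=10)
    disparity = sgbm.compute(left, right).astype(np.float32) / 16.0   # fixed-point -> pixels

    focal_px, baseline_m = 1000.0, 0.065          # assumed camera geometry
    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = focal_px * baseline_m / disparity[valid]           # metric depth (metres)

    # An 8-bit disparity (inverse-depth) map is a common image-plus-depth interface signal:
    depth_8bit = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)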

  3. An efficient HW and SW design of H.264 video compression, storage and playback on FPGA devices for handheld thermal imaging systems

    Science.gov (United States)

    Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih

    2017-05-01

    Video recording is an essential property of new generation military imaging systems. Playback of the stored video on the same device is also desirable as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storing, decoding, playback and other system functions on a single programmable chip, such as FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using FPGA fabric and Altera NIOS II soft processor. Many subblocks that are used in video encoding are also used during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed using FPGA fabric, while blocks such as SD card write/read, H.264 syntax decoding and CAVLC decoding are done using NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested using 640x480 25 fps thermal camera on CYCLONE V FPGA, which is the ALTERA's lowest power FPGA family, and consumes lower than 40% of CYCLONE V 5CEFA7 FPGA resources on average.

  4. Telemedicina: a multimedia broadband teleradiology and radiosurgery project

    Science.gov (United States)

    de Blas-Garcia, Pedro; Lopez-Viver, Rodolfo; Martinez, Demetrio; Ruiz, Ignacio; Barriuso, Daniel; Janez-Escalada, Luis; Gomez, Jose L.; Luyando, Luis

    1996-05-01

    Telemedicina is a Spanish project covering the teleradiology and radiosurgery areas. The project falls under the Spanish Broadband National Plan (PLANBA). The final technical tests over Ethernet and ATM are being completed, and the first clinical results are planned for February 1996. Two pilots will be installed: one in Madrid linking two sites through an ATM network (provided by Telefonica, the Spanish PTT), and a second in Asturias (northern Spain) using ISDN primary access (2 Mbps). The system handles still images, voice and video records, scanned documents, text and slides, allowing doctors to exchange these data using cooperative tools. The system is based on a multimedia Unix platform with voice, video and videoconference devices and boards. The platform will be used in several ways: desktop videoconferencing, primary diagnosis and review. Communications are based on ATM (over AAL5) at 155 Mbps and ISDN primary access. The protocol used in both networks is TCP/IP. The application is written in C++ (object-oriented design and programming) and C. The GUI is built under X-Windows and Motif. Video is coded as MJPEG using dedicated hardware. The system is integrated in a small, previously installed PACS; images are captured from modalities such as CT using the DICOM standard, and the system is connected to the Radiological Information System. The application allows collaborative work: telepointer, shared windows, editors and actions. The main novelties of the Telemedicina project are the incorporation of broadband networks (ATM at 155 Mbps) and the integration of collaborative work. These two aspects allow doctors to improve their work by speeding up the transmission and retrieval of medical records. The platform can also be used for several purposes, such as primary diagnosis, videoconferencing and review.

  5. Video Monitoring a Simulation-Based Quality Improvement Program in Bihar, India.

    Science.gov (United States)

    Dyer, Jessica; Spindler, Hilary; Christmas, Amelia; Shah, Malay Bharat; Morgan, Melissa; Cohen, Susanna R; Sterne, Jason; Mahapatra, Tanmay; Walker, Dilys

    2018-04-01

    Simulation-based training has become an accepted clinical training andragogy in high-resource settings with its use increasing in low-resource settings. Video recordings of simulated scenarios are commonly used by facilitators. Beyond using the videos during debrief sessions, researchers can also analyze the simulation videos to quantify technical and nontechnical skills during simulated scenarios over time. Little is known about the feasibility and use of large-scale systems to video record and analyze simulation and debriefing data for monitoring and evaluation in low-resource settings. This manuscript describes the process of designing and implementing a large-scale video monitoring system. Mentees and Mentors were consented and all simulations and debriefs conducted at 320 Primary Health Centers (PHCs) were video recorded. The system design, number of video recordings, and inter-rater reliability of the coded videos were assessed. The final dataset included a total of 11,278 videos. Overall, a total of 2,124 simulation videos were coded and 183 (12%) were blindly double-coded. For the double-coded sample, the average inter-rater reliability (IRR) scores were 80% for nontechnical skills, and 94% for clinical technical skills. Among 4,450 long debrief videos received, 216 were selected for coding and all were double-coded. Data quality of simulation videos was found to be very good in terms of recorded instances of "unable to see" and "unable to hear" in Phases 1 and 2. This study demonstrates that video monitoring systems can be effectively implemented at scale in resource limited settings. Further, video monitoring systems can play several vital roles within program implementation, including monitoring and evaluation, provision of actionable feedback to program implementers, and assurance of program fidelity.

  6. Real-time video compressing under DSP/BIOS

    Science.gov (United States)

    Chen, Qiu-ping; Li, Gui-ju

    2009-10-01

    This paper presents real-time MPEG-4 Simple Profile video compression based on a DSP processor. The video compression framework is built around a TMS320C6416 microprocessor, a TDS510 simulator and a PC. It uses the embedded real-time operating system DSP/BIOS and its API functions to build periodic functions, tasks and interrupts, realizing real-time video compression. To address data transfer within the system, and based on the architecture of the C64x DSP, double buffering and the EDMA data transfer controller are used to move data from external to internal memory, so that data transfer and processing take place at the same time; architecture-level optimizations are used to improve the software pipeline. The system uses DSP/BIOS for multi-thread scheduling and achieves high-speed transfer of large amounts of data. Experimental results show the encoder can perform real-time encoding of 768*576, 25 frame/s video images.

  7. Initial clinical experience with an interactive, video-based patient-positioning system for head and neck treatment

    International Nuclear Information System (INIS)

    Johnson, L.; Hadley, Scott W.; Milliken, Barrett D.; Pelizzari, Charles A.; Haraf, Daniel J.; Nguyen, Ai; Chen, George T.Y.

    1996-01-01

    Objective: To evaluate an interactive, video-based system for positioning head and neck patients. Materials and Methods: System hardware includes two B and W CCD cameras (mounted to provide left-lateral and AP-inferior views), zoom lenses, and a PC equipped with a frame grabber. Custom software is used to acquire and archive video images, as well as to display real-time subtraction images revealing patient misalignment in multiple views. Live subtraction images are obtained by subtracting a reference image (i.e., an image of the patient in the correct position) from real-time video. As seen in the figure, darker regions of the subtraction image indicate where the patient is currently, while lighter regions indicate where the patient should be. Adjustments in the patient's position are updated and displayed in less than 0.07s, allowing the therapist to interactively detect and correct setup discrepancies. Patients selected for study are treated BID and immobilized with conventional litecast straps attached to a baseframe which is registered to the treatment couch. Morning setups are performed by aligning litecast marks and patient anatomy to treatment room lasers. Afternoon setups begin with the same procedure, and then live subtraction images are used to fine-tune the setup. At morning and afternoon setups, video images and verification films are taken after positioning is complete. These are visually registered offline to determine the distribution of setup errors per patient, with and without video assistance. Results: Without video assistance, the standard deviation of setup errors typically ranged from 5 to 7mm and was patient-dependent. With video assistance, standard deviations are reduced to 1 to 4mm, with the result depending on patient cooperativeness and the length of time spent fine-tuning the setups. At current levels of experience, 3 to 4mm accuracy is easily achieved in about 30s, while 1 to 3mm accuracy is achieved in about 1 to 2 minutes. Studies
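
    As a rough illustration of the live subtraction-image display described above (not the clinic's actual software), the sketch below maps the difference between a reference frame and the live frame around mid-gray; the sign convention and scaling are assumptions.

    ```python
    # Subtraction-image sketch: misalignment shows up as regions darker or
    # lighter than mid-gray. Assumes 8-bit grayscale frames of equal size.
    import numpy as np

    def subtraction_view(reference, live):
        diff = reference.astype(np.int16) - live.astype(np.int16)
        # Halve the range and centre on 128 so aligned regions appear mid-gray.
        return np.clip(diff // 2 + 128, 0, 255).astype(np.uint8)
    ```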

  8. "Comuniquemonos, Ya]": strengthening interpersonal communication and health through video.

    Science.gov (United States)

    1992-01-01

    The Nutrition Communication Project has overseen production of a training video on interpersonal communication, entitled Comuniquemonos, Ya!, for health workers involved in growth monitoring and promotion (GMP) programs in Latin America. Producers used the following questions as their guidelines: Who is the audience? Why is the training needed? What are the objectives and advantages of using video? Communication specialists, anthropologists, educators, and nutritionists worked together to write the script, and video camera specialists then taped the video in Bolivia and Guatemala. A facilitator's guide, complete with an outline of an entire workshop, comes with the video. The guide encourages trainees to participate in various situations; trainees are able to compare their interpersonal skills with those of the health workers on the video and to determine cause and effect. The video has two scenes that demonstrate poor and good communication skills using the same health worker in both situations. Other scenes highlight six communication skills: developing a warm environment, asking questions, sharing results, listening, observing, and giving demonstrations. All types of health workers, ranging from physicians to community health workers, as well as health workers from various countries (Guatemala, Honduras, Bolivia, and Ecuador), approve of the video. Some trainers have used the video without the guide and comment that it started a debate on communication's role in GMP efforts.

  9. Adaptive live multicast video streaming of SVC with UEP FEC

    Science.gov (United States)

    Lev, Avram; Lasry, Amir; Loants, Maoz; Hadar, Ofer

    2014-09-01

    Ideally, video streaming systems should provide the best quality video a user's device can handle without compromising on downloading speed. In this article, an improved video transmission system is presented which dynamically enhances the video quality based on a user's current network state and repairs errors from data lost in the video transmission. The system incorporates three main components: Scalable Video Coding (SVC) with three layers, multicast based on Receiver Layered Multicast (RLM) and an Unequal Error Protection (UEP) Forward Error Correction (FEC) algorithm. The SVC provides an efficient method for providing different levels of video quality, stored as enhancement layers. In the presented system, a proportional-integral-derivative (PID) controller was implemented to dynamically adjust the video quality, adding or subtracting quality layers as appropriate. In addition, an FEC algorithm was added to compensate for data lost in transmission. A two-dimensional FEC was used; the FEC algorithm came from the Pro MPEG code of practice #3 release 2. Several bit-error scenarios (step function, cosine wave) were simulated with different bandwidths and error values. The suggested scheme, which combines three-layer SVC video encoding over IP multicast with the unequal FEC algorithm, was investigated under different channel conditions, variable bandwidths and different bit error rates. The results indicate improvement of the video quality in terms of PSNR over previous transmission schemes.
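
    The abstract names a PID controller that adds or removes SVC enhancement layers as network conditions change. The sketch below is only a minimal illustration of such a control loop; the gains, the bandwidth measurements and the mapping from controller output to a layer count are assumptions, not values from the paper.

    ```python
    # Minimal PID loop mapping measured bandwidth headroom to an SVC layer count.
    class LayerController:
        def __init__(self, kp=0.6, ki=0.1, kd=0.05, max_layers=3):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.max_layers = max_layers
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, available_kbps, required_kbps, dt):
            # Positive error means spare bandwidth; negative means congestion.
            error = (available_kbps - required_kbps) / max(required_kbps, 1.0)
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            output = self.kp * error + self.ki * self.integral + self.kd * derivative
            # Clamp the result to a whole number of layers between 1 and max_layers.
            return int(min(self.max_layers,
                           max(1, round(1 + output * (self.max_layers - 1)))))
    ```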

  10. Hierarchical event selection for video storyboards with a case study on snooker video visualization.

    Science.gov (United States)

    Parry, Matthew L; Legg, Philip A; Chung, David H S; Griffiths, Iwan W; Chen, Min

    2011-12-01

    Video storyboard, which is a form of video visualization, summarizes the major events in a video using illustrative visualization. There are three main technical challenges in creating a video storyboard, (a) event classification, (b) event selection and (c) event illustration. Among these challenges, (a) is highly application-dependent and requires a significant amount of application specific semantics to be encoded in a system or manually specified by users. This paper focuses on challenges (b) and (c). In particular, we present a framework for hierarchical event representation, and an importance-based selection algorithm for supporting the creation of a video storyboard from a video. We consider the storyboard to be an event summarization for the whole video, whilst each individual illustration on the board is also an event summarization but for a smaller time window. We utilized a 3D visualization template for depicting and annotating events in illustrations. To demonstrate the concepts and algorithms developed, we use Snooker video visualization as a case study, because it has a concrete and agreeable set of semantic definitions for events and can make use of existing techniques of event detection and 3D reconstruction in a reliable manner. Nevertheless, most of our concepts and algorithms developed for challenges (b) and (c) can be applied to other application areas. © 2010 IEEE

  11. Falling-incident detection and throughput enhancement in a multi-camera video-surveillance system.

    Science.gov (United States)

    Shieh, Wann-Yun; Huang, Ju-Chin

    2012-09-01

    For many elderly people, unpredictable falling incidents may occur at the corner of stairs or in a long corridor due to body frailty. If the rescue of a fallen elder who may be fainting is delayed, more serious injury may result. Traditional security or video surveillance systems require caregivers to monitor a centralized screen continuously, or require elders to wear sensors to detect falls, which wastes considerable human effort or causes inconvenience to elders. In this paper, we propose an automatic falling-detection algorithm and implement it in a multi-camera video surveillance system. The algorithm uses each camera to fetch images from the regions to be monitored, and then applies falling-pattern recognition to determine whether a falling incident has occurred. If so, the system sends short messages to designated contacts. The algorithm has been implemented on a DSP-based hardware acceleration board as a proof of functionality. Simulation results show that the accuracy of falling detection reaches at least 90% and that the throughput of a four-camera surveillance system can be improved by about 2.1 times. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
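
    The abstract does not spell out the falling-pattern recognition rule itself. A common stand-in heuristic, shown below purely for illustration, tracks the aspect ratio of the detected person's bounding box and flags a fall when a tall silhouette becomes wide and stays that way; the thresholds are assumed values, not the authors'.

    ```python
    # Heuristic fall check over a history of per-frame bounding boxes (width, height).
    def is_fall(history, ratio_threshold=1.3, hold_frames=15):
        if len(history) < hold_frames:
            return False
        recent = history[-hold_frames:]
        # A lying posture keeps width/height above the threshold for hold_frames.
        return all(w / max(h, 1) > ratio_threshold for w, h in recent)
    ```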

  12. Kalman Filter Based Tracking in a Video Surveillance System

    Directory of Open Access Journals (Sweden)

    SULIMAN, C.

    2010-05-01

    Full Text Available In this paper we develop a Matlab/Simulink-based model for monitoring a contact in a video surveillance sequence. For the segmentation process and correct identification of a contact in a surveillance video, we use the Horn-Schunck optical flow algorithm. The position and behavior of the correctly detected contact are monitored with the help of the traditional Kalman filter. We then compare the results obtained from the optical flow method with those obtained from the Kalman filter, and show the correct functionality of the Kalman filter based tracking. The tests were performed on video data taken with a fixed camera. The tested algorithm has shown promising results.
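
    As a minimal sketch of the tracking step described above (not the authors' Matlab/Simulink model), the following constant-velocity Kalman filter predicts and corrects a contact's image position; the noise covariances are assumed values.

    ```python
    # Constant-velocity Kalman filter for a 2-D image position.
    import numpy as np

    class Track2D:
        def __init__(self, x0, y0, dt=1.0):
            self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)  # [x, y, vx, vy]
            self.P = np.eye(4) * 100.0
            self.F = np.array([[1, 0, dt, 0],
                               [0, 1, 0, dt],
                               [0, 0, 1, 0],
                               [0, 0, 0, 1]], dtype=float)
            self.H = np.array([[1, 0, 0, 0],
                               [0, 1, 0, 0]], dtype=float)
            self.Q = np.eye(4) * 0.01   # process noise (assumed)
            self.R = np.eye(2) * 4.0    # measurement noise (assumed)

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:2]

        def update(self, zx, zy):
            z = np.array([zx, zy], dtype=float)
            y = z - self.H @ self.x                   # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
            return self.x[:2]
    ```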

  13. Earth System Science Project

    Science.gov (United States)

    Rutherford, Sandra; Coffman, Margaret

    2004-01-01

    For several decades, science teachers have used bottles for classroom projects designed to teach students about biology. Bottle projects do not have to just focus on biology, however. These projects can also be used to engage students in Earth science topics. This article describes the Earth System Science Project, which was adapted and developed…

  14. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    Directory of Open Access Journals (Sweden)

    Nakamura Satoshi

    2004-01-01

    Full Text Available We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  15. The live service of video geo-information

    Science.gov (United States)

    Xue, Wu; Zhang, Yongsheng; Yu, Ying; Zhao, Ling

    2016-03-01

    In disaster rescue, emergency response and similar situations, traditional aerial photogrammetry has difficulty meeting real-time monitoring and dynamic tracking demands. To achieve the live service of video geo-information, a system is designed and realized: an unmanned helicopter equipped with a video sensor, POS, and a high-band radio. This paper briefly introduces the concept and design of the system and lists the workflow of the video geo-information live service. Related experiments and some products are shown, and conclusions and an outlook are given.

  16. Reduced bandwidth video for remote vehicle operations

    Energy Technology Data Exchange (ETDEWEB)

    Noell, T.E.; DePiero, F.W.

    1993-08-01

    Oak Ridge National Laboratory staff have developed a video compression system for low-bandwidth remote operations. The objective is to provide real-time video at data rates comparable to available tactical radio links, typically 16 to 64 thousand bits per second (kbps), while maintaining sufficient quality to achieve mission objectives. The system supports both continuous lossy transmission of black and white (gray scale) video for remote driving and progressive lossless transmission of black and white images for remote automatic target acquisition. The average data rate of the resulting bit stream is 64 kbps. This system has been demonstrated to provide video of sufficient quality to allow remote driving of a High-Mobility Multipurpose Wheeled Vehicle at speeds up to 15 mph (24.1 kph) on a moguled dirt track. The nominal driving configuration provides a frame rate of 4 Hz, a compression per frame of 125:1, and a resulting latency of approximately 1 s. This paper reviews the system approach and implementation, and further describes some of our experiences when using the system to support remote driving.

  17. Developing user-centered concepts for language learning video games

    OpenAIRE

    Poels, Yorick; Annema, Jan Henk; Zaman, Bieke; Cornillie, Frederik

    2012-01-01

    This paper will report on an ongoing project which aims to develop video games for language learning through a user-centered and evidence-based approach. Therefore, codesign sessions were held with adolescents between 14 and 16 years old, in order to gain insight into their preferences for educational games for language learning. During these sessions, 11 concepts for video games were developed. We noticed a divide between the concepts for games that were oriented towa...

  18. Playing Action Video Games Improves Visuomotor Control.

    Science.gov (United States)

    Li, Li; Chen, Rongrong; Chen, Jing

    2016-08-01

    Can playing action video games improve visuomotor control? If so, can these games be used in training people to perform daily visuomotor-control tasks, such as driving? We found that action gamers have better lane-keeping and visuomotor-control skills than do non-action gamers. We then trained non-action gamers with action or nonaction video games. After they played a driving or first-person-shooter video game for 5 or 10 hr, their visuomotor control improved significantly. In contrast, non-action gamers showed no such improvement after they played a nonaction video game. Our model-driven analysis revealed that although different action video games have different effects on the sensorimotor system underlying visuomotor control, action gaming in general improves the responsiveness of the sensorimotor system to input error signals. The findings support a causal link between action gaming (for as little as 5 hr) and enhancement in visuomotor control, and suggest that action video games can be beneficial training tools for driving. © The Author(s) 2016.

  19. An optimized video system for augmented reality in endodontics: a feasibility study.

    Science.gov (United States)

    Bruellmann, D D; Tjaden, H; Schwanecke, U; Barth, P

    2013-03-01

    We propose an augmented reality system for the reliable detection of root canals in video sequences based on a k-nearest neighbor color classification and introduce a simple geometric criterion for teeth. The new software was implemented using C++, Qt, and the image processing library OpenCV. Teeth are detected in video images to restrict the segmentation of the root canal orifices by using a k-nearest neighbor algorithm. The locations of the root canal orifices were determined using Euclidean distance-based image segmentation. A set of 126 human teeth with known and verified locations of the root canal orifices was used for evaluation. The software detects root canal orifices for automatic classification of the teeth in video images and stores the location and size of the found structures. Overall 287 of 305 root canals were correctly detected. The overall sensitivity was about 94 %. Classification accuracy for molars ranged from 65.0 to 81.2 % and from 85.7 to 96.7 % for premolars. The realized software shows that observations made in anatomical studies can be exploited to automate real-time detection of root canal orifices and tooth classification with a software system. Automatic storage of location, size, and orientation of the found structures with this software can be used for future anatomical studies. Thus, statistical tables with canal locations will be derived, which can improve anatomical knowledge of the teeth to alleviate root canal detection in the future. For this purpose the software is freely available at: http://www.dental-imaging.zahnmedizin.uni-mainz.de/.
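
    The system itself is written in C++ with Qt and OpenCV; the sketch below only illustrates the k-nearest-neighbour colour classification step, here using scikit-learn. The training samples and the value of k are assumptions, since the abstract does not give them.

    ```python
    # k-NN colour classification: label pixels as canal-orifice-like or not.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def train_colour_classifier(tooth_pixels, canal_pixels, k=5):
        """Each argument is an (N, 3) array of RGB samples from labelled frames."""
        X = np.vstack([tooth_pixels, canal_pixels])
        y = np.concatenate([np.zeros(len(tooth_pixels)), np.ones(len(canal_pixels))])
        return KNeighborsClassifier(n_neighbors=k).fit(X, y)

    def canal_mask(classifier, frame_rgb):
        h, w, _ = frame_rgb.shape
        labels = classifier.predict(frame_rgb.reshape(-1, 3))
        return labels.reshape(h, w).astype(bool)  # True where the colour looks like an orifice
    ```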

  20. “Mass Centre” Vectorization Algorithm for Vehicle’s Counting Portable Video System

    Directory of Open Access Journals (Sweden)

    Gaidash Vladislav

    2016-12-01

    Full Text Available Vehicle counting is one of the most basic challenges in the development and deployment of Intelligent Transport Systems (ITS). The main reason for vehicle counting is the need to monitor and maintain the transport infrastructure and to prevent various kinds of faults such as traffic jams. The main applied solution to this problem is video surveillance, implemented by different kinds of systems. Some of these systems use networks of static traffic cameras, which are expensive to establish and maintain, while others use mobile units, which are quick to redeploy but less varied.
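
    As a minimal sketch of the "mass centre" idea, the snippet below reduces a detected vehicle blob to a single trackable point. The binary foreground mask is an assumed input, since the full algorithm is not reproduced in this record.

    ```python
    # Centre of mass of one vehicle's foreground pixels.
    import numpy as np

    def mass_centre(mask):
        """mask: 2-D boolean array marking one vehicle's foreground pixels."""
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            return None
        return float(xs.mean()), float(ys.mean())  # (x, y) centroid in pixels
    ```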

  1. An evaluation of the benefits and challenges of video consulting between general practitioners and residential aged care facilities.

    Science.gov (United States)

    Wade, Victoria; Whittaker, Frank; Hamlyn, Jeremy

    2015-12-01

    This research evaluated a project that provided video consultations between general practitioners (GPs) and residential aged care facilities (RACFs), with the aim of enabling faster access to medical care and avoidance of unnecessary hospital transfers. GPs were paid for video consultations at a rate equivalent to existing insurance reimbursement for supporting telehealth services. Evaluation data were gathered by direct observation at the project sites, semi-structured interviews and video call data from the technical network. Three pairs of general practices and RACFs were recruited to the project. 40 video consultations eligible for payment occurred over a 6 month period, three of which were judged to have avoided hospital attendance. The process development and change management aspects of the project required substantially more effort than was anticipated. This was due to problems with RACF technical infrastructure, the need for repeated training and awareness raising in RACFs, the challenge of establishing new clinical procedures, the short length of the project and broader difficulties in the relationships between GPs and RACFs. Video consulting between GPs and RACFs was clinically useful and avoided hospital attendance on a small scale, but further focus on process development is needed to embed this as a routine method of service delivery. © The Author(s) 2015.

  2. Data Management Rubric for Video Data in Organismal Biology.

    Science.gov (United States)

    Brainerd, Elizabeth L; Blob, Richard W; Hedrick, Tyson L; Creamer, Andrew T; Müller, Ulrike K

    2017-07-01

    Standards-based data management facilitates data preservation, discoverability, and access for effective data reuse within research groups and across communities of researchers. Data sharing requires community consensus on standards for data management, such as storage and formats for digital data preservation, metadata (i.e., contextual data about the data) that should be recorded and stored, and data access. Video imaging is a valuable tool for measuring time-varying phenotypes in organismal biology, with particular application for research in functional morphology, comparative biomechanics, and animal behavior. The raw data are the videos, but videos alone are not sufficient for scientific analysis. Nearly endless videos of animals can be found on YouTube and elsewhere on the web, but these videos have little value for scientific analysis because essential metadata such as true frame rate, spatial calibration, genus and species, weight, age, etc. of organisms, are generally unknown. We have embarked on a project to build community consensus on video data management and metadata standards for organismal biology research. We collected input from colleagues at early stages, organized an open workshop, "Establishing Standards for Video Data Management," at the Society for Integrative and Comparative Biology meeting in January 2017, and then collected two more rounds of input on revised versions of the standards. The result we present here is a rubric consisting of nine standards for video data management, with three levels within each standard: good, better, and best practices. The nine standards are: (1) data storage; (2) video file formats; (3) metadata linkage; (4) video data and metadata access; (5) contact information and acceptable use; (6) camera settings; (7) organism(s); (8) recording conditions; and (9) subject matter/topic. The first four standards address data preservation and interoperability for sharing, whereas standards 5-9 establish minimum metadata

  3. Video camera use at nuclear power plants

    International Nuclear Information System (INIS)

    Estabrook, M.L.; Langan, M.O.; Owen, D.E.

    1990-08-01

    A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations, and to determine the equipment used and the benefits realized. Basic closed circuit television camera (CCTV) systems are described and video camera operation principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposable cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use, mainly reduced radiation exposure and increased productivity, are discussed and quantified. 15 refs., 6 figs

  4. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yasaman Samei

    2008-08-01

    Full Text Available Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, along with the availability of CMOS cameras, microphones and small-scale array sensors that may ubiquitously capture multimedia content from the field, has fostered the development of low-cost, limited-resource Wireless Video-based Sensor Networks (WVSN). With regard to the constraints of video-based sensor nodes and wireless sensor networks, supporting a video stream is not easy to implement with present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture involves three layers of the communication protocol stack and considers the constraints of wireless video sensor nodes, such as limited processing and energy resources, while video quality is preserved at the receiver side. The compression protocol, transport protocol, and routing protocol are proposed in the application, transport, and network layers respectively, and a dropping scheme is also presented in the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.

  5. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks.

    Science.gov (United States)

    Aghdasi, Hadi S; Abbaspour, Maghsoud; Moghadam, Mohsen Ebrahimi; Samei, Yasaman

    2008-08-04

    Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, along with the availability of CMOS cameras, microphones and small-scale array sensors that may ubiquitously capture multimedia content from the field, has fostered the development of low-cost, limited-resource Wireless Video-based Sensor Networks (WVSN). With regard to the constraints of video-based sensor nodes and wireless sensor networks, supporting a video stream is not easy to implement with present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture involves three layers of the communication protocol stack and considers the constraints of wireless video sensor nodes, such as limited processing and energy resources, while video quality is preserved at the receiver side. The compression protocol, transport protocol, and routing protocol are proposed in the application, transport, and network layers respectively, and a dropping scheme is also presented in the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.

  6. Utilizing a scale model solar system project to visualize important planetary science concepts and develop technology and spatial reasoning skills

    Science.gov (United States)

    Kortenkamp, Stephen J.; Brock, Laci

    2016-10-01

    Scale model solar systems have been used for centuries to help educate young students and the public about the vastness of space and the relative sizes of objects. We have adapted the classic scale model solar system activity into a student-driven project for an undergraduate general education astronomy course at the University of Arizona. Students are challenged to construct and use their three dimensional models to demonstrate an understanding of numerous concepts in planetary science, including: 1) planetary obliquities, eccentricities, inclinations; 2) phases and eclipses; 3) planetary transits; 4) asteroid sizes, numbers, and distributions; 5) giant planet satellite and ring systems; 6) the Pluto system and Kuiper belt; 7) the extent of space travel by humans and robotic spacecraft; 8) the diversity of extrasolar planetary systems. Secondary objectives of the project allow students to develop better spatial reasoning skills and gain familiarity with technology such as Excel formulas, smart-phone photography, and audio/video editing.During our presentation we will distribute a formal description of the project and discuss our expectations of the students as well as present selected highlights from preliminary submissions.

  7. Content-based video retrieval by example video clip

    Science.gov (United States)

    Dimitrova, Nevenka; Abdel-Mottaleb, Mohamed

    1997-01-01

    This paper presents a novel approach for video retrieval from a large archive of MPEG or Motion JPEG compressed video clips. We introduce a retrieval algorithm that takes a video clip as a query and searches the database for clips with similar contents. Video clips are characterized by a sequence of representative frame signatures, which are constructed from DC coefficients and motion information (`DC+M' signatures). The similarity between two video clips is determined by using their respective signatures. This method facilitates retrieval of clips for the purpose of video editing, broadcast news retrieval, or copyright violation detection.
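
    As a hedged illustration of frame-signature matching in the spirit of the 'DC+M' signatures (the exact signature layout and distance measure are not given in this record), the sketch below reduces each frame to a grid of block means, a rough stand-in for DC coefficients, and compares clips by the average L1 distance of their per-frame signatures.

    ```python
    # Block-mean frame signatures and a simple clip-to-clip distance.
    import numpy as np

    def dc_signature(gray_frame, grid=8):
        h, w = gray_frame.shape
        bh, bw = h // grid, w // grid
        blocks = gray_frame[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw)
        return blocks.mean(axis=(1, 3)).ravel()   # grid*grid block means

    def clip_distance(sigs_a, sigs_b):
        """sigs_a, sigs_b: equal-length lists of per-frame signatures."""
        return float(np.mean([np.abs(a - b).mean() for a, b in zip(sigs_a, sigs_b)]))
    ```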

  8. Characterizing popularity dynamics of online videos

    Science.gov (United States)

    Ren, Zhuo-Ming; Shi, Yu-Qiang; Liao, Hao

    2016-07-01

    Online popularity has a major impact on videos, music, news and other content in online systems. Characterizing online popularity dynamics is a natural way to explain the observed properties in terms of the popularity each item has already acquired. In this paper, we provide a quantitative, large-scale, temporal analysis of the popularity dynamics in two online video-providing websites, namely MovieLens and Netflix. The two collected data sets contain over 100 million records and span a decade. We show that the popularity dynamics of online videos evolve over time, and find that they can be characterized by burst behaviors, typically occurring in the early life span of a video, and later settling into the classic preferential popularity-increase mechanism.

  9. An integrated video- and weight-monitoring system for the surveillance of highly enriched uranium blend down operations

    International Nuclear Information System (INIS)

    Lenarduzzi, R.; Castleberry, K.; Whitaker, M.; Martinez, R.

    1998-01-01

    An integrated video-surveillance and weight-monitoring system has been designed and constructed for tracking the blending down of weapons-grade uranium by the US Department of Energy. The instrumentation is being used by the International Atomic Energy Agency in its task of tracking and verifying the blended material at the Portsmouth Gaseous Diffusion Plant, Portsmouth, Ohio. The weight instrumentation developed at the Oak Ridge National Laboratory monitors and records the weight of cylinders of the highly enriched uranium as their contents are fed into the blending facility, while the video equipment provided by Sandia National Laboratory records periodic and event-triggered images of the blending area. A secure data network between the scales, cameras, and computers ensures data integrity and eliminates the possibility of tampering. The details of the weight monitoring instrumentation, the video- and weight-system interaction, and the secure data network are discussed.

  10. The Effect of the Instructional Media Based on Lecture Video and Slide Synchronization System on Statistics Learning Achievement

    Directory of Open Access Journals (Sweden)

    Partha Sindu I Gede

    2018-01-01

    Full Text Available The purpose of this study was to determine the effect of instructional media based on a lecture video and slide synchronization system on the Statistics learning achievement of students in the PTI department. The benefit of this research is to help lecturers in the instructional process and to improve students' learning achievements, leading to better learning outcomes. Students can use instructional media created with the lecture video and slide synchronization system to support more interactive self-learning activities, and can conduct learning activities more efficiently because the synchronized lecture video and slides assist them in the learning process. The population of this research was all students of semester VI (six) majoring in Informatics Engineering Education. The sample was the students of classes VI B and VI D of the academic year 2016/2017. The study used a quasi-experimental, post-test-only non-equivalent control group design. The research concluded that the instructional media based on the lecture video and slide synchronization system had a significant effect on the Statistics learning achievement of students in the PTI department.

  11. Intelligent keyframe extraction for video printing

    Science.gov (United States)

    Zhang, Tong

    2004-10-01

    Nowadays most digital cameras have the functionality of taking short video clips, with the length of video ranging from several seconds to a couple of minutes. The purpose of this research is to develop an algorithm which extracts an optimal set of keyframes from each short video clip so that the user could obtain proper video frames to print out. In current video printing systems, keyframes are normally obtained by evenly sampling the video clip over time. Such an approach, however, may not reflect highlights or regions of interest in the video. Keyframes derived in this way may also be improper for video printing in terms of either content or image quality. In this paper, we present an intelligent keyframe extraction approach to derive an improved keyframe set by performing semantic analysis of the video content. For a video clip, a number of video and audio features are analyzed to first generate a candidate keyframe set. These features include accumulative color histogram and color layout differences, camera motion estimation, moving object tracking, face detection and audio event detection. Then, the candidate keyframes are clustered and evaluated to obtain a final keyframe set. The objective is to automatically generate a limited number of keyframes to show different views of the scene; to show different people and their actions in the scene; and to tell the story in the video shot. Moreover, frame extraction for video printing, which is a rather subjective problem, is considered in this work for the first time, and a semi-automatic approach is proposed.
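
    The full approach combines colour, motion, face and audio cues; the sketch below illustrates only one ingredient, scoring candidate keyframes by colour-histogram change with OpenCV. The Bhattacharyya threshold is an assumed value.

    ```python
    # Pick candidate keyframes where the colour histogram changes strongly
    # relative to the last kept frame.
    import cv2

    def candidate_keyframes(video_path, threshold=0.35):
        cap = cv2.VideoCapture(video_path)
        candidates, prev_hist, index = [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                                [0, 256, 0, 256, 0, 256])
            hist = cv2.normalize(hist, hist).flatten()
            if prev_hist is None or cv2.compareHist(
                    prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
                candidates.append(index)   # large change since the last kept frame
                prev_hist = hist
            index += 1
        cap.release()
        return candidates
    ```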

  12. Three-dimensional, automated, real-time video system for tracking limb motion in brain-machine interface studies.

    Science.gov (United States)

    Peikon, Ian D; Fitzsimmons, Nathan A; Lebedev, Mikhail A; Nicolelis, Miguel A L

    2009-06-15

    Collection and analysis of limb kinematic data are essential components of the study of biological motion, including research into biomechanics, kinesiology, neurophysiology and brain-machine interfaces (BMIs). In particular, BMI research requires advanced, real-time systems capable of sampling limb kinematics with minimal contact to the subject's body. To answer this demand, we have developed an automated video tracking system for real-time tracking of multiple body parts in freely behaving primates. The system employs high-contrast markers painted on the animal's joints to continuously track the three-dimensional positions of their limbs during activity. Two-dimensional coordinates captured by each video camera are combined and converted to three-dimensional coordinates using a quadratic fitting algorithm. Real-time operation of the system is accomplished using direct memory access (DMA). The system tracks the markers at a rate of 52 frames per second (fps) in real-time and up to 100fps if video recordings are captured to be later analyzed off-line. The system has been tested in several BMI primate experiments, in which limb position was sampled simultaneously with chronic recordings of the extracellular activity of hundreds of cortical cells. During these recordings, multiple computational models were employed to extract a series of kinematic parameters from neuronal ensemble activity in real-time. The system operated reliably under these experimental conditions and was able to compensate for marker occlusions that occurred during natural movements. We propose that this system could also be extended to applications that include other classes of biological motion.

  13. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 camera and a GPS device is discussed. The number of panoramas is very high for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible measurements off-site. The second approach is the generation of 3D video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and immersive video, separated into thousands of still panoramas, was converted from video into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry seems to be a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  14. Student-Created Homework Problems Based on YouTube Videos

    Science.gov (United States)

    Liberatore, Matthew W.; Marr, David W. M.; Herring, Andrew M.; Way, J. Douglas

    2013-01-01

    Inspired by YouTube videos, students created homework problems as part of a class project. The project has been successful at different parts of the semester and demonstrated learning of course concepts. These new problems were implemented both in class and as part of homework assignments without significant changes. Examples from a material and…

  15. A simple video-based timing system for on-ice team testing in ice hockey: a technical report.

    Science.gov (United States)

    Larson, David P; Noonan, Benjamin C

    2014-09-01

    The purpose of this study was to describe and evaluate a newly developed on-ice timing system for team evaluation in the sport of ice hockey. We hypothesized that this new, simple, inexpensive, timing system would prove to be highly accurate and reliable. Six adult subjects (age 30.4 ± 6.2 years) performed on ice tests of acceleration and conditioning. The performance times of the subjects were recorded using a handheld stopwatch, photocell, and high-speed (240 frames per second) video. These results were then compared to allow for accuracy calculations of the stopwatch and video as compared with filtered photocell timing that was used as the "gold standard." Accuracy was evaluated using maximal differences, typical error/coefficient of variation (CV), and intraclass correlation coefficients (ICCs) between the timing methods. The reliability of the video method was evaluated using the same variables in a test-retest analysis both within and between evaluators. The video timing method proved to be both highly accurate (ICC: 0.96-0.99 and CV: 0.1-0.6% as compared with the photocell method) and reliable (ICC and CV within and between evaluators: 0.99 and 0.08%, respectively). This video-based timing method provides a very rapid means of collecting a high volume of very accurate and reliable on-ice measures of skating speed and conditioning, and can easily be adapted to other testing surfaces and parameters.

  16. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. This is by far the most informative analog and digital video reference available, includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  17. Video pedagogy

    OpenAIRE

    Länsitie, Janne; Stevenson, Blair; Männistö, Riku; Karjalainen, Tommi; Karjalainen, Asko

    2016-01-01

    The short film is an introduction to the concept of video pedagogy. The five categories of video pedagogy further elaborate how videos can be used as a part of instruction and learning process. Most pedagogical videos represent more than one category. A video itself doesn’t necessarily define the category – the ways in which the video is used as a part of pedagogical script are more defining factors. What five categories did you find? Did you agree with the categories, or are more...

  18. Community-made mobile videos as a mechanism for maternal ...

    African Journals Online (AJOL)

    Keywords: Community-made mobile videos, maternal, newborn, child health education, rural Uganda, a qualitative ... munications need to engage participants at a social level ... Health, Global Health Media project and a representative.

  19. DASH-based network performance-aware solution for personalised video delivery systems

    OpenAIRE

    Rovcanin, Lejla

    2016-01-01

    Video content is an increasingly prevalent contributor of Internet traffic. The proliferation of available video content has been fuelled by both Internet expansion and the growing power and affordability of viewing devices. Such content can be consumed anywhere and anytime, using a variety of technologies. The high data rates required for streaming video content and the large volume of requests for such content degrade network performance when devices compete for finite network bandwidth. Th...

  20. Content-based TV sports video retrieval using multimodal analysis

    Science.gov (United States)

    Yu, Yiqing; Liu, Huayong; Wang, Hongbin; Zhou, Dongru

    2003-09-01

    In this paper, we propose content-based video retrieval, that is, retrieval by semantic content. Because video data is composed of multimodal information streams such as visual, auditory and textual streams, we describe a strategy of using multimodal analysis for automatically parsing sports video. The paper first defines the basic structure of a sports video database system, and then introduces a new approach that integrates visual stream analysis, speech recognition, speech signal processing and text extraction to realize video retrieval. The experimental results for TV sports video of football games indicate that the multimodal analysis is effective for video retrieval, allowing users to quickly browse tree-like video clips or enter keywords within a predefined domain.

  1. Student’s Video Production as Formative Assessment

    Directory of Open Access Journals (Sweden)

    Eduardo Gama

    2017-04-01

    Full Text Available Learning assessments are subject of discussion both in their theoretical and practical approaches. The process of measuring learning in physics by high school students, either qualitatively or quantitatively, is one in which it should be possible to identify not only the concepts and contents students failed to achieve but also the reasons for the failure. We propose that students’ video production offers a very effective formative assessment tool to teachers: as a formative assessment, it produces information that allows the understanding of where and when the learning process succeeded or failed, of identifying, as a subject or as a group, the deficiencies or misunderstandings related to the theme under analysis and their interpretation by students, and it provides also a different kind of assessment, related to some other life skills, such as ability to carry on a project till its conclusion and to work cooperatively. In this paper, we describe the use of videos produced by high school students as an assessment resource. The students were asked to prepare a short video, which was then presented to the whole group and discussed. The videos reveal aspects of students’ difficulties that usually do not appear in formal assessments such as tests and questionnaires. After the use of the videos as a component of classroom assessments and the use of the discussions to rethink learning activities in the group, the videos were analysed and classified in various categories. This analysis showed a strong correlation between the technical quality of the video and the content quality of the students’ argumentation. Also, it was shown that the students do not prepare their video based on quick and easy production; they usually choose forms of video production that require careful planning and implementation, and this reflects directly on the overall quality of the video and of the learning process.

  2. Flow Genres: The Varieties of Video Game Experience

    Czech Academy of Sciences Publication Activity Database

    Hrabec, O.; Chrz, Vladimír

    2015-01-01

    Roč. 7, č. 1 (2015), s. 1-19 ISSN 1942-3888 R&D Projects: GA ČR(CZ) GAP407/12/2432 Institutional support: RVO:68081740 Keywords : flow * optimal experience * genre * video game Subject RIV: AN - Psychology

  3. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    Science.gov (United States)

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

    As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as the knowledge of the surgical workflow model (SWM) to support their intuitive cooperation with surgeons. For generating a robust and reliable SWM, a large amount of training data is required. However, training data collected by physically recording surgery operations is often limited, and data collection is time-consuming and labor-intensive, severely limiting the knowledge scalability of the surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost and labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for the robotic cholecystectomy surgery. The generated workflow was evaluated by 4 web-retrieved videos and 4 operation-room-recorded videos, respectively. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. The satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising for scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Design and Implementation of Dual-Mode Wireless Video Monitoring System

    Directory of Open Access Journals (Sweden)

    BAO Song-Jian

    2014-10-01

    Full Text Available Dual-mode wireless video transmission has two major problems. First, the difference in time delay between the two links causes asynchronous reception and decoding frame errors; second, the mismatch in bandwidth between the two networks causes a scheduling problem. To solve these two problems, a TD-SCDMA/CDMA2000 1x dual-mode wireless video transmission design method is proposed. To address the decoding frame errors, the design adds frame identification and packet preprocessing at the sending end and synchronized recombination at the receiving end. To address the scheduling problem, a cooperative-channel video data transmission scheduling and management algorithm is proposed.

  5. An introduction to video image compression and authentication technology for safeguards applications

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1995-01-01

    Verification of a video image has been a major problem for safeguards for several years. Various verification schemes have been tried on analog video signals ever since the mid-1970's. These schemes have provided a measure of protection but have never been widely adopted. The development of reasonably priced complex video processing integrated circuits makes it possible to digitize a video image and then compress the resulting digital file into a smaller file without noticeable loss of resolution. Authentication and/or encryption algorithms can be more easily applied to digital video files that have been compressed. The compressed video files require less time for algorithm processing and image transmission. An important safeguards application for authenticated, compressed, digital video images is in unattended video surveillance systems and remote monitoring systems. The use of digital images in the surveillance system makes it possible to develop remote monitoring systems that send images over narrow bandwidth channels such as the common telephone line. This paper discusses the video compression process, authentication algorithm, and data format selected to transmit and store the authenticated images
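
    The safeguards systems discussed above use their own authentication algorithms and data formats; the sketch below only illustrates the general idea of binding a compressed image file to a secret key with a keyed hash (HMAC) so that later tampering is detectable.

    ```python
    # Keyed-hash authentication of a compressed image file.
    import hmac
    import hashlib

    def sign_image(jpeg_bytes: bytes, key: bytes) -> bytes:
        return hmac.new(key, jpeg_bytes, hashlib.sha256).digest()

    def verify_image(jpeg_bytes: bytes, key: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(sign_image(jpeg_bytes, key), tag)
    ```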

  6. Turning Video Resource Management into Cloud Computing

    Directory of Open Access Journals (Sweden)

    Weili Kou

    2016-07-01

    Full Text Available Big data makes cloud computing more and more popular in various fields. Video resources are very useful and important to education, security monitoring, and so on. However, issues of their huge volumes, complex data types, inefficient processing performance, weak security, and long times for loading pose challenges in video resource management. The Hadoop Distributed File System (HDFS is an open-source framework, which can provide cloud-based platforms and presents an opportunity for solving these problems. This paper presents video resource management architecture based on HDFS to provide a uniform framework and a five-layer model for standardizing the current various algorithms and applications. The architecture, basic model, and key algorithms are designed for turning video resources into a cloud computing environment. The design was tested by establishing a simulation system prototype.

  7. 100 Million Views of Electronic Cigarette YouTube Videos and Counting: Quantification, Content Evaluation, and Engagement Levels of Videos.

    Science.gov (United States)

    Huang, Jidong; Kornfield, Rachel; Emery, Sherry L

    2016-03-18

    The video-sharing website, YouTube, has become an important avenue for product marketing, including tobacco products. It may also serve as an important medium for promoting electronic cigarettes, which have rapidly increased in popularity and are heavily marketed online. While a few studies have examined a limited subset of tobacco-related videos on YouTube, none has explored e-cigarette videos' overall presence on the platform. To quantify e-cigarette-related videos on YouTube, assess their content, and characterize levels of engagement with those videos. Understanding promotion and discussion of e-cigarettes on YouTube may help clarify the platform's impact on consumer attitudes and behaviors and inform regulations. Using an automated crawling procedure and keyword rules, e-cigarette-related videos posted on YouTube and their associated metadata were collected between July 1, 2012, and June 30, 2013. Metadata were analyzed to describe posting and viewing time trends, number of views, comments, and ratings. Metadata were content coded for mentions of health, safety, smoking cessation, promotional offers, Web addresses, product types, top-selling brands, or names of celebrity endorsers. As of June 30, 2013, approximately 28,000 videos related to e-cigarettes were captured. Videos were posted by approximately 10,000 unique YouTube accounts, viewed more than 100 million times, rated over 380,000 times, and commented on more than 280,000 times. More than 2200 new videos were being uploaded every month by June 2013. The top 1% of most-viewed videos accounted for 44% of total views. Text fields for the majority of videos mentioned websites (70.11%); many referenced health (13.63%), safety (10.12%), smoking cessation (9.22%), or top e-cigarette brands (33.39%). The number of e-cigarette-related YouTube videos was projected to exceed 65,000 by the end of 2014, with approximately 190 million views. YouTube is a major information-sharing platform for electronic cigarettes

  8. Enabling 'Togetherness' in High-Quality Domestic Video Conferencing

    NARCIS (Netherlands)

    I. Kegel; P.S. Cesar Garcia (Pablo Santiago); A.J. Jansen (Jack); D.C.A. Bulterman (Dick); J. Kort; T. Stevens; N. Farber

    2012-01-01

    Low-cost video conferencing systems have provided an existence proof for the value of video communication in a home setting. At the same time, current systems have a number of fundamental limitations that inhibit more general social interactions among multiple groups of participants. In

  9. Advanced EVA Suit Camera System Development Project

    Science.gov (United States)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera that is currently being tested to be able to make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Design to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the options of the camera placement may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary and these comparisons will be used as further progress is made for the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was

  10. Privacy enabling technology for video surveillance

    Science.gov (United States)

    Dufaux, Frédéric; Ouaret, Mourad; Abdeljaoued, Yousri; Navarro, Alfonso; Vergnenègre, Fabrice; Ebrahimi, Touradj

    2006-05-01

    In this paper, we address the problem of privacy in video surveillance. We propose an efficient solution based on transform-domain scrambling of regions of interest in a video sequence. More specifically, the sign of selected transform coefficients is flipped during encoding. We specifically address the case of Motion JPEG 2000. Simulation results show that the technique can be successfully applied to conceal information in regions of interest in the scene while providing a good level of security. Furthermore, the scrambling is flexible and allows adjusting the amount of distortion introduced. This is achieved with a small impact on coding performance and a negligible increase in computational complexity. In the proposed video surveillance system, heterogeneous clients can remotely access the system through the Internet or 2G/3G mobile phone networks. Thanks to the inherently scalable Motion JPEG 2000 codestream, the server is able to adapt the resolution and bandwidth of the delivered video depending on the usage environment of the client.
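
    The core scrambling idea, flipping the sign of pseudo-randomly selected transform coefficients inside the region of interest, can be sketched as follows (Python; a plain 2-D DCT stands in for the Motion JPEG 2000 wavelet transform, and the key handling and flip density are illustrative assumptions):

        import numpy as np
        from scipy.fft import dctn, idctn

        def scramble_roi(roi_gray: np.ndarray, key: int, density: float = 0.5) -> np.ndarray:
            """Flip the sign of pseudo-randomly chosen transform coefficients of an ROI."""
            coeffs = dctn(roi_gray.astype(np.float64), norm="ortho")
            rng = np.random.default_rng(key)            # the seed plays the role of the scrambling key
            flip = rng.random(coeffs.shape) < density   # which coefficients get flipped
            coeffs[flip] *= -1.0
            return np.clip(idctn(coeffs, norm="ortho"), 0, 255).astype(np.uint8)

        # Running the same function again with the same key flips the same signs back, so it
        # approximately descrambles the region (exactly so when the flipping is applied to the
        # encoder's quantized coefficients, as in the paper, rather than to pixel values).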

  11. Video Monitoring and Analysis System for Vivarium Cage Racks | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    This invention pertains to a system for continuous observation of rodents in home-cage environments with the specific aim to facilitate the quantification of activity levels and behavioral patterns for mice housed in a commercial ventilated cage rack.  The National Cancer Institute’s Radiation Biology Branch seeks partners interested in collaborative research to co-develop a video monitoring system for laboratory animals.

  12. Video-Voice Project (Zambia) | CRDI - Centre de recherches pour le ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    This approach requires an informed citizenry, however, at a time when the country is facing increased poverty, an increased disease burden and declining literacy. This project will endeavor to ensure that even disadvantaged communities are empowered to participate in the health care system. It will do so by constructing ...

  13. Spherical rotation orientation indication for HEVC and JEM coding of 360 degree video

    Science.gov (United States)

    Boyce, Jill; Xu, Qian

    2017-09-01

    Omnidirectional (or "360 degree") video, representing a panoramic view of a spherical 360° ×180° scene, can be encoded using conventional video compression standards, once it has been projection mapped to a 2D rectangular format. Equirectangular projection format is currently used for mapping 360 degree video to a rectangular representation for coding using HEVC/JEM. However, video in the top and bottom regions of the image, corresponding to the "north pole" and "south pole" of the spherical representation, is significantly warped. We propose to perform spherical rotation of the input video prior to HEVC/JEM encoding in order to improve the coding efficiency, and to signal parameters in a supplemental enhancement information (SEI) message that describe the inverse rotation process recommended to be applied following HEVC/JEM decoding, prior to display. Experiment results show that up to 17.8% bitrate gain (using the WS-PSNR end-to-end metric) can be achieved for the Chairlift sequence using HM16.15 and 11.9% gain using JEM6.0, and an average gain of 2.9% for HM16.15 and 2.2% for JEM6.0.
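
    The pre-encoding step described above amounts to resampling the equirectangular frame so that the sphere is rotated before coding. A minimal sketch of such a rotation (Python with NumPy and OpenCV; only yaw and pitch are handled, the angles are illustrative assumptions, and the SEI signalling of the recommended inverse rotation is not reproduced):

        import numpy as np
        import cv2

        def rotate_equirect(frame: np.ndarray, yaw: float, pitch: float) -> np.ndarray:
            """Resample an equirectangular frame so the sphere is rotated by (yaw, pitch) radians."""
            h, w = frame.shape[:2]
            xs, ys = np.meshgrid(np.arange(w), np.arange(h))
            lon = (xs + 0.5) / w * 2 * np.pi - np.pi          # output pixel -> longitude
            lat = np.pi / 2 - (ys + 0.5) / h * np.pi          # output pixel -> latitude
            x = np.cos(lat) * np.cos(lon)                     # longitude/latitude -> unit vector
            y = np.cos(lat) * np.sin(lon)
            z = np.sin(lat)
            cy, sy = np.cos(-yaw), np.sin(-yaw)               # inverse rotation: undo yaw (about z) ...
            cp, sp = np.cos(-pitch), np.sin(-pitch)           # ... then pitch (about y)
            x1, y1, z1 = cy * x - sy * y, sy * x + cy * y, z
            x2, y2, z2 = cp * x1 + sp * z1, y1, -sp * x1 + cp * z1
            src_lon = np.arctan2(y2, x2)                      # back to source pixel coordinates
            src_lat = np.arcsin(np.clip(z2, -1.0, 1.0))
            map_x = (((src_lon + np.pi) / (2 * np.pi) * w - 0.5) % w).astype(np.float32)
            map_y = ((np.pi / 2 - src_lat) / np.pi * h - 0.5).astype(np.float32)
            return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)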

  14. Development of a YouTube videos feelings analiser = Desarrollo de un analizador de sentimientos de videos de Youtube

    OpenAIRE

    Valle Salas, José Miguel del

    2018-01-01

    Nowadays, YouTube is one of the most successful social networks and therefore has a growing impact on our society. Because of this, it is useful to know the sentiments that the videos on this platform produce. This project focused on the development of a tool able to analyse these sentiments, which could be used for different purposes such as market studies or emotional learning for people with some form of functional diversity. The technologies used during the project development...

  15. Provocative Video Scenarios

    DEFF Research Database (Denmark)

    Caglio, Agnese

    This paper presents the use of ”provocative videos” as a tool to support and deepen findings from an ethnographic investigation on the theme of remote video communication. The videos also acted as a resource to investigate the potential of novel technologies supporting continuous connection between...... households. They were deployed online as part of a 6-month research project in collaboration with the Danish electronics manufacturer Bang & Olufsen, involving participants from different continents. The intention is to propose the integration of tools that have always been seen as part of the design domain...

  16. Enabling 'togetherness' in high-quality domestic video conferencing

    NARCIS (Netherlands)

    Kegel, I.; Cesar, P.; Jansen, J.; Bulterman, D.C.A.; Stevens, T.; Kort, J.; Färber, N.

    2012-01-01

    Low-cost video conferencing systems have provided an existence proof for the value of video communication in a home setting. At the same time, current systems have a number of fundamental limitations that inhibit more general social interactions among multiple groups of participants. In our work, we

  17. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting...... of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast....

  18. Online videos to promote sun safety: results of a contest

    Directory of Open Access Journals (Sweden)

    Annelise Lorelei Dawson

    2011-06-01

    Full Text Available Seventy-percent of Americans search health information online, half of whom access medical content on social media websites.  In spite of this broad usage, the medical community underutilizes social media to distribute preventive health information.  This project aimed to highlight the promise of social media for delivering skin cancer prevention messaging by hosting and quantifying the impact of an online video contest. In 2010 and 2011, we solicited video submissions and searched existing YouTube videos.  Three finalists were selected and ranked. Winners were announced at national dermatology meetings and publicized via a contest website. Afterwards, YouTube view counts were monitored.  No increase in video viewing frequency was observed following the 2010 or 2011 contest.  This contest successfully identified exemplary online sun safety videos; however, increased viewership remains to be seen.  Social media offers a promising outlet for preventive health messaging. Future efforts must explore strategies for enhancing viewership of online content.

  19. "Use Condoms for Safe Sex!" Youth-Led Video Making and Sex Education

    Science.gov (United States)

    Yang, Kyung-Hwa; MacEntee, Katie

    2015-01-01

    Situated at the intersection between child-led visual methods and sex education, this paper focuses on the potential of youth-led video making to enable young people to develop guiding principles to inform their own sexual behaviour. It draws on findings from a video-making project carried out with a group of South African young people, which…

  20. Video sensor architecture for surveillance applications.

    Science.gov (United States)

    Sánchez, Jordi; Benet, Ginés; Simó, José E

    2012-01-01

    This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.
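
    The report that each node sends upstream can be thought of as a small XML document describing the detected objects. A minimal sketch (Python standard library; the element and attribute names are illustrative assumptions, not the schema used by the authors):

        import xml.etree.ElementTree as ET

        def build_report(node_id: str, objects) -> str:
            """Build an XML description of the objects a sensor node has detected."""
            root = ET.Element("sensor_report", node=node_id)
            for obj in objects:
                e = ET.SubElement(root, "object", id=str(obj["id"]), label=obj["label"])
                x, y, w, h = obj["bbox"]
                ET.SubElement(e, "bbox", x=str(x), y=str(y), w=str(w), h=str(h))
            return ET.tostring(root, encoding="unicode")

        # Example: one tracked and classified object reported by node "node-03"
        print(build_report("node-03",
                           [{"id": 1, "label": "person", "bbox": (120, 80, 40, 110)}]))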

  1. Video Sensor Architecture for Surveillance Applications

    Directory of Open Access Journals (Sweden)

    José E. Simó

    2012-02-01

    Full Text Available This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.

  2. Design and Implementation of a Video-Zoom Driven Digital Audio-Zoom System for Portable Digital Imaging Devices

    Science.gov (United States)

    Park, Nam In; Kim, Seon Man; Kim, Hong Kook; Kim, Ji Woon; Kim, Myeong Bo; Yun, Su Won

    In this paper, we propose a video-zoom driven audio-zoom algorithm in order to provide audio zooming effects in accordance with the degree of video-zoom. The proposed algorithm is designed around a super-directive beamformer operating with a 4-channel microphone system, in conjunction with a soft masking process that considers the phase differences between microphones. The audio-zoom processed signal is obtained by multiplying the masked signal by an audio gain derived from the video-zoom level. Finally, a real-time audio-zoom system is implemented on an ARM Cortex-A8 with a clock speed of 600 MHz after several levels of optimization (algorithmic, C-code, and memory optimizations). To evaluate the complexity of the proposed real-time audio-zoom system, test data 21.3 seconds long, sampled at 48 kHz, is used. The experiments show that the processing time for the proposed audio-zoom system occupies 14.6% or less of the ARM clock cycles. Experimental results obtained in a semi-anechoic chamber also show that the signal from the front direction can be amplified by approximately 10 dB relative to the other directions.
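
    The coupling between the video-zoom level and the audio gain can be sketched very simply (Python; the linear zoom-to-gain mapping up to 10 dB and the variable names are illustrative assumptions, and the beamforming and soft-masking stages are assumed to have produced the masked signal already):

        import numpy as np

        def audio_zoom_gain(zoom_level: float, max_zoom: float = 4.0, max_gain_db: float = 10.0) -> float:
            """Map a video-zoom level (1x .. max_zoom) to a linear amplitude gain."""
            zoom_level = min(max(zoom_level, 1.0), max_zoom)
            gain_db = max_gain_db * (zoom_level - 1.0) / (max_zoom - 1.0)
            return 10.0 ** (gain_db / 20.0)

        def apply_audio_zoom(masked_signal: np.ndarray, zoom_level: float) -> np.ndarray:
            """Amplify the front-direction (masked) signal according to the current zoom."""
            return masked_signal * audio_zoom_gain(zoom_level)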

  3. Management systems for environmental restoration projects

    International Nuclear Information System (INIS)

    Harbert, R.R.

    1990-01-01

    This paper reports that the success of large environmental restoration projects depends on sound management systems to guide the team of organizations and individuals responsible for the project. Public concern about and scrutiny of these environmental projects increase the stakes for those involved in managing them. The Department of Energy (DOE) Formerly Utilized Sites Remedial Action Program (FUSRAP) uses a systems approach to performing and improving the work necessary to meet FUSRAP objectives. This approach is based upon management criteria embodied in the DOE cost and schedule control system and the quality assurance requirements. The project team used complementary criteria to develop a system of related parts and processes working together to accomplish the goals of the project

  4. USABILITY TESTING OF JAPANESE CAPTIONS SEGMENTATION SYSTEM TO SCAFFOLD BEGINNERS TO COMPREHEND JAPANESE VIDEOS

    Directory of Open Access Journals (Sweden)

    Ya-Fei Yang

    2013-06-01

    Full Text Available A major learning difficulty of Japanese foreign language (JFL) learners is the complex composition of two syllabaries, hiragana and katakana, and kanji characters adopted from logographic Chinese ones. As the number of Japanese language learners increases, computer-assisted Japanese language education gradually gains more attention. This study aimed to adopt a Japanese word segmentation system to help JFL learners overcome literacy problems. This study adopted MeCab, a Japanese morphological analyzer and part-of-speech (POS) tagger, to segment Japanese texts into separate morphemes by adding spaces and to attach POS tags to each morpheme for beginners. The participants were asked to participate in three experimental activities involving watching two Japanese videos with general or segmented Japanese captions and to complete the Nielsen’s Attributes of Usability (NAU) survey and the After Scenario Questionnaire (ASQ) to evaluate the usability of the learning activities. The results of the system evaluation showed that the videos with the segmented captions could increase the participants’ learning motivation and willingness to adopt the word segmentation system to learn Japanese.
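
    The segmentation and tagging step can be reproduced with the MeCab Python binding, assuming the mecab-python3 package and a dictionary are installed; the sample sentence is an illustrative assumption and the exact tags depend on the dictionary used:

        import MeCab

        text = "私は日本語を勉強しています"

        # Word segmentation only: morphemes separated by spaces
        wakati = MeCab.Tagger("-Owakati")
        print(wakati.parse(text).strip())

        # Segmentation plus a part-of-speech tag for each morpheme
        tagger = MeCab.Tagger()
        node = tagger.parseToNode(text)
        while node:
            if node.surface:
                pos = node.feature.split(",")[0]   # first feature field is the POS
                print(node.surface + "\t" + pos)
            node = node.next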

  5. Using Mixed Methods to Analyze Video Data: A Mathematics Teacher Professional Development Example

    Science.gov (United States)

    DeCuir-Gunby, Jessica T.; Marshall, Patricia L.; McCulloch, Allison W.

    2012-01-01

    This article uses data from 65 teachers participating in a K-2 mathematics professional development research project as an example of how to analyze video recordings of teachers' classroom lessons using mixed methods. Through their discussion, the authors demonstrate how using a mixed methods approach to classroom video analysis allows researchers…

  6. Age vs. experience : evaluation of a video feedback intervention for newly licensed teen drivers.

    Science.gov (United States)

    2013-02-06

    This project examines the effects of age, experience, and video-based feedback on the rate and type of safety-relevant events captured on video event recorders in the vehicles of three groups of newly licensed young drivers: 1. 14.5- to 15.5-year...

  7. Video equipment of tele dosimetry and audio

    International Nuclear Information System (INIS)

    Ojeda R, M.A.; Padilla C, I.

    2007-01-01

    Working in a high-radiation area requires detailed knowledge of the work environment, effective communication and vision, and close dosimetric control. Where spaces are variable and access is restricted, where noise hinders communication, and where operating conditions, the radiation field and decision making are demanding, tools are needed that give full control of the environment so that timely and effective decisions can be made at the place where the task is performed. Based on this concept, a project was developed at the Laguna Verde plant that provides an interactive control mechanism for complex spaces: to see, to hear, to speak, and to measure. This led to the creation of a system equipped with closed-circuit television, wireless communication systems, wireless tele-dosimetry systems, VHS and DVD recording equipment, and uninterruptible power supplies. The system requires only an electric power socket and the installation of two cables per CCTV camera. It can be moved by one person and put into operation in 5 minutes using a verification checklist. The concept was developed in the project named VETA-1 (Video Equipment of Tele-dosimetry and Audio). The objective of this work is to present the development of the VETA-1 tool, which concluded with its first prototype in May of this year. The VETA-1 project arose from the need to optimize dose; it is an ALARA tool with countless applications, as was demonstrated during the 12th refuelling outage of Unit 1. The VETA-1 project integrates a recording system whose primary purpose is to analyse, at the place where the task is performed, the details needed for effective and timely decisions, while the resulting information is also useful for personnel training and the planning of future work. The VETA-1 system is a quick-response ALARA control tool. (Author)

  8. Double duplex fiberoptic-based teleconferencing system for radiology

    International Nuclear Information System (INIS)

    Lowinger, T.; Hodara, M.; Potter, G.; Ablow, R.C.

    1989-01-01

    The teleconferencing system between two hospital sites is capable of simultaneously transmitting on four video channels (two in each direction) and on two audio channels. The two video signals in each conference room may be selected from a choice of an x-ray viewbox, a room camera, and two slide projectors, hence permitting dual-slide-projection teleconferencing. The signals are transmitted with four optical fibers over a distance of 3 miles. Two video enhancers on each site provide edge and contrast enhancement. An electronic video pointer can be superimposed on each image. The audio component is based on an automatic microphone system with background noise suppression

  9. Heartbeat Rate Measurement from Facial Video

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal

    2016-01-01

    Heartbeat Rate (HR) reveals a person’s health condition. This paper presents an effective system for measuring HR from facial videos acquired in a more realistic environment than the testing environment of current systems. The proposed method utilizes a facial feature point tracking method...... by combining a ‘Good feature to track’ and a ‘Supervised descent method’ in order to overcome the limitations of currently available facial video based HR measuring systems. Such limitations include, e.g., unrealistic restriction of the subject’s movement and artificial lighting during data capture. A face...
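
    The general idea, tracking facial feature points and reading the heart rate from the dominant frequency of their small periodic motion, can be sketched with standard OpenCV building blocks (good features to track plus pyramidal Lucas-Kanade tracking). The file name, parameter values and the plain FFT step are illustrative assumptions, not the method of the paper:

        import numpy as np
        import cv2

        cap = cv2.VideoCapture("face.avi")                       # hypothetical facial video
        fps = cap.get(cv2.CAP_PROP_FPS)
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01, minDistance=10)

        trace = []                                               # mean vertical position per frame
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            good = pts[status.ravel() == 1]
            trace.append(good[:, 0, 1].mean())                   # y-coordinates of tracked points
            prev_gray, pts = gray, good.reshape(-1, 1, 2)

        signal = np.asarray(trace) - np.mean(trace)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
        spectrum = np.abs(np.fft.rfft(signal))
        band = (freqs > 0.7) & (freqs < 4.0)                     # plausible heart-rate band
        hr_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
        print(f"Estimated heart rate: {hr_bpm:.1f} bpm")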

  10. Progress in video immersion using Panospheric imaging

    Science.gov (United States)

    Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.

    1998-09-01

    Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, PanosphericTM Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense 97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive PanosphericTM imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th Generation PI DSP-based array processor system. The system is now able to provide dynamic full video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi- CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video tele- conferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of Long Wave Infrared PI Imaging, Stereo PI concepts, PI based Video-Servoing concepts, PI based Video Navigation concepts, and Foveation concepts (to merge localized high-resolution views with immersive views).

  11. Rapid, low-cost, image analysis through video processing

    International Nuclear Information System (INIS)

    Levinson, R.A.; Marrs, R.W.; Grantham, D.G.

    1976-01-01

    Remote Sensing now provides the data necessary to solve many resource problems. However, many of the complex image processing and analysis functions used in analysis of remotely-sensed data are accomplished using sophisticated image analysis equipment. High cost of this equipment places many of these techniques beyond the means of most users. A new, more economical, video system capable of performing complex image analysis has now been developed. This report describes the functions, components, and operation of that system. Processing capability of the new video image analysis system includes many of the tasks previously accomplished with optical projectors and digital computers. Video capabilities include: color separation, color addition/subtraction, contrast stretch, dark level adjustment, density analysis, edge enhancement, scale matching, image mixing (addition and subtraction), image ratioing, and construction of false-color composite images. Rapid input of non-digital image data, instantaneous processing and display, relatively low initial cost, and low operating cost gives the video system a competitive advantage over digital equipment. Complex pre-processing, pattern recognition, and statistical analyses must still be handled through digital computer systems. The video system at the University of Wyoming has undergone extensive testing, comparison to other systems, and has been used successfully in practical applications ranging from analysis of x-rays and thin sections to production of color composite ratios of multispectral imagery. Potential applications are discussed including uranium exploration, petroleum exploration, tectonic studies, geologic mapping, hydrology sedimentology and petrography, anthropology, and studies on vegetation and wildlife habitat
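
    Two of the listed operations, image ratioing and construction of a false-color composite, are easy to illustrate on three co-registered single-band images (Python with OpenCV and NumPy; the file names and the band-to-channel assignment are illustrative assumptions):

        import numpy as np
        import cv2

        band1 = cv2.imread("band1.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
        band2 = cv2.imread("band2.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
        band3 = cv2.imread("band3.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

        # Image ratioing: divide two bands and stretch the result to 0..255
        ratio = band1 / (band2 + 1e-6)
        ratio_img = cv2.normalize(ratio, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

        # False-color composite: assign each band to a display channel (B, G, R)
        composite = cv2.merge([band3, band2, band1]).astype(np.uint8)

        cv2.imwrite("ratio.png", ratio_img)
        cv2.imwrite("false_color.png", composite)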

  12. 100 Million Views of Electronic Cigarette YouTube Videos and Counting: Quantification, Content Evaluation, and Engagement Levels of Videos

    Science.gov (United States)

    2016-01-01

    Background The video-sharing website, YouTube, has become an important avenue for product marketing, including tobacco products. It may also serve as an important medium for promoting electronic cigarettes, which have rapidly increased in popularity and are heavily marketed online. While a few studies have examined a limited subset of tobacco-related videos on YouTube, none has explored e-cigarette videos’ overall presence on the platform. Objective To quantify e-cigarette-related videos on YouTube, assess their content, and characterize levels of engagement with those videos. Understanding promotion and discussion of e-cigarettes on YouTube may help clarify the platform’s impact on consumer attitudes and behaviors and inform regulations. Methods Using an automated crawling procedure and keyword rules, e-cigarette-related videos posted on YouTube and their associated metadata were collected between July 1, 2012, and June 30, 2013. Metadata were analyzed to describe posting and viewing time trends, number of views, comments, and ratings. Metadata were content coded for mentions of health, safety, smoking cessation, promotional offers, Web addresses, product types, top-selling brands, or names of celebrity endorsers. Results As of June 30, 2013, approximately 28,000 videos related to e-cigarettes were captured. Videos were posted by approximately 10,000 unique YouTube accounts, viewed more than 100 million times, rated over 380,000 times, and commented on more than 280,000 times. More than 2200 new videos were being uploaded every month by June 2013. The top 1% of most-viewed videos accounted for 44% of total views. Text fields for the majority of videos mentioned websites (70.11%); many referenced health (13.63%), safety (10.12%), smoking cessation (9.22%), or top e-cigarette brands (33.39%). The number of e-cigarette-related YouTube videos was projected to exceed 65,000 by the end of 2014, with approximately 190 million views. Conclusions YouTube is a major

  13. A new method for wireless video monitoring of bird nests

    Science.gov (United States)

    David I. King; Richard M. DeGraaf; Paul J. Champlin; Tracey B. Champlin

    2001-01-01

    Video monitoring of active bird nests is gaining popularity among researchers because it eliminates many of the biases associated with reliance on incidental observations of predation events or use of artificial nests, but the expense of video systems may be prohibitive. Also, the range and efficiency of current video monitoring systems may be limited by the need to...

  14. Behavioral System Level Power Consumption Modeling of Mobile Video Streaming applications

    OpenAIRE

    Benmoussa , Yahia; Boukhobza , Jalil; Hadjadj-Aoul , Yassine; Lagadec , Loïc; Benazzouz , Djamel

    2012-01-01

    Nowadays, the use of mobile applications and terminals faces fundamental challenges related to energy constraint. This is due to the limited battery lifetime as compared to the increasing hardware evolution. Video streaming is one of the most energy consuming applications in a mobile system because of its intensive use of bandwidth, memory and processing power. In this work, we aim to propose a methodology for building and validating a high level global power consumption mo...

  15. Real-Time Video Stylization Using Object Flows.

    Science.gov (United States)

    Lu, Cewu; Xiao, Yao; Tang, Chi-Keung

    2017-05-05

    We present a real-time video stylization system and demonstrate a variety of painterly styles rendered on real video inputs. The key technical contribution lies on the object flow, which is robust to inaccurate optical flow, unknown object transformation and partial occlusion as well. Since object flows relate regions of the same object across frames, shower-door effect can be effectively reduced where painterly strokes and textures are rendered on video objects. The construction of object flows is performed in real time and automatically after applying metric learning. To reduce temporal flickering, we extend the bilateral filtering into motion bilateral filtering. We propose quantitative metrics to measure the temporal coherence on structures and textures of our stylized videos, and perform extensive experiments to compare our stylized results with baseline systems and prior works specializing in watercolor and abstraction.

  16. Quality and noise measurements in mobile phone video capture

    Science.gov (United States)

    Petrescu, Doina; Pincenti, John

    2011-02-01

    The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
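
    The full-reference measurement mentioned above, comparing the encoder-input frame with the reconstructed frame, is typically reported as PSNR. A minimal sketch (Python with NumPy; the 8-bit peak value is an assumption):

        import numpy as np

        def psnr(reference: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
            """Peak signal-to-noise ratio in dB between two equally sized frames."""
            mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
            if mse == 0:
                return float("inf")
            return 10.0 * np.log10(peak ** 2 / mse)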

  17. Unique rod lens/video system designed to observe flow conditions in emergency core coolant loops of pressurized water reactors

    International Nuclear Information System (INIS)

    Carter, G.W.

    1979-01-01

    Techniques and equipment are described which are used for video recordings of the single- and two-phase fluid flow tests conducted with the PKL Spool Piece Measurement System designed by Lawrence Livermore Laboratory and EG and G Inc. The instrumented spool piece provides valuable information on what would happen in pressurized water reactor emergency coolant loops should an accident or rupture result in loss of fluid. The complete closed-circuit television video system, including rod lens, light supply, and associated spool mounting fixtures, is discussed in detail. Photographic examples of test flows taken during actual spool piece system operation are shown

  18. Security and Privacy in Video Surveillance: Requirements and Challenges

    DEFF Research Database (Denmark)

    Mahmood Rajpoot, Qasim; Jensen, Christian D.

    2014-01-01

    observed by the system. Several techniques to protect the privacy of individuals have therefore been proposed, but very little research work has focused on the specific security requirements of video surveillance data (in transit or in storage) and on authorizing access to this data. In this paper, we...... present a general model of video surveillance systems that will help identify the major security and privacy requirements for a video surveillance system and we use this model to identify practical challenges in ensuring the security of video surveillance data in all stages (in transit and at rest). Our...... study shows a gap between the identified security requirements and the proposed security solutions where future research efforts may focus in this domain....

  19. Environmental Restoration Project - Systems Engineering Management Plan

    International Nuclear Information System (INIS)

    Anderson, T.D.

    1998-06-01

    This Environmental Restoration (ER) Project Systems Engineering Management Plan (SEMP) describes relevant Environmental Restoration Contractor (ERC) management processes and shows how they implement systems engineering. The objective of this SEMP is to explain and demonstrate how systems engineering is being approached and implemented in the ER Project. The application of systems engineering appropriate to the general nature and scope of the project is summarized in Section 2.0. The basic ER Project management approach is described in Section 3.0. The interrelation and integration of project practices and systems engineering are outlined in Section 4.0. Integration with sitewide systems engineering under the Project Hanford Management Contract is described in Section 5.0

  20. Open-source telemedicine platform for wireless medical video communication.

    Science.gov (United States)

    Panayides, A; Eleftheriou, I; Pantziaris, M

    2013-01-01

    An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform which will allow for reliable remote diagnosis m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that adequate diagnostic quality wireless medical video communications are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings.

  1. Open-Source Telemedicine Platform for Wireless Medical Video Communication

    Science.gov (United States)

    Panayides, A.; Eleftheriou, I.; Pantziaris, M.

    2013-01-01

    An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform which will allow for reliable remote diagnosis m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that adequate diagnostic quality wireless medical video communications are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings. PMID:23573082

  2. Open-Source Telemedicine Platform for Wireless Medical Video Communication

    Directory of Open Access Journals (Sweden)

    A. Panayides

    2013-01-01

    Full Text Available An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform which will allow for reliable remote diagnosis m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that adequate diagnostic quality wireless medical video communications are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings.

  3. Secure and Efficient Reactive Video Surveillance for Patient Monitoring

    Directory of Open Access Journals (Sweden)

    An Braeken

    2016-01-01

    Full Text Available Video surveillance is widely deployed for many kinds of monitoring applications in healthcare and assisted living systems. Security and privacy are two promising factors that align the quality and validity of video surveillance systems with the caliber of patient monitoring applications. In this paper, we propose a symmetric key-based security framework for the reactive video surveillance of patients based on the inputs coming from data measured by a wireless body area network attached to the human body. Only authenticated patients are able to activate the video cameras, whereas the patient and authorized people can consult the video data. User and location privacy are at each moment guaranteed for the patient. A tradeoff between security and quality of service is defined in order to ensure that the surveillance system gets activated even in emergency situations. In addition, the solution includes resistance against tampering with the device on the patient’s side.

  4. Global Internet Video Classroom: A Technology Supported Learner-Centered Classroom

    Science.gov (United States)

    Lawrence, Oliver

    2010-01-01

    The Global Internet Video Classroom (GIVC) Project connected Chicago Civil Rights activists of the 1960s with Cape Town Anti-Apartheid activists of the 1960s in a classroom setting where learners from Cape Town and Chicago engaged activists in conversations about their motivation, principles, and strategies. The project was launched in order to…

  5. Gradual cut detection using low-level vision for digital video

    Science.gov (United States)

    Lee, Jae-Hyun; Choi, Yeun-Sung; Jang, Ok-bae

    1996-09-01

    Digital video computing and organization is one of the important issues in multimedia systems, signal compression, and databases. Video should be segmented into shots to be used for identification and indexing. This requires a suitable method to automatically locate cut points in order to separate the shots in a video. Automatic cut detection to isolate shots in a video has received considerable attention due to many practical applications: video databases, browsing, authoring systems, retrieval, and movies. Previous studies are based on a set of difference mechanisms that measure the content changes between video frames, but they cannot detect more special effects such as dissolves, wipes, fade-ins, fade-outs, and structured flashing. In this paper, a new cut detection method for gradual transitions based on computer vision techniques is proposed. Experimental results on commercial video are then presented and evaluated.
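
    The classic difference mechanism that this work builds on compares consecutive frames and declares a cut when the change exceeds a threshold; gradual transitions (dissolve, wipe, fade) need the more elaborate handling proposed in the paper. A minimal sketch of the hard-cut baseline (Python with OpenCV; the histogram size, threshold and input file are illustrative assumptions):

        import numpy as np
        import cv2

        cap = cv2.VideoCapture("input.mp4")
        prev_hist = None
        frame_idx = 0
        cuts = []

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
            hist = cv2.normalize(hist, None).flatten()
            if prev_hist is not None:
                diff = np.abs(hist - prev_hist).sum()
                if diff > 0.5:                      # assumed threshold for a hard cut
                    cuts.append(frame_idx)
            prev_hist = hist
            frame_idx += 1

        print("Hard cuts detected at frames:", cuts)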

  6. Delivering Diagnostic Quality Video over Mobile Wireless Networks for Telemedicine

    Directory of Open Access Journals (Sweden)

    Sira P. Rao

    2009-01-01

    Full Text Available In real-time remote diagnosis of emergency medical events, mobility can be enabled by wireless video communications. However, clinical use of this potential advance will depend on definitive and compelling demonstrations of the reliability of diagnostic quality video. Because the medical domain has its own fidelity criteria, it is important to incorporate diagnostic video quality criteria into any video compression system design. To this end, we used flexible algorithms for region-of-interest (ROI) video compression and obtained feedback from medical experts to develop criteria for diagnostically lossless (DL) quality. The design of the system occurred in three steps: measurement of bit rate at which DL quality is achieved through evaluation of videos by medical experts, incorporation of that information into a flexible video encoder through the notion of encoder states, and an encoder state update option based on a built-in quality criterion. Medical experts then evaluated our system for the diagnostic quality of the video, allowing us to verify that it is possible to realize DL quality in the ROI at practical communication data transfer rates, enabling mobile medical assessment over bit-rate limited wireless channels. This work lays the scientific foundation for additional validation through prototyped technology, field testing, and clinical trials.

  7. Modernization projects in Santa Maria de Garona; Proyectos de modernizacion en Santa Maria de Garona

    Energy Technology Data Exchange (ETDEWEB)

    Marcos, R.; Alutiz, J. I.; Garcia Sanchez, M.

    2011-07-01

    This article gives an overview of the modernization guidelines of the Santa Maria de Garona power plant and presents the most significant projects deployed at the plant over the last decade, grouped into mechanical, electrical, instrumentation and IT projects. Three of these projects are explained in more detail: the replacement of one of the main transformers, the evolution from paper recorders to paperless videographic recorders, and the new plant data information system. (Author)

  8. Automatic Traffic Data Collection under Varying Lighting and Temperature Conditions in Multimodal Environments: Thermal versus Visible Spectrum Video-Based Systems

    Directory of Open Access Journals (Sweden)

    Ting Fu

    2017-01-01

    Full Text Available Vision-based monitoring systems using visible spectrum (regular) video cameras can complement or substitute conventional sensors and provide rich positional and classification data. Although new camera technologies, including thermal video sensors, may improve the performance of digital video-based sensors, their performance under various conditions has rarely been evaluated at multimodal facilities. The purpose of this research is to integrate existing computer vision methods for automated data collection and evaluate the detection, classification, and speed measurement performance of thermal video sensors under varying lighting and temperature conditions. Thermal and regular video data was collected simultaneously under different conditions across multiple sites. Although the regular video sensor narrowly outperformed the thermal sensor during daytime, the performance of the thermal sensor is significantly better for low visibility and shadow conditions, particularly for pedestrians and cyclists. Retraining the algorithm on thermal data yielded an improvement in the global accuracy of 48%. Thermal speed measurements were consistently more accurate than for the regular video at daytime and nighttime. Thermal video is insensitive to lighting interference and pavement temperature, solves issues associated with visible light cameras for traffic data collection, and offers other benefits such as privacy, insensitivity to glare, storage space, and lower processing requirements.

  9. DAVID: A new video motion sensor for outdoor perimeter applications

    International Nuclear Information System (INIS)

    Alexander, J.C.

    1986-01-01

    To be effective, a perimeter intrusion detection system must comprise both sensor and rapid assessment components. The use of closed circuit television (CCTV) to provide the rapid assessment capability makes possible the use of video motion detection (VMD) processing as a system sensor component. Despite its conceptual appeal, video motion detection has not been widely used in outdoor perimeter systems because of an inability to discriminate between genuine intrusions and numerous environmental effects such as cloud shadows, wind motion, reflections, precipitation, etc. The result has been an unacceptably high false alarm rate and operator workload. DAVID (Digital Automatic Video Intrusion Detector) utilizes new digital signal processing techniques to achieve a dramatic improvement in discrimination performance, thereby making video motion detection practical for outdoor applications. This paper begins with a discussion of the key considerations in implementing an outdoor video intrusion detection system, followed by a description of the DAVID design in light of these considerations
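
    Generic video motion detection on a fixed camera can be sketched with a background subtractor; this is only a stand-in for DAVID's proprietary digital signal processing, and the area threshold and input source are illustrative assumptions. Real outdoor discrimination against cloud shadows, wind motion and precipitation requires far more than this:

        import cv2

        cap = cv2.VideoCapture("perimeter_camera.avi")            # hypothetical camera feed
        subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = subtractor.apply(frame)
            mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if any(cv2.contourArea(c) > 500 for c in contours):          # assumed minimum blob size
                print("Possible intrusion in this frame")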

  10. A Depth Video Sensor-Based Life-Logging Human Activity Recognition System for Elderly Care in Smart Indoor Environments

    Directory of Open Access Journals (Sweden)

    Ahmad Jalal

    2014-07-01

    Full Text Available Recent advancements in depth video sensors technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors which produce depth or distance information. In this paper, a depth-based life logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.
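
    The recognition step described above, one hidden Markov model per activity trained on sequences of skeleton-derived feature vectors, can be sketched with the hmmlearn library; the feature extraction from depth silhouettes is assumed to have produced the arrays already, and the number of hidden states is an illustrative assumption:

        import numpy as np
        from hmmlearn import hmm

        def train_activity_models(training_data, n_states=5):
            """training_data: dict mapping activity name -> list of (T_i, D) feature sequences."""
            models = {}
            for activity, sequences in training_data.items():
                X = np.vstack(sequences)
                lengths = [len(s) for s in sequences]
                model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
                model.fit(X, lengths)
                models[activity] = model
            return models

        def recognize(models, sequence):
            """Return the activity whose HMM gives the new (T, D) sequence the highest log-likelihood."""
            return max(models, key=lambda activity: models[activity].score(sequence))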

  11. A depth video sensor-based life-logging human activity recognition system for elderly care in smart indoor environments.

    Science.gov (United States)

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-07-02

    Recent advancements in depth video sensors technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors which produce depth or distance information. In this paper, a depth-based life logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.

  12. Dashboard Videos

    Science.gov (United States)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  13. Quantification of Urine Elimination Behaviors in Cats with a Video Recording System

    OpenAIRE

    R. Dulaney, D.; Hopfensperger, M.; Malinowski, R.; Hauptman, J.; Kruger, J.M.

    2017-01-01

    Background Urinary disorders in cats often require subjective caregiver quantification of clinical signs to establish a diagnosis and monitor therapeutic outcomes. Objective To investigate use of a video recording system (VRS) to better assess and quantify urination behaviors in cats. Animals Eleven healthy cats and 8 cats with disorders potentially associated with abnormal urination patterns. Methods Prospective study design. Litter box urination behaviors were quantified with a VRS for 14 d...

  14. 47 CFR 76.1504 - Rates, terms and conditions for carriage on open video systems.

    Science.gov (United States)

    2010-10-01

    ....1504 Rates, terms and conditions for carriage on open video systems. (a) Reasonable rate principle. An... operator will bear the burden of proof to demonstrate, using the principles set forth below, that the...; (2) Packaging, including marketing and other fees; (3) Talent fees; and (4) A reasonable overhead...

  15. 76 FR 75911 - Certain Video Game Systems and Controllers; Investigations: Terminations, Modifications and Rulings

    Science.gov (United States)

    2011-12-05

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-743] Certain Video Game Systems and Controllers; Investigations: Terminations, Modifications and Rulings AGENCY: U.S. International Trade Commission. ACTION: Notice. Section 337 of the Tariff Act of 1930 provides that if the Commission finds a violation it shall exclude the articles...

  16. Learning Sterile Procedures Through Transformative Reflection: Use of iPad Videos in a Nursing Laboratory Course.

    Science.gov (United States)

    Cernusca, Dan; Thompson, Shila; Riggins, Janet

    2018-01-12

    This project was implemented to determine if the combination of video recording and reflection could enhance student learning of specific nursing skills. Students' answers to open-ended questions validated the importance of iPad videos for their skill improvement. The findings confirmed that iPad videos provided an effective tool for students to evaluate their performance and reflect on methods for improvement.

  17. Fixed-point data-collection method of video signal

    International Nuclear Information System (INIS)

    Tang Yu; Yin Zejie; Qian Weiming; Wu Xiaoyi

    1997-01-01

    The author describes a fixed-point data-collection method for video signals. The method introduces the idea of fixed-point data collection and has been successfully applied in research on real-time radiography of dose fields, a project supported by the National Science Fund

  18. Embedded Video Abstraction and Design of Intelligent Video Surveillance System

    Institute of Scientific and Technical Information of China (English)

    刘胜楠; 汪恭焰; 李京; 李鑫磊; 方明

    2017-01-01

    Compared with a traditional PC-based image processing system, an ARM-based embedded image processing system has the advantages of small size, low power consumption and low cost. This paper implements a video abstraction algorithm on the FriendlyARM Tiny4412 development board. The algorithm extracts the image foreground with the ViBe algorithm, creates an event by identifying its start and end key frames, and then connects the created events to construct the video abstraction. Compared with the original video, the abstracted video contains only valid events, which solves the problem of massive video redundancy, saves storage space and helps users to browse quickly afterwards. Meanwhile, the key frames are transmitted to the user via e-mail, providing a timely alarm function. The results show that this system can realize intelligent monitoring effectively and is a first step towards commercialization.
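
    The event-forming step, turning per-frame foreground activity into events bounded by start and end key frames that are then concatenated into the abstracted video, can be sketched as follows (Python; the activity flags are assumed to come from a foreground extractor such as ViBe, and the minimum event length is an illustrative assumption):

        def frames_to_events(active_flags, min_length=10):
            """active_flags: one boolean per frame. Returns (start, end) frame index pairs."""
            events, start = [], None
            for i, active in enumerate(active_flags):
                if active and start is None:
                    start = i                              # start key frame of a new event
                elif not active and start is not None:
                    if i - start >= min_length:
                        events.append((start, i - 1))      # end key frame of the event
                    start = None
            if start is not None and len(active_flags) - start >= min_length:
                events.append((start, len(active_flags) - 1))
            return events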

  19. Underwater Communications for Video Surveillance Systems at 2.4 GHz

    Directory of Open Access Journals (Sweden)

    Sandra Sendra

    2016-10-01

    Full Text Available Video surveillance is needed to control many activities performed in underwater environments. The use of wired media can be a problem since the material specially designed for underwater environments is very expensive. In order to transmit the images and videos wirelessly under water, three main technologies can be used: acoustic waves, which do not provide high bandwidth; optical signals, although the effect of light dispersion in water severely penalizes the transmitted signals and therefore, despite offering high transfer rates, the maximum distance is very small; and electromagnetic (EM) waves, which can provide enough bandwidth for video delivery. In the cases where the distance between transmitter and receiver is short, the use of EM waves would be an interesting option since they provide high enough data transfer rates to transmit videos with high resolution. This paper presents a practical study of the behavior of EM waves at 2.4 GHz in freshwater underwater environments. First, we discuss the minimum requirements of a network to allow video delivery. From these results, we measure the maximum distance between nodes and the round trip time (RTT) value depending on several parameters such as data transfer rate, signal modulations, working frequency, and water temperature. The results are statistically analyzed to determine their relation. Finally, the EM waves’ behavior is modeled by a set of equations. The results show that there are some combinations of working frequency, modulation, transfer rate and temperature that offer better results than others. Our work shows that short communication distances with high data transfer rates are feasible.
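
    RTT measurements of the kind described above could, in principle, be collected with a small script like the sketch below, which times UDP round trips to an echo endpoint for different payload sizes. The endpoint address, port, payload sizes, and trial count are assumptions; the underwater channel itself is not modelled.

      # Minimal RTT measurement sketch: send UDP packets of varying size to an
      # echo endpoint and time the reply. Host, port, and payload sizes are
      # assumptions, not values from the paper.
      import socket, statistics, time

      def measure_rtt(host="192.168.1.10", port=5005, payload=256,
                      trials=20, timeout=1.0):
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.settimeout(timeout)
          data, samples = b"x" * payload, []
          for _ in range(trials):
              t0 = time.perf_counter()
              sock.sendto(data, (host, port))
              try:
                  sock.recvfrom(65535)
              except socket.timeout:
                  continue                     # lost packet, skip this sample
              samples.append((time.perf_counter() - t0) * 1000.0)  # ms
          sock.close()
          return statistics.mean(samples) if samples else None

      if __name__ == "__main__":
          for size in (64, 256, 1024):
              print(size, "bytes ->", measure_rtt(payload=size), "ms")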

  20. Video game addiction, ADHD symptomatology, and video game reinforcement.

    Science.gov (United States)

    Mathews, Christine L; Morrell, Holly E R; Molle, Jon E

    2018-06-06

    Up to 23% of people who play video games report symptoms of addiction. Individuals with attention deficit hyperactivity disorder (ADHD) may be at increased risk for video game addiction, especially when playing games with more reinforcing properties. The current study tested whether level of video game reinforcement (type of game) places individuals with greater ADHD symptom severity at higher risk for developing video game addiction. Adult video game players (N = 2,801; Mean age = 22.43, SD = 4.70; 93.30% male; 82.80% Caucasian) completed an online survey. Hierarchical multiple linear regression analyses were used to test type of game, ADHD symptom severity, and the interaction between type of game and ADHD symptomatology as predictors of video game addiction severity, after controlling for age, gender, and weekly time spent playing video games. ADHD symptom severity was positively associated with increased addiction severity (b = .73 and .68, ps .05. The relationship between ADHD symptom severity and addiction severity did not depend on the type of video game played or preferred most, ps > .05. Gamers who have greater ADHD symptom severity may be at greater risk for developing symptoms of video game addiction and its negative consequences, regardless of type of video game played or preferred most. Individuals who report ADHD symptomatology and also identify as gamers may benefit from psychoeducation about the potential risk for problematic play.
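
    A hierarchical regression of this kind can be sketched with statsmodels as below. The data file and column names (addiction, adhd, game_type, age, gender, hours_per_week) are hypothetical placeholders, not the authors' dataset, and no results are implied.

      # Sketch of a hierarchical multiple regression with an interaction term.
      # The CSV file and column names are hypothetical placeholders.
      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("gamers.csv")

      step1 = smf.ols("addiction ~ age + gender + hours_per_week", data=df).fit()
      step2 = smf.ols("addiction ~ age + gender + hours_per_week + adhd + game_type",
                      data=df).fit()
      step3 = smf.ols("addiction ~ age + gender + hours_per_week + adhd * game_type",
                      data=df).fit()

      # The change in R-squared at each step indicates the added predictive
      # value of ADHD symptom severity, game type, and their interaction.
      for name, model in (("covariates", step1), ("main effects", step2),
                          ("interaction", step3)):
          print(name, round(model.rsquared, 3))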

  1. Standardized access, display, and retrieval of medical video

    Science.gov (United States)

    Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.

    1999-05-01

    The system presented here enhances documentation and data-secured, second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation for a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure internet access. Digital video documents of diagnostic and therapeutic procedures should be examined regarding the clip length and size necessary for second opinion and manageable with today's hardware. Image sources relevant for this paper include the 3D laparoscope, 3D surgical microscope, 3D open surgery camera, synthetic video, and monoscopic endoscopes. The global DICOM video concept and three special workplaces for distinct applications are described. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video-cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery; therefore, DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.

  2. Video-game play induces plasticity in the visual system of adults with amblyopia.

    Directory of Open Access Journals (Sweden)

    Roger W Li

    2011-08-01

    Full Text Available UNLABELLED: Abnormal visual experience during a sensitive period of development disrupts neuronal circuitry in the visual cortex and results in abnormal spatial vision or amblyopia. Here we examined whether playing video games can induce plasticity in the visual system of adults with amblyopia. Specifically 20 adults with amblyopia (age 15-61 y; visual acuity: 20/25-20/480, with no manifest ocular disease or nystagmus) were recruited and allocated into three intervention groups: action videogame group (n = 10), non-action videogame group (n = 3), and crossover control group (n = 7). Our experiments show that playing video games (both action and non-action games) for a short period of time (40-80 h, 2 h/d) using the amblyopic eye results in a substantial improvement in a wide range of fundamental visual functions, from low-level to high-level, including visual acuity (33%), positional acuity (16%), spatial attention (37%), and stereopsis (54%). Using a cross-over experimental design (first 20 h: occlusion therapy, and the next 40 h: videogame therapy), we can conclude that the improvement cannot be explained simply by eye patching alone. We quantified the limits and the time course of visual plasticity induced by video-game experience. The recovery in visual acuity that we observed is at least 5-fold faster than would be expected from occlusion therapy in childhood amblyopia. We used positional noise and modelling to reveal the neural mechanisms underlying the visual improvements in terms of decreased spatial distortion (7%) and increased processing efficiency (33%). Our study had several limitations: small sample size, lack of randomization, and differences in numbers between groups. A large-scale randomized clinical study is needed to confirm the therapeutic value of video-game treatment in clinical situations. Nonetheless, taken as a pilot study, this work suggests that video-game play may provide important principles for treating amblyopia

  3. Video-game play induces plasticity in the visual system of adults with amblyopia.

    Science.gov (United States)

    Li, Roger W; Ngo, Charlie; Nguyen, Jennie; Levi, Dennis M

    2011-08-01

    Abnormal visual experience during a sensitive period of development disrupts neuronal circuitry in the visual cortex and results in abnormal spatial vision or amblyopia. Here we examined whether playing video games can induce plasticity in the visual system of adults with amblyopia. Specifically 20 adults with amblyopia (age 15-61 y; visual acuity: 20/25-20/480, with no manifest ocular disease or nystagmus) were recruited and allocated into three intervention groups: action videogame group (n = 10), non-action videogame group (n = 3), and crossover control group (n = 7). Our experiments show that playing video games (both action and non-action games) for a short period of time (40-80 h, 2 h/d) using the amblyopic eye results in a substantial improvement in a wide range of fundamental visual functions, from low-level to high-level, including visual acuity (33%), positional acuity (16%), spatial attention (37%), and stereopsis (54%). Using a cross-over experimental design (first 20 h: occlusion therapy, and the next 40 h: videogame therapy), we can conclude that the improvement cannot be explained simply by eye patching alone. We quantified the limits and the time course of visual plasticity induced by video-game experience. The recovery in visual acuity that we observed is at least 5-fold faster than would be expected from occlusion therapy in childhood amblyopia. We used positional noise and modelling to reveal the neural mechanisms underlying the visual improvements in terms of decreased spatial distortion (7%) and increased processing efficiency (33%). Our study had several limitations: small sample size, lack of randomization, and differences in numbers between groups. A large-scale randomized clinical study is needed to confirm the therapeutic value of video-game treatment in clinical situations. Nonetheless, taken as a pilot study, this work suggests that video-game play may provide important principles for treating amblyopia, and perhaps other

  4. Video-Game Play Induces Plasticity in the Visual System of Adults with Amblyopia

    Science.gov (United States)

    Li, Roger W.; Ngo, Charlie; Nguyen, Jennie; Levi, Dennis M.

    2011-01-01

    Abnormal visual experience during a sensitive period of development disrupts neuronal circuitry in the visual cortex and results in abnormal spatial vision or amblyopia. Here we examined whether playing video games can induce plasticity in the visual system of adults with amblyopia. Specifically 20 adults with amblyopia (age 15–61 y; visual acuity: 20/25–20/480, with no manifest ocular disease or nystagmus) were recruited and allocated into three intervention groups: action videogame group (n = 10), non-action videogame group (n = 3), and crossover control group (n = 7). Our experiments show that playing video games (both action and non-action games) for a short period of time (40–80 h, 2 h/d) using the amblyopic eye results in a substantial improvement in a wide range of fundamental visual functions, from low-level to high-level, including visual acuity (33%), positional acuity (16%), spatial attention (37%), and stereopsis (54%). Using a cross-over experimental design (first 20 h: occlusion therapy, and the next 40 h: videogame therapy), we can conclude that the improvement cannot be explained simply by eye patching alone. We quantified the limits and the time course of visual plasticity induced by video-game experience. The recovery in visual acuity that we observed is at least 5-fold faster than would be expected from occlusion therapy in childhood amblyopia. We used positional noise and modelling to reveal the neural mechanisms underlying the visual improvements in terms of decreased spatial distortion (7%) and increased processing efficiency (33%). Our study had several limitations: small sample size, lack of randomization, and differences in numbers between groups. A large-scale randomized clinical study is needed to confirm the therapeutic value of video-game treatment in clinical situations. Nonetheless, taken as a pilot study, this work suggests that video-game play may provide important principles for treating amblyopia, and perhaps

  5. Business System Planning Project, Preliminary System Design

    International Nuclear Information System (INIS)

    EVOSEVICH, S.

    2000-01-01

    CH2M HILL Hanford Group, Inc. (CHG) is currently performing many core business functions including, but not limited to, work control, planning, scheduling, cost estimating, procurement, training, and human resources. Other core business functions are managed by or dependent on Project Hanford Management Contractors including, but not limited to, payroll, benefits and pension administration, inventory control, accounts payable, and records management. In addition, CHG has business relationships with its parent company CH2M HILL, U.S. Department of Energy, Office of River Protection and other River Protection Project contractors, government agencies, and vendors. The Business Systems Planning (BSP) Project, under the sponsorship of the CH2M HILL Hanford Group, Inc. Chief Information Officer (CIO), have recommended information system solutions that will support CHG business areas. The Preliminary System Design was developed using the recommendations from the Alternatives Analysis, RPP-6499, Rev 0 and will become the design base for any follow-on implementation projects. The Preliminary System Design will present a high-level system design, providing a high-level overview of the Commercial-Off-The-Shelf (COTS) modules and identify internal and external relationships. This document will not define data structures, user interface components (screens, reports, menus, etc.), business rules or processes. These in-depth activities will be accomplished at implementation planning time

  6. REAL TIME SPEED ESTIMATION FROM MONOCULAR VIDEO

    Directory of Open Access Journals (Sweden)

    M. S. Temiz

    2012-07-01

    Full Text Available In this paper, detailed studies on the development of a real-time system that uses monocular video cameras to estimate vehicle speeds for traffic-flow surveillance and safe travel are presented. We assume that the studied road segment is planar and straight, that the camera is tilted downward from a bridge, and that the length of one line segment in the image is known. To estimate the speed of a moving vehicle from a video camera, the video images are first rectified to eliminate perspective effects, and the region of interest (ROI) is then determined for tracking the vehicles. Velocity vectors of a sufficient number of reference points are identified on the image of the vehicle in each video frame. For this purpose, a sufficient number of points on the vehicle is selected, and these points must be tracked accurately over at least two successive video frames. In the second step, the velocity vectors of those points are computed from the displacement vectors of the tracked points and the elapsed time. The computed velocity vectors are defined in the video image coordinate system, and the displacement vectors are measured in pixel units. The magnitudes of the computed vectors in image space are then transformed to object space to obtain their absolute values. The accuracy of the estimated speed is approximately ±1–2 km/h. To solve the real-time speed estimation problem, the authors have written a software system in the C++ programming language, which has been used for all of the computations and test applications.
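
    The tracking-and-scaling step can be sketched with OpenCV's Lucas-Kanade optical flow, as below. The video file, the metres-per-pixel scale of the rectified image, and the frame rate are assumptions, and the perspective rectification itself is omitted.

      # Sketch of speed estimation from tracked points, assuming the image has
      # already been rectified so one pixel corresponds to a fixed ground
      # distance. File name, scale (m/pixel), and frame rate are assumptions.
      import cv2
      import numpy as np

      VIDEO, M_PER_PX, FPS = "traffic.avi", 0.05, 25.0

      cap = cv2.VideoCapture(VIDEO)
      ok, prev = cap.read()
      prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
      pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                    qualityLevel=0.3, minDistance=7)

      while True:
          ok, frame = cap.read()
          if not ok or pts is None or len(pts) == 0:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
          good_new, good_old = new_pts[status == 1], pts[status == 1]
          # displacement in pixels per frame -> metres per second -> km/h
          disp = np.linalg.norm(good_new - good_old, axis=1)
          speed_kmh = disp.mean() * M_PER_PX * FPS * 3.6 if len(disp) else 0.0
          print(f"mean speed ~ {speed_kmh:.1f} km/h")
          prev_gray, pts = gray, good_new.reshape(-1, 1, 2)
      cap.release()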

  7. Teaching parents about responsive feeding through a vicarious learning video: A pilot randomized controlled trial

    Science.gov (United States)

    The American Academy of Pediatrics and World Health Organization recommend responsive feeding (RF) to promote healthy eating behaviors in early childhood. This project developed and tested a vicarious learning video to teach parents RF practices. A RF vicarious learning video was developed using com...

  8. Systems approach to project risk management

    Energy Technology Data Exchange (ETDEWEB)

    Kindinger, J. P. (John P.)

    2002-01-01

    This paper describes the need for better performance in the planning and execution of projects and examines the capabilities of two different project risk analysis methods for improving project performance. A quantitative approach based on concepts and tools adopted from the disciplines of systems analysis, probabilistic risk analysis, and other fields is advocated for managing risk in large and complex research & development projects. This paper also provides an overview of how this system analysis approach for project risk management is being used at Los Alamos National Laboratory along with examples of quantitative risk analysis results and their application to improve project performance.
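
    A minimal flavour of the quantitative, probabilistic side of such an approach is a Monte Carlo roll-up of uncertain task durations, sketched below. The three-point estimates are invented for illustration and are not taken from the paper or from Los Alamos practice.

      # Generic Monte Carlo sketch of schedule risk for a small serial task
      # network; the (low, likely, high) estimates in weeks are made-up.
      import numpy as np

      rng = np.random.default_rng(0)
      tasks = {"design": (4, 6, 10), "build": (8, 12, 20), "test": (3, 5, 9)}

      n = 100_000
      total = np.zeros(n)
      for low, mode, high in tasks.values():
          total += rng.triangular(low, mode, high, size=n)

      print("mean duration  :", round(total.mean(), 1), "weeks")
      print("80th percentile:", round(np.percentile(total, 80), 1), "weeks")
      print("P(> 30 weeks)  :", round((total > 30).mean(), 3))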

  9. User-Oriented Project Accounting System.

    Science.gov (United States)

    Hess, Larry G.; Alcorn, Lisa S.

    1990-01-01

    The project accounting system used by the University of Illinois Urbana-Champaign School of Chemical Sciences exchanges financial data with the campus' central accounting system and allows integration of this information with user-entered data to produce an easily read, fully obligated project accounting statement for the budget and period…

  10. SnapVideo: Personalized Video Generation for a Sightseeing Trip.

    Science.gov (United States)

    Zhang, Luming; Jing, Peiguang; Su, Yuting; Zhang, Chao; Shaoz, Ling

    2017-11-01

    Leisure tourism is an indispensable activity in urban people's lives. Due to the popularity of intelligent mobile devices, a large number of photos and videos are recorded during a trip. Therefore, the ability to vividly and interestingly display these media data is a useful technique. In this paper, we propose SnapVideo, a new method that intelligently converts a personal album describing a trip into a comprehensive, aesthetically pleasing, and coherent video clip. The proposed framework contains three main components. The scenic spot identification model first personalizes the video clips based on multiple prespecified audience classes. We then search for auxiliary related videos from YouTube (https://www.youtube.com/) according to the selected photos. To comprehensively describe a scene, the view generation module clusters the crawled video frames into a number of views. Finally, a probabilistic model is developed to fit the frames from multiple views into an aesthetically pleasing and coherent video clip, which optimally captures the semantics of a sightseeing trip. Extensive user studies demonstrated the competitiveness of our method from an aesthetic point of view. Moreover, quantitative analysis reflects that semantically important spots are well preserved in the final video clip.
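
    The view generation step could be approximated by clustering frames on simple colour features, as in the sketch below. The frame file names, histogram features, and number of views are assumptions, not the probabilistic model used by SnapVideo.

      # Sketch of the "view generation" idea: cluster video frames into a few
      # views using colour histograms and k-means. Frame file names and the
      # number of views are illustrative assumptions.
      import cv2
      import numpy as np
      from sklearn.cluster import KMeans

      def frame_histogram(path, bins=16):
          img = cv2.imread(path)
          hist = cv2.calcHist([img], [0, 1, 2], None, [bins] * 3, [0, 256] * 3)
          return cv2.normalize(hist, hist).flatten()

      frame_files = [f"frames/frame_{i:04d}.jpg" for i in range(200)]
      features = np.array([frame_histogram(f) for f in frame_files])

      views = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)
      for v in range(5):
          members = [f for f, lbl in zip(frame_files, views) if lbl == v]
          print(f"view {v}: {len(members)} frames, e.g. {members[:3]}")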

  11. Research on Construction of Road Network Database Based on Video Retrieval Technology

    Directory of Open Access Journals (Sweden)

    Wang Fengling

    2017-01-01

    Full Text Available Based on the characteristics and basic structure of video databases and several typical video data models, a segmentation-based multi-level data model is used to describe the landscape-information video database, the road-network database model, and the road-network management database system, and the detailed design and implementation of the landscape information management system are prepared.

  12. Audiovisual physics reports: students' video production as a strategy for the didactic laboratory

    Science.gov (United States)

    Vinicius Pereira, Marcus; de Souza Barros, Susana; de Rezende Filho, Luiz Augusto C.; Fauth, Leduc Hermeto de A.

    2012-01-01

    Constant technological advancement has facilitated access to digital cameras and cell phones. Involving students in a video production project can work as a motivating aspect to make them active and reflective in their learning, intellectually engaged in a recursive process. This project was implemented in high school level physics laboratory classes resulting in 22 videos which are considered as audiovisual reports and analysed under two components: theoretical and experimental. This kind of project allows the students to spontaneously use features such as music, pictures, dramatization, animations, etc, even when the didactic laboratory may not be the place where aesthetic and cultural dimensions are generally developed. This could be due to the fact that digital media are more legitimately used as cultural tools than as teaching strategies.

  13. Application results for an augmented video tracker

    Science.gov (United States)

    Pierce, Bill

    1991-08-01

    The Relay Mirror Experiment (RME) is a research program to determine the pointing accuracy and stability levels achieved when a laser beam is reflected by the RME satellite from one ground station to another. This paper reports the results of using a video tracker augmented with a quad cell signal to improve the RME ground station tracking system performance. The video tracker controls a mirror to acquire the RME satellite, and provides a robust low bandwidth tracking loop to remove line of sight (LOS) jitter. The high-passed, high-gain quad cell signal is added to the low bandwidth, low-gain video tracker signal to increase the effective tracking loop bandwidth, and significantly improves LOS disturbance rejection. The quad cell augmented video tracking system is analyzed, and the math model for the tracker is developed. A MATLAB model is then developed from this, and performance as a function of bandwidth and disturbances is given. Improvements in performance due to the addition of the video tracker and the augmentation with the quad cell are provided. Actual satellite test results are then presented and compared with the simulated results.
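
    The blending idea, a low-bandwidth video-tracker estimate summed with a high-passed, high-gain quad-cell signal, can be illustrated with the toy simulation below; the filter coefficients, sample rate, and simulated line-of-sight motion are arbitrary and are not the RME parameters.

      # Toy sketch of the augmentation idea: a slow video-tracker estimate is
      # combined with a high-passed quad-cell channel so the summed estimate
      # also follows fast line-of-sight jitter. All values are arbitrary.
      import numpy as np

      fs = 1000.0                         # sample rate, Hz
      t = np.arange(0, 2.0, 1.0 / fs)
      los = 0.2 * np.sin(2 * np.pi * 0.5 * t) + 0.02 * np.sin(2 * np.pi * 40 * t)

      video = np.zeros_like(los)          # slow tracker: first-order low-pass
      quad_hp = np.zeros_like(los)        # high-passed quad-cell channel
      alpha_lp, alpha_hp, quad_gain = 0.01, 0.95, 1.0

      for i in range(1, len(t)):
          video[i] = video[i - 1] + alpha_lp * (los[i] - video[i - 1])
          quad_hp[i] = alpha_hp * (quad_hp[i - 1] + los[i] - los[i - 1])

      combined = video + quad_gain * quad_hp
      print("residual, video only    :", np.std(los - video))
      print("residual, quad augmented:", np.std(los - combined))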

  14. Engineering task plan for purged light system

    International Nuclear Information System (INIS)

    BOGER, R.M.

    1999-01-01

    A purged, closed-circuit television system is currently used to record video inside waste tanks. The video supports inspection and assessment of the tank interiors, waste residues, and deployed hardware. The system is also used to facilitate deployment of new equipment. A new light source has been requested by Characterization Project Operations (CPO) for the video system. The current light is mounted on the camera and provides 75 watts of illumination, which is insufficient for clear video. Other light sources currently in use on the Hanford site either cannot be deployed in a 4-inch riser or do not meet the ignition source controls. The scope of this Engineering Task Plan (ETP) is to address all activities associated with the specification and procurement of a light source for use with the existing CPO video equipment. The installation design change to tank farm facilities is not within the scope of this ETP.

  15. Video Conferencing for a Virtual Seminar Room

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Fosgerau, A.; Hansen, Peter Søren K.

    2002-01-01

    A PC-based video conferencing system for a virtual seminar room is presented. The platform is enhanced with DSPs for audio and video coding and processing. A microphone array is used to facilitate audio based speaker tracking, which is used for adaptive beam-forming and automatic camera...
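
    A delay-and-sum beamformer steered toward the tracked speaker is the simplest version of the audio front end mentioned above; the sketch below uses a made-up linear array, steering angle, and synthetic signals, not the DSP implementation of the paper.

      # Sketch of delay-and-sum beamforming toward a tracked speaker direction.
      # Array geometry, steering angle, and synthetic signals are made-up.
      import numpy as np

      fs, c = 16000.0, 343.0                      # sample rate (Hz), speed of sound (m/s)
      mic_x = np.array([0.00, 0.05, 0.10, 0.15])  # 4-mic linear array positions (m)
      theta = np.deg2rad(30.0)                    # tracked speaker direction

      t = np.arange(0, 0.1, 1.0 / fs)
      source = np.sin(2 * np.pi * 440 * t)

      # Simulate what each microphone hears (plane-wave delays plus noise).
      delays = mic_x * np.sin(theta) / c
      mics = [np.interp(t - d, t, source) + 0.1 * np.random.randn(len(t))
              for d in delays]

      # Delay-and-sum: advance each channel by its steering delay and average.
      beamformed = np.mean([np.interp(t + d, t, m) for d, m in zip(delays, mics)],
                           axis=0)
      print("single-mic residual :", np.std(mics[0] - source))
      print("beamformed residual :", np.std(beamformed - source))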

  16. Morphometric analysis of rat femoral vessels under a video magnification system

    Directory of Open Access Journals (Sweden)

    Rui Sergio Monteiro de Barros

    Full Text Available Abstract The right femoral vessels of 80 rats were identified and dissected. External lengths and diameters of femoral arteries and femoral veins were measured using either a microscope or a video magnification system. Findings were correlated to animals’ weights. Mean length was 14.33 mm for both femoral arteries and femoral veins, mean diameter of arteries was 0.65 mm and diameter of veins was 0.81 mm. In our sample, rats’ body weights were only correlated with the diameter of their femoral veins.

  17. Detection Thresholds for Rotation and Translation Gains in 360° Video-Based Telepresence Systems.

    Science.gov (United States)

    Zhang, Jingxin; Langbehn, Eike; Krupke, Dennis; Katzakis, Nicholas; Steinicke, Frank

    2018-04-01

    Telepresence systems have the potential to overcome limits and distance constraints of the real-world by enabling people to remotely visit and interact with each other. However, current telepresence systems usually lack natural ways of supporting interaction and exploration of remote environments (REs). In particular, single webcams for capturing the RE provide only a limited illusion of spatial presence, and movement control of mobile platforms in today's telepresence systems are often restricted to simple interaction devices. One of the main challenges of telepresence systems is to allow users to explore a RE in an immersive, intuitive and natural way, e.g., by real walking in the user's local environment (LE), and thus controlling motions of the robot platform in the RE. However, the LE in which the user's motions are tracked usually provides a much smaller interaction space than the RE. In this context, redirected walking (RDW) is a very suitable approach to solve this problem. However, so far there is no previous work, which explored if and how RDW can be used in video-based 360° telepresence systems. In this article, we conducted two psychophysical experiments in which we have quantified how much humans can be unknowingly redirected on virtual paths in the RE, which are different from the physical paths that they actually walk in the LE. Experiment 1 introduces a discrimination task between local and remote translations, and in Experiment 2 we analyzed the discrimination between local and remote rotations. In Experiment 1 participants performed straightforward translations in the LE that were mapped to straightforward translations in the RE shown as 360° videos, which were manipulated by different gains. Then, participants had to estimate if the remotely perceived translation was faster or slower than the actual physically performed translation. Similarly, in Experiment 2 participants performed rotations in the LE that were mapped to the virtual rotations
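
    At its core, redirected walking in such a system applies gains when mapping locally tracked motion onto the remote platform, as in the small sketch below; the gain values are placeholders, not the detection thresholds measured in the experiments.

      # Tiny sketch of translation and rotation gains mapping motion tracked in
      # the local environment (LE) onto commands for the remote platform (RE).
      # The gain values are placeholders, not the reported thresholds.
      from dataclasses import dataclass

      @dataclass
      class Gains:
          translation: float = 1.2   # RE moves 20% farther than the user walked
          rotation: float = 0.9      # RE turns 10% less than the user turned

      def map_motion(local_dist_m, local_rot_deg, g=Gains()):
          """Scale locally tracked walking and turning by the applied gains."""
          return local_dist_m * g.translation, local_rot_deg * g.rotation

      if __name__ == "__main__":
          remote_dist, remote_rot = map_motion(2.0, 45.0)
          print(f"user walked 2.0 m / turned 45 deg -> "
                f"platform moves {remote_dist:.2f} m / turns {remote_rot:.1f} deg")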

  18. Augmented video viewing: transforming video consumption into an active experience

    OpenAIRE

    WIJNANTS, Maarten; Leën, Jeroen; QUAX, Peter; LAMOTTE, Wim

    2014-01-01

    Traditional video productions fail to cater to the interactivity standards that the current generation of digitally native customers have become accustomed to. This paper therefore advertises the "activation" of the video consumption process. In particular, it proposes to enhance HTML5 video playback with interactive features in order to transform video viewing into a dynamic pastime. The objective is to enable the authoring of more captivating and rewarding video experiences for end-users. T...

  19. A Method for Estimating Surveillance Video Georeferences

    Directory of Open Access Journals (Sweden)

    Aleksandar Milosavljević

    2017-07-01

    Full Text Available The integration of a surveillance camera video with a three-dimensional (3D) geographic information system (GIS) requires the georeferencing of that video. Since a video consists of separate frames, each frame must be georeferenced. To georeference a video frame, we rely on the information about the camera view at the moment that the frame was captured. A camera view in 3D space is completely determined by the camera position, orientation, and field-of-view. Since the accurate measuring of these parameters can be extremely difficult, in this paper we propose a method for their estimation based on matching video frame coordinates of certain point features with their 3D geographic locations. To obtain these coordinates, we rely on high-resolution orthophotos and digital elevation models (DEM) of the area of interest. Once an adequate number of points are matched, Levenberg–Marquardt iterative optimization is applied to find the most suitable video frame georeference, i.e., position and orientation of the camera.
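
    The pose estimation step can be sketched with SciPy's Levenberg-Marquardt solver operating on a pinhole projection model, as below. The intrinsics, the example 2D-3D correspondences, and the initial guess are illustrative assumptions; in the described method the 3D locations would come from orthophotos and the DEM.

      # Sketch of estimating a camera pose from matched image/ground points
      # with Levenberg-Marquardt. Intrinsics, correspondences, and the rough
      # initial guess are illustrative assumptions.
      import numpy as np
      from scipy.optimize import least_squares
      from scipy.spatial.transform import Rotation

      FX = FY = 1200.0          # focal length in pixels (assumed known)
      CX, CY = 960.0, 540.0     # principal point (assumed known)

      def project(params, world_pts):
          """params = [x, y, z, yaw, pitch, roll]; returns Nx2 pixel coords."""
          cam_pos, angles = params[:3], params[3:]
          R = Rotation.from_euler("zyx", angles).as_matrix()
          pts_cam = (world_pts - cam_pos) @ R.T
          u = FX * pts_cam[:, 0] / pts_cam[:, 2] + CX
          v = FY * pts_cam[:, 1] / pts_cam[:, 2] + CY
          return np.column_stack([u, v])

      def residuals(params, world_pts, image_pts):
          return (project(params, world_pts) - image_pts).ravel()

      # Matched features: 3D locations (here synthetic) and their pixel positions.
      world_pts = np.array([[-5.0, 2.0, 40.0], [6.0, -3.0, 55.0],
                            [2.0, 8.0, 35.0], [-8.0, -6.0, 60.0],
                            [10.0, 4.0, 50.0]])
      true_pose = np.array([1.0, -0.5, 2.0, 0.05, -0.03, 0.02])
      image_pts = project(true_pose, world_pts)        # synthetic observations

      fit = least_squares(residuals, x0=np.zeros(6), method="lm",
                          args=(world_pts, image_pts))
      print("estimated camera position  :", fit.x[:3].round(3))
      print("estimated orientation (rad):", fit.x[3:].round(4))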

  20. Load Scheduling in a Cloud Based Massive Video-Storage Environment

    DEFF Research Database (Denmark)

    Bayyapu, Karunakar Reddy; Fischer, Paul

    2015-01-01

    We propose an architecture for a storage system of surveillance videos. Such systems have to handle massive amounts of incoming video streams and relatively few requests for replay. In such a system load (i.e., Write requests) scheduling is essential to guarantee performance. Large-scale data-sto...