Potter, Ray; Roberts, Deborah
This guide aims to provide an introduction to Desktop Video Conferencing. You may be familiar with video conferencing, where participants typically book a designated conference room and communicate with another group in a similar room on another site via a large screen display. Desktop video conferencing (DVC), as the name suggests, allows users to video conference from the comfort of their own office, workplace or home via a desktop/laptop Personal Computer. DVC provides live audio and visua...
Lee, June; Yoon, Seo Young; Lee, Chung Hyun
The purposes of the study are to investigate the CHLS (Cyber Home Learning System) in an online video conferencing environment at the primary school level and to explore the students' responses to CHLS-VC (Cyber Home Learning System through Video Conferencing) in order to examine the possibility of using CHLS-VC as a supportive online learning system. The…
Forchhammer, Søren; Fosgerau, A.; Hansen, Peter Søren K.
A PC-based video conferencing system for a virtual seminar room is presented. The platform is enhanced with DSPs for audio and video coding and processing. A microphone array is used to facilitate audio based speaker tracking, which is used for adaptive beam-forming and automatic camera-control...
Crease, Robert P
A video conferencing link between US physicists and scientists at the CERN collider is one of a number of video conferencing applications that allow scientists in widely separated locations to collaborate. Current and future uses of video conferencing are discussed.
Khalid, Md. Saifuddin; Hossan, Md. Iqbal
The integration of video conferencing systems (VCS) has increased significantly in the classrooms and administrative practices of higher education institutions. The VCSs discussed in the existing literature can be broadly categorized as desktop systems (e.g. Scopia), WebRTC or Real-Time Communications (e.g. Google Hangout, Adobe Connect, Cisco WebEx, and appear.in), and dedicated systems (e.g. Polycom). There is a lack of empirical study on usability evaluation of such interactive systems in educational contexts. This study identifies usability errors and measures user satisfaction of a dedicated VCS… analysis of 12 user responses yields a below-average score. A post-study system test by the vendor identified cabling and setup errors. Applying SUMI followed by qualitative methods might enrich evaluation outcomes.
drs Maurice Schols
As multimedia gradually becomes more and more an integrated part of video conferencing systems, the use of multimedia integrated desktop video conferencing technology (MIDVCT) will open up new educational possibilities for synchronous learning. However, the possibilities and limitations of this…
Rogers, Tony; Irwin, Rita L.
Profiles a series of video conferences that examined the effects of European settlement on the art of Aboriginal peoples in Australia and the cultural conflicts facing contemporary Aboriginal artists. The video conferences brought together Aboriginal artists and Canadian educators. Considers the role of video-conferencing in educational research…
I. Kegel; P.S. Cesar Garcia (Pablo Santiago); A.J. Jansen (Jack); D.C.A. Bulterman (Dick); J. Kort; T. Stevens; N. Farber
Low-cost video conferencing systems have provided an existence proof for the value of video communication in a home setting. At the same time, current systems have a number of fundamental limitations that inhibit more general social interactions among multiple groups of participants. In…
Shae, Zon-Yin; Chang, Pao-Chi; Chen, Mon-Song
Packet-switching based video conferencing has emerged as one of the most important multimedia applications. Lip synchronization can be disrupted in a packet network as a result of network properties: packet delay jitter at the capture end, network delay jitter, packet loss, packets arriving out of sequence, local clock mismatch, and video playback overlaid with the graphics system. The synchronization problem becomes more demanding given the real-time and multiparty requirements of the video conferencing application. Some of the above-mentioned problems can be solved by more advanced network architectures, as ATM has promised. This paper presents some solutions to these problems that can be useful at end-station terminals in the massively deployed packet-switching networks of today. The playback scheme in the end station consists of two units: a compression-domain buffer management unit and a pixel-domain buffer management unit. The pixel-domain buffer management unit is responsible for removing the annoying frame-shearing effect in the display. The compression-domain buffer management unit is responsible for parsing the incoming packets to identify the complete data blocks in the compressed data stream which can be decoded independently. It is also responsible for concealing the effects of clock mismatch, loss of lip synchronization, packet loss, out-of-sequence arrival, and network jitter. This scheme can also be applied to the multiparty teleconferencing environment. Some of the schemes presented in this paper have been implemented in the Multiparty Multimedia Teleconferencing (MMT) system prototype at the IBM Watson Research Center.
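The end-station playout problem described here (absorbing delay jitter and reordering packets before decode) can be sketched as a simple fixed-delay playout buffer. This is an illustrative model only, not the MMT prototype's actual scheme; the packet tuple format and the 100 ms de-jitter delay are assumptions.

```python
def playout_schedule(packets, buffer_ms=100):
    """Compute playout times for packets that arrive with network jitter.

    packets: list of (seq, capture_ms, arrival_ms) tuples, possibly out of order.
    Returns a list of (seq, play_ms) in sequence order; packets that arrive
    after their playout slot are dropped, where a real system would instead
    conceal the loss.
    """
    if not packets:
        return []
    # Reorder by sequence number (handles out-of-sequence arrival).
    ordered = sorted(packets, key=lambda p: p[0])
    _, base_capture, base_arrival = ordered[0]
    schedule = []
    for seq, capture_ms, arrival_ms in ordered:
        # Target playout: capture time shifted to the receiver clock via the
        # first packet's arrival, plus a fixed de-jitter delay.
        play_ms = base_arrival + (capture_ms - base_capture) + buffer_ms
        if arrival_ms <= play_ms:
            schedule.append((seq, play_ms))
    return schedule
```

A real playout unit would adapt `buffer_ms` to the observed jitter instead of fixing it, trading latency against late-packet loss.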
The results showed that the use of both instant messaging and video conferencing in projects is moderate, and that both improve the quality of communication in virtual teams, though in different ways. Keywords: project communication, computer-mediated communication, instant messaging, video conferencing, virtual teams ...
Suzan Duygu Erişti
This study investigated Turkish and Canadian primary school students' ways of expressing their perception of interactive art education through video conferencing and of cultural interaction through pictorial representations. The qualitative research data were collected in the form of pictures and interviews on interactive art education, along with cultural components depicted in the pictures. The results obtained were analyzed and interpreted using the quantitative content analysis method. The research results revealed that the majority of the students explained their viewpoints through the effectiveness of the process. The students highlighted the importance of learning about a different culture, learning about a different art technique and making new friends in the process. The synchronization involved in interactive art education through videoconferencing was another important experience reflected on by the students. Most of the students indicated that interactive art education through videoconferencing encouraged them to learn about and understand different cultures, helped them develop cultural awareness, attracted their attention and increased their motivation.
Mat Kiah, M L; Al-Bakri, S H; Zaidan, A A; Zaidan, B B; Hussain, Muzammil
One of the applications of modern technology in telemedicine is video conferencing. An alternative to traveling to attend a conference or meeting, video conferencing is becoming increasingly popular among hospitals. By using this technology, doctors can help patients who are unable to physically visit hospitals. Video conferencing particularly benefits patients from rural areas, where good doctors are not always available. Telemedicine has proven to be a blessing to patients who have no access to the best treatment. A telemedicine system consists of customized hardware and software at two locations, namely, at the patient's and the doctor's end. In such cases, the video streams of the conferencing parties may contain highly sensitive information. Thus, real-time data security is one of the most important requirements when designing video conferencing systems. This study proposes a secure framework for video conferencing systems and a complete management solution for secure video conferencing groups. Java Media Framework Application Programming Interface classes are used to design and test the proposed secure framework. Real-time Transport Protocol over User Datagram Protocol is used to transmit the encrypted audio and video streams, and RSA and AES algorithms are used to provide the required security services. Results show that the encryption adds only an insignificant amount to the video conferencing computation time.
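The hybrid key-management pattern this abstract describes (RSA to protect a per-session key, a symmetric cipher for the media streams) can be sketched as follows. This is a toy illustration, not the paper's implementation: the textbook RSA parameters are deliberately tiny, the key is wrapped byte by byte without padding, and a SHA-256 counter-mode keystream stands in for AES.

```python
import hashlib
import os

# Toy RSA key pair (illustration only; real deployments use >=2048-bit keys
# with OAEP padding).
N = 61 * 53          # modulus
E = 17               # public exponent
D = 2753             # private exponent: E*D = 1 (mod lcm(60, 52))

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Stand-in for AES-CTR: SHA-256 in counter mode as a keystream cipher."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def start_session(media: bytes):
    """Sender: pick a random session key, wrap it with RSA, encrypt the stream."""
    session_key = os.urandom(16)
    # Wrap each key byte with the toy RSA public key (a real system wraps
    # the whole key once, with padding).
    wrapped = [pow(b, E, N) for b in session_key]
    return wrapped, keystream_xor(session_key, media)

def receive_session(wrapped, ciphertext: bytes) -> bytes:
    """Receiver: unwrap the session key with the RSA private key, decrypt."""
    session_key = bytes(pow(c, D, N) for c in wrapped)
    return keystream_xor(session_key, ciphertext)
```

The asymmetric operation happens once per session, so the per-packet cost is only the symmetric cipher, which is consistent with the paper's finding of negligible encryption overhead.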
A user-position-specific field has been developed using Global Positioning System (GPS) technology. To determine position using cellular phones, a device was developed in which a pedestrian navigation unit carries the GPS. However, GPS cannot specify a position in a subterranean environment or indoors, which is beyond the reach of the transmitted signals. In addition, the position-specification precision of GPS, that is, its resolution, is on the order of several meters, which is deemed insufficient for pedestrians. In this study, we proposed and evaluated a technique for locating a user's 3D position by setting up a marker in the navigation space detected in the image of a cellular phone. By experiment, we verified the effectiveness and accuracy of the proposed method. Additionally, we improved positional precision by measuring distances using numerous markers.
Weiner, M; Schadow, G; Lindbergh, D; Warvel, J; Abernathy, G; Dexter, P; McDonald, C J
Although video-based teleconferencing is becoming more widespread in the medical profession, especially for scheduled consultations, applications for rapid assessment of acute medical problems are rare. Use of such a video system in a nursing facility may be especially beneficial, because physicians are often not immediately available to evaluate patients. We have assembled and tested a portable, wireless conferencing system to prepare for a randomized trial of the system's influence on resource utilization and satisfaction. The system includes a rolling cart with video conferencing hardware and software, a remotely controllable digital camera, light, wireless network, and battery. A semi-automated paging system informs physicians of patients' study status and indications for conferencing. Data transmission occurs wirelessly in the nursing home and then through Internet cables to the physician's home. This provides sufficient bandwidth to support quality motion images. IPsec secures communications. Despite human and technical challenges, this system is affordable and functional.
Maher, Damian; Prescott, Anne
Teachers in rural and remote schools face many challenges including those relating to distance, isolation and lack of professional development opportunities. This article examines a project where mathematics and science teachers were provided with professional development opportunities via video conferencing to help them use syllabus documents to…
Beckwith, E. George; Cunniff, Daniel T.
Online course enrollment has increased dramatically over the past few years. The authors cite the reasons for this rapid growth and the opportunities open for enhancing teaching/learning techniques such as video conferencing and hybrid class combinations. The authors outlined an example of an accelerated learning, eight-class session course…
Five years ago in the February, 2007, issue of LLT, I wrote about developments in digital video of potential interest to language teachers. Since then, there have been major changes in options for video capture, editing, and delivery. One of the most significant has been the rise in popularity of video-based storytelling, enabled largely by…
The purpose of this study was to understand the influences of interactive video-conferencing technology on the learning experiences of RN students studying for baccalaureate degrees via interactive distance education. Data collection in this phenomenological study used open-ended questionnaires, interviews, and focus groups. Preliminary thematic analysis of the questionnaires shaped the open-ended questions for the interviews, and focus groups with learners confirmed the findings. Students identified the themes of connecting with others, organization, negative influences, and personal factors as influential to their learning. They also identified useful teaching strategies to facilitate learning within this distance nursing education environment. University nursing programs using video-conferencing for distance education can foster learning by using teaching strategies that fit the technology, increase student interaction, and engage the students.
Suzan DUYGU ERIŞTI
This study investigated Turkish and Canadian primary school students' ways of expressing their perception of cultural understanding through video conferencing and of cultural interaction through video conferencing. The qualitative research data were collected in the form of interviews. The results obtained were analyzed and interpreted using the quantitative content analysis method. The research results revealed that the majority of the students explained their viewpoints through the effectiveness of the process. The students highlighted the importance of learning about a different culture, using technology effectively and making new friends in the process. Most of the students indicated that videoconferencing encouraged them to learn about and understand different cultures, helped them develop cultural awareness, attracted their attention and increased their motivation.
Ørngreen, Rikke; Mouritzen, Per
This paper presents experiences from teaching video conferencing for learning and collaboration, and discusses the challenges and potentials of applying a collaborative and problem-based learning (PBL) pedagogy. The research is an action research study, and we as researchers, educational planners… conferences. We studied 3 subsequent years of a master program module on video conferencing and the changes it has undergone. The participants work in groups, and each group has the task of designing a short one-hour (45 min) educational design of their own choice. The students have to try out and evaluate… The study shows that the students experiment with various pedagogical situations, and that during the process of design, teaching, and reflection they acquire experiences at both a concrete, specific level and a general, abstract level. The desktop video conference system creates challenges, with technical issues…
For those of us who are teaching at a university, coming to CERN for a week means that someone else has to be found to teach our course. Recently, thanks to an initiative of CERN's Education Group, which in collaboration with the IT department has built a Remote Video Conference (VC) room for outreach communication with schools, I have been able to test teaching my class back home whilst at the same time being at CERN! On Monday October 5, at 16:00 (10:00 at Indiana University), I attempted my first remote class. Of course, I could not do this alone. Back in the main auditorium in the Physics Department, Hal Evans and Fred Luehring had rolled in a portable teleconference center, set up lecture demos and started a class computer. At CERN, Knut Bjorkli had the teaching center teleconference screen active, and had also connected to my class website when I arrived. The first day startup was a bit rocky: there were firewall problems (?) that required that we connect to the Indiana VC unit rather than the other way a...
S. Gunkel (Simon); A.J. Jansen (Jack); I. Kegel; D.C.A. Bulterman (Dick); P.S. Cesar Garcia (Pablo Santiago)
With the growing popularity of video communication systems, more people are using group video chat, rather than only one-to-one video calls. In such multi-party sessions, remote participants compete for the available screen space and bandwidth. A common solution is showing the current
Osawa, Noritaka; Asai, Kikuo
A multipoint, multimedia conferencing system called FocusShare is described that uses IPv6/IPv4 multicasting for real-time collaboration, enabling video, audio, and group awareness information to be shared. Multiple telepointers provide group awareness information and make it easy to share attention and intention. In addition to pointing with the…
Karal, Hasan; Cebi, Ayca; Turgut, Yigit Emrah
The objective of this study is to determine how students who are taking synchronous distance education classes via video conferencing perceive distance learning courses. A qualitative research approach was used for the study. Scale sampling was also used. The study's subjects consisted of a total of nine students comprised of 2nd and 4th grade…
In this article, we present an alternative framework for conceptualising video-conferencing uses in initial teacher education and in Higher Education (HE) more generally. This alternative framework takes into account the existing models in the field, but, based on a set of interviews conducted with teacher trainees and a wider analysis of the related literature, we suggest that there is a need to add to existing models the notions of 'mimicking' (copying practice) and improvisation (unplanned and spontaneous personal learning moments). These two notions are considered to be vital, as they remain valid throughout teachers' careers and constitute key affordances of video-conferencing uses in HE. In particular, we argue that improvisational processes can be considered key to developing professional practice and lifelong learning, and that video-conferencing uses in initial teacher education can contribute to an understanding of training and learning processes. Current conceptualisations of video conferencing as suggested by Coyle (2004) and Marsh et al. (2009) remain valid, but are limited in scope in that they focus predominantly on pragmatic and instrumental teacher-training issues. Our article suggests that the theoretical conceptualisations of video conferencing should be expanded to include elements of mimicking and, ultimately, improvisation. This allows us to consider not just etic aspects of practice but equally emic practices and related personal professional development. We locate these arguments more widely in a sociocultural-theory framework, as it enables us to describe interactions in dialectical rather than dichotomous terms (Lantolf & Poehner, 2008).
Dowling, Anita; Kennedy, Jonathon M.; O'Hare, Neil J.; Mulvihille, Niall; Murphy, Joseph A.; Malone, James F.
Cardiac patients may undergo a range of diagnostic examinations including angiography, echocardiography, nuclear medicine, x-ray, ECG and blood pressure measurement. Cine angiograms are reviewed at cardiac case conferences. Other data types are not typically exhibited due to the incompatibility of display devices. The aim of this study was to evaluate a workstation developed for multimodality reporting in cardiac case conferencing. A PC-based system was developed as part of the EU project AMIE, enabling all patient data to be viewed and manipulated on a large screen display using a high resolution video projector. The digital data were acquired using a variety of methods compatible with the systems involved. A technical evaluation of the projected imagery was performed by grading phantom test objects. A limited clinical evaluation was also performed whereby a panel of 10 consultant radiologists and cardiologists reported on angiography and x-ray images from 50 patients. Several months later the original data sets were reported again and the results compared. Results of the clinical and technical evaluations indicate that the system is satisfactory for the primary diagnosis of all data types with the exception of x-ray. The projected x-ray imagery is satisfactory for reference and teaching purposes.
This is the first in Athabasca University’s series of evaluation reports to feature online Webcam and videoconferencing products. While Webcam software generates a simple visual presentation from a live online camera, videoconferencing products contain a wider range of interactive features serving multi-point interactions between participants. In many online situations, the addition of video images to a live presentation can add substantially to its educational effectiveness. Ten products/online services are reviewed, supporting a wide range of video-based activities.
Background: Teamwork is important for patient care and outcome in emergencies. In rural areas, efficient communication between rural hospitals and regional trauma centers optimises decisions and treatment of trauma patients. Little is known about the potentials and effects of virtual team-to-team cooperation between rural and regional trauma teams. Methods: We adapted a video conferencing (VC) system to the work process between the multidisciplinary teams responsible for trauma and medical emergencies at one rural and one regional (university) hospital. We studied how the teams cooperated during simulated critical scenarios, and compared VC with standard telephone communication. We used qualitative observations and interviews to evaluate the results. Results: The team members found VC to be a useful tool during emergencies and for building "virtual emergency teams" across distant hospitals. Visual communication combined with visual patient information is superior to information gained during ordinary telephone calls, but VC may also cause interruptions in the local teamwork. Conclusion: VC can improve clinical cooperation and decision processes in virtual teams during critical patient care. Such team interaction requires thoughtful organisation, training, and new rules for communication.
Conventional video conferencing (e.g. Skype with a webcam) suffers from some fundamental flaws that keep it from attaining a true sense of immersivity and copresence and thereby emulating a real face-to-face conversation. Not least, it does not allow its users to look directly into each other's eyes. The webcam is usually set up next to the screen or, at best, integrated into the bezel. This forces the user to alternate his gaze between looking at the screen to observe his remote conferen...
Hauervig-Jørgensen, Charlotte; Jeong, Cheol-Ho; Toftum, Jørn
Today, face-to-face meetings are frequently replaced by video conferences in order to reduce the costs and carbon footprint related to travel and to increase company efficiency. Yet, complaints occur about the difficulty of understanding the speech of the participants in both rooms of the video conference. The aim of this study is to find the main causes of difficulties in speech communication. Correlation studies were conducted between subjective perceptions, gathered through questionnaires, and objective acoustic and indoor climate parameters related to video conferencing. Based on four single-room and three combined-room measurements, it was found that the traditional measure of speech, such as the speech transmission index, was not correlated with the subjective classifications. Thus, a correlation analysis was conducted as an attempt to find the hidden factors behind the subjective perceptions…
Hans L. Cycon
Mobile phones and related networked gadgets are omnipresent among our students, advertising themselves as the platform for mobile, pervasive learning. These devices are rapidly becoming more open and more capable, and will soon be able to serve as a major platform for rich, open multimedia applications and communication. In this report we introduce video conferencing software which seamlessly integrates mobile and stationary users into fully distributed multi-party conversations. Following the paradigm of flexible, user-initiated group communication, we present an integrated solution which scales well for medium-size conferences and accounts for the heterogeneous nature of mobile and stationary participants. This approach allows for the spontaneous, location-independent establishment of video dialogs, which is of particular importance in interactive learning scenarios. The work is based on a highly optimized realization of an H.264 codec.
Nakatsuhara, Fumiyo; Inoue, Chihiro; Berry, Vivien; Galaczi, Evelina
This research explores how Internet-based video-conferencing technology can be used to deliver and conduct a speaking test, and what similarities and differences can be discerned between the standard and computer-mediated face-to-face modes. The context of the study is a high-stakes speaking test, and the motivation for the research is the need…
Lakhal, Sawsen; Khechine, Hager; Pascot, Daniel
The aim of this study was to examine psychological factors which could influence acceptance and use of the desktop video conferencing technology by undergraduate business students. Based on the Unified Theory of Acceptance and Use of Technology, this study tested a theoretical model encompassing seven variables: behavioural intentions to use…
Recently, in distance learning, video conferencing has become one of the expected course material delivery systems for creating a virtual class, such that lecturers and students separated by long distances can engage in learning activities much as in a face-to-face system. For this reason, service availability and quality should be guaranteed. In this research, we analyze the QoS of video conferencing between a main campus and a sub campus as an implementation of a distance learning system at laboratory scale. Our experimental results show that a WAN channel capacity (bandwidth) of 128 kbps between the main campus and the sub campus generates video transmission and reception throughput of 281 kbps and 24 kbps, respectively. Meanwhile, audio transmission and reception throughput is 64 kbps and 26 kbps, with total packet loss for video and audio transmission of 84.3% and 29.2%, respectively. In this setting, the total jitter for video and audio transmission is 125 ms and 21 ms, respectively. In this case, there is no packet loss for traffic transmitted and received, and the jitter is not more than 5 ms. We also implemented QoS using the Trust CoS and Trust DSCP models, improving the quality of service in terms of jitter by up to 12.3% and 22.41%, respectively. Keywords: quality of service, throughput, delay, jitter, packet loss, Trust CoS, Trust DSCP
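The QoS quantities this abstract reports (throughput, packet loss, jitter) can be computed from simple send/receive packet logs. A minimal sketch, assuming per-packet timestamps are available; the log format is an assumption, and the jitter here is the unsmoothed mean variation in transit time rather than the smoothed RTP estimator.

```python
def qos_metrics(sent, received, duration_s):
    """Compute throughput, packet loss and mean jitter from packet logs.

    sent:     dict seq -> (send_time_s, size_bytes) for transmitted packets
    received: dict seq -> recv_time_s for the packets that arrived
    Returns (throughput_kbps, loss_percent, jitter_ms).
    """
    delivered = sorted(seq for seq in sent if seq in received)
    # Throughput: delivered payload bits over the measurement window.
    bits = sum(sent[seq][1] * 8 for seq in delivered)
    throughput_kbps = bits / duration_s / 1000
    # Loss: fraction of transmitted packets that never arrived.
    loss_percent = 100.0 * (len(sent) - len(delivered)) / len(sent)
    # Jitter: variation in one-way transit time between consecutive
    # delivered packets (absolute clock offset cancels out).
    transits = [received[seq] - sent[seq][0] for seq in delivered]
    diffs = [abs(b - a) for a, b in zip(transits, transits[1:])]
    jitter_ms = 1000 * sum(diffs) / len(diffs) if diffs else 0.0
    return throughput_kbps, loss_percent, jitter_ms
```

Because jitter is computed from differences of transit times, sender and receiver clocks do not need to be synchronized, only stable, which is why jitter is a practical metric for cross-campus measurements like those above.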
Describes an integrated computer-based conferencing and mail system called ICMS (Integrated Conferencing and Mail System) that was developed to encourage students to participate in class discussions more actively. The menu-driven user interface is explained, and ICMS's role in promoting self-assessment and critical thinking is discussed. (eight…
…However, it is the nature of the feedback given to the teacher, and how it is delivered, using effective conferencing strategies and techniques, that will actually involve the teacher in understanding…
Hofflander, Malin; Nilsson, Lina; Eriksén, Sara; Borg, Christel
This article describes healthcare managers' experiences of leading the implementation of video conferencing in discharge planning sessions as a new tool in everyday practice. Data collection took place through individual interviews, which were analyzed using qualitative content analysis with an inductive approach. The results indicate that managers identified two distinct leadership perspectives when they reflected on the implementation process. They described a desired way of leading the implementation: communicating about the upcoming change, understanding and securing support for decisions, and ensuring that sufficient time is available throughout the change process. They also described, however, how they perceived the implementation process as it actually took place, highlighting the lack of planning and preparation, the need for support and to be supportive, and the courage required to adopt and lead the implementation. It is suggested that managers at all levels require more information and training in how to encourage staff to become involved in designing their everyday work and in the implementation process. Managers, too, need ongoing organizational support for good leadership throughout the implementation of video conferencing in discharge planning sessions, including planning, start-up, implementation, and evaluation.
Çakiroglu, Ünal; Kokoç, Mehmet; Kol, Elvan; Turan, Ebru
The purpose of this qualitative study was to understand activities and behaviors of learners and instructor in an online programming course. Adobe Connect web conferencing system was used as a delivery platform. A total of fifty-six sophomore students attending a computer education and instructional technology program (online) participated in this…
Cox, James R
This report describes the incorporation of digital learning elements in organic chemistry and biochemistry courses. The first example is the use of pen-based technology and a large-format PowerPoint slide to construct a map that integrates various metabolic pathways and control points. Students can use this map to visualize the integrated nature of metabolism and how various hormones impact metabolic regulation. The second example is the embedding of health-related YouTube videos directly into PowerPoint presentations. These videos become a part of the course notes and can be viewed within PowerPoint as long as students are online. The third example is the use of a webcam to show physical models during online sessions using web-conferencing software. Various molecular conformations can be shown through the webcam, and snapshots of important conformations can be incorporated into the notes for further discussion and annotation. Each of the digital learning elements discussed in this report is an attempt to use technology to improve the quality of educational resources available outside of the classroom to foster student engagement with ideas and concepts. Biochemistry and Molecular Biology Education Vol. 39, No. 1, pp. 4-9, 2011. Copyright © 2011 Wiley Periodicals, Inc.
Li, Chenxi; Wu, Ligao; Li, Chen; Tang, Jinlan
This work-in-progress doctoral research project aims to identify meaning negotiation patterns in synchronous audio and video Computer-Mediated Communication (CMC) environments based on the model of CMC text chat proposed by Smith (2003). The study was conducted in the Institute of Online Education at Beijing Foreign Studies University. Four dyads…
Klock, Clóvis; Gomes, Regina de Paula Xavier
Virtual pathology is a very important tool that can be used in several ways, including interconsultations with specialists in many areas and for frozen sections. In this work we considered the use of Windows Live Messenger and Skype for image transmission. The conference was conducted over broadband Internet using a Nikon E 200 microscope and a Samsung SCC-131 digital colour camera. Internet transmission speed varied from 400 Kb to 2.0 Mb. Both programs allow voice transmission concomitant with the image, so communication between the pathologists involved was possible using microphones and speakers. A live image could be seen by the receiving pathologist, who was able to ask for the field to be moved or the magnification to be increased or decreased; no phone call or typing was required. MSN and Skype can be used in many ways and with different operating systems installed on the computer. The capture system is simple and relatively cheap, which demonstrates its viability for use in developing countries and in cities where no pathologists are available. With the improvement of software and of digital image quality, together with the use of high-speed broadband Internet, this may become a new modality in surgical pathology.
This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between H.323 LAN (local-area network) and H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features, including 2D video object scaling, repositioning, rotation, and dynamic bit allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for extracting and tracking foreground video objects in real time from video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
Salling, Kim Bang; Barfod, Michael Bruhn
This presentation introduces a new approach integrating risk simulation and decision conferencing within transport project appraisal (the UNITE-DSS model). The modelling approach is divided into modules: point estimates (cost-benefit analysis), stochastic interval results (quantitative risk analysis and Monte Carlo simulation), and finally stakeholder involvement (decision conferencing), as depicted in the figure.
Stanford, Roger John
Web-conferencing software was chosen for course delivery to provide flexible options for students at a two-year technical college. Students used technology to access a live, synchronous microeconomics course over the internet instead of a traditional face-to-face lecture. This investigation studied the impact of implementing web-conferencing…
Schlecht, Leslie E.; Kutler, Paul (Technical Monitor)
This is a proposal for a general-use system, based on the SGI IRIS workstation platform, for recording computer animation to videotape. In addition, this system would provide features for simple editing and enhancement. Described here are a list of requirements for the system and a proposed configuration including the SGI VideoLab Integrator, VideoMedia VLAN animation controller, and the Pioneer rewritable laserdisc recorder.
Terrazas, Enrique; Hamill, Timothy R; Wang, Ye; Channing Rodgers, R P
The Department of Laboratory Medicine at the University of California, San Francisco (UCSF) has been split into widely separated facilities, leading to much time being spent traveling between facilities for meetings. We installed an open-source AccessGrid multi-media-conferencing system using (largely) consumer-grade equipment, connecting 6 sites at 5 separate facilities. The system was accepted rapidly and enthusiastically, and was inexpensive compared to alternative approaches. Security was addressed by aspects of the AG software and by local network administrative practices. The chief obstacles to deployment arose from security restrictions imposed by multiple independent network administration regimes, requiring a drastically reduced list of network ports employed by AG components.
Recent major political uprisings indicate the extent to which social learning Web 2.0 technologies can influence change in informal learning settings. Recognition and discussion of the potential of that influence in formal learning settings have only just begun. This article describes a study of an international distance learning project in 2004, using a variety of Web 2.0 technologies, including video-based web conferencing, that sought to initiate and respond to this urgent need for dialogue in the research. Self-selected participants took part in a 5-week English as a foreign language (EFL) program, a joint NATO-sponsored Canadian and Romanian Ministry of Defense-supported initiative. Clear evidence of linguistic knowledge construction and of important changes to participants' learner identities indicates the power of these technologies to support the kind of learning that can lead to the development of global citizens and the skills they will increasingly require in the 21st century.
Gustafson, Peter C.
For many years, photogrammetry has been in use at TRW. During that time, needs have arisen for highly repetitive measurements. In an effort to satisfy these needs in a timely manner, a specialized Robotic Video Photogrammetry System (RVPS) was developed by TRW in conjunction with outside vendors. The primary application for the RVPS has strict accuracy requirements that demand significantly more images than the previously used film-based system. The time involved in taking these images was prohibitive, but by automating the data acquisition process, video techniques became a practical alternative to the more traditional film-based approach. In fact, by applying video techniques, measurement productivity was enhanced significantly. The analysis involved was also brought 'on-board' the RVPS, allowing shop-floor acquisition and delivery of results. The RVPS has also been applied to other tasks and was found to make a critical improvement in productivity, allowing many more tests to be run in a shorter time cycle. This paper will discuss the creation of the system and TRW's experiences with the RVPS. Highlighted will be the lessons learned during these efforts and significant attributes of the process not common to the standard application of photogrammetry for industrial measurement. As productivity and ease of use continue to drive the application of photogrammetry in today's manufacturing climate, TRW expects several systems, with technological improvements applied, to be in use in the near future.
This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV
Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the current millennium. Video analytics is intended to solve the problem of the incapability of exploiting video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects. The aims of this book are to highlight the operational attempts of video analytics, to identify possi
Pasch, H. L.
An overview of video coding is presented. The aim is not to give a technical summary of possible coding techniques, but to address subjects related to video compression in general and to the transmission of compressed video in more detail. Bit rate reduction is in general possible by removing redundant information; removing information the eye does not use anyway; and reducing the quality of the video. The codecs used for reducing the bit rate can be divided into two groups: Constant Bit rate Codecs (CBCs), which keep the bit rate constant but vary the video quality, and Variable Bit rate Codecs (VBCs), which keep the video quality constant by varying the bit rate. VBCs can in general reach a higher video quality than CBCs using less bandwidth, but need a transmission system that allows the bandwidth of a connection to fluctuate in time. The current and the next generation of the PSTN do not allow this; ATM might. Several factors influence the quality of video: the bit error rate of the transmission channel, slip rate, packet loss rate/packet insertion rate, end-to-end delay, phase shift between voice and video, and bit rate. Based on the bit rate of the coded video, the following classification of coded video can be made: High Definition Television (HDTV); Broadcast Quality Television (BQTV); video conferencing; and video telephony. The properties of these classes are given. The video conferencing and video telephony equipment available now and in the next few years can be divided into three categories: conforming to the 1984 CCITT standard for video conferencing; conforming to the 1988 CCITT standard; and conforming to no standard.
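The CBC/VBC distinction above can be illustrated with a toy allocation model. This sketch is not drawn from the paper; it simply assumes a linear quality = bits / complexity relation to show how one family fixes the bit rate while the other fixes the quality.

```python
# Toy illustration (not a real codec) of the two codec families:
# a Constant Bit rate Codec fixes bits per frame, so quality tracks
# scene complexity; a Variable Bit rate Codec fixes quality, so the
# bit rate tracks complexity instead.

def cbr_allocate(complexities, bits_per_frame):
    """Fixed bits per frame; 'quality' = bits / complexity (varies)."""
    bits = [bits_per_frame] * len(complexities)
    quality = [bits_per_frame / c for c in complexities]
    return bits, quality

def vbr_allocate(complexities, target_quality):
    """Fixed quality; bits per frame = quality * complexity (varies)."""
    bits = [target_quality * c for c in complexities]
    quality = [target_quality] * len(complexities)
    return bits, quality

if __name__ == "__main__":
    frames = [1.0, 4.0, 2.0, 1.0]   # hypothetical per-frame scene complexity
    print(cbr_allocate(frames, bits_per_frame=1000))
    print(vbr_allocate(frames, target_quality=500))
```

In the CBR run the bit list is flat and the quality list dips on the complex frames; in the VBR run the quality list is flat and the bit list swells, which is exactly why VBCs need a network (such as ATM) whose connection bandwidth may fluctuate.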
To allow people in different places to participate in the same conference and to speak and discuss freely, an interactive remote video conferencing system is designed and realized based on multi-Agent collaboration. FEC (forward error correction) and tree-based P2P technology are first used to build a live conference structure for transferring audio and video data; a branch conference node can then speak and discuss by applying to become the interactive focus; and the introduction of multi-Agent collaboration technology improves the system's robustness. Experiments showed that, under normal network conditions, the system can support 350 branch conference nodes simultaneously for live broadcasting, with smooth audio and video quality. It can therefore support large-scale remote video conferences.
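The abstract names FEC but not the scheme used. A minimal sketch of the simplest FEC flavor, single-parity XOR over a group of equal-length packets, shows the core idea: the receiver can rebuild any one lost packet from the survivors plus a parity packet. Real conferencing systems often use stronger codes (Reed-Solomon, fountain codes) instead.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    """Parity packet = XOR of all packets in the group (equal lengths assumed)."""
    return reduce(xor_bytes, packets)

def recover(packets, parity, lost_index):
    """Rebuild the single packet at lost_index from the survivors + parity."""
    survivors = [p for i, p in enumerate(packets) if i != lost_index]
    return reduce(xor_bytes, survivors, parity)

group = [b"aaaa", b"bbbb", b"cccc"]
parity = make_parity(group)
assert recover(group, parity, 1) == b"bbbb"  # any single loss is recoverable
```

The cost is one extra packet per group; the limitation is that only one loss per group is recoverable, which is why the group size must be tuned to the expected loss rate.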
This paper considers how students deal with malfunctions that occur during the use of web conferencing systems in learning arrangements. In a survey among participants in online courses that make use of a web-conferencing system (N = 129), the relationship between a preference for internal or external locus of control and the perception of…
Oz, Halit Hami
New medical schools have been opened in the eastern and southeastern regions of the country, and they are in great need of basic medical science teachers. However, due to security concerns over the past two decades, teachers from the established universities have been unwilling to travel to these medical schools for lectures. The objective of this study was to develop a synchronous classroom conferencing system to teach basic science courses between two general-purpose technology-enhanced classrooms at two different universities--Istanbul University (IU), Istanbul, and Harran University (HU), Urfa--located 1,500 miles apart in Turkey. I videostreamed the instructor, content from a document camera, and PowerPoint presentations at IU, as well as the students at both places, IU and HU. In addition, I synchronously broadcast two whiteboards by attaching two mimio devices to the two blackboards in the IU classroom to capture and convert everything written or drawn on them for broadcasting over the intranet. This technique is called "boardcasting," which allows users to stream ink and audio together live over the Internet or intranet. A total of 260 students at IU and 150 students at HU were involved. Off-campus HU students also have asynchronous access to the stored lecture video materials at any time. Midterm and final examinations were administered simultaneously using the same questions at both sites under the observation of the teaching faculty using the very same system. This system permitted real-time interaction between the students in the class at IU, the remote-campus students at HU, and the instructor. The instructors at IU were able to maintain a significant level of spontaneity in using their multimedia materials and electronic whiteboards. The mean midterm and final exam scores of students at both universities were similar. The system developed in this study can be used by the medical faculty at the main teaching hospitals to deliver their lectures in
Greenwoll, D.A.; Matter, J.C. (Sandia National Labs., Albuquerque, NM (United States)); Ebel, P.E. (BE, Inc., Barnwell, SC (United States))
The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs.
We have been experimenting with web based electronic conferencing (CMC) at the Educational Science Department of Utrecht University for a period of nearly 10 years now. Obstacles such as insufficient participation, the low quality of messages and the integration of CMC in a course have been
Offering ready access to the security industry's cutting-edge digital future, Intelligent Network Video provides the first complete reference for all those involved with developing, implementing, and maintaining the latest surveillance systems. Pioneering expert Fredrik Nilsson explains how IP-based video surveillance systems provide better image quality and a more scalable and flexible system at lower cost. A complete and practical reference for all those in the field, this volume: describes all components relevant to modern IP video surveillance systems; provides in-depth information about ima
Hsu, Charles; Szu, Harold
An intelligent video surveillance system is able to detect and identify abnormal and alarming situations by analyzing object movement. The Smart Sensing Surveillance Video (S3V) System is proposed to minimize video processing and transmission, thus allowing a fixed number of cameras to be connected to the system and making it suitable for remote battlefield, tactical, and civilian applications including border surveillance, special force operations, airfield protection, and perimeter and building protection. The S3V System would be more effective if equipped with visual understanding capabilities to detect, analyze, and recognize objects, track motions, and predict intentions. In addition, alarm detection is performed on the basis of parameters of the moving objects and their trajectories, using semantic reasoning and ontologies. The S3V System capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments. It would be directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as in applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.
Alsmirat, Mohammad Abdullah
Video streaming has recently grown dramatically in popularity over the Internet, cable TV, and wireless networks. Because of the resource-demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor in increasing the scalability and decreasing the cost of the system. Resources to…
A video surveillance system senses and tracks threatening events in a real-time environment. It protects against security threats with the help of visual devices that gather video information, such as CCTV and IP (Internet Protocol) cameras. Video surveillance has become key to addressing problems in public security. These systems are mostly deployed on IP-based networks, so all the security threats that exist in IP-based applications may also threaten video surveillance applications, potentially resulting in cybercrime, illegal video access, mishandling of videos, and so on. Hence, in this paper an intelligent model is proposed to secure the video surveillance system, ensuring safety and providing secured access to video.
Ge, Jing; Zhang, Guoping; Yang, Zongkai
Multimedia technology and network protocols are the basic technologies of a video surveillance system. A network remote video surveillance system based on the MPEG-4 video coding standard is designed and implemented in this paper. The advantages of MPEG-4 in the surveillance field are analyzed in detail, and the real-time protocol and real-time control protocol (RTP/RTCP) are chosen as the network transmission protocols. The whole system includes a video coding control module, a playback module, a network transmission module, and a network receiver module. Schemes for the management, control, and storage of video data are discussed. DirectShow technology is used to play back video data. The transmission scheme for digital video processing in networks, including RTP packaging of the MPEG-4 video stream, is discussed, as are the receiver scheme for video data and the buffering mechanism. Most of the functions are achieved in software, except the video coding control module, which is achieved in hardware. The experimental results show that the system provides good video quality and real-time performance, and it can be applied in a wide range of fields.
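RTP packaging, as mentioned above, starts from the 12-byte fixed header of RFC 3550. A minimal sketch of building such a packet follows; payload type 96 is an assumed dynamic mapping (the paper does not state one), and the MPEG-4-specific fragmentation rules of RFC 3016 are omitted.

```python
import struct

RTP_VERSION = 2

def rtp_packet(payload: bytes, seq: int, timestamp: int, ssrc: int,
               payload_type: int = 96, marker: bool = False) -> bytes:
    """Build a minimal RTP packet: the 12-byte fixed header of RFC 3550
    followed by the payload. payload_type 96 is a commonly used dynamic
    PT (an assumption here, not taken from the paper)."""
    byte0 = RTP_VERSION << 6                 # V=2, P=0, X=0, CC=0
    byte1 = (int(marker) << 7) | payload_type
    header = struct.pack(">BBHII", byte0, byte1,
                         seq & 0xFFFF, timestamp & 0xFFFFFFFF,
                         ssrc & 0xFFFFFFFF)
    return header + payload

pkt = rtp_packet(b"\x00\x00\x01\xb6", seq=1, timestamp=90000, ssrc=0x1234)
assert len(pkt) == 12 + 4
assert pkt[0] == 0x80   # version 2, no padding/extension/CSRC
```

The sequence number lets the receiver-side buffering mechanism reorder and detect lost packets, and the timestamp (typically on a 90 kHz clock for video) drives playout scheduling.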
Olha D. Slovinska
This article analyzes theoretical and practical aspects of web conferencing, deals with the main tasks of integrating a conference call system, highlights the main categories and classes of conferencing, describes the organization of web conference infrastructure with the help of software tools and their capabilities, and analyzes international experience of using open conference systems in leading US and European universities.
From the streets of London to subway stations in New York City, hundreds of thousands of surveillance cameras ubiquitously collect hundreds of thousands of videos, often running 24/7. How can such vast volumes of video data be stored, analyzed, indexed, and searched? How can advanced video analysis systems autonomously recognize people and detect targeted activities in real time? Collating and presenting the latest information, Intelligent Video Surveillance: Systems and Technology explores these issues, from fundamental principles to algorithmic design and system implementation. An Integrated
... From the Federal Register Online via the Government Publishing Office FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 76 Open Video Systems AGENCY: Federal Communications Commission. ACTION: Final rule... Open Video Systems. DATES: The amendments to 47 CFR 76.1505(d) and 76.1506(d), (l)(3), and (m)(2...
Herder, P. M.; Subrahmanian, E.; Talukdar, S.; Turk, A. L.; Westerberg, A. W.
Explains the distance education approach applied to the 'Engineering Design Problem Formulation' course taught simultaneously at the Delft University of Technology (the Netherlands) and at Carnegie Mellon University (CMU, Pittsburgh, USA). Uses videotaped lessons, video conferencing, electronic mail, and the web-accessible document management system LIRE in the…
Robert C. Lorenz
Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual towards playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after the training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video-game-related reward task. At pretest, both groups showed strongest activation in the ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the ventral striatum in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.
Lorenz, Robert C; Gleich, Tobias; Gallinat, Jürgen; Kühn, Simone
Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the VS in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.
Byrnes, Patrick D.; Higgins, William E.
Image-guided bronchoscopy is a critical component in the treatment of lung cancer and other pulmonary disorders. During bronchoscopy, a high-resolution endobronchial video stream facilitates guidance through the lungs and allows for visual inspection of a patient's airway mucosal surfaces. Despite the detailed information it contains, little effort has been made to incorporate recorded video into the clinical workflow. Follow-up procedures often required in cancer assessment or asthma treatment could significantly benefit from effectively parsed and summarized video. Tracking diagnostic regions of interest (ROIs) could potentially better equip physicians to detect early airway-wall cancer or improve asthma treatments, such as bronchial thermoplasty. To address this need, we have developed a system for the postoperative analysis of recorded endobronchial video. The system first parses an input video stream into endoscopic shots, derives motion information, and selects salient representative key frames. Next, a semi-automatic method for CT-video registration creates data linkages between a CT-derived airway-tree model and the input video. These data linkages then enable the construction of a CT-video chest model comprised of a bronchoscopy path history (BPH) - defining all airway locations visited during a procedure - and texture-mapping information for rendering registered video frames onto the airway-tree model. A suite of analysis tools is included to visualize and manipulate the extracted data. Video browsing and retrieval is facilitated through a video table of contents (TOC) and a search query interface. The system provides a variety of operational modes and additional functionality, including the ability to define regions of interest. We demonstrate the potential of our system using two human case study examples.
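Parsing a video stream into shots and selecting representative key frames, as the system above does, can be approximated by greedy frame differencing. The sketch below is an assumption standing in for the authors' unspecified saliency-based selection, using tiny grayscale frames represented as flat pixel lists.

```python
def frame_diff(a, b):
    """Mean absolute pixel difference between two equal-sized grayscale frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def select_key_frames(frames, threshold):
    """Greedy key-frame selection: keep a frame when it differs enough from
    the most recently kept key frame. A crude stand-in for saliency-based
    selection; real systems also use motion information."""
    keys = [0]
    for i in range(1, len(frames)):
        if frame_diff(frames[i], frames[keys[-1]]) > threshold:
            keys.append(i)
    return keys

# Four tiny 4-"pixel" frames: frames 0/1 are near-identical, frame 2 is a cut.
video = [[10, 10, 10, 10], [11, 10, 10, 10],
         [200, 200, 200, 200], [201, 200, 200, 200]]
assert select_key_frames(video, threshold=50) == [0, 2]
```

A sharp jump in the difference signal also marks a shot boundary, which is how such a pass can double as the shot parser feeding a video table of contents.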
Zhang, Zhengbing; Deng, Huiping; Xia, Zhenhua
Video systems have been widely used in many fields such as conferencing, public security, military affairs, and medical treatment. With the rapid development of FPGAs, SOPC has received great attention in the area of image and video processing in recent years. A network video transmission system based on SOPC is proposed in this paper for video acquisition, video encoding, and network transmission. The hardware platform used to build the system is Altera's DE2 SOPC board, which includes an EP2C35F672C6 FPGA chip, an Ethernet controller, and a video I/O interface. An IP core, the Nios II embedded processor, is used as the CPU of the system. In addition, a hardware module for format conversion of video data, and another module realizing Motion-JPEG, have been designed in Verilog HDL. These two modules are attached to the Nios II processor as peripherals through the Avalon bus. Simulation results show that both modules work as expected. uClinux, including the TCP/IP protocol stack and the driver for the Ethernet controller, is chosen as the embedded operating system, and an application program scheme is proposed.
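One common way to move Motion-JPEG over a TCP/IP stack like the uClinux one above is HTTP multipart/x-mixed-replace framing, where each JPEG frame is sent as its own part. The paper does not specify its application protocol, so this framing is an assumption offered only as a sketch.

```python
def mjpeg_part(jpeg: bytes, boundary: bytes = b"frame") -> bytes:
    """Frame one JPEG image as a multipart/x-mixed-replace part, the framing
    commonly used for Motion-JPEG over HTTP (assumed here; the paper's own
    transport scheme is not specified)."""
    return (b"--" + boundary + b"\r\n"
            + b"Content-Type: image/jpeg\r\n"
            + b"Content-Length: " + str(len(jpeg)).encode("ascii")
            + b"\r\n\r\n"
            + jpeg + b"\r\n")

# A browser pointed at such a stream replaces the displayed image
# each time a new part arrives, yielding simple live video.
part = mjpeg_part(b"\xff\xd8\xff\xd9")  # a (degenerate) JPEG: SOI + EOI markers
assert part.startswith(b"--frame\r\n")
```

The appeal for a small SOPC system is that the sender needs no inter-frame state: each JPEG frame is independent, matching the Motion-JPEG hardware module's output.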
Laptenok, V. D.; Seregin, Y. N.; Bocharov, A. N.; Murygin, A. V.; Tynchenko, V. S.
Equipment for a video observation system for the electron beam welding process was developed. The design of the video observation system reduces negative effects on the video camera during electron beam welding and yields high-quality images of the process.
Kasprowicz, Grzegorz; Pastuszak, Grzegorz; Poźniak, Krzysztof; Trochimiuk, Maciej; Abramowski, Andrzej; Gaska, Michal; Bukowiecka, Danuta; Tyburska, Agata; Struniawski, Jarosław; Jastrzebski, Pawel; Jewartowski, Blazej; Frasunek, Przemysław; Nalbach-Moszynska, Małgorzata; Brawata, Sebastian; Bubak, Iwona; Gloza, Małgorzata
The purpose of the project is the development of a platform which integrates video signals from many sources. The signals can come from existing analogue CCTV surveillance installations, recent internet-protocol (IP) cameras, or single cameras of any type. The system will consist of portable devices that provide conversion, encoding, transmission, and archiving. The sharing subsystem will use a distributed file system and a user console that provides simultaneous access to any of the video streams in real time. The system is fully modular, so it can be extended on both the hardware and software sides. Due to the standard modular technology used, partial modernization of the technology is also possible over a long service life.
Ørngreen, Rikke; Levinsen, Karin Ellen Tweddell; Jelsbak, Vibe Alopaeus
The Bachelor Programme in Biomedical Laboratory Analysis at VIA's healthcare university college in Aarhus has established a blended class which combines traditional and live broadcast teaching (via an innovative choice of video conferencing system). On the so-called net-days, students have... From here a number of general principles and perspectives were derived for the specific program, which can be useful to contemplate in general for similar educational programmes. It is concluded that the blended class model using a live video stream represents a viable pedagogical solution for the Bachelor Programme... sheds light on the pedagogical challenges, the educational designs possible, the opportunities and constraints associated with video conferencing as a pedagogical practice, as well as the technological, structural and organisational conditions involved. In this paper a participatory action research...
Bolona Lopez, Maria del Carmen; Ortiz, Margarita Elizabeth; Allen, Christopher
This paper describes a project to use mobile devices and video conferencing technology in the assessment of student English as a Foreign Language (EFL) teacher performance on teaching practice in Ecuador. With the increasing availability of mobile devices with video recording facilities, it has become easier for trainers to capture teacher…
Automated video object recognition is a topic of emerging importance in both defense and civilian applications. This work describes an accurate and low-power neuromorphic architecture and system for real-time automated video object recognition. Our system, Neuromorphic Visual Understanding of Scenes (NEOVUS), is inspired by recent findings in computational neuroscience on feed-forward object detection and classification pipelines for processing and extracting relevant information from visual data. The NEOVUS architecture is inspired by the ventral (what) and dorsal (where) streams of the mammalian visual pathway and combines retinal processing, form-based and motion-based object detection, and convolutional neural net based object classification. Our system was evaluated by the Defense Advanced Research Projects Agency (DARPA) under the NEOVISION2 program on a variety of urban area video datasets collected from both stationary and moving platforms. The datasets are challenging as they include a large number of targets in cluttered scenes with varying illumination and occlusion conditions. The NEOVUS system was also mapped to commercially available off-the-shelf hardware. The dynamic power requirement for the system, which includes a 5.6-megapixel retinal camera processed by object detection and classification algorithms at 30 frames per second, was measured at 21.7 watts (W), for an effective energy consumption of 5.4 nanojoules (nJ) per bit of incoming video. In a systematic evaluation of five different teams by DARPA on three aerial datasets, the NEOVUS demonstrated the best performance, with the highest recognition accuracy and at least three orders of magnitude lower energy consumption than two independent state-of-the-art computer vision systems. These unprecedented results show that the NEOVUS has the potential to revolutionize automated video object recognition towards enabling practical low-power and mobile video processing applications.
Koch, Michael; Fischer, Martin R; Tipold, Andrea; Ehlers, Jan P
In veterinary medicine, there is an ongoing need for students, educators, and veterinarians to exchange the latest knowledge in their respective fields and to learn about unusual cases, emerging diseases, and treatment. Networking among veterinary faculties is developing rapidly, but conferences and meetings can be difficult to attend because of time limitations and travel costs. The current study examines acceptance of synchronous online conferences, seminars, meetings, and lectures by veterinarians and students. First, an online survey on the use of communication technology in veterinary medicine was made available for 15 weeks to every German-speaking veterinary university and via professional journals and an online veterinary forum. A total of 1,776 persons (620 veterinarians and 1,156 students) participated. Most reported using the Internet at least once per day; more than half reported using instant messengers. Most participants used the Internet for communication, but less than half used Skype. Second, to test the spectrum of tools for online conferences, a variety of "virtual classroom" systems (netucate systems iLinc, Adobe Acrobat Connect Pro, Cisco WebEx, Skype) were used to deliver student lectures, veterinary continuing-education courses, and academic conferences at the University of Veterinary Medicine, Hannover (TiHo). Of 591 participants in 63 online events, 99.4% rated the virtual events as enjoyable, 96.1% found them useful, and 92.4% said that they learned a lot. Participants noted that the courses were not tied to a certain place, and thus saved time and travel costs. Online conference systems thus offer new opportunities to provide information in veterinary medicine.
Communication systems which support 3D (three-dimensional) audio offer a couple of advantages to users/customers. Firstly, within a virtual acoustic environment all participants can easily be recognized through their placement/sitting positions. Secondly, all participants can turn their focus to any particular talker when multiple participants start talking at the same time, by taking advantage of the natural listening tendency known as the Cocktail Party Effect. On the other hand, 3D audio is known to degrade overall speech quality because of reverberation and echo within the listening environment. In this article, we study the tradeoff between speech quality and the natural human ability to localize audio events or talkers within our three-dimensional-audio-supported telephony and teleconferencing solution. Further, we performed subjective user studies incorporating two different HRTFs (Head-Related Transfer Functions), different placements of the teleconferencing participants, and different layouts of the virtual environments. Subjective user study results for audio event localization and subjective speech quality are presented in this article. This subjective user study should help the research community to optimize existing 3D audio systems and to design new 3D-audio-supported teleconferencing solutions based on the quality-of-experience requirements of users/customers, for agricultural personnel in particular and for all potential users in general.
Xia, Jiali; Jin, Jesse S.
Video-On-Demand is a new development on the Internet. In order to manage rich multimedia information and a large number of users, we present an Internet Video-On-Demand system with some e-commerce flavors. This paper presents the system architecture and the technologies required for its implementation. It provides interactive Video-On-Demand services in which the user has complete control over the session presentation. It allows the user to select and receive specific video information by querying the database. To improve the performance of video information retrieval and management, the video information is represented by hierarchical video metadata in XML format. The video metadatabase stores the video information in this hierarchical structure and allows users to search for video shots at different semantic levels in the database. To browse the retrieved video, the user not only has the full-function VCR capabilities of traditional Video-On-Demand, but can also browse the video hierarchically to view different shots. To manage the large number of users over the Internet, a membership database is designed and managed in an e-commerce environment, which allows users to access the video database at different access levels.
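A minimal sketch of what hierarchical XML video metadata and a shot-level query might look like; the element and attribute names here are illustrative assumptions, not the paper's actual schema:

```python
# Hierarchical video metadata (video > scene > shot) and a keyword
# search at the shot semantic level.
import xml.etree.ElementTree as ET

metadata = ET.fromstring("""
<video title="demo">
  <scene id="1">
    <shot id="1.1" keywords="intro"/>
    <shot id="1.2" keywords="interview"/>
  </scene>
  <scene id="2">
    <shot id="2.1" keywords="interview"/>
  </scene>
</video>
""")

# Find all shots whose keywords match the query, across scenes.
hits = [s.get("id") for s in metadata.iter("shot")
        if "interview" in s.get("keywords", "")]
print(hits)
```

Searching at a coarser semantic level (scenes instead of shots) would iterate over `scene` elements instead, which is the kind of multi-level access the hierarchy enables.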
Hench, David L.
The H.264 video compression standard, also known as MPEG-4 Part 10 or Advanced Video Coding (AVC), allows new flexibility in the use of video on the battlefield. This standard necessitates encoder chips to effectively utilize the increased capabilities. Such chips are designed to cover the full range of the standard, with designers of individual products given the capability of selecting the parameters that differentiate a broadcast system from a video conferencing system. The SmartCapture commercial product and the Universal Video Stick (UVS) military version are about the size of a thumb drive, with analog video input and USB (Universal Serial Bus) output, and allow the user to select the imaging parameters. The user can thus adjust video bandwidth (and video quality) along four dimensions of quality, on the fly, without stopping video transmission. The four dimensions are: 1) spatial, changing from 720 x 480 pixels to 320 x 360 pixels to 160 x 180 pixels; 2) temporal, changing from 30 frames/sec to 5 frames/sec; 3) transform quality, with a 5-to-1 range; and 4) Group of Pictures (GOP) structure, which affects noise immunity. The host processor simply wraps the H.264 network abstraction layer packets into the appropriate network packets. We also discuss the recently adopted scalable amendment to H.264, which will allow rate adaptation at any point in the communication chain by discarding preselected packets.
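The spatial and temporal dimensions alone already span a wide bandwidth range, since raw pixel rate scales with resolution and frame rate. A rough illustration (ignoring the transform-quality and GOP dimensions, which also affect bit-rate):

```python
# Raw pixel-rate ratio between the highest and lowest spatial/temporal
# settings listed above.
def pixel_rate(width, height, fps):
    return width * height * fps

full = pixel_rate(720, 480, 30)   # full spatial and temporal quality
low  = pixel_rate(160, 180, 5)    # lowest listed spatial and temporal setting
print(full // low)                # raw pixel-rate reduction factor
```

So these two dimensions by themselves offer roughly a 72:1 range before compression settings are even considered.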
Spector, B.; Eilbert, L.; Finando, S.; Fukuda, F.
A Video Integrated Measurement (VIM) System is described which incorporates the use of various noninvasive diagnostic procedures (moire contourography, electromyography, posturometry, infrared thermography, etc.), used individually or in combination, for the evaluation of neuromusculoskeletal and other disorders and their management with biofeedback and other therapeutic procedures. The system provides for measuring individual diagnostic and therapeutic modes, or multiple modes by split screen superimposition, of real time (actual) images of the patient and idealized (ideal-normal) models on a video monitor, along with analog and digital data, graphics, color, and other transduced symbolic information. It is concluded that this system provides an innovative and efficient method by which the therapist and patient can interact in biofeedback training/learning processes and holds considerable promise for more effective measurement and treatment of a wide variety of physical and behavioral disorders.
Gershkoff, I.; Haspert, J. K.; Morgenstern, B.
A cost model that can be used to systematically identify the costs of procuring and operating satellite-linked communications systems is described. The user defines a network configuration by specifying the location of each participating site, the interconnection requirements, and the transmission paths available for the uplink (studio to satellite), downlink (satellite to audience), and voice talkback (between audience and studio) segments of the network. The model uses this information to calculate the least expensive signal distribution path for each participating site. Cost estimates are broken down by capital, installation, lease, and operations and maintenance. The design of the model permits flexibility in specifying network and cost structure.
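The model's core step per site can be sketched as a least-cost selection over candidate paths, with the estimate broken down by the categories named above. Path names and all cost figures here are invented for illustration:

```python
# Pick the least expensive distribution path for one site and report
# its cost breakdown by category.
def cheapest_path(paths):
    """paths: {name: {category: cost}}; returns (name, total, breakdown)."""
    name = min(paths, key=lambda p: sum(paths[p].values()))
    return name, sum(paths[name].values()), paths[name]

site_paths = {
    "ku_band_downlink": {"capital": 12000, "installation": 3000,
                         "lease": 8000, "o_and_m": 2000},
    "c_band_downlink":  {"capital": 14000, "installation": 2500,
                         "lease": 6000, "o_and_m": 1500},
}
name, total, breakdown = cheapest_path(site_paths)
print(name, total)
```

Running this per site and summing the per-category breakdowns across sites gives the network-wide estimate.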
Kuo, Huang-Chih; Lin, Youn-Long
Intra-frame encoding is useful for many video applications such as security surveillance, digital cinema, and video conferencing because it supports random access to every video frame for easy editing...
Petkovic, M.; Jonker, Willem
An increasing number of large publicly available video libraries results in a demand for techniques that can manipulate the video data based on content. In this paper, we present a content-based video retrieval system called Cobra. The system supports automatic extraction and retrieval of high-level
Smithwick, Erica; Baxter, Emily; Kim, Kyung; Edel-Malizia, Stephanie; Rocco, Stevie; Blackstock, Dean
Two forms of interactive video were assessed in an online course focused on conservation. The hypothesis was that interactive video enhances student perceptions about learning and improves mental models of social-ecological systems. Results showed that students reported greater learning and attitudes toward the subject following interactive video.…
Bardram, Jakob Eyvind; Bossen, Claus; Lykke-Olesen, Andreas
Virtual studio technology enables the mixing of physical and digital 3D objects and thus expands the way of representing design ideas in terms of virtual video prototypes, which offers new possibilities for designers by combining elements of prototypes, mock-ups, scenarios, and conventional video. In this article we report our initial experience in the domain of pervasive healthcare with producing virtual video prototypes and using them in a design workshop. Our experience has been predominantly favourable. The production of a virtual video prototype forces the designers to decide very concrete design...
Video streaming is nowadays the Internet's biggest source of consumer traffic. Traditional content providers rely on a centralised client-server model for distributing their video streaming content. The current generation is moving from being passive viewers, or content consumers, to active content
Bescos, Jesus; Martinez, Jose M.; Cabrera, Julian M.; Cisneros, Guillermo
This paper describes the first stages of a research project currently being developed in the Image Processing Group of the UPM. The aim of this effort is to add video capabilities to the Storage and Retrieval Information System already working at our premises. Here we focus on the early design steps of a Video Information System. For this purpose, we present a review of most of the reported techniques for video temporal segmentation and semantic segmentation, which are preliminary steps toward the content extraction task, and we discuss them in order to select the most suitable ones. We then outline a block design of a temporal segmentation module and present guidelines for the design of the semantic segmentation one. All these operations tend to facilitate automation in the extraction of the low-level and semantic features that will finally form part of the video descriptors.
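A common baseline among the temporal segmentation techniques such reviews cover is histogram comparison between consecutive frames, declaring a shot cut when the difference exceeds a threshold. A toy sketch (the threshold and frame representation are illustrative, not the paper's design):

```python
# Shot-cut detection via frame-to-frame histogram difference.
def histogram(frame, bins=4, levels=256):
    h = [0] * bins
    for p in frame:                  # frame: flat list of pixel intensities
        h[p * bins // levels] += 1
    return h

def detect_cuts(frames, threshold=4):
    cuts = []
    for i in range(1, len(frames)):
        a, b = histogram(frames[i - 1]), histogram(frames[i])
        diff = sum(abs(x - y) for x, y in zip(a, b))
        if diff > threshold:
            cuts.append(i)           # cut between frame i-1 and frame i
    return cuts

# Three similar dark frames, then an abrupt change to bright content.
frames = [[10, 20, 30, 40]] * 3 + [[200, 210, 220, 230]] * 2
print(detect_cuts(frames))
```

Gradual transitions (fades, dissolves) need more elaborate measures, which is one reason a review of multiple techniques precedes the module design.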
Zhao, Heng; Wang, Xiang-jun
This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information interaction control unit between the FPGA and PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low-Voltage Differential Signaling (LVDS), which is collected by the video processing unit through a Camera Link interface. The processed video signals are then input to the system output board and displayed on the monitor. Current experiments show that the system achieves high-quality video conversion with minimal board size.
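Color space conversion is one of the modules listed above. The paper's exact conversion matrix is not given, so the following uses a representative full-range BT.601 YCbCr-to-RGB mapping as a stand-in:

```python
# Full-range BT.601 YCbCr -> RGB conversion (one common choice; the
# FPGA system's actual coefficients may differ).
def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)

print(ycbcr_to_rgb(128, 128, 128))  # neutral chroma: mid-gray in, mid-gray out
```

In hardware this is typically implemented as fixed-point multiply-accumulate per pixel, with the clamping done in saturating arithmetic.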
Lynn Anderson, Barb Fyvie, Brenda Koritko, Kathy McCarthy, Sonia Murillo Paz, Mary Rizzuto, Remi Tremblay, and Urel Sawyers
Practical guidelines are offered for the use of online synchronous conferencing software by session administrators and moderators. The configuration of the software prior to conferencing sessions is discussed, along with the planning and implementation of useful collaborative activities such as "synchronised browsing." The combination of these practices into useful "patterns" for specific online conferencing purposes is also discussed.
Robert D. Gaglianello
This paper describes a scalable multipoint video system designed for efficient generation and display of high-quality, multiple-resolution, multiple compressed video streams over IP-based networks. We present our experiences using the system over the public Internet for several real-world applications, including distance learning, virtual theater, and virtual collaboration. The trials were a combined effort of Bell Laboratories and the Gertrude Stein Repertory Theatre (TGSRT). We also present current advances in the conferencing system since the trials, new areas for application, and future applications.
This article is concerned with the oral language demands (both talking and listening) associated with restorative justice conferencing--an inherently highly verbal and conversational process. Many vulnerable young people (e.g., those in the youth justice system) have significant, yet unidentified language impairments, and these could compromise…
In order to support high-definition video transmission, an implementation of a video transmission system based on Long Term Evolution (LTE) is designed. The system is developed on a Xilinx Virtex-6 FPGA ML605 Evaluation Board. The paper elaborates the features of the baseband link designed in Xilinx ISE and the protocol stack designed in Xilinx SDK, and introduces the process of setting up the hardware and software platform in Xilinx XPS. Testing shows that the system consumes modest hardware resources and is able to transmit bidirectional video clearly and stably.
Botella, Guillermo; García, Carlos; Meyer-Bäse, Uwe
This contribution focuses on the topics covered by the special issue titled "Hardware Implementation of Machine Vision Systems", including FPGAs, GPUs, embedded systems, and multicore implementations for image analysis such as edge detection, segmentation, pattern recognition and object recognition/interpretation, image enhancement/restoration, image/video compression, image similarity and retrieval, satellite image processing, medical image processing, motion estimation, neuromorphic and bioinspired vision systems, video processing, image formation and physics-based vision, 3D processing/coding, scene understanding, and multimedia.
van der Schaar-Mitrea, Mihaela; de With, Peter H. N.
The diversity of TV images has increased with the growing use of computer graphics. In this paper we study a coding system that supports both the lossless coding of such graphics data and regular lossy video compression. The lossless coding techniques are based on run-length and arithmetic coding. For video compression, we introduce a simple block predictive coding technique featuring individual pixel access, so that it enables a gradual shift from lossless coding of graphics to lossy coding of video. An overall bit-rate control completes the system. Computer simulations show very high quality with a compression factor between 2 and 3.
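Of the two lossless techniques named, the run-length half is simple enough to sketch (the arithmetic-coding stage, which would further compress the runs, is omitted):

```python
# Run-length encoding/decoding of a row of pixel values, the kind of
# redundancy graphics data (flat regions) exhibits.
def rle_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1         # extend the current run
        else:
            runs.append([p, 1])      # start a new run
    return runs

def rle_decode(runs):
    return [p for p, n in runs for _ in range(n)]

row = [7, 7, 7, 7, 0, 0, 255]
encoded = rle_encode(row)
print(encoded)
assert rle_decode(encoded) == row    # lossless round trip
```

Graphics content with long flat runs compresses well under this scheme, while natural video generally does not, which motivates the hybrid lossless/lossy design.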
A Learning Management System (LMS) supports e-learning for distance learning. Moodle is one of the open-source LMS applications that allow multimedia to be embedded in course learning activities, such as a video conference session. This paper investigates the quality of service (QoS) of a video conference session embedded in Moodle, i.e., end-to-end delay, jitter, throughput, packet loss, and PSNR. Three scenarios were implemented in the experiment, applied to both wired and wireless transmission and to p2p and p2m connections. The results show that the QoS of the video conference session meets the standards issued by ITU-T G.1010 and G.114 for a minimum bandwidth of 128 kbps. Thus video conferencing integrated in Moodle can run well with a minimum bandwidth of 128 kbps.
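The kind of threshold check such a study performs can be sketched as follows. The 150 ms bound is the commonly cited ITU-T G.114 one-way delay limit for conversational media; the jitter and loss bounds here are illustrative assumptions, since the abstract does not list the exact thresholds used:

```python
# Check measured QoS values against conversational-media thresholds.
def qos_ok(delay_ms, jitter_ms, packet_loss_pct,
           max_delay=150.0, max_jitter=30.0, max_loss=1.0):
    return (delay_ms <= max_delay and
            jitter_ms <= max_jitter and
            packet_loss_pct <= max_loss)

print(qos_ok(delay_ms=95.0, jitter_ms=12.0, packet_loss_pct=0.4))   # passes
print(qos_ok(delay_ms=210.0, jitter_ms=12.0, packet_loss_pct=0.4))  # delay too high
```

Each of the three scenarios would be measured and checked this way at the candidate bandwidths.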
Most universities already implement wired and wireless networks that are used to access integrated information systems and the Internet. It is therefore important to study how broadcasting instructional video from a server to clients through a wireless access point performs in a university setting. Wired networks require cables to connect computers and carry data between them, while wireless networks connect computers through radio waves. This research tests and assesses how a WLAN access point affects the broadcasting of instructional video from server to client. The study aims to show how to build a wireless network using an access point, and how to set up a server with supporting software that transmits instructional video to clients via the access point.
This paper reports on a trial of web conferencing software conducted at a regional Australian university with a significant distance population. The paper shares preliminary findings, the views of participants, and recommendations for future activity. To design and conduct the trial, an action research method was chosen because it is participative and grounded in experience, reflecting the context and objectives of the trial. In the first phase of the trial, students in postgraduate Education courses were linked across the globe to participate in interactive and collaborative conference activity and to communicate via audio, text, video, and a shared whiteboard. Mathematical problem-solving was carried out collaboratively in an undergraduate course using tablet PCs. This was followed by phase 2, a university-wide trial across disciplines. Preliminary findings indicate that web conferencing software enables teachers and students at the university to engage actively across diverse locations, supporting a student-centred approach and greater flexibility in terms of where, when and how students learn. From these findings, the authors have made some initial recommendations to university management on the adoption of web conferencing to support learning and teaching.
Mohamed M. Fouad
In this paper, we present a modified inter-view prediction Multiview Video Coding (MVC) scheme from the perspective of viewer interactivity. When a viewer requests some view(s), our scheme leads to a lower transmission bit-rate. We develop an interactive multiview video streaming system exploiting that modified MVC scheme. Conventional interactive multiview video systems require high bandwidth due to redundant data being transferred. With real test sequences, clear improvements are shown using the proposed interactive multiview video system compared to competing ones in terms of the average transmission bit-rate and the storage size of the decoded (i.e., transferred) data, with comparable rate-distortion.
... shall be “Open Video System Notice of Intent” and “Attention: Media Bureau.” This wording shall be... Notice of Intent with the Office of the Secretary and the Bureau Chief, Media Bureau. The Notice of... capacity through a fair, open and non-discriminatory process; the process must be insulated from any bias...
A novel video conference system is developed. Supposing that three people A, B, and C attend the video conference, the proposed system enables eye contact between every pair. Furthermore, when B and C chat, A feels as if B and C were facing each other (eye contact appears to be maintained between B and C). In the case of a triangle video conference, each video station is composed of a half mirror, two video cameras, and two monitors. Each participant watches the other participants' images reflected by the half mirror. The cameras are set behind the half mirror. Since each participant's image (face) and the camera position are aligned in the same direction, eye contact is kept and conversation becomes very natural compared with conventional video conference systems, where participants' eyes do not point toward the other participant. When the three participants sit at the vertices of an equilateral triangle, eye contact is maintained even in the situation mentioned above (eye contact between B and C from the perspective of A). Eye contact can be kept not only for 2 or 3 participants but for any number of participants, as long as they sit at the vertices of a regular polygon.
This paper reports on the development of an automated embedded video surveillance system using two customized embedded RISC processors. The application is partitioned into object tracking and video stream encoding subsystems. The real-time object tracker is able to detect and track moving objects in video images of scenes taken by stationary cameras. It is based on the block-matching algorithm. The video stream encoding involves the optimization of an International Telecommunication Union (ITU-T) H.263 baseline video encoder for quarter common intermediate format (QCIF) and common intermediate format (CIF) resolution images. The two subsystems, running on two processor cores, were integrated, and a simple protocol was added to realize the automated video surveillance system. The experimental results show that the system is capable of detecting, tracking, and encoding QCIF and CIF resolution images with object movements in them in real time. With low cycle-count, low transistor-count, and low-power consumption requirements, the system is ideal for deployment in remote locations.
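The core operation of block matching is a sum-of-absolute-differences (SAD) search for where a block from the previous frame reappears. A one-dimensional toy version (the real tracker searches 2-D blocks over a 2-D window):

```python
# Find the position in the new frame that best matches a reference
# block, by minimizing the sum of absolute differences (SAD).
def sad(block, candidate):
    return sum(abs(a - b) for a, b in zip(block, candidate))

def best_match(block, frame):
    n = len(block)
    return min(range(len(frame) - n + 1),
               key=lambda s: sad(block, frame[s:s + n]))

prev_block = [10, 50, 90]            # block from the previous frame
next_frame = [0, 0, 10, 50, 90, 0]   # the block has shifted right
print(best_match(prev_block, next_frame))
```

The displacement between the block's old and new positions is the motion vector; consistent nonzero vectors over a region indicate a moving object.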
Giroire, Frédéric; Huin, Nicolas
We study distributed systems for live video streaming. These systems can be of two types: structured and unstructured. In an unstructured system, the diffusion is done opportunistically. The advantage is that it handles churn, that is, the arrival and departure of users, which is very high in live streaming systems, in a smooth way. In contrast, in a structured system, the diffusion of the video is done using explicit diffusion trees. The advantage is that the dif...
Al-Hamad, A.; Moussa, A.; El-Sheimy, N.
The last two decades have witnessed huge growth in the demand for geospatial data. This demand has encouraged researchers around the world to develop new algorithms and design new mapping systems in order to obtain reliable sources for geospatial data. Mobile Mapping Systems (MMS) are one of the main sources of mapping and Geographic Information Systems (GIS) data. MMS integrate various remote sensing sensors, such as cameras and LiDAR, along with navigation sensors to provide the 3D coordinates of points of interest from moving platforms (e.g., cars, airplanes). Although MMS can provide accurate mapping solutions for different GIS applications, the cost of these systems is not affordable for many users, and only large companies and institutions can benefit from them. The main objective of this paper is to propose a new low-cost MMS with reasonable accuracy using the sensors available in a smartphone, including its video camera. Using the smartphone video camera, instead of capturing individual images, makes the system easier to use for non-professional users, since the system automatically extracts the highly overlapping frames from the video without user intervention. Results of the proposed system are presented, demonstrating the effect of the number of images used on the mapping solution. In addition, the accuracy of the mapping results obtained from capturing a video is compared to the results obtained from using separately captured images instead of video.
Craciunescu, Razvan; Mihovska, Albena Dimitrova; Kyriazakos, Sofoklis
Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current research focus includes emotion recognition from the face and hand gesture recognition. Gesture recognition enables humans to communicate with the machine and interact naturally without any mechanical devices. This paper investigates the possibility of using non-audio/video sensors in order to design a low-cost gesture recognition device...
Su, Ang; Zhang, Yueqiang; Dong, Jing; Xu, Yuhua; Zhu, Xianwei; Zhang, Xiaohu
The high portability of small Unmanned Aerial Vehicles (UAVs) allows them to play an important role in surveillance and reconnaissance tasks, so military and civilian demand for UAVs is constantly growing. Recently, we have developed a real-time video exploitation system for our small UAV, which is mainly used in forest patrol tasks. Our system consists of six key modules: image contrast enhancement, video stabilization, mosaicing, salient target indication, moving target indication, and display of the footprint and flight path on a map. Extensive testing of the system has been carried out, and the results show that our system performs well.
Kapustin, A. A.; Razumovskii, V. N.; Iatsevich, G. B.
A spatial-spectral analysis method is considered for a laser scanning video system with phase processing of the received signal at a modulation frequency. Distortions caused by the system are analyzed, and the general problem is reduced to the case of a cylindrical surface. The approach suggested can also be used for scanning microwave systems.
... system operator may charge different rates to different classes of video programming providers, provided... open video systems. 76.1504 Section 76.1504 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76...
Weitze, Charlotte Lærke; Ørngreen, Rikke; Levinsen, Karin
their exams. Evaluations show that the students are happy with the flexibility this model provides in their everyday life. However, our findings also show several obstacles. Firstly technical issues are at play, but also the learning design of the lessons, as well as general organizational and cultural issues...
Solutions to current economic problems associated with the national economic depression need to be approached from a technology point of view. The cost of air and land travel has tripled in the last few months, with the attendant risks of accidents, armed robbery attacks, and vehicular breakdown. If every member of staff, ...
Song, Joongseok; Kim, Changseob; Park, Hanhoon; Park, Jong-Il
We propose a practical system that can effectively mix the depth data of real and virtual objects by using a Z buffer and can quickly generate digital mixed reality video holograms by using multiple graphic processing units (GPUs). In an experiment, we verify that real objects and virtual objects can be merged naturally in free viewing angles, and the occlusion problem is well handled. Furthermore, we demonstrate that the proposed system can generate mixed reality video holograms at 7.6 frames per second. Finally, the system performance is objectively verified by users' subjective evaluations.
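The Z-buffer mixing step described above can be sketched per pixel: keep whichever of the real or virtual sample is nearer to the camera (smaller depth), which is what resolves the occlusion problem. A minimal illustration with invented sample values:

```python
# Merge real and virtual samples per pixel by depth comparison
# (smaller depth = nearer to the camera, so it occludes the other).
def z_merge(real, virtual):
    """Each input: list of (depth, color) per pixel; returns merged colors."""
    return [rc if rd <= vd else vc
            for (rd, rc), (vd, vc) in zip(real, virtual)]

real    = [(1.0, "real_a"), (3.0, "real_b")]
virtual = [(2.0, "virt_a"), (0.5, "virt_b")]
print(z_merge(real, virtual))
```

In the actual system this comparison runs on GPUs for every pixel of every frame before the hologram is computed, which is what makes the 7.6 frames-per-second rate feasible.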
Aghito, Shankar Manuel; Stegmann, Mikkel Bille; Forchhammer, Søren
Aspects of video communication based on gaze interaction are considered. The overall idea is to use gaze interaction to control video, e.g. for video conferencing. Towards this goal, animation of a facial mask is demonstrated. The animation is based on images using Active Appearance Models (AAM...
Chen, Chien-Hsu; Chou, Yin-Ju
This study focuses on the development of an augmented video system for traditional picture postcards. The system lets users print an augmented reality marker on a sticker to place on a picture postcard, and allows them to record real-time images and video to overlay on that marker. Through these dynamic images, users can share travel moods, greetings, and travel experiences with their friends. Without changing the traditional picture postcard, we develop an augmented video system on top of it using augmented reality (AR) technology. It not only keeps the functions of the traditional picture postcard, but also enhances the user's experience by preserving memories and emotional expression through the augmented digital media.
Future wireless video transmission systems will consider orthogonal frequency division multiplexing (OFDM) as the basic modulation technique due to its robustness and low-complexity implementation in the presence of frequency-selective channels. Recently, adaptive bit loading techniques have been applied to OFDM, showing good performance gains in cable transmission systems. In this paper a multilayer bit loading technique, based on the so-called "ordered subcarrier selection algorithm," is proposed and applied to a Hiperlan2-like wireless system at 5 GHz for efficient layered multimedia transmission. Different schemes realizing unequal error protection at both the coding and modulation levels are compared. The strong impact of this technique on video quality is evaluated for MPEG-4 video transmission.
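The ordered-subcarrier-selection idea can be loosely sketched: rank subcarriers by channel quality and map the most important layer of the scalable video stream to the best subcarriers. This is a simplified interpretation, and the SNR figures are invented for illustration:

```python
# Assign video layers to subcarriers in order of channel quality:
# the base (most important) layer gets the best subcarriers.
def assign_layers(snr_db, layer_sizes):
    """snr_db: per-subcarrier SNR; layer_sizes: subcarriers per layer,
    most important layer first. Returns one index list per layer."""
    order = sorted(range(len(snr_db)), key=lambda i: snr_db[i], reverse=True)
    out, pos = [], 0
    for size in layer_sizes:
        out.append(sorted(order[pos:pos + size]))
        pos += size
    return out

snr = [12.0, 25.0, 8.0, 30.0, 18.0, 15.0]
print(assign_layers(snr, [2, 4]))  # base layer first, then enhancement layer
```

This realizes unequal error protection at the modulation level: base-layer bits see the strongest subchannels, so the decodable core of the video survives channel fading better than the enhancement data.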
Rothkrantz, L.; Lefter, I.
The paper describes a surveillance system of cameras installed on lampposts of a military area. The surveillance system has been designed to detect unwanted visitors or suspicious behaviors. The area is composed of streets and building blocks and is surrounded by gates and water. The video recordings are
Ecliptic Enterprises Corporation, headquartered in Pasadena, California, provided onboard video systems for rocket and space shuttle launches before it was tasked by Ames Research Center to craft the Data Handling Unit that would control sensor instruments onboard the Lunar Crater Observation and Sensing Satellite (LCROSS) spacecraft. The technological capabilities the company acquired on this project, as well as those gained developing a high-speed video system for monitoring the parachute deployments for the Orion Pad Abort Test Program at Dryden Flight Research Center, have enabled the company to offer high-speed and high-definition video for geosynchronous satellites and commercial space missions, providing remarkable footage that both informs engineers and inspires the imagination of the general public.
Yang, Fan; Ma, Chunting; Li, Haoyi
This paper presents the design of a wireless video transmission system based on STM32. The system uses the STM32F103VET6 microprocessor as its core: a video acquisition module collects video data, which is sent to the receiver through a wireless transmitting module, and the received data is displayed on an LCD screen. The software design processes of the receiver and transmitter are introduced. Experiments prove that the system realizes the wireless video transmission function.
Jones, D. P.; Shirey, D. L.; Amai, W. A.
This paper presents a high-bandwidth fiber-optic communication system intended for post-accident recovery of weapons. The system provides bi-directional, multichannel, and multimedia communications. Two smaller systems that were developed as direct spin-offs of the larger system are also briefly discussed.
... COMMISSION In the Matter of Certain Video Analytics Software, Systems, Components Thereof, and Products... analytics software, systems, components thereof, and products containing same by reason of infringement of... after importation of certain video analytics software, systems, components thereof, and products...
... COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... analytics software, systems, components thereof, and products containing same by reason of infringement of... after importation of certain video analytics software, systems, components thereof, and products...
... COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... Trade Commission has received a complaint entitled Certain Video Analytics Software, Systems, Components... analytics software, systems, components thereof, and products containing same. The complaint names as...
Gramss, Denise; Struve, Doreen
The study reported in this paper investigated the usefulness of different instructions for guiding inexperienced older adults through interactive systems. It was designed to compare different media with respect to their social as well as motivational impact on the elderly during the learning process. Specifically, video was compared with…
Glazkov, V. D.; Goretov, Iu. M.; Rozhavskii, E. I.; Shcherbakov, V. V.
The self-correcting video section of the satellite-borne Fragment multispectral scanning system is described. The design of this section makes possible sufficiently efficient equalization of the transformation coefficients of all the measuring sections, given a reference-radiation source and a single reference time interval common to all the sections.
In this work we introduce a simple client-server system architecture and algorithms for ubiquitous live video and VOD service support. The main features of the system are: efficient usage of network resources, emphasis on user personalization, and ease of implementation. The system supports many continuous service requirements such as QoS provision, user mobility between networks and between different communication devices, and simultaneous usage of a device by a number of users.
An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG-4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG-4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling, while securely preventing malicious object modifications. The proposed solution can be further incorporated into a public key infrastructure (PKI).
Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian
The video conference system has become an important support platform for smart grid operation and management, and its operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business, and operation-maintenance aspects was established on the basis of the video conference system's operational statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system over a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers faster convergence and higher prediction accuracy than a regularized BP neural network alone, and its generalization ability is superior to LM-BP and Bayesian BP neural networks.
Kong, Hyoun-Joong; Seo, Jong Mo; Hwang, Jeong Min; Kim, Hee Chan
A binocular indirect ophthalmoscope (BIO) provides a wider view of the fundus, with stereopsis, in contrast to the direct ophthalmoscope. The proposed system is composed of a portable BIO and a 3D viewing unit. The illumination unit of the BIO uses a high-flux LED as the light source, an LED condensing lens cap for beam focusing, color filters, and a small lithium-ion battery. In the optics unit of the BIO, a beam splitter distributes the examinee's fundus image both to the examiner's eye and to a CMOS camera module attached to the device. Captured retinal video streams from the stereo camera modules are sent to a PC over USB 2.0. For 3D viewing, the two video streams, which have parallax between them, are aligned vertically and horizontally and combined into a side-by-side video stream for cross-eyed stereoscopy. The data are then converted into an autostereoscopic video stream using vertical interlacing for a stereoscopic LCD with a glass 3D filter attached to its front. Our newly devised system presents a real-time 3D view of the fundus to assistants with less dizziness than cross-eyed stereoscopy, and the BIO performs well compared to a conventional portable BIO (Spectra Plus, Keeler Limited, Windsor, UK).
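The two viewing formats mentioned above (side-by-side for cross-eyed stereoscopy, column interleaving for an autostereoscopic LCD) can be sketched as simple frame operations. This is an illustrative sketch only, using plain Python lists of rows as stand-in frames; the function names and the even/odd column convention are assumptions, not the authors' implementation.

```python
# Two parallax views combined (a) side by side and (b) column-interleaved
# ("vertically interlaced") for an autostereoscopic display.
def side_by_side(left, right):
    """Concatenate corresponding rows of the two views."""
    return [l_row + r_row for l_row, r_row in zip(left, right)]

def interlace_columns(left, right):
    """Alternate columns: even columns from the left view, odd from the right."""
    out = []
    for l_row, r_row in zip(left, right):
        row = [l_row[x] if x % 2 == 0 else r_row[x] for x in range(len(l_row))]
        out.append(row)
    return out
```

In practice the same per-row logic would run on full-resolution RGB frames; the structure of the operation is unchanged.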
Chow, John W.; Carlton, Les G.; Ekkekakis, Panteleimon; Hay, James G.
Discusses advantages of a video-based, digitized image system for the study and analysis of projectile motion in the physics laboratory. Describes the implementation of a web-based digitized video system. (WRM)
... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... the United States after importation of certain video analytics software systems, components thereof...
... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Investigations: Terminations, Modifications and Rulings: Certain Video Game Systems and... United States after importation of certain video game systems and controllers by reason of infringement...
Mikulec, Martin; Voznak, Miroslav; Safarik, Jakub; Partila, Pavol; Rozhon, Jan; Mehic, Miralem
The paper deals with the presentation of the IVAS system within the EU FP7 INDECT project. The INDECT project, part of the Seventh Framework Programme of the European Union, aims at developing tools for enhancing the security of citizens and protecting the confidentiality of recorded and stored information. We participate in the INDECT portal and the Interactive Video Audio System (IVAS). The IVAS system provides a communication gateway between police officers working in a dispatching centre and police officers in the field. The officers in the dispatching centre can obtain information about all online police officers in the field, command them via text messages, voice, or video calls, and manage multimedia files from CCTV cameras or other sources that may be of interest to officers in the field. The police officers in the field are equipped with smartphones or tablets. Besides common communication, they can receive pictures or videos sent by the commander in the dispatching centre and respond to commands via text or multimedia messages taken with their devices. Our IVAS system is unique because we are developing it according to special requirements from the Police of the Czech Republic. The IVAS communication system is designed to use modern Voice over Internet Protocol (VoIP) services. The whole solution is based on open-source software, including the Linux and Android operating systems. The technical details of our solution are presented in the paper.
An HTTP-based video transmission system has been built upon a P2P (peer-to-peer) network structure using Java technologies. This makes video monitoring available to any host connected to the World Wide Web by any means, including hosts behind firewalls or in isolated sub-networks. To achieve this, a video source peer has been developed, together with a client video playback peer. The video source peer responds to video stream requests over the HTTP protocol. An HTTP-based pipe communication model is developed to speed up the transmission of the video stream data, which is encoded into fragments using a JPEG codec. To make the system capable of conveying video streams between arbitrary peers on the web, an HTTP-based relay peer is implemented as well. This video monitoring system has been applied in a tele-robotic system as visual feedback to the operator.
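The fragment-based transport described above can be sketched as a pair of functions: one splits a JPEG-encoded frame into headered fragments for the pipe, the other reassembles them at the playback peer. This is a minimal sketch under stated assumptions; the header layout (frame id, fragment index, fragment count), the fragment size, and the function names are illustrative, not taken from the original Java system.

```python
# Split a JPEG frame into (header + payload) fragments and rebuild it.
import struct

FRAG_SIZE = 1024  # payload bytes per fragment (assumed)

def fragment(frame_id: int, jpeg: bytes):
    """Yield fragments, each prefixed with (frame id, index, total)."""
    total = (len(jpeg) + FRAG_SIZE - 1) // FRAG_SIZE
    for i in range(total):
        payload = jpeg[i * FRAG_SIZE:(i + 1) * FRAG_SIZE]
        yield struct.pack(">III", frame_id, i, total) + payload

def reassemble(fragments):
    """Rebuild the frame; fragments may arrive out of order."""
    parts, total = {}, None
    for frag in fragments:
        frame_id, idx, total = struct.unpack(">III", frag[:12])
        parts[idx] = frag[12:]
    assert total is not None and len(parts) == total, "missing fragments"
    return b"".join(parts[i] for i in range(total))
```

A relay peer in this scheme would simply forward fragments unchanged, since each one is self-describing.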
Shi, Yu; Yan, Guozheng; Zhu, Bingquan; Liu, Gang
Wireless power transmission (WPT) technology can solve the energy shortage problem of the video capsule endoscope (VCE) powered by button batteries, but fixed platforms have limited its clinical application. This paper presents a portable WPT system for VCE. Besides portability, power transfer efficiency and stability are taken as the main criteria in the optimized design of the system, which covers the transmitting coil structure, the portable control box, the operating frequency, and the magnetic core and winding of the receiving coil. Following these principles, the relevant parameters are measured, compared, and chosen. Finally, the methods are tested and evaluated through experiments on the platform. In the gastrointestinal tract of a small pig, the VCE is supplied with sufficient energy by the WPT system, and the energy conversion efficiency is 2.8%. The video obtained is clear, with a resolution of 320×240 and a frame rate of 30 frames per second. The experiments verify the feasibility of the design scheme, and directions for further improvement are discussed.
In order to respond to learners' need for more flexible speaking opportunities and to overcome the geographical challenge of students spread over the United Kingdom and continental Western Europe, the Open University recently introduced Internet-based, real-time audio conferencing, thus making a groundbreaking move in the distance learning and teaching of languages. Since February 2002, online tutorials for language courses have been offered using Lyceum, an Internet-based audio-graphics conferencing tool developed in house. Our research is based on the first Open University course ever to deliver tutorials solely online, a level 2 German course, and this article considers some of the challenges of implementing online tuition. As a starting point, we present the pedagogical rationale underpinning the virtual learning and teaching environment. Then we examine the process of development and implementation of online tuition in terms of activity design, tutor training, and student support. A number of methodological tools such as logbooks, questionnaires, and observations were used to gather data. The findings of this paper highlight the complexity of the organisational as well as the pedagogical framework that contributes to the effective use of online tuition via audio conferencing systems in a distance education setting.
... COMMISSION In the Matter of: Certain Video Game Systems and Controllers; Notice of Investigation AGENCY: U.S... importation, and the sale within the United States after importation of certain video game systems and... after importation of certain video game systems and controllers that infringe one or more of claims 16...
Cormier, Etienne; Cao, Frédéric; Guichard, Frédéric; Viard, Clément
This article presents a system and a protocol to characterize image stabilization systems for both still images and videos. It uses a six-axis platform, three axes for camera rotation and three for camera positioning. The platform is programmable and can reproduce complex motions that have typically been recorded by a gyroscope mounted on different types of cameras in different use cases. The measurement uses a single chart for both still images and videos: the texture dead leaves chart. Although the proposed implementation of the protocol uses a motion platform, the measurement itself does not rely on any specific hardware. For still images, a modulation transfer function is measured in different directions and weighted by a contrast sensitivity function (simulating the accuracy of the human visual system) to obtain an acutance. The sharpness improvement due to the image stabilization system is a good measure of performance, as recommended by a CIPA standard draft. For video, four markers on the chart are detected with sub-pixel accuracy to determine a homographic deformation between the current frame and a reference position. This model describes well the apparent global motion: translations, but also rotations about the optical axis and distortion due to the electronic rolling shutter that equips most CMOS sensors. The protocol is applied to all types of cameras, such as DSCs, DSLRs, and smartphones.
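The video measurement step above, fitting a homography from four detected markers to their reference positions, can be sketched with the standard direct linear transform (DLT). This is a generic sketch, not the authors' code; the function names are assumptions, and a production system would also normalize the point coordinates for numerical conditioning.

```python
# Estimate the 3x3 homography mapping four source points onto four
# destination points via the DLT null-space formulation.
import numpy as np

def homography_from_4_points(src, dst):
    """src, dst: 4x2 arrays of corresponding (x, y) points."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Apply the homography to one point (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

With sub-pixel marker detection, the residual between `apply_h` of each marker and its reference position measures how well the global-motion model explains the frame.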
Sun, Jun; Liang, Mingxing; Chen, Weijun; Zhang, Bin
In order to reinforce safety measures for vegetable sheds, the S3C44B0X is taken as the main processor chip. The embedded hardware platform is built with a few peripheral chips, the network server is structured under an embedded Linux environment, and MPEG-4 compression and real-time transmission are carried out. Experiments indicate that the video monitoring system guarantees good results and can be applied to the safety of vegetable sheds.
Langbehn, Hendrickson Reiter; Ricci, Saulo M. R.; Gonçalves, Marcos A.; Almeida, Jussara Marques; Pappa, Gisele Lobo; Benevenuto, Fabrício
Most online video sharing systems (OVSSs), such as YouTube and Yahoo! Video, have several mechanisms for supporting interactions among users. One such mechanism is the video response feature in YouTube, which allows a user to post a video in response to another video. While increasingly popular, the video response feature opens the opportunity for non-cooperative users to introduce ``content pollution'' into the system, thus causing loss of service effectiveness and credibility as w...
Markova, Tsveti; Roth, Linda M
While didactic conferences are an important component of residency training, delivering them efficiently is a challenge for many programs, especially when residents are located in multiple sites, as they are at Wayne State University School of Medicine in the Department of Family Medicine. Our residents find it difficult to travel from our hospitals or rotation sites to a centralized location for conferences. In order to overcome this barrier, we implemented distance learning and electronically delivered the conferences to the residents. We introduced an Internet-delivered, group-learning interactive conference model in which the lecturer is in one location with a group of residents and additional residents are in multiple locations. We launched the project in July 2001 using an external company's meeting services to schedule, coordinate, support, and archive the conferences. Equipment needed in each location consisted of a computer with an Internet connection, a telephone line, and an LCD projector (a computer monitor sufficed for small groups). We purposely chose simple distance-learning technology and used widely available equipment. Our e-conferencing had two components: (1) audio transmission via telephone connection and (2) visual transmission of PowerPoint presentations via the Internet. The telephone connection was open to all users, allowing residents to ask questions or make comments. Residents chose a conference location depending on geographic proximity to their rotation locations. Although we could accommodate up to 50 sites, we focused on a small number of locations in order to facilitate interaction among residents and faculty. Each conference session is archived and stored on the server for one week so that residents whose other residency-related responsibilities precluded attendance can view any conferences they have missed. E-conferencing proved to be an effective method of delivering didactics in our residency program. Its many advantages included
T. Stevens; P.S. Cesar Garcia (Pablo Santiago); I. Kegel; N. Farber; D. Williams; M. Ursu; P. Stenton; P. Torres; M. Falekakis; R. Kaiser
While advances in commercial video conferencing and social networking are driving more people to communicate using video, it is still difficult to achieve a sense of co-presence, that is, to make the technology transparent to its users, when mediating ad hoc interactions between groups
Hanjalic, Alan; Ceccarelli, Marco; Lagendijk, Reginald L.; Biemond, Jan
In the European project SMASH, mass-market storage systems for domestic use are under study. Besides the storage technology developed in this project, the related objective of user-friendly browsing/querying of video data is studied as well. Key issues in developing a user-friendly system are (1) minimizing user intervention in preparatory steps (extraction and storage of the representative information needed for browsing/querying), (2) providing an acceptable representation of the stored video content given the higher level of automation, (3) the possibility of performing these steps directly on the incoming stream at storage time, and (4) parameter robustness of the algorithms used for these steps. This paper proposes and validates novel approaches for automating the aforementioned preparatory phases. A detection method for abrupt shot changes is proposed, using a locally computed threshold based on a statistical model for frame-to-frame differences. For the extraction of representative frames (key frames), an approach is presented which distributes a given number of key frames over the sequence depending on content changes in a temporal segment of the sequence. A multimedia database is introduced that automatically stores all bibliographic information about a recorded video, as well as a visual representation of the content, without any manual intervention from the user.
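The locally computed threshold for abrupt shot-change detection can be sketched as a sliding-window statistic over the frame-difference signal. This is an illustrative sketch only: the window size, the multiplier `k`, and the extra ratio guard are assumed parameters, not those of the SMASH project's statistical model.

```python
# Declare a cut at frame i when diffs[i] far exceeds the statistics of the
# preceding window of frame-to-frame differences.
def detect_cuts(diffs, window=10, k=3.0):
    """diffs[i] = dissimilarity between frame i and frame i+1."""
    cuts = []
    for i, d in enumerate(diffs):
        local = diffs[max(0, i - window):i]   # preceding differences only
        if len(local) < 2:
            continue                           # not enough history yet
        mean = sum(local) / len(local)
        var = sum((x - mean) ** 2 for x in local) / len(local)
        # threshold adapts to local activity; the 2*mean guard suppresses
        # spurious cuts in near-static segments where the variance is tiny
        if d > mean + k * var ** 0.5 and d > 2 * mean:
            cuts.append(i)
    return cuts
```

Because the threshold is recomputed per frame, the same detector tolerates both quiet scenes and high-motion scenes without a global tuning pass.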
R Venkatesha Prasad
Real-time services have traditionally been supported on circuit-switched networks; however, there is a need to port these services to packet-switched networks. An architecture for an audio conferencing application over the Internet, in the light of the ITU-T H.323 recommendations, is considered. In a conference, mixing packets only from a set of selected clients can reduce speech-quality degradation, because mixing packets from all clients can lead to a lack of speech clarity. A distributed algorithm and architecture for selecting clients for mixing is suggested here, based on a new quantifier of voice activity called the "Loudness Number" (LN). The proposed system distributes the computational load and reduces the load on client terminals. The highlights of this architecture are scalability, bandwidth saving, and speech-quality enhancement. Client selection for playout tries to mimic a physical conference, where the most vocal participants attract the most attention. The contributions of the paper are expected to aid implementations of the H.323 recommendations for Multipoint Processors (MPs). A working prototype based on the proposed architecture is already functional.
Li, Yucheng; Han, Dantao; Yan, Juanli
A wireless video surveillance system based on ARM was designed and implemented in this article. The latest ARM11 S3C6410 is used as the main monitoring-terminal chip, running an embedded Linux operating system. The video input is obtained from an analog CCD and converted from analog to digital by the TVP5150 video chip. After being compressed by the H.264 encoder in the S3C6410, the video is packed with RTP and transmitted via the TL-WN322G+ wireless USB adapter. Furthermore, the video images are preprocessed: the system can detect abnormalities in the monitored scene and raise alarms. The video transmission definition is standard definition 480p, and the video stream can be monitored in real time. The system has been used for real-time intelligent video surveillance of specified scenes.
Ishikawa, Tomoya; Yamazawa, Kazumasa; Sato, Tomokazu; Ikeda, Sei; Nakamura, Yutaka; Fujikawa, Kazutoshi; Sunahara, Hideki; Yokoya, Naokazu
In this paper, we describe a new telepresence system which enables a user to look around a virtualized real world easily in network environments. The proposed system includes omni-directional video viewers on web browsers and allows the user to look around the omni-directional video contents in the web browser. The omni-directional video viewer is implemented as an ActiveX program, so the viewer is installed automatically when the user opens the web site containing the omni-directional video contents. The system allows many users at different sites to look around the scene, just like an interactive TV, using a multicast protocol without increasing the network traffic. This paper describes the implemented system and experiments using live and stored video streams. In the experiment with stored video streams, the system uses an omni-directional multi-camera system for video capturing, and we can look around high-resolution, high-quality video contents. In the experiment with live video streams, a car-mounted omni-directional camera acquires omni-directional video streams of the car's surroundings while driving in an outdoor environment. The acquired video streams are transferred to the remote site through the wireless and wired network using the multicast protocol, and we can view the live video contents freely in an arbitrary direction. In both experiments, we have implemented view-dependent presentation with a head-mounted display (HMD) and a gyro sensor to provide a richer sense of presence.
Bower, Matt; Cavanagh, Michael; Moloney, Robyn; Dao, MingMing
This paper reports on how the cognitive, behavioural and affective communication competencies of undergraduate students were developed using an online Video Reflection system. Pre-service teachers were provided with communication scenarios and asked to record short videos of one another making presentations. Students then uploaded their videos to…
BDM Corp., McLean, VA. Video Automatic Target Tracking System (VATTS) Operating Procedures (Aug 1980). [Scanned report; text garbled in extraction. Recoverable fragments describe the hardware complement (magnetic tape transports, a Tektronix I/O terminal, removable and fixed disk storage units, a cathode-ray-tube display) and utility programs such as AZEL (quick look at trial information) and DUPTAPE (magnetic tape duplication).]
Archetti, Renata; Vacchi, Matteo; Carniel, Sandro; Benetazzo, Alvise
Measuring the location of the shoreline and monitoring foreshore changes through time is a fundamental task for correct coastal management at many sites around the world. Several authors have demonstrated video systems to be an essential tool for increasing the amount of data available for coastline management. These systems typically sample at least once per hour and can provide long-term datasets showing variations over days, events, months, seasons, and years. In the past few years, owing to the wide availability of video cameras at relatively low prices, the use of video cameras and video image analysis for environmental monitoring has increased significantly. Although video monitoring systems have often been used in research, they are most often applied for practical purposes, including: i) identification and quantification of shoreline erosion; ii) assessment of coastal protection structure and/or beach nourishment performance; iii) basic input to engineering design in the coastal zone; and iv) support for validation of integrated numerical models. Here we present guidelines for the creation of a new video monitoring network near Jesolo beach (NW Adriatic Sea, Italy). Within this 10 km-long tourist district, several engineering structures have been built in recent years with the aim of solving urgent local erosion problems; as a result, almost all types of protection structures are present at this site: groynes and detached breakwaters. The area experienced severe coastal erosion in past decades, including a major event in November 2012. The activity is planned within the framework of the RITMARE project, which also includes other monitoring and scientific activities (bathymetry surveys, wave and current measurements, hydrodynamic and morphodynamic modeling). This contribution focuses on best practices to be adopted in the creation of the video monitoring system, and briefly describes the
We investigate the video assignment problem of a hierarchical Video-on-Demand (VOD) system in heterogeneous environments where different quality levels of videos can be encoded using either replication or layering. In such systems, videos are delivered to clients either through a proxy server or via video broadcast/unicast channels. The objective of our work is to determine the appropriate coding strategy as well as the suitable delivery mechanism for a specific quality level of a video such that the overall system blocking probability is minimized. In order to find a near-optimal solution to such a complex video assignment problem, an evolutionary approach based on a genetic algorithm (GA) is proposed. The results show that system performance can be significantly enhanced by efficiently coupling the various techniques.
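The GA-based search can be sketched generically: each gene assigns one video quality level to a (coding, delivery) option, and fitness rewards assignments with lower blocking cost. This skeleton is a stand-in under loud assumptions: the per-option cost table, population size, mutation rate, and all names are invented for illustration; the paper's actual fitness models system blocking probability.

```python
# Minimal elitist GA: one-point crossover plus point mutation over
# chromosomes that assign each video to one of four (coding, delivery) options.
import random

OPTIONS = 4                       # e.g. {replication, layering} x {proxy, channel}
COST = [1.0, 0.6, 0.8, 0.5]       # hypothetical blocking cost per option

def fitness(chrom):
    return -sum(COST[g] for g in chrom)   # lower total cost = higher fitness

def evolve(n_videos=8, pop_size=20, gens=50, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(OPTIONS) for _ in range(n_videos)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]        # elitism: keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_videos)   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:             # point mutation
                child[rng.randrange(n_videos)] = rng.randrange(OPTIONS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

Because survivors are carried over unchanged, the best fitness found is monotonically non-decreasing across generations.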
White, Preston, III
Kennedy Space Center needs economical transmission of two multiplexed video signals along multimode fiber-optic systems. These systems must span unusual distances and must meet RS-250B short-haul standards after reception. Bandwidth is a major constraint, and studies of the installed fibers, available LEDs, and PINFETs led to the choice of 100 MHz as the upper limit for the system bandwidth. Optical multiplexing and digital transmission were deemed inappropriate. Three electrical multiplexing schemes were chosen for further study, each including an FM stage to help meet the stringent S/N specification. Both FM and AM frequency-division multiplexing methods were investigated theoretically, and the results were validated with laboratory tests. The novel application of quadrature amplitude multiplexing was also considered. Frequency-division multiplexing of two wideband FM video signals appears the most promising scheme, although this application requires high-power, highly linear LED transmitters. Further studies are necessary to determine whether LEDs of appropriate quality exist and to better quantify the performance of QAM in this application.
Interest in 3D video applications and systems is growing rapidly and the technology is maturing. It is expected that multiview autostereoscopic displays will play an important role in home user environments, since they support multiuser 3D sensation and a motion parallax impression. The tremendous data rate cannot be handled efficiently by representation and coding formats such as MVC or MPEG-C Part 3. Multiview video plus depth (MVD) is a new format that efficiently supports such advanced 3DV systems, but it requires high-quality intermediate view synthesis. For this, a new approach is presented that separates unreliable image regions along depth discontinuities from reliable image regions, which are treated separately and fused into the final interpolated view. In contrast to previous layered approaches, our algorithm uses two boundary layers and one reliable layer, performs image-based 3D warping only, and was implemented generically, that is, it does not necessarily rely on 3D graphics support. Furthermore, different hole-filling and filtering methods are added to provide high-quality intermediate views. As a result, high-quality intermediate views for an existing 9-view autostereoscopic display, as well as other stereo and multiscopic displays, are presented, which prove the suitability of our approach for advanced 3DV systems.
Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried
High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose a HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
Moldenhauer, Judith A.
The concept and use of the synchronous and asynchronous forms of virtual conferencing is central to the experience of global design education. Easy and ready access to people and information worldwide is at the heart of a paradigm shift in design practice and education, defined by collaboration and digital technology. The dream of smooth, global…
Kaye, Anthony R.
This paper briefly reviews the first large-scale use of computer mediated communication (CMC) at the Open University (OU) in Milton Keynes, England, including computer conferencing and electronic mail, in an adjunct mode on a multimedia distance education course with 1,500 students. The first part of the paper outlines the rationale for…
Hewett, Beth L.; Lynn, Robert
Individualized conferencing, a situation where instructors and tutors work individually with students, is one traditional way in which students whose first language is not English (ESOL) can receive help as they learn and practice their English speaking and writing skills. This article is a demonstration of some of the practical strategies common…
Powers, Judith K.
Presents typical problems encountered by tutors at writing centers when they conference with ESL writers. Discusses processes and ways of adapting collaborative conferencing strategies for second-language writers at the University of Wyoming Writing Center, including a need for intervention, that have proven effective in alleviating these…
This week you will be able to watch on the web the second edition of CERN's video news (see Bulletin n°45/2002, p.3). On this news reel: the ATRAP experiment's latest achievements, superconducting cable production for CMS, the CAST experiment and the European digital conferencing project InDiCo. Go to : www.cern.ch/video, or Bulletin web page.
Sandy, C. L. M.; Meiyanti, R.
Measuring height means comparing the magnitude of an object against a standard measuring tool. A problem with existing measurement practice is that simple apparatus, such as a tape measure, is still used, which takes a relatively long time. To overcome this, this research aims to create image-processing software for height measurement. The captured image is then tested: an object captured by the video camera is segmented using Otsu's thresholding method so that its height can be measured. The system was built in Delphi 7 using the Vision Lab VCL 4.5 component. To improve the quality of the system in future research, the developed system can be combined with other methods.
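Otsu's method, referenced above, selects the gray-level threshold that maximizes the between-class variance of the image histogram; a minimal sketch follows (the abstract's Delphi/Vision Lab implementation is not shown, so this is an independent illustration of the technique):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the 8-bit threshold that maximizes between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()  # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0          # background mean
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1     # foreground mean
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal toy image: dark background (50) and a bright object (200).
img = np.array([[50] * 4 + [200] * 4] * 8, dtype=np.uint8)
threshold = otsu_threshold(img)
```

Once the object is separated from the background by this threshold, its height in pixels can be read from the segmented region and converted to physical units via calibration.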
Giaccone, Agnese; Solli, Piergiorgio; Bertolaccini, Luca
The magnetic anchoring guidance system (MAGS) is one of the most promising technological innovations in minimally invasive surgery and consists of two magnetic elements matched through the abdominal or thoracic wall. The internal magnet can be inserted into the abdominal or chest cavity through a small single incision and then moved into position by manipulating the external component. In addition to a video camera system, the inner magnetic platform can house remotely controlled surgical tools, thus reducing instrument fencing, a serious inconvenience of the uniportal access. The latest prototypes are equipped with self-contained light-emitting diode (LED) illumination and a wireless antenna for signal transmission and device control, which allows bypassing the obstacle of wires crossing the field of view (FOV). Despite being originally designed for laparoscopic surgery, the MAGS seems well suited to the characteristics of the chest wall and might meet the specific demands of video-assisted thoracic surgery (VATS) in terms of ergonomics, visualization and surgical performance; moreover, it involves less risk for the patient and an improved aesthetic outcome.
Yang, Jie Chi; Huang, Yi Ting; Tsai, Chi Cheng; Chung, Ching I.; Wu, Yu Chieh
In recent years, using video as a learning resource has received a lot of attention and has been successfully applied to many learning activities. In comparison with text-based learning, video learning integrates more multimedia resources, which usually motivate learners more than texts. However, one of the major limitations of video learning is…
Terakawa, Yuzo; Ishibashi, Kenichi; Goto, Takeo; Ohata, Kenji
Three-dimensional (3-D) video recording of microsurgery is a more promising tool for presentation and education of microsurgery than conventional two-dimensional video systems, but has not been widely adopted partly because 3-D image processing of previous 3-D video systems is complicated and observers without optical devices cannot visualize the 3-D image. A new technical development for 3-D video presentation of microsurgery is described. Microsurgery is recorded with a microscope equipped with a single high-definition (HD) video camera. This 3-D video system records the right- and left-eye views of the microscope simultaneously as single HD data with the use of a 3-D camera adapter: the right- and left-eye views of the microscope are displayed separately on the right and left sides, respectively. The operation video is then edited with video editing software so that the right-eye view is displayed on the left side and left-eye view is displayed on the right side. Consequently, a 3-D video of microsurgery can be created by viewing the edited video by the cross-eyed stereogram viewing method without optical devices. The 3-D microsurgical video provides a more accurate view, especially with regard to depth, and a better understanding of microsurgical anatomy. Although several issues are yet to be addressed, this 3-D video system is a useful method of recording and presenting microsurgery for 3-D viewing with currently available equipment, without optical devices.
Inter-organizational problem solving of emergencies and extreme events is a complex research field where scarce experimental data are available. To address this problem, the Inter-GAP In Vivo System was developed to run behavioural experiments on complex crises. The system design and testing included three categories of participants: for pilot testing, first-year university students; for theoretical validity, college students enrolled in emergency management programs; and for field validity, expert decision makers who had managed major crises. A comparative assessment was performed to select the most suitable commercially available video conferencing software, since it was more cost-efficient to acquire an already developed tool and customize it to the experiment's needs than to design a new one. The software features analyzed were ease of use, recording capabilities, format delivery options and security. The Inter-GAP In Vivo System was then implemented on the selected video conference platform. System performance was evaluated at three levels: technical setup, task design and workflow processes. The actual experimentation showed that the conferencing software is a versatile tool to enhance collaboration between stakeholders from different organizations, thanks to the audiovisual contact participants can establish, in which nonverbal cues can be exchanged throughout the problem-solving process. Potential future system applications include collaborative and cross-functional training between organizations.
Bhattacharya, Sharmila; Inan, Omer; Kovacs, Gregory; Etemadi, Mozziyar; Sanchez, Max; Marcu, Oana
populations in terrestrial experiments, and could be especially useful in field experiments in remote locations. Two practical limitations of the system should be noted: first, only walking flies can be observed - not flying - and second, although it enables population studies, tracking individual flies within the population is not currently possible. The system used video recording and an analog circuit to extract the average light changes as a function of time. Flies were held in a 5-cm diameter Petri dish and illuminated from below by a uniform light source. A miniature, monochrome CMOS (complementary metal-oxide semiconductor) video camera imaged the flies. This camera had automatic gain control, and this did not affect system performance. The camera was positioned 5-7 cm above the Petri dish such that the imaging area was 2.25 sq cm. With this basic setup, still images and continuous video of 15 flies at one time were obtained. To reduce the required data bandwidth by several orders of magnitude, a band-pass filter (0.3-10 Hz) circuit compressed the video signal and extracted changes in image luminance over time. The raw activity signal output of this circuit was recorded on a computer and digitally processed to extract the fly movement "events" from the waveform. These events corresponded to flies entering and leaving the image and were used for extracting activity parameters such as inter-event duration. The efficacy of the system in quantifying locomotor activity was evaluated by varying environmental temperature, then measuring the activity level of the flies.
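The 0.3-10 Hz band-pass stage described above can be approximated in software; the sketch below assumes the mean-luminance signal has already been extracted per frame, and uses a digital Butterworth filter as a stand-in for the paper's analog circuit (an assumption, not the authors' hardware):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0  # assumed camera frame rate, Hz
# 2nd-order Butterworth band-pass matching the 0.3-10 Hz analog circuit.
b, a = butter(2, [0.3 / (fs / 2), 10.0 / (fs / 2)], btype="band")

t = np.arange(0, 10, 1 / fs)
# Synthetic luminance trace: slow illumination drift (0.05 Hz) plus
# fly-like activity at 2 Hz, well inside the passband.
luminance = 0.5 * np.sin(2 * np.pi * 0.05 * t) + 0.1 * np.sin(2 * np.pi * 2.0 * t)
activity = filtfilt(b, a, luminance)

# "Events": upward threshold crossings of the filtered activity signal,
# analogous to flies entering/leaving the image.
events = np.flatnonzero((activity[:-1] < 0.05) & (activity[1:] >= 0.05))
```

The filter removes the slow drift while keeping the activity component, so simple threshold crossings on the filtered waveform recover movement events at a tiny fraction of the raw video bandwidth.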
Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang
In the information age, video processing is developing rapidly toward intelligence, and complex algorithms pose a serious challenge to processor performance. In this article, an FPGA + TMS320C6678 architecture integrates image defogging, fusion, stabilization and enhancement into one system. With good real-time behavior and superior performance, it overcomes the defects of traditional video processing systems, such as limited functionality and single-purpose products, and addresses video applications in security monitoring and related fields, giving full play to the effectiveness of video monitoring and improving enterprise economic benefits.
Some of the most challenging multimedia applications have involved real-time conferencing, using audio and video to support interpersonal communication. Here we re-examine assumptions about the role, importance and implementation of video information in such systems. Rather than focussing on novel technologies, we present evaluation data relevant to both the classes of real-time multimedia applications we should develop and their design and implementation. Evaluations of videoconferencing systems show that previous work has overestimated the importance of video at the expense of audio. This has strong implications for the implementation of bandwidth allocation and synchronization. Furthermore our recent studies of workplace interaction show that prior work has neglected another potentially vital function of visual information: in assessing the communication availability of others. In this new class of application, rather than providing a supplement to audio information, visual information is used to promote the opportunistic communications that are prevalent in face-to-face settings. We discuss early experiments with such connection applications and identify outstanding design and implementation issues. Finally we examine a different class of application 'video-as-data', where the video image is used to transmit information about the work objects themselves, rather than information about interactants.
Д В Сенашенко
The article describes distance learning systems used in world practice. The author gives a classification of video communication systems. Aspects of using Skype software in the Russian Federation are discussed. In conclusion, the author provides a review of modern production video conference systems used as tools for distance learning.
... COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... certain video analytics software, systems, components thereof, and products containing same by reason of..., Inc. The remaining respondents are Bosch Security Systems, Inc.; Robert Bosch GmbH; Bosch...
... COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... States after importation of certain video analytics software, systems, components thereof, and products...; Bosch Security Systems, Inc. of Fairpoint, New York; Samsung Techwin Co., Ltd. of Seoul, Korea; Samsung...
This work presents a novel indoor video surveillance system capable of detecting human falls. The proposed system can detect and evaluate human posture as well. To evaluate human movements, the background model is built using the codebook method, and the possible positions of moving objects are extracted using background and shadow elimination. Extracting the foreground image introduces noise and damage, so the noise is eliminated using morphological and size filters and the damaged image is repaired. Once the image object of a human is extracted, whether the posture has changed is evaluated using the aspect ratio and height of the human body. Meanwhile, the proposed system detects a change of posture and extracts the histogram of the object projection to represent the appearance. The histogram becomes the input vector of a K-Nearest Neighbor (K-NN) algorithm, which evaluates the posture of the object. Capable of accurately detecting different human postures, the proposed system increases fall detection accuracy. Importantly, the proposed method detects the posture using the frame ratio and the displacement of height in an image. Experimental results demonstrate that the proposed system can further improve system performance and fall identification accuracy.
Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.; Ives, Robert W.
FPGA devices with embedded DSP and memory blocks, and high-speed interfaces are ideal for real-time video processing applications. In this work, a hardware-software co-design approach is proposed to effectively utilize FPGA features for a prototype of an automated video surveillance system. Time-critical steps of the video surveillance algorithm are designed and implemented in the FPGAs logic elements to maximize parallel processing. Other non timecritical tasks are achieved by executing a high level language program on an embedded Nios-II processor. Pre-tested and verified video and interface functions from a standard video framework are utilized to significantly reduce development and verification time. Custom and parallel processing modules are integrated into the video processing chain by Altera's Avalon Streaming video protocol. Other data control interfaces are achieved by connecting hardware controllers to a Nios-II processor using Altera's Avalon Memory Mapped protocol.
Wen, Ming; Hu, Haibo
To meet the demands of high-definition video and real-time transmission during endoscopic surgery, this paper designs an HD mobile video transmission system. The system uses H.264/AVC to encode the original video data and transports it over the network via the RTP/RTCP protocols. Meanwhile, the system implements stable video transmission on portable terminals (such as tablet PCs and mobile phones) over the 3G mobile network. The test results verify strong repair ability and stability under conditions of low bandwidth, high packet loss rate and high delay, and show high practical value.
Walton, James S.; Hallamasek, Karen G.
The value of high-speed imaging for making subjective assessments is widely recognized, but the inability to acquire useful data from image sequences in a timely fashion has severely limited the use of the technology. 4DVideo has created a foundation for a generic instrument that can capture kinematic data from high-speed images. The new system has been designed to acquire (1) two-dimensional trajectories of points; (2) three-dimensional kinematics of structures or linked rigid-bodies; and (3) morphological reconstructions of boundaries. The system has been designed to work with an unlimited number of cameras configured as nodes in a network, with each camera able to acquire images at 1000 frames per second (fps) or better, with a spatial resolution of 512 X 512 or better, and an 8-bit gray scale. However, less demanding configurations are anticipated. The critical technology is contained in the custom hardware that services the cameras. This hardware optimizes the amount of information stored, and maximizes the available bandwidth. The system identifies targets using an algorithm implemented in hardware. When complete, the system software will provide all of the functionality required to capture and process video data from multiple perspectives. Thereafter it will extract, edit and analyze the motions of finite targets and boundaries.
This article reports on a study that was carried out in order to examine the impact of conferencing assessment on students’ learning of English grammar. Forty-two Iranian intermediate university students were randomly assigned to an experimental and a control group. The participants in the experimental group took part in four individual and four whole class conferences. The participants in the control group studied the same grammatical points but they were not involved in conferencing assessment. The results of the study showed that the experimental group performed significantly better than the control group on the given post-test. Moreover, the attitudes of the participants toward grammar learning in the experimental group significantly changed from the first administration of a questionnaire to its second administration.
Allin, Thomas Højgaard; Neubert, Torsten; Laursen, Steen
at the Pic du Midi Observatory in Southern France, the system was operational during the period from July 18 to September 15, 2003. The video system, based on two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over...
The current study aims at investigating the impact of audio/voice conferencing, as a new approach to teaching speaking, on the speaking performance and/or speaking band score of IELTS candidates. Experimental group subjects participated in an audio conferencing class while those of the control group attended a traditional IELTS Speaking class. At the end of the study, all subjects participated in an IELTS examination held on November fourth in Tehran, Iran. To compare the group means for the study, an independent t-test analysis was employed. The difference between the experimental and control group was considered to be statistically significant (P < 0.01). That is, the candidates in the experimental group outperformed the ones in the control group in IELTS Speaking test scores.
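The independent t-test used above compares the two group means against their pooled variability; a minimal sketch with made-up band scores (illustration only, not the study's data):

```python
from scipy import stats

# Hypothetical IELTS Speaking band scores for two groups of 8 candidates.
experimental = [6.5, 7.0, 6.5, 7.5, 7.0, 6.5, 7.0, 7.5]
control = [5.5, 6.0, 5.5, 6.0, 6.5, 5.5, 6.0, 5.5]

# Independent two-sample t-test (equal variances assumed, scipy's default).
t_stat, p_value = stats.ttest_ind(experimental, control)
significant = p_value < 0.01  # the study's reported significance level
```

A positive t statistic with p below the chosen alpha indicates the experimental group's mean score is significantly higher than the control group's.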
Fraser, Hannah; Soanes, Kylie; Jones, Stuart A; Jones, Chris S; Malishev, Matthew
The objectives of conservation science and dissemination of its research create a paradox: Conservation is about preserving the environment, yet scientists spread this message at conferences with heavy carbon footprints. Ecology and conservation science depend on global knowledge exchange-getting the best science to the places it is most needed. However, conference attendance from developed countries typically outweighs that from developing countries that are biodiversity and conservation hotspots. If any branch of science should be trying to maximize participation while minimizing carbon emissions, it is conservation. Virtual conferencing is common in other disciplines, such as education and humanities, but it is surprisingly underused in ecology and conservation. Adopting virtual conferencing entails a number of challenges, including logistics and unified acceptance, which we argue can be overcome through planning and technology. We examined 4 conference models: a pure-virtual model and 3 hybrid hub-and-node models, where hubs stream content to local nodes. These models collectively aim to mitigate the logistical and administrative challenges of global knowledge transfer. Embracing virtual conferencing addresses 2 essential prerequisites of modern conferences: lowering carbon emissions and increasing accessibility for remote, time- and resource-poor researchers, particularly those from developing countries. © 2017 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
This work presents a fall detection system based on image processing technology. The system can detect falls by multiple people via analysis of video frames. First, the system uses a Gaussian mixture background model to generate information about the background, and background noise and shadows are eliminated to extract the possible positions of moving objects. Extracting the foreground image introduces noise and damage, so morphological and size filters are used to eliminate this noise and repair the damage to the image. Extraction of the foreground image yields the locations of human heads in the image. The median point, height and aspect ratio of the people in the image are calculated, and these characteristics are used to trace objects. Changes in these characteristics across consecutive images are used to determine whether persons enter or leave the scene. The fall detection method uses the height and aspect ratio of the human body, analyzes images in which one person overlaps with another, and detects whether a person has fallen. Experimental results demonstrate that the proposed method can efficiently detect falls by multiple persons.
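The aspect-ratio cue that both fall-detection abstracts rely on can be sketched very simply: a standing person's bounding box is tall and narrow, while a fallen person's is wide and short. The threshold below is illustrative, not a value from the papers:

```python
def posture_from_bbox(width, height, fall_ratio=1.0):
    """Classify posture from a bounding box via aspect ratio = width / height.

    A ratio above fall_ratio (box wider than it is tall) suggests a fall;
    fall_ratio is a hypothetical tuning parameter.
    """
    ratio = width / height
    return "fallen" if ratio > fall_ratio else "upright"

standing = posture_from_bbox(40, 120)   # tall, narrow box
fallen = posture_from_bbox(130, 50)     # wide, short box
```

Real systems, as the abstracts note, combine this cue with height displacement over time and appearance features (e.g. projection histograms fed to K-NN) to distinguish falls from sitting or bending.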
de Barros, Rui Sergio Monteiro; Brito, Marcus Vinicius Henriques; de Brito, Marcelo Houat; de Aguiar Lédo Coutinho, Jean Vitor; Teixeira, Renan Kleber Costa; Yamaki, Vitor Nagai; da Silva Costa, Felipe Lobato; Somensi, Danusa Neves
The surgical microscope is an essential tool for microsurgery. Nonetheless, several promising alternatives are being developed, including endoscopes and laparoscopes with video systems. However, these alternatives have only been used for arterial anastomoses so far. The aim of this study was to evaluate the use of a low-cost video-assisted magnification system in end-to-side neurorrhaphy in rats. Forty rats were randomly divided into four matched groups: (1) normality (sciatic nerve was exposed but was kept intact); (2) denervation (fibular nerve was sectioned, and the proximal and distal stumps were sutured-transection without repair); (3) microscope; and (4) video system (fibular nerve was sectioned; the proximal stump was buried inside the adjacent musculature, and the distal stump was sutured to the tibial nerve). Microsurgical procedures were performed with guidance from a microscope or video system. We analyzed weight, nerve caliber, number of stitches, times required to perform the neurorrhaphy, muscle mass, peroneal functional indices, latency and amplitude, and numbers of axons. There were no significant differences in weight, nerve caliber, number of stitches, muscle mass, peroneal functional indices, or latency between microscope and video system groups. Neurorrhaphy took longer using the video system (P microscope group than in the video group. It is possible to perform an end-to-side neurorrhaphy in rats through video system magnification. The success rate is satisfactory and comparable with that of procedures performed under surgical microscopes. Copyright © 2017 Elsevier Inc. All rights reserved.
Recent years have seen significant investment and increasingly effective use of Video Analytics (VA) systems to detect intrusion or attacks in sterile areas. Currently there are a number of manufacturers who have achieved the Imagery Library for Intelligent Detection System (i-LIDS) primary detection classification performance standard for the sterile zone detection scenario. These manufacturers have demonstrated the performance of their systems under evaluation conditions using an uncompressed evaluation video. In this paper we consider the effect on the detection rate of an i-LIDS primary approved sterile zone system using compressed sterile zone scenario video clips as the input. The preliminary test results demonstrate a change in detection rate with compression, as the time to alarm increased with greater compression. Initial experiments suggest that the detection performance does not linearly degrade as a function of compression ratio. These experiments form a starting point for a wider set of planned trials that the Home Office will carry out over the next 12 months.
..., ``Nintendo''). The products accused of infringing the asserted patents are gaming systems and related... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Game Systems and Wireless Controllers and Components Thereof; Commission...
National Aeronautics and Space Administration — In this project, the development of a novel panoramic, stereoscopic video system was proposed. The proposed system, which contains no moving parts, uses three-fixed...
Yamada, Takaaki; Echizen, Isao; Tezuka, Satoru; Yoshiura, Hiroshi
Emerging broadband networks and the high performance of PCs provide new business opportunities for live video streaming services to Internet users at sporting events or music concerts. Digital watermarking for video helps to protect the copyright of the video content, and real-time processing is an essential requirement. For the small start of a new business, this should be achieved with flexible software, without special equipment. This paper describes a novel real-time watermarking system implemented on a commodity PC. We propose the system architecture and methods to shorten watermarking time by reusing the estimated watermark imperceptibility among neighboring frames. A prototype system enables real-time processing in a pipeline of capturing NTSC signals, watermarking the video, encoding it to MPEG-4 at QVGA, 1 Mbps, 30 fps, and storing the video for up to 12 hours.
Ramezani, Mohsen; Yaghmaee, Farzin
In recent years, the fast growth of online video sharing has raised new issues, such as helping users find what they need efficiently. Hence, Recommender Systems (RSs) are used to find users' most favored items. Finding these items relies on item or user similarities, though many factors, such as sparsity and cold-start users, affect recommendation quality. In some systems, attached tags are used for searching items (e.g. videos) as personalized recommendation; differing views, incomplete and inaccurate tags, etc. can weaken the performance of these systems. Advances in computer vision techniques can help improve RSs. To this end, content-based search can be used for finding items (here, videos are considered): a query video is taken from the user in order to find and recommend a list of the most similar videos. Since most videos involve humans, we present a novel, low-complexity, scalable method to recommend videos based on a model of the included action. This method draws on human action retrieval approaches. To model human actions, interest points are extracted from each action and their motion information is used to compute the action representation. Moreover, a fuzzy dissimilarity measure is presented to compare videos for ranking. Experimental results on the HMDB, UCFYT, UCF Sport and KTH datasets illustrate that, in most cases, the proposed method achieves better results than most widely used methods.
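A fuzzy-dissimilarity ranking of the kind described can be sketched as follows. The Gaussian membership function and the toy feature vectors are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def fuzzy_dissimilarity(a, b, spread=1.0):
    """Dissimilarity in [0, 1]: 0 = identical action models, 1 = unrelated.

    Uses a Gaussian membership over Euclidean distance (an assumed choice).
    """
    d = np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))
    return 1.0 - np.exp(-(d ** 2) / (2 * spread ** 2))

def recommend(query, library, k=2):
    """Rank library videos by ascending dissimilarity to the query action model."""
    scored = sorted(library.items(), key=lambda kv: fuzzy_dissimilarity(query, kv[1]))
    return [name for name, _ in scored[:k]]

# Toy 2-D action descriptors standing in for motion-based representations.
library = {"run_a": [1.0, 0.1], "run_b": [0.9, 0.2], "wave": [0.0, 1.0]}
top = recommend([1.0, 0.0], library)
```

Because the measure is bounded in [0, 1], scores from different feature scales remain comparable when ranking candidate videos.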
The recent development of three-dimensional (3D) display technologies has resulted in a proliferation of 3D video production and broadcasting, attracting a lot of research into capture, compression and delivery of stereoscopic content. However, the predominant design practice of interactions with 3D video content has failed to address its differences and possibilities in comparison to existing 2D video interactions. This paper presents a study of user requirements related to interaction with stereoscopic 3D video. The study suggests that change of view, zoom in/out, dynamic video browsing and textual information are the most relevant interactions with stereoscopic 3D video. In addition, we identified a strong demand for object selection that resulted in a follow-up study of user preferences in 3D selection using virtual-hand and ray-casting metaphors. These results indicate that interaction modality affects users’ choice of object selection in terms of chosen location in 3D, while user attitudes do not have a significant impact. Furthermore, the ray-casting-based interaction modality using a Wiimote can outperform the volume-based interaction modality using mouse and keyboard for object positioning accuracy.
... systems for the delivery of video programming. 63.02 Section 63.02 Telecommunication FEDERAL... systems for the delivery of video programming. (a) Any common carrier is exempt from the requirements of... with respect to the establishment or operation of a system for the delivery of video programming. ...
... COMMISSION In the Matter of Certain Video Game Systems and Wireless Controllers and Components Thereof... importation, and the sale within the United States after importation of certain video game systems and... importation of certain video game systems and wireless controllers and components thereof that infringe one or...
Today's video surveillance systems are increasingly equipped with video content analysis for a great variety of applications. However, reliability and robustness of video content analysis algorithms remain an issue. They have to be measured against ground truth data in order to quantify the performance and advancements of new algorithms. Therefore, a variety of measures have been proposed in the literature, but there has neither been a systematic overview nor an evaluation of measures for specific video analysis tasks yet. This paper provides a systematic review of measures and compares their effectiveness for specific aspects, such as segmentation, tracking, and event detection. Focus is drawn on details like normalization issues, robustness, and representativeness. A software framework is introduced for continuously evaluating and documenting the performance of video surveillance systems. Based on many years of experience, a new set of representative measures is proposed as a fundamental part of an evaluation framework.
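Comparison against ground truth for detection tasks typically reduces to precision, recall and F-score over matched detections; a minimal sketch follows (the paper's full measure set, covering segmentation and tracking as well, is much richer):

```python
def detection_scores(true_positives, false_positives, false_negatives):
    """Standard frame-level detection measures against ground truth annotations."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Hypothetical tally from matching detections to ground truth boxes.
p, r, f1 = detection_scores(true_positives=80, false_positives=20, false_negatives=10)
```

Normalization matters here, as the paper stresses: counts must be accumulated consistently (per frame or per sequence) before computing ratios, or results from different videos are not comparable.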
Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian
Panayides, A S; Pattichis, M S; Constantinides, A G; Pattichis, C S
The emergence of the new, High Efficiency Video Coding (HEVC) standard, combined with wide deployment of 4G wireless networks, will provide significant support toward the adoption of mobile-health (m-health) medical video communication systems in standard clinical practice. For the first time since the emergence of m-health systems and services, medical video communication systems can be deployed that can rival the standards of in-hospital examinations. In this paper, we provide a thorough overview of today's advancements in the field, discuss existing approaches, and highlight the future trends and objectives.
Reid, Fraser J. M.; Hards, Rachael
Examines the effects of time scarcity on the way disagreement is managed in synchronous computer conferencing; reports an experiment in which pairs of undergraduates used keyboard-based conferencing software to resolve disputes on two controversial discussion topics under conditions either of time scarcity, or time abundance; and discusses…
Design of automated video surveillance systems is one of the demanding missions in the computer vision community because of their ability to automatically select frames of interest in incoming video streams based on motion detection. This research paper focuses on the real-time hardware implementation of a motion detection algorithm for such vision-based automated surveillance systems. A dedicated VLSI architecture has been proposed and designed for a clustering-based motion detection scheme. A working prototype of a complete standalone automated video surveillance system, including the input camera interface, the designed motion detection VLSI architecture, and the output display interface, with real-time relevant-motion detection capabilities, has been implemented on a Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The prototyped system robustly detects relevant motion in real time in live PAL (720 × 576) resolution video streams coming directly from the camera.
Mathiak, Krystyna A; Klasen, Martin; Weber, René; Ackermann, Hermann; Shergill, Sukhwinder S; Mathiak, Klaus
It was demonstrated that playing a video game leads to striatal dopamine release. It is unclear, however, which aspects of the game cause this reward system activation and whether violent content contributes…
Xia, Zhen-Hua; Wang, Xiao-Shuang
With the rapid development of electronic, multimedia and mobile communication technology, video monitoring systems are moving in an embedded, digital and wireless direction. In this paper, a wireless video monitoring solution based on WCDMA is proposed. The solution makes full use of the advantages of 3G, namely extensive network coverage and wide bandwidth. It can capture the video stream from the chip's video port, encode the image data in real time with a high-speed DSP, and provide enough bandwidth to transmit the monitoring images over the WCDMA wireless network. The experiments demonstrate that the system offers high stability, good image quality and good transmission performance; in addition, because it adopts wireless transmission it is not restricted by geographical position, making it suitable for sparsely populated, harsh-environment scenarios.
Video surveillance systems build on the video and image processing research areas within computer science. Video processing covers various methods used to track changes in a scene over the course of a video, and is nowadays one of the important areas of computer science. Two-dimensional videos are subjected to various segmentation, object detection and tracking processes, which appear in multimedia content-based indexing, information retrieval, visual and distributed cross-camera surveillance systems, people tracking, traffic tracking and similar applications. Background subtraction (BS) is a frequently used approach for moving object detection and tracking, and several related methods exist in the literature. This study proposes a more efficient method as an addition to the existing ones. Based on a model produced by adaptive background subtraction (ABS), an object detection and tracking system is implemented in software. The performance of the developed system is tested in experiments with related video datasets, and the experimental results and discussion are presented in the study.
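The adaptive background subtraction (ABS) approach can be sketched as a per-pixel running-average background model. This is a minimal illustration in plain Python with an assumed learning rate and threshold, not the paper's implementation:

```python
# Illustrative sketch of adaptive background subtraction (ABS):
# the background is a per-pixel running average, blended toward each
# new frame; pixels far from the model are flagged as foreground.

def update_background(bg, frame, alpha=0.05):
    """Blend the new frame into the background model (learning rate alpha)."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, threshold=30):
    """Pixels that deviate from the background model are moving objects."""
    return [[1 if abs(f - b) > threshold else 0
             for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]
```

The adaptivity shows up over time: an object that stops moving is gradually absorbed into the background model and disappears from the mask.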
M. van Persie
During a fire incident, live airborne video offers the fire brigade an additional source of information. Essential for the effective use of the daylight and infrared video data from the UAS is that the information is fully integrated into the fire brigade's crisis management system. This is a GIS-based system in which all relevant geospatial information is brought together and automatically distributed to all levels of the organisation. In the context of the Dutch Fire-Fly project, a geospatial video server was integrated with a UAS and the fire brigade's crisis management system, so that real-time geospatial airborne video and derived products can be made available at all levels during a fire incident. The most important elements of the system are the Delftdynamics Robot Helicopter, the Video Multiplexing System, the Keystone geospatial video server/editor, and the Eagle and CCS-M crisis management systems. In discussion with the Security Region North East Gelderland, user requirements and a concept of operation were defined, demonstrated and evaluated. This article describes the technical and operational approach and results.
Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui
A combine harvester usually works in sparsely populated areas with a harsh environment. In order to achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux is developed. The system uses a USB camera to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab and cutting table. Video data are compressed with the JPEG image compression standard, and the monitoring images are transferred over the network to a remote monitoring center for long-range monitoring and management. The paper first explains the motivation for the system, then briefly introduces the hardware and software design, and then details the configuration and compilation of the embedded Linux operating system and the compilation and porting of the video server program. Finally, the equipment was installed and commissioned on a combine harvester and the system was tested. In the experiments, the remote video monitoring system achieved 30 fps at a resolution of 800 × 600, with a response delay of about 40 ms over the public network.
Video applications on mobile wireless devices are a challenging task due to the limited capacity of batteries, while the complex functionality of video decoding imposes high resource requirements. Power-efficient control has therefore become a critical design concern for devices integrating complex video processing techniques. Previous work on power-efficient control in video decoding systems often aims at low-complexity design, does not explicitly consider the scalable impact of the subfunctions of the decoding process, and seldom considers the relationship with the features of the compressed video data. This paper develops an energy-scalable video decoding (ESVD) strategy for energy-limited mobile terminals. First, ESVD can dynamically adapt to variable energy resources thanks to a device-aware technique. Second, ESVD combines decoder control with the decoded data by classifying the data into different partition profiles according to their characteristics. Third, it introduces utility-theoretic analysis into the resource allocation process, so as to maximize resource utilization. Finally, it adapts to different energy budgets and generates scalable video decoding output on energy-limited systems. Experimental results demonstrate the efficiency of the proposed approach.
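The utility-driven resource allocation step can be illustrated with a greedy sketch: given an energy budget and per-partition decoding options, repeatedly buy the upgrade with the best utility per unit of energy. All names and numbers below are illustrative assumptions, not the paper's ESVD algorithm:

```python
# Illustrative sketch of utility-based energy allocation for scalable
# decoding. Each partition profile offers cumulative (energy, utility)
# levels; a greedy loop spends the budget where marginal utility per
# joule is highest.

def allocate_energy(partitions, budget):
    """partitions: {name: [(energy_cost, utility), ...]} with levels in
    increasing cost order; returns (chosen level per partition, energy spent)."""
    level = {p: 0 for p in partitions}
    spent = 0.0
    while True:
        best = None
        for p, opts in partitions.items():
            nxt = level[p] + 1
            if nxt < len(opts):
                de = opts[nxt][0] - opts[level[p]][0]  # extra energy
                du = opts[nxt][1] - opts[level[p]][1]  # extra utility
                if de > 0 and spent + de <= budget:
                    ratio = du / de
                    if best is None or ratio > best[0]:
                        best = (ratio, p, de)
        if best is None:
            break  # budget exhausted or no affordable upgrade left
        _, p, de = best
        level[p] += 1
        spent += de
    return level, spent
```

A greedy marginal-utility rule is only optimal under concavity assumptions; the paper's utility-theoretic analysis is more general, but the budget-bounded trade-off is the same.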
Scollato, A; Perrini, P; Benedetto, N; Di Lorenzo, N
We propose an easy-to-construct digital video editing system, ideal for producing video documentation and still images. A digital video editing system applicable to many video sources in the operating room is described in detail. The proposed system has proved easy to use and permits one to obtain videography quickly and easily. Mixing different streams of video input from all the devices in use in the operating room, and applying filters and effects, produces a final, professional end-product. Recording on a DVD provides an inexpensive, portable and easy-to-use medium on which to store or re-edit the material at a later time. From stored videography it is easy to extract high-quality still images useful for teaching, presentations and publications. In conclusion, digital videography and still photography can easily be recorded by the proposed system, producing high-quality video recordings. The use of FireWire ports provides good compatibility with next-generation hardware and software. The high standard of quality makes the proposed system one of the lowest-priced products available today.
Heckendorn, F.M.; Robinson, C.W.
Specialized miniature low cost video equipment has been effectively used in a number of remote, radioactive, and contaminated environments at the Savannah River Site (SRS). The equipment and related techniques have reduced the potential for personnel exposure to both radiation and physical hazards. The valuable process information thus provided would not have otherwise been available for use in improving the quality of operation at SRS.
Ehlert, Steven; Kingery, Aaron; Suggs, Robert
We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼0.05–0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
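The calibration chain described above (linearity correction of raw counts, then a photometric zero-point from reference-star magnitudes) can be sketched as follows. The power-law linearity model and its gamma are hypothetical stand-ins, not the MEO's measured correction:

```python
import math

# Illustrative photometric calibration sketch: correct raw camera counts
# for non-linearity, fit a zero-point (ZP) from reference stars with
# known magnitudes, then convert a meteor's counts to a magnitude.

def linearize(raw, gamma=0.9):
    """Hypothetical power-law linearity correction: counts -> linear flux."""
    return raw ** (1.0 / gamma)

def zero_point(stars):
    """stars: [(raw_counts, catalog_mag), ...].
    ZP is the mean of mag + 2.5 log10(flux) over the reference stars."""
    zps = [mag + 2.5 * math.log10(linearize(raw)) for raw, mag in stars]
    return sum(zps) / len(zps)

def meteor_magnitude(raw, zp):
    """Apparent magnitude of a source from its linearized counts."""
    return zp - 2.5 * math.log10(linearize(raw))
```

With a sound linearity correction and catalog magnitudes in the camera's own bandpass, the residual scatter of the per-star zero-points is precisely the systematic uncertainty the abstract quantifies.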
Ferreira, João, E-mail: email@example.com [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Vale, Alberto [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Ribeiro, Isabel [Laboratório de Robótica e Sistemas em Engenharia e Ciência - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)
Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a scale 1:25 of the divertor level of Tokamak building.
Fog, Benedikte; Ulfkjær, Jacob Kanneworff Stigsen; Schlichter, Bjarne Rerup
The study of business information systems has become increasingly important in the Digital Economy. However, it has been found that students have difficulties understanding its practical implications, which leads to decreased motivation. This study aims to investigate how to optimize the use of video to increase comprehension of the practical implications of studying business information systems. The qualitative study is based on observations and focus group interviews with first-semester business students. The findings suggest that the video examined in the case study did not sufficiently reflect the theoretical recommendations for using video optimally in management education: it did not comply with the video learning sequence introduced by Marx and Frost (1998). However, the study questions whether the level of cognitive orientation activities can become too extensive.
Al Hadhrami, Tawfik; Nightingale, James M.; Wang, Qi; Grecos, Christos
In emergency situations, the ability to remotely monitor unfolding events using high-quality video feeds will significantly improve the incident commander's understanding of the situation, thereby aiding effective decision making. This paper presents a novel, adaptive video monitoring system for emergency situations where the normal communications network infrastructure has been severely impaired or is no longer operational. The proposed scheme, operating over a rapidly deployable wireless mesh network, supports real-time video feeds between first responders, forward operating bases and primary command and control centers. Video feeds captured on portable devices carried by first responders and by static visual sensors are encoded in H.264/SVC, the scalable extension to H.264/AVC, allowing efficient, standards-based temporal, spatial, and quality scalability of the video. A three-tier video delivery system is proposed, which balances the need to avoid overuse of mesh nodes with the operational requirements of the emergency management team. In the first tier, the video feeds are delivered at a low spatial and temporal resolution employing only the base layer of the H.264/SVC video stream. Routing in this mode is designed to employ all nodes across the entire mesh network. In the second tier, whenever operational considerations require that commanders or operators focus on a particular video feed, a 'fidelity control' mechanism at the monitoring station sends control messages to the routing and scheduling agents in the mesh network, which increase the quality of the received picture using SNR scalability while conserving bandwidth by maintaining a low frame rate. In this mode, routing decisions are based on reliable packet delivery, with the most reliable routes being used to deliver the base and lower enhancement layers; as fidelity is increased and more scalable layers are transmitted, they are assigned to routes in descending order of reliability. The third tier…
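The layer-to-route assignment in the higher tiers (the most reliable routes carry the most important scalable layers) might be sketched as follows; this is an illustrative toy, not the paper's routing protocol:

```python
# Illustrative sketch: assign H.264/SVC layers to mesh routes in
# descending order of route reliability, so the base layer always
# travels on the most reliable route and enhancement layers degrade
# gracefully on less reliable paths.

def assign_layers_to_routes(layers, routes):
    """layers: ordered most- to least-important (base layer first).
    routes: {route_id: reliability in [0, 1]}.
    Layers wrap around if there are more layers than routes."""
    ranked = sorted(routes, key=routes.get, reverse=True)
    return {layer: ranked[i % len(ranked)] for i, layer in enumerate(layers)}
```

Losing a low-reliability route under this policy costs only the least important enhancement layer, which is exactly the graceful-degradation property scalable coding is chosen for.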
Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.
Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.
Madiedo, J. M.; Trigo-Rodriguez, J. M.; Lyytinen, E.
The SPanish Meteor Network (SPMN) is performing a continuous monitoring of meteor activity over Spain and neighbouring countries. The huge amount of data obtained by the 25 video observing stations that this network is currently operating made it necessary to develop new software packages to accomplish some tasks, such as data reduction and remote operation of autonomous systems based on high-sensitivity CCD video devices. The main characteristics of this software are described here.
Chen Homer H
The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide a fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to digital home, surveillance, IPTV, and online games.
Parihar, Vijay; Yadav, Y R; Kher, Yatin; Ratre, Shailendra; Sethi, Ashish; Sharma, Dhananjaya
A steep learning curve is found initially in pure endoscopic procedures. The video telescopic operating monitor (VITOM), an advance in rigid-lens telescope systems, provides an alternative method for learning the basics of neuroendoscopy with the help of the familiar principles of microneurosurgery. The aim was to evaluate the clinical utility of VITOM as a learning tool for neuroendoscopy. VITOM was used in 39 cranial and spinal procedures, and its utility as a tool for minimally invasive neurosurgery and neuroendoscopy during the initial learning curve was studied. VITOM was used in 25 cranial and 14 spinal procedures. Image quality is comparable to the endoscope and microscope, and surgeons' comfort improved with VITOM. Frequent repositioning of the scope holder and the lack of stereopsis were initial limiting factors, which were compensated for with repeated procedures. VITOM is found useful in reducing the initial learning curve of neuroendoscopy.
渡部, 和雄; 湯瀬, 裕昭; 渡邉, 貴之; 井口, 真彦; 藤田, 広一
The authors have developed a distance education system for interactive education which can transmit 4 video streams between distant lecture rooms. In this paper, we describe the results of our experiments using the system for adult education. We propose some efficient ways to use the system for adult education.
The basic form of online conferencing is asynchronous and text-based, and a vast array of products is now available for fully featured communication within this framework. The following set of seven reviews contrasts some of the best text-based products that have so far come to our attention with other products whose features are less extensive. This comparison provides a useful look at the options now available to the designers of online conferences, and at the choices to be made in product selection. The reviews (by the first two authors, both DE graduate students) stress the utility of the products from the joint perspective of students and teachers.
Suenaga, Ryo; Suzuki, Kazuyoshi; Tezuka, Tomoyuki; Panahpour Tehrani, Mehrdad; Takahashi, Keita; Fujii, Toshiaki
In this paper, we present a free viewpoint video generation system with billboard representation for soccer games. Free viewpoint video generation is a technology that enables users to watch 3-D objects from their desired viewpoints. Practical implementation of free viewpoint video for sports events is in high demand, but a commercially acceptable system has not yet been developed. The main obstacles are insufficient user-end quality of the synthesized images and highly complex procedures that sometimes require manual operation. In this work, we aim to develop a commercially acceptable free viewpoint video system with a billboard representation. A supposed scenario is that soccer games played during the day can be broadcast in 3-D in the evening of the same day. Our work is still ongoing, but we have already developed several techniques to support our goal. First, we captured an actual soccer game at an official stadium using 20 full-HD professional cameras. Second, we have implemented several tools for free viewpoint video generation as follows. To facilitate free viewpoint video generation, all cameras must be calibrated; we calibrated all cameras using checkerboard images and feature points on the field (cross points of the soccer field lines). Each player region is extracted from the captured images manually, while the background region is estimated automatically by observing chrominance changes of each pixel in the temporal domain. Additionally, we have developed a user interface for visualizing free viewpoint video generation using a graphics library (OpenGL), which is suitable not only for commercial TV sets but also for devices such as smartphones. However, a practical system has not yet been completed and our study is still ongoing.
Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.
Video data require a very large memory capacity, and an encoding method with an optimal quality/volume ratio is one of the most pressing problems, given the urgent need to transfer large amounts of video over various networks. Digital TV signal compression reduces the amount of data used to represent a video stream, effectively reducing the bit rate required for transmission and storage. When television measuring systems are used, it is important to take into account the uncertainties caused by compression of the video signal. There are many digital compression methods; the aim of the proposed work is to study the influence of video compression on measurement error in television systems. Measurement error of an object parameter is the main characteristic of television measuring systems: accuracy characterizes the difference between the measured value and the actual parameter value. The optical system can be singled out as one source of error in television system measurements; the method used to process the received video signal is another. In the case of compression with a constant data stream rate, errors lead to large distortions; in the case of constant quality, errors increase the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of the television image, redundancy caused by the strong correlation between elements of the image. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are not correlated with each other. Entropy coding can then be applied to these uncorrelated coefficients to achieve a reduction in the digital stream, and a transformation can be chosen such that, for typical images, most of the matrix coefficients are almost zero. Excluding these zero coefficients also…
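The intra-coding argument can be illustrated with a 1-D DCT-II: for a typical (highly correlated) block, the transform concentrates the energy into a few coefficients, so most quantize to zero and can be dropped before entropy coding. A minimal sketch, with an assumed quantization step:

```python
import math

# Illustrative sketch of transform coding: the 1-D DCT-II decorrelates
# a block of correlated samples; after quantization, most coefficients
# of a smooth block are zero and compress cheaply.

def dct(block):
    """Orthonormal 1-D DCT-II of a list of samples."""
    n = len(block)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * (i + 0.5) * k / n)
                for i, x in enumerate(block))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def quantize(coeffs, step=10.0):
    """Uniform quantization: small coefficients collapse to zero."""
    return [round(c / step) for c in coeffs]
```

A constant block keeps only its DC coefficient, and a smooth ramp keeps just a few low-frequency terms; the run of zeros left behind is what entropy coding exploits.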
Smith, Jemma; Hand, Linda; Dowrick, Peter W.
This study examined the efficacy of video self modeling (VSM) using feedforward, to teach various goals of a picture exchange communication system (PECS). The participants were two boys with autism and one man with Down syndrome. All three participants were non-verbal with no current functional system of communication; the two children had long…
Dow, Ximeng Y; Sullivan, Shane Z; Muir, Ryan D; Simpson, Garth J
A fast (up to video rate) two-photon excited fluorescence lifetime imaging system based on interleaved digitization is demonstrated. The system is compatible with existing beam-scanning microscopes with minor electronics and software modification. Proof-of-concept demonstrations were performed using laser dyes and biological tissue.
Walpitagama, Milanga; Kaslin, Jan; Nugegoda, Dayanthi; Wlodkowic, Donald
The fish embryo toxicity (FET) biotest performed on embryos of zebrafish (Danio rerio) has gained significant popularity as a rapid and inexpensive alternative approach in chemical hazard and risk assessment. The FET was designed to evaluate acute toxicity in embryonic stages of fish exposed to the test chemical. The current standard, like most traditional methods for evaluating aquatic toxicity, provides, however, little understanding of the effects of environmentally relevant concentrations of chemical stressors. We postulate that significant environmental effects, such as altered motor functions, physiological alterations reflected in heart rate, and effects on development and reproduction, can occur at sub-lethal concentrations well below the LC10. Behavioral studies can therefore provide a valuable integrative link between physiological and ecological effects. Despite the advantages of behavioral analysis, the development of behavioral toxicity biotests is greatly hampered by the lack of dedicated laboratory automation, in particular user-friendly and automated video microscopy systems. In this work we present a proof-of-concept optical system capable of tracking the behavioral responses of embryonic vertebrates using automated and vastly miniaturized time-resolved video microscopy. We employed miniaturized CMOS cameras to perform high-definition video recording and analysis of the earliest vertebrate behavioral responses. The main objective was to develop biocompatible embryo positioning structures suitable for high-throughput imaging, as well as video capture and video analysis algorithms. This system should support the development of sub-lethal and behavioral markers for accelerated environmental monitoring.
Allin, T.; Neubert, T.; Laursen, S.; Rasmussen, I. L.; Soula, S.
In support of global ELF/VLF observations, HF measurements in France, and conjugate photometry/VLF observations in South Africa, we developed and operated a semi-automatic, remotely controlled video system for the observation of middle-atmospheric transient luminous events (TLEs). Installed at the Pic du Midi Observatory in Southern France, the system was operational during the period from July 18 to September 15, 2003. The video system, based on two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over serial links from a local computer, and the video outputs were distributed to a pair of PCI frame grabbers in the computer. This setup allowed remote users to log in and operate the system over the internet. Event detection software provided means of recording and time-stamping single TLE video fields and thus eliminated the need for continuous human monitoring of TLE activity. The computer recorded and analyzed two parallel video streams at the full 50 Hz field rate, while uploading status images, TLE images, and system logs to a remote web server. The system detected more than 130 TLEs - mostly sprites - distributed over 9 active evenings. We have thus demonstrated the feasibility of remote agents for TLE observations, which are likely to find use in future ground-based TLE observation campaigns, or to be installed at remote sites in support of space-borne or other global TLE observation efforts.
Chan, Fai; Moon, Yiu-Sang; Chen, Jiansheng; Ma, Yiu-Kwan; Tsang, Wai-Hung; Fu, Kah-Kuen
Low-resolution and unsharp facial images are often captured from surveillance videos because of the long human-camera distance and human movements. Previous works addressed this problem by using an active camera to capture close-up facial images, without considering human movements and the mechanical delays of the active camera. In this paper, we propose a unified framework for capturing facial images in video surveillance systems by using one static and one active camera in a cooperative manner. Human faces are first located by a skin-color-based real-time face detection algorithm. A stereo camera model is employed to approximate the human face location and velocity with respect to the active camera. Given the mechanical delays of the active camera, the position of a target face after a given delay can be estimated using a Human-Camera Synchronization Model. By controlling the active camera with the corresponding amount of pan, tilt, and zoom, a clear close-up facial image of a moving human can then be captured. We built the proposed system in an 8.4-meter indoor corridor. Results show that the proposed stereo camera configuration can locate faces with an average error of 3%. In addition, it captured clear facial images of a walking human at the first attempt in 90% of the test cases.
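The core of the synchronization idea, predicting where the face will be once the active camera's mechanical delay has elapsed, reduces to a constant-velocity extrapolation plus a pointing angle. The function names and numbers below are illustrative, not the paper's model:

```python
import math

# Illustrative sketch: compensate for the active camera's mechanical
# delay by aiming where the face *will* be, assuming constant velocity
# estimated from the stereo camera model.

def predict_position(pos, vel, delay):
    """pos, vel: (x, y) in metres and m/s; delay: seconds of camera lag."""
    return (pos[0] + vel[0] * delay, pos[1] + vel[1] * delay)

def pan_angle(camera_pos, target_pos):
    """Pan angle (radians) the active camera must turn to face the target."""
    dx = target_pos[0] - camera_pos[0]
    dy = target_pos[1] - camera_pos[1]
    return math.atan2(dy, dx)
```

In practice the velocity estimate is noisy, so the zoom level would be chosen wide enough that the prediction error keeps the face in frame.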
Brunner, M; Ittner, W
This paper describes VIPER, the video image-processing system Erlangen. It consists of a general purpose microcomputer, commercially available image-processing hardware modules connected directly to the computer, video input/output-modules such as a TV camera, video recorders and monitors, and a software package. The modular structure and the capabilities of this system are explained. The software is user-friendly, menu-driven and performs image acquisition, transfers, greyscale processing, arithmetics, logical operations, filtering display, colour assignment, graphics, and a couple of management functions. More than 100 image-processing functions are implemented. They are available either by typing a key or by a simple call to the function-subroutine library in application programs. Examples are supplied in the area of biomedical research, e.g. in in-vivo microscopy.
This paper presents research on a video-image-processing system for vehicle detection and counting, with three functions: video-based vehicle detection, image processing of vehicle targets, and vehicle counting. Vehicle detection uses the inter-frame difference method together with vehicle-shadow segmentation. The image-processing stage applies greyscale conversion of the colour images, image segmentation, mathematical-morphology analysis, and hole filling to the detected targets, after which the target vehicle is extracted. The counting function tallies the detected vehicles: the system detects vehicles with the inter-frame video difference method and completes the counting function by adding a frame to each vehicle and comparing it with a boundary line, giving a high recognition rate, fast operation, and easy use. The purpose of this paper is to raise the level of modernisation and automation in traffic management. This study can serve as a reference for the future development of related applications.
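The inter-frame difference step described above can be sketched in a few lines; this is a toy illustration on greyscale frames stored as nested lists. The threshold value and the blob-counting stand-in are our assumptions, not the paper's implementation:

```python
def diff_mask(prev, curr, thresh=25):
    """Inter-frame difference: mark pixels whose grey level changed
    by more than `thresh` between consecutive frames."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def count_blobs(mask):
    """Count connected regions (4-connectivity) in the binary mask,
    a stand-in for the paper's vehicle-counting step."""
    seen = [[False] * len(row) for row in mask]
    blobs = 0
    for i, row in enumerate(mask):
        for j, v in enumerate(row):
            if v and not seen[i][j]:
                blobs += 1
                stack = [(i, j)]
                while stack:          # flood-fill the region
                    a, b = stack.pop()
                    if 0 <= a < len(mask) and 0 <= b < len(mask[0]) \
                            and mask[a][b] and not seen[a][b]:
                        seen[a][b] = True
                        stack += [(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)]
    return blobs

prev = [[0] * 6 for _ in range(4)]
curr = [[0] * 6 for _ in range(4)]
curr[1][1] = curr[1][2] = 200   # one moving "vehicle"
curr[3][4] = 200                # another
mask = diff_mask(prev, curr)
vehicles = count_blobs(mask)    # → 2
```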
This essay examines how tensions between work and play for video game developers shape the worlds they create. The worlds of game developers, whose daily activity is linked to larger systems of experimentation and technoscientific practice, provide insights that transcend video game development work. The essay draws on ethnographic material from over 3 years of fieldwork with video game developers in the United States and India. It develops the notion of creative collaborative practice based on work in the fields of science and technology studies, game studies, and media studies. The importance of, the desire for, or the drive to understand underlying systems and structures has become fundamental to creative collaborative practice. I argue that the daily activity of game development embodies skills fundamental to creative collaborative practice and that these capabilities represent fundamental aspects of critical thought. Simultaneously, numerous interests have begun to intervene in ways that endanger these foundations of creative collaborative practice.
A. L. Oleinik
Subject of Research. The paper deals with the problem of multiple face tracking in a video stream. The primary application of the implemented tracking system is automatic video surveillance. The particular operating conditions of surveillance cameras are taken into account in order to increase the efficiency of the system in comparison with existing general-purpose analogs. Method. The developed system is comprised of two subsystems: detector and tracker. The tracking subsystem does not depend on the detector, and thus various face detection methods can be used. Furthermore, only a small portion of frames is processed by the detector in this structure, substantially improving the operation rate. The tracking algorithm is based on BRIEF binary descriptors that are computed very efficiently on modern processor architectures. Main Results. The system is implemented in C++ and experiments on processing rate and quality evaluation are carried out. MOTA and MOTP metrics are used for tracking quality measurement. The experiments demonstrated a four-fold processing rate gain in comparison with the baseline implementation that processes every video frame with the detector. The tracking quality is on an adequate level when compared to the baseline. Practical Relevance. The developed system can be used with various face detectors (including slow ones) to create a fully functional high-speed multiple face tracking solution. The algorithm is easy to implement and optimize, so it may be applied not only in full-scale video surveillance systems, but also in embedded solutions integrated directly into cameras.
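BRIEF descriptors are binary strings compared by Hamming distance, which is what makes them cheap on modern CPUs. A minimal sketch of the matching step follows; the descriptor width, threshold, and function names are illustrative, not taken from the paper:

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors packed as ints:
    XOR the bits, then count the ones."""
    return bin(d1 ^ d2).count("1")

def match(track_desc, candidates, max_dist=64):
    """Pick the candidate descriptor closest to the tracked face's
    descriptor; reject the match if even the best is too far away."""
    best = min(candidates, key=lambda c: hamming(track_desc, c))
    return best if hamming(track_desc, best) <= max_dist else None

# Toy 8-bit descriptors (real BRIEF descriptors are typically 256 bits):
tracked = 0b1011_0110
cands = [0b1011_0111, 0b0100_1001]
best = match(tracked, cands)   # → 0b1011_0111 (distance 1)
```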
Sehairi, Kamal; Chouireb, Fatima; Meunier, Jean
The objective of this study is to compare several change detection methods for a monostatic camera and identify the best method for different complex environments and backgrounds in indoor and outdoor scenes. To this end, we used the CDnet video dataset as a benchmark; it poses many challenging problems, ranging from basic simple scenes to complex scenes affected by bad weather and dynamic backgrounds. Twelve change detection methods, ranging from simple temporal differencing to more sophisticated approaches, were tested, and several performance metrics were used to evaluate the results precisely. Because most of the considered methods have not previously been evaluated on this recent large-scale dataset, this work fills a gap in the literature and complements previous comparative evaluations. Our experimental results show that there is no perfect method for all challenging cases: each method performs well in certain cases and fails in others. However, this study enables the user to identify the most suitable method for his or her needs.
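The CDnet benchmark scores methods with metrics derived from per-pixel true and false positives and negatives; a small sketch of the usual computations (the exact metric set used in this study may differ):

```python
def cdnet_metrics(tp, fp, fn, tn):
    """Per-method scores in the style of the CDnet benchmark: recall,
    precision, F-measure, and percentage of wrong classifications (PWC)."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    pwc = 100.0 * (fn + fp) / (tp + fp + fn + tn)
    return recall, precision, f_measure, pwc

# A hypothetical method's pixel counts on one video:
r, p, f, pwc = cdnet_metrics(tp=80, fp=20, fn=20, tn=880)
# → recall 0.8, precision 0.8, F-measure 0.8, PWC 4.0
```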
Geradts, Zeno J.; Merlijn, Menno; de Groot, Gert; Bijhold, Jurrien
The gait parameters of eleven subjects were evaluated to provide data for subject recognition. Video images of these subjects were acquired in frontal, transversal, and sagittal (a plane parallel to the median of the body) views. The subjects walked by at their usual walking speed. The measured parameters were hip, knee, and ankle joint angles and their time-averaged values; thigh, foot, and trunk angles; step length and width; cycle time; and walking speed. Correlation coefficients within and between subjects for the hip, knee, and ankle rotation patterns in the sagittal view and for the trunk rotation pattern in the transversal view were similar, implying that the intra- and inter-individual variances were equal. These gait parameters therefore could not distinguish between subjects. A simple ANOVA with a follow-up test was used to detect significant differences in the mean hip, knee, and ankle joint angles, thigh angle, step length, step width, walking speed, cycle time, and foot angle. The number of significant differences between subjects defined the usefulness of each gait parameter. The parameter with the most significant differences between subjects was the foot angle (64%-73% of the maximal attainable significant differences), followed by the time-averaged hip joint angle (58%) and the step length (45%). The other parameters scored less than 25%, which is poor for recognition purposes. Based on this research, the use of gait for identification purposes is not yet possible.
Azer, Samy A; Algrain, Hala A; AlKhelaif, Rana A; AlEshaiwi, Sarah M
A number of studies have evaluated the educational content of videos on YouTube. However, little analysis has been done on videos about physical examination. This study aimed to analyze YouTube videos about physical examination of the cardiovascular and respiratory systems. It was hypothesized that the educational standards of videos on YouTube would vary significantly. During the period from November 2, 2011 to December 2, 2011, YouTube was searched by three assessors for videos covering the clinical examination of the cardiovascular and respiratory systems. For each video, the following information was collected: title, authors, duration, number of viewers, and total number of days on YouTube. Using criteria comprising content, technical authority, and pedagogy parameters, videos were rated independently by three assessors and grouped into educationally useful and non-useful videos. A total of 1920 videos were screened. Only relevant videos covering the examination of adults in the English language were identified (n=56). Of these, 20 were found to be relevant to cardiovascular examinations and 36 to respiratory examinations. Further analysis revealed that 9 provided useful information on cardiovascular examinations and 7 on respiratory examinations: scoring mean 14.9 (SD 0.33) and mean 15.0 (SD 0.00), respectively. The other videos, 11 covering cardiovascular and 29 covering respiratory examinations, were not educationally useful, scoring mean 11.1 (SD 1.08) and mean 11.2 (SD 1.29), respectively. The differences between these two categories were statistically significant. A small number of videos about physical examination of the cardiovascular and respiratory systems were identified as educationally useful; these videos can be used by medical students for independent learning and by clinical teachers as learning resources. The scoring system utilized by this study is simple, easy to apply, and could be used by other researchers on similar topics.
Yang, Jian; Xie, Xiaofang; Wang, Yan
Based on an AHRS (Attitude and Heading Reference System) and a PTZ (Pan/Tilt/Zoom) camera, we designed a video monitoring and tracking system. The overall structure of the system and the software design are given. Key technologies such as serial-port communication and head attitude tracking are introduced, and code for the key parts is given.
Mitzova-Vladinov, Greta; Bizzio-Knott, Rossana; Hooshmand, Mary; Hauglum, Shayne; Aziza, Khitam
This case study examines an innovative way the Blackboard Collaborate video conferencing learning platform was used to record graduate student presentations for creating a course library utilized in individualized student teaching. The presentation recordings evolved into an innovative strategy for providing feedback and ultimately improvement in…
Bulan, Orhan; Loce, Robert P.; Wu, Wencheng; Wang, YaoRong; Bernal, Edgar A.; Fan, Zhigang
Urban parking management is receiving significant attention due to its potential to reduce traffic congestion, fuel consumption, and emissions. Real-time parking occupancy detection is a critical component of on-street parking management systems, where occupancy information is relayed to drivers via smart phone apps, radio, Internet, on-road signs, or global positioning system auxiliary signals. Video-based parking occupancy detection systems can provide a cost-effective solution to the sensing task while providing additional functionality for traffic law enforcement and surveillance. We present a video-based on-street parking occupancy detection system that can operate in real time. Our system accounts for the inherent challenges that exist in on-street parking settings, including illumination changes, rain, shadows, occlusions, and camera motion. Our method utilizes several components from video processing and computer vision for motion detection, background subtraction, and vehicle detection. We also present three traffic law enforcement applications: parking angle violation detection, parking boundary violation detection, and exclusion zone violation detection, which can be integrated into the parking occupancy cameras as a value-added option. Our experimental results show that the proposed parking occupancy detection method performs in real-time at 5 frames/s and achieves better than 90% detection accuracy across several days of videos captured in a busy street block under various weather conditions such as sunny, cloudy, and rainy, among others.
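One common building block for such systems is an exponential running-average background model; the following is a hedged sketch of that idea on flattened greyscale frames. The parameter values are illustrative, and the paper's actual method combines several more components (motion detection, vehicle detection, and robustness to rain, shadows, and camera motion):

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model (one grey frame,
    flattened to a list); alpha controls how fast the model adapts."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground(bg, frame, thresh=30):
    """Pixels far from the background model are foreground, e.g. a car
    entering a parking stall."""
    return [1 if abs(f - b) > thresh else 0 for b, f in zip(bg, frame)]

bg = [100.0] * 5
frame = [100, 100, 220, 220, 100]   # a bright object on pixels 2-3
mask = foreground(bg, frame)        # → [0, 0, 1, 1, 0]
bg = update_background(bg, frame)   # the model slowly absorbs the scene
```

A small alpha keeps briefly stopped vehicles in the foreground longer, which matters for occupancy detection.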
Ignacio, Joselito; Center for Homeland Defense and Security Naval Postgraduate School
This proposed system process aims to improve subway safety through better enabling the rapid detection and response to a chemical release in a subway system. The process is designed to be location-independent and generalized to most subway systems despite each system's unique characteristics.
Hua, My; Yip, Henry; Talbot, Prue
The objective was to analyse and compare puff and exhalation duration for individuals using electronic nicotine delivery systems (ENDS) and conventional cigarettes in YouTube videos. Video data from YouTube videos were analysed to quantify puff duration and exhalation duration during use of conventional tobacco-containing cigarettes and ENDS. For ENDS, comparisons were also made between 'advertisers' and 'non-advertisers', genders, brands of ENDS, and models of ENDS within one brand. Puff duration (mean = 2.4 s) for conventional smokers in YouTube videos (N = 9) agreed well with prior publications. Puff duration was significantly longer for ENDS users (mean = 4.3 s) (N = 64) than for conventional cigarette users, and puff duration varied significantly among ENDS brands. For ENDS users, puff duration and exhalation duration were not significantly affected by 'advertiser' status, gender or variation in models within a brand. Men outnumbered women by about 5:1, and most users were between 19 and 35 years of age. YouTube videos provide a valuable resource for studying ENDS usage. Longer puff duration may help ENDS users compensate for the apparently poor delivery of nicotine from ENDS. As with conventional cigarette smoking, ENDS users showed a large variation in puff duration (range = 1.9-8.3 s). ENDS puff duration should be considered when designing laboratory and clinical trials and in developing a standard protocol for evaluating ENDS performance.
Cheah Wai Shiang
Agent-oriented methodology (AOM) is a comprehensive and unified methodology for agent-oriented software development. Although AOM is claimed to cope with complex system development, the extent to which this is true has not yet been determined; it is therefore vital to investigate and validate the methodology. This paper presents the adoption of AOM in developing an agent-oriented video surveillance system (VSS). An intruder-handling scenario is designed and implemented through AOM. AOM provides an alternative method for engineering a distributed security system in a systematic manner. It presents the security system from a holistic view, provides a better conceptualisation of an agent-oriented security system, and supports rapid prototyping as well as simulation of the video surveillance system.
Burner, A. W.; Rummler, D. R.; Goad, W. K.
A system consisting of a single charge-coupled device (CCD) video camera, a computer-controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50-meter rifle range which is being constructed to support development of wind-resistant, ultra-match ammunition. The system was designed to take data rapidly (10 s between shots) and automatically, with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long-term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target, and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.
... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Game Systems and Controllers; Investigations: Terminations, Modifications and Rulings AGENCY: U.S. International Trade Commission. ACTION: Notice. Section 337 of the Tariff Act of 1930...
... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Game Systems and Wireless Controllers and Components Thereof, Commission Determination Finding No Violation of the Tariff Act of 1930 AGENCY: U.S. International Trade Commission. ACTION...
Horn, Eva; And Others
Three nonvocal students (ages 5-8) with severe physical handicaps were trained in scan and selection responses (similar to responses needed for operating augmentative communication systems) using a microcomputer-operated video-game format. Results indicated that all three children showed substantial increases in the number of correct responses and…
Pope, Alan T.; Bogart, Edward H.
Describes the Extended Attention Span Training (EAST) system for modifying attention deficits, which takes the concept of biofeedback one step further by making a video game more difficult as the player's brain waves indicate that attention is waning. Notes contributions of this technology to neuropsychology and neurology, where the emphasis is on…
Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C.
The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds’ benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system’s high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.
The author demonstrates a new system useful for reflective learning. Our new system offers an environment in which one can use handwriting-tablet devices to bookmark symbolic and descriptive feedback in simultaneously recorded videos. If one uses video recording and feedback check sheets in reflective learning sessions, one can…
... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Game Systems and Wireless Controllers and Components Thereof; Notice of Request for... limited exclusion order and a cease and desist order against certain video game systems and wireless...
AKINCI, Gökay; Polat, Ediz; Koçak, Orhan Murat
Eye pupil detection systems have become increasingly popular in image processing and computer vision applications in medical systems. In this study, a video-based eye pupil detection system is developed for diagnosing bipolar disorder. Bipolar disorder is a condition in which people experience changes in cognitive processes and abilities, including reduced attentional and executive capabilities and impaired memory. In order to detect these abnormal behaviors, a number of neuropsychologi...
Cihak, David; Fahrenkrog, Cynthia; Ayres, Kevin M.; Smith, Catherine
This study evaluated the efficacy of video modeling delivered via a handheld device (video iPod) and the use of the system of least prompts to assist elementary-age students with transitioning between locations and activities within the school. Four students with autism learned to manipulate a handheld device to watch video models. An ABAB…
Haemaelaeinen, R.P.; Lindstedt, M. [Helsinki Univ. of Technology, Espoo (Finland). Systems Analysis Lab.; Sinkko, K.; Ammann, M. [Radiation and Nuclear Safety Authority, Helsinki (Finland); Salo, A
This work was undertaken to study the use of decision conferencing and of the RODOS system when considering early-phase protective actions in the case of a nuclear accident. Altogether four meetings with various participants were organised. The meetings were attended by competent national safety authorities and technical-level decision-makers, i.e., those responsible for preparing advice or presenting matters to the decision-makers responsible for the practical implementation of actions. In the first set of meetings the aim was to elicit the factors/attributes that have to be considered when deciding on sheltering, evacuation, and iodine tablets. Neither uncertainties nor a threat phase were considered; everything was assumed to happen as described in the given scenario. The theme of the second set of meetings was to study the implications of probabilities. All information was calculated with the support of the RODOS system. In the early phases of a nuclear accident, time is limited. Pre-structured generic value trees or a list of possible attributes can help to save time. A possible approach is to present a large generic value tree: either the decision-makers select the attributes suitable for the case in hand, or the facilitator offers a choice between more structured value trees. The decision-makers then examine the suggested value trees, check the generic tree to make sure that no important factors have been omitted, and choose the appropriate one. As in previous RODOS exercises, the participants felt that RODOS could be used for providing information but found it more problematic to use decision analysis methods when deciding on countermeasures in the early phase of a nuclear accident. Furthermore, it was noted that understanding the actual meaning of 'soft' attributes, such as socio-psychological impacts or political cost, was not a straightforward issue. Consequently, the definition of attributes in advance would be
Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien
Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.
Thorsdatter Orvedal Aase, Anne Lene
In this study we used a portable event-triggered video surveillance system for monitoring flower-visiting bumblebees. The system consists of a mini digital video recorder (mini-DVR) with a video motion detection (VMD) sensor which detects changes in the image captured by the camera; an intruder triggers the recording immediately. The sensitivity and the detection area are adjustable, which may prevent unwanted recordings. To the best of our knowledge, this is the first study to use a VMD sensor to monitor flower-visiting insects. Observation of flower-visiting insects has traditionally been conducted by direct observation, which is time-demanding, or by continuous video monitoring, which demands a great effort in reviewing the material. A total of 98.5 monitoring hours were conducted. For the mini-DVR with VMD, a total of 35 min were spent reviewing the recordings to locate 75 pollinators, i.e. ca. 0.35 min of reviewing per monitoring hour. Most pollinators in the order Hymenoptera were identified to species or group level; some were classified only to family (Apidae) or genus (Bombus). The use of the video monitoring system described in the present paper could result in more efficient data sampling and reveal new knowledge in pollination ecology (e.g. species identification and pollinating behaviour).
Bräger, S.; Chong, A.; Dawson, S.; Slooten, E.; Würsig, B.
One reason for the paucity of knowledge of dolphin social structure is the difficulty of measuring individual dolphins. In Hector's dolphins, Cephalorhynchus hectori, total body length is a function of age, and sex can be determined by individual colouration pattern. We developed a novel system combining stereo-photogrammetry and underwater video to record dolphin group composition. The system consists of two downward-looking single-lens-reflex (SLR) cameras and a Hi8 video camera in an underwater housing mounted on a small boat. Bow-riding Hector's dolphins were photographed and videotaped at close range in coastal waters around the South Island of New Zealand. Three-dimensional, stereoscopic measurements of the distance between the blowhole and the anterior margin of the dorsal fin (BH-DF) were calibrated by a suspended frame with reference points. Growth functions derived from measurements of 53 dead Hector's dolphins (29 female, 24 male) provided the necessary reference data. For the analysis, the measurements were synchronised with corresponding underwater video of the genital area. A total of 27 successful measurements (8 with corresponding sex) were obtained, showing that this new system is potentially useful for cetacean studies.
Hsu, Chia-chun A.; Ling, Jim; Li, Qing; Kuo, C.-C. J.
The distributed Multiplayer Online Game (MOG) system is complex, since it involves technologies in computer graphics, multimedia, artificial intelligence, computer networking, embedded systems, etc. Due to the large scope of this problem, the design of MOG systems has not yet been widely addressed in the literature. In this paper, we review, analyze, and evaluate the current MOG system architecture. Furthermore, we propose a clustered-server architecture to provide a scalable solution, together with a region-oriented allocation strategy. Two key issues, interest management and synchronization, are discussed in depth. Some preliminary ideas for dealing with the identified problems are described.
Seo, Young-Ho; Lee, Yoon-Hyuk; Koo, Ja-Myung; Kim, Woo-Youl; Yoo, Ji-Sang; Kim, Dong-Wook
We propose a new system that can generate digital holograms using natural color information. The system consists of a camera system for capturing images (object points) and software (S/W) for various image processing. The camera system uses a vertical rig, which is equipped with two depth and RGB cameras and a cold mirror, which has different reflectances according to wavelength for obtaining images with the same viewpoint. The S/W is composed of the engines for processing the captured images and executing computer-generated hologram for generating digital holograms using general-purpose graphics processing units. Each algorithm was implemented using C/C++ and CUDA languages, and all engines in the form of library were integrated in LabView environment. The proposed system can generate about 10 digital holographic frames per second using about 6 K object points.
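The computer-generated-hologram step for object points is, at its core, a per-pixel sum of complex fields from every point. Below is a naive CPU sketch of that inner loop; the pixel pitch, wavelength, and real-part encoding are our assumptions, and this O(pixels x points) work is exactly what the paper offloads to general-purpose GPUs:

```python
import math, cmath

def cgh(points, width, height, pitch=8e-6, wavelength=633e-9):
    """Naive point-source computer-generated hologram: each hologram
    pixel accumulates the complex field from every object point
    (px, py, pz, amplitude), then keeps the real part as the
    interference pattern."""
    k = 2 * math.pi / wavelength          # wavenumber
    holo = []
    for j in range(height):
        row = []
        for i in range(width):
            x, y = i * pitch, j * pitch
            field = 0j
            for px, py, pz, amp in points:
                r = math.sqrt((x - px) ** 2 + (y - py) ** 2 + pz ** 2)
                field += amp * cmath.exp(1j * k * r) / r
            row.append(field.real)
        holo.append(row)
    return holo

# One object point 0.1 m behind a tiny 4x4 hologram:
pattern = cgh([(0.0, 0.0, 0.1, 1.0)], 4, 4)
```

On a GPU, the outer two loops become one thread per pixel, which is how frame rates of several holograms per second become feasible.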
Endo, Chiaki; Sakurada, A; Kondo, T
Recently, endoscopic procedures including surgery, intervention, and examination have been widely performed. Medical practitioners are required to record the procedures precisely, both to review them retrospectively and to obtain a legally reliable record. The Medical Forensic System made by KS Olympus Japan records two kinds of video together with patient data such as heart rate, blood pressure, and SpO2, all captured simultaneously. We installed this system in the bronchoscopy room and have experienced its benefits. With this system, we obtain the bronchoscopic image, a view of the bronchoscopy room, and the patient's data simultaneously. We can retrospectively check the quality of bronchoscopic procedures, which is useful for training bronchoscopy staff. The Medical Forensic System should be installed for any kind of endoscopic procedure.
Jihwan Park; Youngsun Kong; Yunyoung Nam
In order to remain in focus during head movements, the vestibulo-ocular reflex causes the eyes to move in the direction opposite to the head movement. Disorders of the vestibular system decrease vision, causing abnormal nystagmus and dizziness. To diagnose abnormal nystagmus, various approaches have been reported, including rotary chair tests and videonystagmography. However, these tests are unsuitable for home use due to their high cost. Thus, a low-cost video-oculography system is necessary to obtain clinical features at home. In this paper, we present a low-cost video-oculography system using an infrared camera and a Raspberry Pi board for tracking the pupils and evaluating the vestibular system. Horizontal eye movement is derived from video data obtained from an infrared camera and infrared light-emitting diodes, and the velocity of head rotation is obtained from a gyroscope sensor. Each pupil was extracted using a morphology operation and a contour detection method. Rotary chair tests were conducted with our developed device. To evaluate our system, gain, asymmetry, and phase were measured and compared with System 2000. The average IQR errors of gain, phase, and asymmetry were 0.81, 2.74, and 17.35, respectively. We showed that our system is able to measure clinical features.
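VOR gain and asymmetry reduce to simple ratios of peak velocities; a hedged sketch of those computations follows. The clinical definitions used by the paper and by System 2000 may include slow-phase extraction and phase fitting, which are omitted here:

```python
def vor_gain(eye_vel, head_vel):
    """VOR gain: peak eye velocity over peak head velocity (the eyes
    counter-rotate, so magnitudes are compared)."""
    return abs(max(eye_vel, key=abs)) / abs(max(head_vel, key=abs))

def asymmetry(peak_right, peak_left):
    """Percentage asymmetry between rightward and leftward responses."""
    return 100.0 * (peak_right - peak_left) / (peak_right + peak_left)

head = [0, 30, 60, 30, 0, -30, -60, -30]   # deg/s, sinusoid-like rotation
eye = [0, -27, -54, -27, 0, 27, 54, 27]    # counter-rotating eye velocity
gain = vor_gain(eye, head)                 # → 0.9 (normal is near 1.0)
asym = asymmetry(54, 54)                   # → 0.0 (symmetric response)
```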
Roh, Mootaek; McHugh, Thomas J; Lee, Kyungmin
To investigate the relationship between neural function and behavior, it is necessary to record neuronal activity in the brains of freely behaving animals, a technique that typically involves tethering to a data acquisition system. Optimally, this approach allows animals to behave without any interference with movement or task performance. Currently, many laboratories in the cognitive and behavioral neuroscience fields employ commercial motorized commutator systems that use torque sensors to detect tether movement induced by the trajectory behaviors of animals. In this study we describe a novel motorized commutator system which is automatically controlled by video tracking. To obtain accurate head direction data, two light-emitting diodes were used, and video image noise was minimized by physical manipulation of the light sources. The system calculates the rotation of the animal across a single trial by processing head direction data, and the software, which calibrates the motor rotation angle, subsequently generates voltage pulses to actively untwist the tether. This system successfully provides a twist-free tether environment for animals performing behavioral tasks during simultaneous neural activity recording. To the best of our knowledge, it is the first system to use video-tracking-derived head direction to detect tether twisting and compensate with a motorized commutator. Our automatic commutator control system promises an affordable and accessible method to improve behavioral neurophysiology experiments, particularly in mice.
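The core of such a controller is accumulating head-direction samples into a signed twist count and commanding the motor to cancel it. A minimal sketch follows; the wrap-around handling and the stepper resolution are our assumptions, not details from the paper:

```python
def unwrap(angles_deg):
    """Accumulate head-direction samples (0-360 deg) into a continuous
    signed rotation, treating jumps larger than 180 deg as wrap-around."""
    total = 0.0
    for prev, curr in zip(angles_deg, angles_deg[1:]):
        d = curr - prev
        if d > 180:
            d -= 360
        elif d < -180:
            d += 360
        total += d
    return total

def motor_pulses(total_deg, deg_per_pulse=1.8):
    """Stepper pulses needed to untwist the tether; 1.8 deg/step is a
    typical stepper value (our assumption)."""
    return round(-total_deg / deg_per_pulse)

# A mouse turning clockwise through two full rotations:
track = [0, 90, 180, 270, 0, 90, 180, 270, 0]
twist = unwrap(track)          # → 720.0 degrees of tether twist
pulses = motor_pulses(twist)   # → -400 pulses to compensate
```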
This paper presents an efficient video filtering scheme and its implementation in a field-programmable logic device (FPLD). Since the proposed nonlinear, spatiotemporal filtering scheme is based on order statistics, its efficient implementation benefits from a bit-serial realization. The utilization of both the spatial and temporal correlation characteristics of the processed video significantly increases the computational demands on this solution, and thus implementation becomes a significant challenge. Simulation studies reported in this paper indicate that the proposed pipelined bit-serial FPLD filtering solution can achieve speeds of up to 97.6 Mpixels/s and consumes 1700 to 2700 logic cells for the speed-optimized and area-optimized versions, respectively. Thus, the filter occupies only 6.6 to 10.5% of the Altera STRATIX EP1S25 device available on the Altera Stratix DSP evaluation board, which has been used to implement a prototype of the entire real-time vision system. As such, the proposed adaptive video filtering scheme is both practical and attractive for real-time machine vision and surveillance systems as well as conventional video and multimedia applications.
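An order-statistic spatiotemporal filter selects a ranked sample, typically the median, from a 3-D window around each pixel. Here is a software sketch of that operation; the paper's hardware realization is bit-serial and pipelined, which this plain-Python version does not model:

```python
def spatiotemporal_median(frames, t, i, j):
    """Order-statistic filter: the output pixel is the median of the
    3x3 spatial neighbourhood taken across three consecutive frames
    (27 samples in total)."""
    window = []
    for f in frames[t - 1:t + 2]:          # temporal neighbours
        for a in range(i - 1, i + 2):      # spatial neighbourhood
            for b in range(j - 1, j + 2):
                window.append(f[a][b])
    window.sort()
    return window[len(window) // 2]        # 27 samples -> element 13

# Three 3x3 frames; the centre pixel of the middle frame is impulse noise:
frames = [[[10] * 3 for _ in range(3)] for _ in range(3)]
frames[1][1][1] = 255
out = spatiotemporal_median(frames, 1, 1, 1)   # → 10 (noise removed)
```

Because the median is a rank, not an average, a single corrupted sample cannot shift the output, which is why order statistics suit impulse-noise removal.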
M.Sc. (Computer Science) A video conference is an interactive meeting between two or more locations, facilitated by simultaneous two-way video and audio transmissions. People in a video conference, also known as participants, join these video conferences for business and recreational purposes. In a typical video conference, we should properly identify and authenticate every participant in the video conference, if information discussed during the video conference is confidential. This preve...
Ziemke, Robert A.
The objective of the High Resolution, High Frame Rate Video Technology (HHVT) development effort is to provide technology advancements to remove constraints on the amount of high speed, detailed optical data recorded and transmitted for microgravity science and application experiments. These advancements will enable the development of video systems capable of high resolution, high frame rate video data recording, processing, and transmission. Techniques such as multichannel image scan, video parameter tradeoff, and the use of dual recording media were identified as methods of making the most efficient use of the near-term technology.
Lee, Kang Oh; Nakaji, Kei; 中司, 敬
A web-based video direct e-commerce system was developed to solve the problems in internet shopping and to increase consumers' trust in the safety and quality of agricultural products. We found that the newly developed e-commerce system could overcome the demerits of internet shopping and give consumers the same effects as purchasing products offline. Producers could have opportunities to explain products and to talk to customers and get increased income because of maintaining a certain numbe...
Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel
Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site, can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make video searches easy and efficient. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know, no other application like the one presented in this paper has been developed.
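The tagging idea can be sketched as attaching a sensor record to each frame and then filtering frames by a predicate over those records. The field names (`temp_c`, etc.) and the list-based store are hypothetical illustrations, not the paper's actual metadata schema or search implementation.

```python
import time

def tag_frame(frame_index, sensors):
    """Attach sensor readings to one video frame as a metadata record.
    `sensors` is a dict of readings, e.g. {"temp_c": 27.5} (illustrative names)."""
    return {
        "frame": frame_index,
        "timestamp": time.time(),
        "sensors": sensors,
    }

def search_tags(tags, predicate):
    """Semantic-style search: return the frames whose sensor tags
    satisfy a caller-supplied predicate."""
    return [t["frame"] for t in tags if predicate(t["sensors"])]
```

For the beach example, a client could ask the server for the frames where `temp_c` exceeded a swimming threshold, without ever decoding the video itself.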
Zhiwei, Jia; Guozheng, Yan; Bingquan, Zhu
Wireless power transmission is considered a practical way of overcoming the power shortage of video capsule endoscopy (VCE). However, most patients cannot tolerate the long hours of lying in a fixed transmitting coil during diagnosis. To develop a portable wireless power transmission system for VCE, a compact transmitting coil and a portable inverter circuit driven by rechargeable batteries are proposed. The coupled coils, optimized for stability and safety, comprise a 28-turn transmitting coil and a six-strand receiving coil. The driver circuit is designed according to the portability principle. Experiments show that the integrated system could continuously supply power to a dual-head VCE for more than 8 h at a frame rate of 30 frames per second with a resolution of 320 × 240. The portable VCE exhibits potential for clinical applications, but requires further improvement and tests.
Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton
Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.
Househ, Mowafa Said; Kushniruk, Andre; Maclure, Malcolm; Carleton, Bruce; Cloutier-Fisher, Denise
To describe experiences, lessons and the implications related to the use of conferencing technology to support three drug policy research groups within a three-year period, using the action case research method. An action case research field study was executed. Three different drug policy groups participated: research, educator, and decision-maker task groups. There were a total of 61 participants in the study. The study was conducted between 2004 and 2007. Each group used audio-teleconferencing, web-conferencing or both to support their knowledge exchange activities. Data were collected over three years and consisted of observation notes, interviews, and meeting transcripts. Content analysis was used to analyze the data using NVivo qualitative data analysis software. The study found six key lessons regarding the impact of conferencing technologies on knowledge exchange within drug policy groups. We found that 1) groups adapt to technology to facilitate group communication, 2) web-conferencing communication is optimal under certain conditions, 3) audio conferencing is convenient, 4) web-conferencing forces group interactions to be "within text", 5) facilitation contributes to successful knowledge exchange, and 6) technology impacts information sharing. This study highlights lessons related to the use of conferencing technologies to support distant knowledge exchange within drug policy groups. Key lessons from this study can be used by drug policy groups to support successful knowledge exchange activities using conferencing technologies. 2010 Elsevier Ireland Ltd. All rights reserved.
de Jong, G.; Schout, G.; Abma, T.
To understand whether and how Family Group Conferencing might contribute to the social embedding of clients with mental illness. Background: Ensuring the social integration of psychiatric clients is a key aspect of community mental health nursing. Family Group Conferencing has the potential to create
Moshirnia, Andrew; Israel, Maya
Despite the increasing popularity of many commercial video games, this popularity is not shared by educational video games. Modified video games, however, can bridge the gap in quality between commercial and educational video games by embedding educational content into popular commercial video games. This study examined how different information…
Cai, Lin; Deng, Nianchun; Xiao, Zexin
The cables in the anchorage zone of a cable-stayed bridge are hidden within the embedded pipe, which makes it difficult to detect cable damage by visual inspection. We have built a detection device based on high-resolution video capture that enables remote observation of the invisible segment of a stay cable and detection of damage on the outer surface of the cable within a small volume. The system mainly consists of optical stents and a precision mechanical support device, an optical imaging system, a lighting source, driven motor control, and an IP camera video capture system. The principal innovations of the device are: (1) a set of telescope objectives with three different focal lengths, designed for different monitoring distances and selected by means of a converter; (2) a lens system far separated from the lighting system, so that the imaging optical path can effectively avoid the harsh environment of the invisible part of the cables. Practice shows that the device not only can collect clear surveillance video images of the cable's outer surface effectively, but also has broad application prospects in the security warning of prestressed structures.
Potel, Michael J.; MacKay, Steven A.; Sayre, Richard E.
Extracting quantitative information from movie film and video recordings has always been a difficult process. The Galatea motion analysis system represents an application of some powerful interactive computer graphics capabilities to this problem. A minicomputer is interfaced to a stop-motion projector, a data tablet, and real-time display equipment. An analyst views a film and uses the data tablet to track a moving position of interest. Simultaneously, a moving point is displayed in an animated computer graphics image that is synchronized with the film as it runs. Using a projection CRT and a series of mirrors, this image is superimposed on the film image on a large front screen. Thus, the graphics point lies on top of the point of interest in the film and moves with it at cine rates. All previously entered points can be displayed simultaneously in this way, which is extremely useful in checking the accuracy of the entries and in avoiding omission and duplication of points. Furthermore, the moving points can be connected into moving stick figures, so that such representations can be transcribed directly from film. There are many other tools in the system for entering outlines, measuring time intervals, and the like. The system is equivalent to "dynamic tracing paper" because it is used as though it were tracing paper that can keep up with running movie film. We have applied this system to a variety of problems in cell biology, cardiology, biomechanics, and anatomy. We have also extended the system using photogrammetric techniques to support entry of three-dimensional moving points from two (or more) films taken simultaneously from different perspective views. We are also presently constructing a second, lower-cost, microcomputer-based system for motion analysis in video, using digital graphics and video mixing to achieve the graphics overlay for any composite video source image.
Indah, K. A. T.; Sukarata, G.
Interactive e-learning is a distance learning method that uses information technology, electronic systems, or computers to support a teaching and learning process implemented without direct face-to-face contact between teacher and student. A strong dependence on emerging technologies greatly influences the way the architecture is designed to produce a powerful interactive e-learning network. In this paper, an architecture model is analyzed in which learning can be done interactively, involving many participants (N-way synchronized distance learning), using video conferencing technology. A broadband Internet connection is used, together with multicast techniques, so that bandwidth usage can be made efficient.
Digital Subscriber Line (DSL) network access is subject to error bursts, which, for interactive video, can introduce unacceptable latencies if video packets need to be re-sent. If the video packets are protected against errors with Forward Error Correction (FEC), calculation of the application-layer channel codes themselves may also introduce additional latency. This paper proposes Low-Density Generator Matrix (LDGM) codes rather than other popular codes because they are more suitable for interactive video streaming, not only for their computational simplicity but also for their licensing advantage. The paper demonstrates that a reduction of up to 4 dB in video distortion is achievable with LDGM Application Layer (AL) FEC. In addition, an extension to the LDGM scheme is demonstrated, which works by rearranging the columns of the parity check matrix so as to make it even more resilient to burst errors. Telemedicine and video conferencing are typical target applications.
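A toy illustration of the encoding side, assuming the usual systematic LDGM construction in which each parity bit is the XOR of the few data bits selected by one sparse row of the generator matrix. The tiny matrix below is for illustration only, not a deployable code; the computational simplicity the paper cites comes from exactly this kind of sparse XOR structure.

```python
def ldgm_encode(data_bits, parity_rows):
    """Systematic LDGM encoding sketch: the codeword is the data bits
    followed by parity bits, each parity being the XOR of the data bits
    indexed by one low-density generator-matrix row."""
    parities = []
    for row in parity_rows:
        p = 0
        for i in row:          # low-density row: only a few indices
            p ^= data_bits[i]
        parities.append(p)
    return data_bits + parities

# Hypothetical 4-bit message with three sparse parity rows:
codeword = ldgm_encode([1, 0, 1, 1], [[0, 1], [1, 2], [2, 3]])
```

Encoding cost grows only with the number of ones in the matrix, which is why sparse generator matrices suit latency-critical interactive streaming.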
Colwell O'Callaghan, Veronica
This article reports on two small-scale international projects, both outcomes of a teaching staff exchange, which seek to exploit the potential afforded by new technologies to enrich L2 learning conditions and learners' experience of using the L2 (in this case English) as a lingua franca. Two undergraduate courses, one for professional and one for…
Bruno Monteiro Tavares Pereira
CONTEXT AND OBJECTIVE: Telehealth and telemedicine services are advancing rapidly, with an increasing spectrum of information and communication technologies that can be applied broadly to the population's health, and to medical education. The aim here was to report our institution's experience from 100 videoconferencing meetings between five different countries in the Americas over a one-year period. DESIGN AND SETTING: Retrospective study at Universidade Estadual de Campinas. METHODS: Through a Microsoft Excel database, all conferences in all specialties held at our institution from September 2009 to August 2010 were analyzed retrospectively. RESULTS: A total of 647 students, physicians and professors participated in telemedicine meetings. A monthly mean of 8.3 (± 4.3) teleconferences were held over the analysis period. Excluding holidays and the month of inaugurating the telemedicine theatre, our teleconference rate reached a mean of 10.3 (± 2.7), or two teleconferences a week, on average. Trauma surgery and meetings on patient safety were by far the most common subjects discussed in our teleconference meetings, accounting for 22% and 21% of the total calls. CONCLUSION: Our experience with telemedicine meetings has increased students' interest; helped our institution to follow and discuss protocols that are already accepted worldwide; and stimulated professors to promote telemedicine-related research in their own specialties and keep up-to-date. These high-technology meetings have shortened distances in our vast country, and to other reference centers abroad. This virtual proximity has enabled discussion of international training with students and residents, to increase their overall knowledge and improve their education within this institution.
Reading, Chris; Auh, Myung-Sook; Pegg, John; Cybula, Peter
The need for Australian school students to develop a strong understanding of Asian culture has been recognised in the cross-curriculum priority, "Asia and Australia's Engagement with Asia," of the Australian Curriculum. School students in rural and remote Australia have limited opportunities to engage with Asians and learn about their…
Video content on the Internet has increased greatly in recent years. In spite of the efforts of different organizations and governments to increase the accessibility of websites, most multimedia content on the Internet is not accessible. This paper describes a system that contributes to making multimedia content more accessible on the Web, by automatically translating subtitles in oral language to SignWriting, a way of writing Sign Language. This system extends the functionality of a general web platform that can provide accessible web content for different needs. This platform has a core component that automatically converts any web page to a web page compliant with level AA of the WAI guidelines. Around this core component, different adapters complete the conversion according to the needs of specific users. One adapter is the Deaf People Accessibility Adapter, which provides accessible web content for the Deaf, based on SignWriting. The functionality of this adapter has been extended with the video subtitle translator system. A first prototype of this system has been tested through different methods, including usability and accessibility tests, and the results show that this tool can enhance the accessibility of video content available on the Web for Deaf people.
Allen, A. J.; Terry, J. L.; Garnier, D.; Stillerman, J. A.; Wurden, G. A.
A new system for routine digitization of video images is presently operating on the Alcator C-Mod tokamak. The PC-based system features high resolution video capture, storage, and retrieval. The captured images are stored temporarily on the PC, but are eventually written to CD. Video is captured from one of five filtered RS-170 CCD cameras at 30 frames per second (fps) with 640×480 pixel resolution. In addition, the system can digitize the output from a filtered Kodak Ektapro EM Digital Camera which captures images at 1000 fps with 239×192 resolution. Present views of this set of cameras include a wide angle and a tangential view of the plasma, two high resolution views of gas puff capillaries embedded in the plasma facing components, and a view of ablating, high speed Li pellets. The system is being used to study (1) the structure and location of visible emissions (including MARFEs) from the main plasma and divertor, (2) asymmetries in gas puff plumes due to flows in the scrape-off layer (SOL), and (3) the tilt and cigar-shaped spatial structure of the Li pellet ablation cloud.
Han, Xiaoguang; Fu, Hongbo; Zheng, Hanlin; Liu, Ligang; Wang, Jue
Stop-motion is a well-established animation technique but is often laborious and requires craft skills. A new video-based system can animate the vast majority of everyday objects in stop-motion style, more flexibly and intuitively. Animators can perform and capture motions continuously instead of breaking them into increments and shooting one still picture per increment. More important, the system permits direct hand manipulation without resorting to rigs, achieving more natural object control for beginners. The system's key component is two-phase keyframe-based capturing and processing, assisted by computer vision techniques. With this system, even amateurs can generate high-quality stop-motion animations.
This paper proposes an early-warning camera road sign system that can help avoid vehicle collisions with wild animals. The system consists of camera modules placed along a chosen route and intelligent road signs. Each camera module consists of a camera device and a computing unit. The computing unit captures the video stream from the camera, and object detection algorithms are then applied. Machine learning algorithms subsequently classify the moving objects. If a moving object is classified as an animal that could endanger the safety of a vehicle, a warning is displayed on the intelligent road signs.
Martin, Benjamin M.; Irwin, Elise R.
We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m3/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers that work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.
Warren, S.; Craft, R.L.; Parks, R.C.; Gallagher, L.K.; Garcia, R.J.; Funkhouser, D.R.
Telemedicine technology is rapidly evolving. Whereas early telemedicine consultations relied primarily on video conferencing, consultations today may utilize video conferencing, medical peripherals, store-and-forward capabilities, electronic patient record management software, and/or a host of other emerging technologies. These remote care systems rely increasingly on distributed, collaborative information technology during the care delivery process, in its many forms. While these leading-edge systems are bellwethers for highly advanced telemedicine, the remote care market today is still immature. Most telemedicine systems are custom-designed and do not interoperate with other commercial offerings. Users are limited to a set of functionality that a single vendor provides and must often pay high prices to obtain this functionality, since vendors in this marketplace must deliver entire systems in order to compete. Besides increasing corporate research and development costs, this inhibits the ability of the user to make intelligent purchasing decisions regarding best-of-breed technologies. We propose a secure, object-oriented information architecture for telemedicine systems that promotes plug-and-play interaction between system components through standardized interfaces, communication protocols, messaging formats, and data definitions. In this architecture, each component functions as a black box, and components plug together in a lego-like fashion to achieve the desired device or system functionality. The architecture will support various ongoing standards work in the medical device arena.
R. Dulaney, D.; Hopfensperger, M.; Malinowski, R.; Hauptman, J.; Kruger, J M
Background Urinary disorders in cats often require subjective caregiver quantification of clinical signs to establish a diagnosis and monitor therapeutic outcomes. Objective To investigate use of a video recording system (VRS) to better assess and quantify urination behaviors in cats. Animals Eleven healthy cats and 8 cats with disorders potentially associated with abnormal urination patterns. Methods Prospective study design. Litter box urination behaviors were quantified with a VRS for 14 d...
A microprocessor has been used to provide the major control functions in the Telemation/Sandia unattended video surveillance system. The software in the microprocessor provides control of the various hardware components and provides the capability of interactive communications with the operator. This document, in conjunction with the commented source listing, defines the philosophy and function of the software. It is assumed that the reader is familiar with the RCA 1802 COSMAC microprocessor and has a reasonable computer science background.
Xia, Xue; Qiu, Yun; Hu, Lin; Fan, Jingchao; Guo, Xiuming; Zhou, Guomin
With the emergence of the 'Internet plus' concept and the rapid progress of new media technology, traditional businesses have increasingly shared in the fruits of informatization and networking. Proceeding from real plant protection demands, the construction of a cloud-based video monitoring system that monitors diseases and pests in apple orchards is discussed, aiming to address the lack of timeliness and comprehensiveness in the contr...
This study investigated the effects of two modes of corrective feedback, namely, face-to-face recasts and computer-mediated recasts during video-conferencing on Iranian English as a foreign language (EFL) learners' second language (L2) development. Moreover, the accuracy of the learners' interpretations of recasts in the two modalities was…
Anishchenko, S.; Beylin, D.; Stepanov, P.; Stepanov, A.; Weinberg, I. N.; Schaeffer, S.; Zavarzin, V.; Shaposhnikov, D.; Smith, M. F.
Unintentional head motion during Positron Emission Tomography (PET) data acquisition can degrade PET image quality and lead to artifacts. Poor patient compliance, head tremor, and coughing are examples of movement sources. Head motion due to patient non-compliance can be an issue with the rise of amyloid brain PET in dementia patients. To preserve PET image resolution and quantitative accuracy, head motion can be tracked and corrected in the image reconstruction algorithm. While fiducial markers can be used, a contactless approach is preferable. A video-based head motion tracking system for a dedicated portable brain PET scanner was developed. Four wide-angle cameras organized in two stereo pairs are used for capturing video of the patient's head during the PET data acquisition. Facial points are automatically tracked and used to determine the six-degree-of-freedom head pose as a function of time. The presented work evaluated the newly designed tracking system using a head phantom and a moving American College of Radiology (ACR) phantom. The mean video-tracking error was 0.99±0.90 mm relative to the magnetic tracking device used as ground truth. Qualitative evaluation with the ACR phantom shows the advantage of the motion tracking application. The developed system is able to perform tracking with near-millimeter accuracy and can help preserve the resolution of brain PET images in the presence of movement.
Jubran, Mohammad K; Bansal, Manu; Kondi, Lisimachos P; Grover, Rohan
In this paper, we propose an optimal strategy for the transmission of scalable video over packet-based multiple-input multiple-output (MIMO) systems. The scalable extension of H.264/AVC that provides a combined temporal, quality and spatial scalability is used. For given channel conditions, we develop a method for the estimation of the distortion of the received video and propose different error concealment schemes. We show the accuracy of our distortion estimation algorithm in comparison with simulated wireless video transmission with packet errors. In the proposed MIMO system, we employ orthogonal space-time block codes (O-STBC) that guarantee independent transmission of different symbols within the block code. In the proposed constrained bandwidth allocation framework, we use the estimated end-to-end decoder distortion to optimally select the application layer parameters, i.e., quantization parameter (QP) and group of pictures (GOP) size, and physical layer parameters, i.e., rate-compatible punctured turbo (RCPT) code rate and symbol constellation. Results show the substantial performance gain by using different symbol constellations across the scalable layers as compared to a fixed constellation.
A Digital Video Recorder (DVR) is a digital video recorder with hard drive storage media. When the capacity of the hard disk runs out, it notifies users, and if there is no response, the recordings are overwritten automatically and the data are lost. The main focus of this paper is to enable recording directly connected to a computer editor. The output of both systems (DVR and Direct Recording) is compared with an objective assessment using the Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR) parameters. The results showed that the average MSE was 797.8556108 dB for Direct Recording and 137.4346100 dB for the DVR, while the average PSNR was 19.5942333 dB for Direct Recording and 27.0914258 dB for the DVR. This indicates that the DVR has a much better output quality than Direct Recording.
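The objective comparison rests on the standard MSE and PSNR definitions, which can be computed as follows for 8-bit frames (flattened to pixel lists here for brevity; this is the textbook formula, not the paper's specific tooling):

```python
import math

def mse(ref, test):
    """Mean squared error between two equal-sized frames of pixel values."""
    return sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit video; higher means the
    test frame is closer to the reference. Identical frames give infinity."""
    m = mse(ref, test)
    return float("inf") if m == 0 else 10 * math.log10(peak ** 2 / m)
```

Because PSNR is inversely related to MSE on a log scale, the DVR's lower MSE and higher PSNR in the abstract are two views of the same quality gap.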
Machireddy, Archana; van Santen, Jan; Wilson, Jenny L; Myers, Julianne; Hadders-Algra, Mijna; Xubo Song
Cerebral palsy is a non-progressive neurological disorder occurring in early childhood affecting body movement and muscle control. Early identification can help improve outcome through therapy-based interventions. Absence of so-called "fidgety movements" is a strong predictor of cerebral palsy. Currently, infant limb movements captured through either video cameras or accelerometers are analyzed to identify fidgety movements. However, both modalities have their limitations. Video cameras do not have the high temporal resolution needed to capture subtle movements. Accelerometers have low spatial resolution and capture only relative movement. In order to overcome these limitations, we have developed a system that combines measurements from both camera and sensors to estimate the true underlying motion using an extended Kalman filter. The estimated motion achieved 84% classification accuracy in identifying fidgety movements using a Support Vector Machine.
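The fusion at the heart of a Kalman filter can be sketched for a single scalar sample: each source is weighted by its uncertainty, so the accurate-but-slow camera and the fast-but-relative accelerometer each pull the estimate in proportion to how much they are trusted. The paper uses a full extended Kalman filter over limb trajectories; this scalar version, with hypothetical noise variances, only illustrates the measurement-update step.

```python
def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update: fuse state estimate (x, variance p)
    with a measurement z of variance r. Returns the new (x, p)."""
    k = p / (p + r)                  # gain: trust the less uncertain source
    return x + k * (z - x), (1 - k) * p

def fuse(camera_z, camera_r, accel_z, accel_r, x0=0.0, p0=1e6):
    """Fuse one camera and one accelerometer-derived position sample,
    starting from an uninformative prior (large p0)."""
    x, p = kalman_update(x0, p0, camera_z, camera_r)
    x, p = kalman_update(x, p, accel_z, accel_r)
    return x, p
```

With equal measurement variances the fused estimate lands midway between the two readings; shrinking one variance pulls the estimate toward that source.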
This paper presents a parallel TBB-CUDA implementation for the acceleration of single-Gaussian distribution model, which is effective for background removal in the video-based fire detection system. In this framework, TBB mainly deals with initializing work of the estimated Gaussian model running on CPU, and CUDA performs background removal and adaption of the model running on GPU. This implementation can exploit the combined computation power of TBB-CUDA, which can be applied to the real-time environment. Over 220 video sequences are utilized in the experiments. The experimental results illustrate that TBB+CUDA can achieve a higher speedup than both TBB and CUDA. The proposed framework can effectively overcome the disadvantages of limited memory bandwidth and few execution units of CPU, and it reduces data transfer latency and memory latency between CPU and GPU.
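A per-pixel sketch of the single-Gaussian background model that the TBB (CPU) and CUDA (GPU) stages parallelize, assuming the common test-then-blend update; `alpha` and `k` are illustrative defaults, not the paper's settings.

```python
import math

def update_pixel(mean, var, x, alpha=0.05, k=2.5):
    """Single-Gaussian background test and update for one pixel.
    Returns (is_foreground, new_mean, new_var). A sample further than
    k standard deviations from the running mean is flagged foreground
    and left out of the model; background samples are blended in."""
    if abs(x - mean) > k * math.sqrt(var):
        return True, mean, var                       # foreground: model kept
    new_mean = (1 - alpha) * mean + alpha * x        # blend sample into mean
    new_var = (1 - alpha) * var + alpha * (x - new_mean) ** 2
    return False, new_mean, new_var
```

In the paper's setting this update runs independently for every pixel of every frame, which is exactly the embarrassingly parallel shape that maps well onto CUDA threads.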
Sharma, Shubhankar; Singh, K. John; Priya, M.
Over the past two decades the rapid evolution of the Internet has led to a massive rise in video technology, and video consumption over the Internet now accounts for the bulk of data traffic. Because video occupies so much of the data on the World Wide Web, many video codecs, such as HEVC/H.265 and VP9, have been developed to reduce the burden on the Internet and the bandwidth consumed by video, so that users can access video data more easily. Even so, such codecs raise the dilemma of which is the better technology in terms of rate distortion and coding standard. This paper offers a solution to the difficulty of achieving low delay in video compression and video applications, e.g., ad-hoc video conferencing/streaming or surveillance. It also benchmarks the HEVC and VP9 video compression techniques through subjective evaluations of High Definition video content played back in web browsers. Moreover, it presents an experimental approach of dividing a video file into several segments for compression and reassembling them, to improve the efficiency of video compression on the web as well as in offline mode.
Bonk, Curt; Ehman, Lee; Hixon, Emily; Yamagata-Lynch, Lisa
Discusses the online activities used in the Teacher Institute for Curriculum Knowledge about the Integration of Technology (TICKIT), a school-based professional development program involving K-12 teachers from rural Indiana schools that was developed by Indiana University. Describes the use of Web-based asynchronous conferencing to post technology…
Morse, Philip S.
A study analyzed the extent to which writing teachers in conferencing situations employ the communication techniques used by professional helping agents. A metatheory of communication techniques developed by Allen Ivey and associates which attempts to combine and synthesize the relevant psychotherapeutic and counseling techniques in the profession…
Based on a three-semester design research study, this paper argues the need to redesign online learning environments to better support the representation and sharing of factual, procedural, and conceptual knowledge in order for students to develop their design capabilities. A web-conferencing environment is redesigned so that the modalities…
Metze, Rosalie N.; Kwekkeboom, Rick H.; Abma, Tineke A.
Family Group Conferencing (FGC) is emerging in the field of elderly care, as a method to enhance the resilience and relational autonomy of older persons. In this article, we want to explore the appropriateness of these two concepts to understand the FGC process in older adults.
The online education industry has had a rapid economic development in China since 2013, but this area received little attention in research. This study investigates Chinese undergraduate students' online English learning experiences and online teacher-learner interaction in synchronous web conferencing classes. This article reports the findings…
Full Text Available Identities for Research and Education (SAFIRE) to allow users to be able to access the service quickly and easily using their home institutions credentials. By integrating Mconf web conferencing with SAFIRE, the SA NREN hopes that Mconf will encourage...
Gizzi, Michael C.
This paper reflects on a two-semester experiment using computer technologies in the university political science classroom. The instructor incorporated electronic mail (e-mail), the Internet, and an on-line conferencing program into the course requirements for an upper-division course on the Supreme Court and an introductory honors tutorial on…
Heiser, Sarah; Stickler, Ursula; Furnborough, Concha
With the increase in online language teaching, the training needs of teachers have long been established and researched. However, the training needs of students have not yet been fully acknowledged. This paper focuses on learner training as preparation for language classes where online synchronous conferencing is used. It presents an action…
"How to" guides and software training resources support the development of the skills and confidence needed to teach in virtual classrooms using web-conferencing software. However, these sources do not often reveal the subtleties of what it is like to be a facilitator in such an environment--what it feels like, what issues might emerge…
Fu, Chang-Hong; Chan, Yui-Lam; Ip, Tak-Piu; Siu, Wan-Chi
MPEG digital video is becoming ubiquitous for video storage and communications. It is often desirable to perform various video cassette recording (VCR) functions such as backward playback in MPEG videos. However, the predictive processing techniques employed in MPEG severely complicate the backward-play operation. A straightforward implementation of backward playback is to transmit and decode the whole group-of-pictures (GOP), store all the decoded frames in the decoder buffer, and play the decoded frames in reverse order. This approach requires a significant buffer in the decoder, whose size depends on the GOP size, to store the decoded frames, and may not be feasible when decoder memory is severely constrained. Another alternative is to decode the GOP up to the current frame to be displayed, and then go back to decode the GOP again up to the next frame to be displayed. This approach does not need the huge buffer, but requires much higher network bandwidth and decoder complexity. In this paper, we propose a macroblock-based algorithm for an efficient implementation of an MPEG video streaming system that provides backward playback over a network with minimal requirements on network bandwidth and decoder complexity. The proposed algorithm classifies macroblocks in the requested frame into backward macroblocks (BMBs) and forward/backward macroblocks (FBMBs). Two macroblock-based techniques are used to manipulate the different types of macroblocks in the compressed domain, and the server then sends the processed macroblocks to the client machine. For BMBs, a VLC-domain technique is adopted to reduce the number of macroblocks that need to be decoded by the decoder and the number of bits that need to be sent over the network in the backward-play operation. We then propose a newly mixed VLC/DCT-domain technique to handle FBMBs in order to further reduce the computational complexity of the decoder. With these compressed-domain techniques, the
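The costs of the two naive backward-playback strategies the abstract contrasts can be made concrete with a small sketch. The function names are hypothetical; the counts assume every frame in a GOP of size N depends on all preceding frames, which is the worst case for predictive coding.

```python
def naive_buffer_cost(gop_size):
    """Strategy 1: decode the whole GOP once, buffer every frame,
    then play the buffered frames in reverse order."""
    decodes = gop_size          # each frame decoded exactly once
    buffered = gop_size         # the whole GOP held in the decoder buffer
    return decodes, buffered

def naive_redecode_cost(gop_size):
    """Strategy 2: for each displayed frame, re-decode the GOP prefix
    up to that frame; only one frame is ever buffered."""
    decodes = gop_size * (gop_size + 1) // 2   # 1 + 2 + ... + N decodes
    buffered = 1
    return decodes, buffered

for n in (12, 15):
    print(n, naive_buffer_cost(n), naive_redecode_cost(n))
```

For a typical GOP of 12 frames, strategy 2 performs 78 decodes instead of 12, which is why the paper moves the work into the compressed domain on the server side instead.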
Momcilovic, Svetislav; Sousa, Leonel
In this work, scalable parallelization methods for computing H.264/AVC in real time on multi-core platforms, such as the most recent Graphics Processing Units (GPUs) and the Cell Broadband Engine (Cell/BE), are proposed. By applying Amdahl's law, the most demanding parts of the video coder were identified, and the Single Program Multiple Data and Single Instruction Multiple Data approaches are adopted for achieving real-time processing. In particular, video motion estimation and in-loop deblocking filtering were offloaded to be executed in parallel on either GPUs or Cell/BE Synergistic Processor Elements (SPEs). The limits and advantages of these two architectures when dealing with typical video coding problems, such as data dependencies and large input data, are demonstrated. We propose techniques to minimize the impact of branch divergence and branch misprediction, data misalignment, conflicts and non-coalesced memory accesses. Moreover, data dependencies and memory size restrictions are taken into account in order to minimize synchronization and communication time overheads, and to achieve the optimal workload balance given the available multiple cores. Data reuse is applied extensively to reduce communication overhead and achieve the maximum processing speedup. Experimental results show that real-time H.264/AVC is achieved on both systems by computing 30 frames per second, with a resolution of 720×576 pixels, when full-pixel motion estimation is applied over 5 reference frames and a 32×32 search area. When quarter-pixel motion estimation is adopted, real-time video coding is obtained on the GPU for larger search areas and on the Cell/BE for smaller search areas.
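Amdahl's law, which the authors use to identify the parts of the coder worth offloading, bounds the achievable speedup by the serial fraction of the work. The 90% figure below is an illustrative assumption, not a number from the paper.

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: upper bound on overall speedup when a fraction p
    of the work is parallelized perfectly over the given cores."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / cores)

# If profiling showed that 90% of the encoder (e.g. motion estimation
# and deblocking) is parallelizable, 8 cores give at most ~4.7x:
print(round(amdahl_speedup(0.90, 8), 2))   # 4.71
```

This is why the authors target motion estimation and in-loop filtering specifically: unless the dominant stages are offloaded, extra cores yield little.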
Yoo, Sun K; Kim, Kwang M; Jung, Suck M; Lee, K J; Kim, Nam H
Telemedicine systems for deciding on patient transfer and directing patient treatment through remote consultation are required for better patient care in emergency situations. In this paper, a prototype emergency telemedicine system has been designed and implemented. The unified integration of multimedia components, including full-quality video, vital sign signals, radiological images and video conferencing in a single computer, provides an efficient means to accurately assess the status of an emergency patient at a remote location. The software implementation of the needed functionality, without any externally attached hardware CODEC units, enables a compact design with low cost and ease of operation in the emergency room. Experimental tests on local networks analyze the technical aspects of the implemented system and subjectively optimize its parameters so that the telemedicine system runs with acceptable error. Inter-hospital experiments demonstrate that it can be used effectively in emergency situations.
Fabian, E; Mertz, M; Hofmann, H; Wertheimer, R; Foos, C
The clinical advantages of a scanning laser ophthalmoscope (SLO) and video imaging of fundus pictures are described. Image quality (contrast, depth of field) and imaging possibilities (confocal stop) are assessed. Imaging with different lasers (argon, He-Ne) and changes in imaging rendered possible by confocal alignment of the imaging optics are discussed. Hard copies from video images are still of inferior quality compared to fundus photographs. Methods of direct processing and retrieval of digitally stored SLO video fundus images are illustrated by examples. Modifications for a definitive laser scanning system - in regard to the field of view and the quality of hard copies - are proposed.
Schneider, Jeffrey C; Ozsecen, Muzaffer Y; Muraoka, Nicholas K; Mancinelli, Chiara; Della Croce, Ugo; Ryan, Colleen M; Bonato, Paolo
Burn contractures are common and difficult to treat. Measuring continuous joint motion would inform the assessment of contracture interventions; however, it is not standard clinical practice. This study examines use of an interactive gaming system to measure continuous joint motion data. To assess the usability of an exoskeleton-based interactive gaming system in the rehabilitation of upper extremity burn contractures. Feasibility study. Eight subjects with a history of burn injury and upper extremity contractures were recruited from the outpatient clinic of a regional inpatient rehabilitation facility. Subjects used an exoskeleton-based interactive gaming system to play 4 different video games. Continuous joint motion data were collected at the shoulder and elbow during game play. Visual analog scale for engagement, difficulty and comfort. Angular range of motion by subject, joint, and game. The study population had an age of 43 ± 16 (mean ± standard deviation) years and total body surface area burned range of 10%-90%. Subjects reported satisfactory levels of enjoyment, comfort, and difficulty. Continuous joint motion data demonstrated variable characteristics by subject, plane of motion, and game. This study demonstrates the feasibility of use of an exoskeleton-based interactive gaming system in the burn population. Future studies are needed that examine the efficacy of tailoring interactive video games to the specific joint impairments of burn survivors. Copyright © 2016 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Li, Hejian; An, Ping; Zhang, Zhaoyang
Three-dimensional (3-D) video offers viewers a strong sense of visual perspective, but it also introduces large data volumes and complex processing problems. The depth estimation algorithm is especially complex and is an obstacle to real-time system implementation. Meanwhile, high-resolution depth maps are necessary to provide good image quality on autostereoscopic displays, which deliver stereo content without the need for 3-D glasses. This paper presents a hardware implementation of a full high-definition (HD) depth estimation system that is capable of processing full HD resolution images with a maximum processing speed of 125 fps and a disparity search range of 240 pixels. The proposed field-programmable gate array (FPGA)-based architecture implements a fusion-strategy matching algorithm for an efficient design. The system performs with high efficiency and stability by using a fully pipelined design, multiresolution processing, synchronizers that avoid clock-domain-crossing problems, efficient memory management, etc. The implementation can be included in video systems for live 3-D television applications and can be used as an independent hardware module in low-power integrated applications.
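The paper's fusion matching algorithm is FPGA-specific, but the core disparity-search idea can be illustrated with a baseline sum-of-absolute-differences (SAD) block matcher. This is a generic stand-in, not the authors' algorithm; block size and search range here are toy values (the hardware searches 240 pixels).

```python
import numpy as np

def disparity_sad(left, right, block=3, max_disp=8):
    """Winner-take-all stereo matching: for each pixel, pick the
    horizontal shift that minimizes SAD over a small block."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y-r:y+r+1, x-r:x+r+1]
            costs = [np.abs(patch - right[y-r:y+r+1, x-d-r:x-d+r+1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# toy pair: the left image is the right image shifted by 4 pixels
rng = np.random.default_rng(0)
right = rng.uniform(0, 255, (16, 32))
left = np.roll(right, 4, axis=1)          # true disparity = 4 everywhere
d = disparity_sad(left, right)
print(int(np.median(d[4:12, 12:28])))     # 4
```

A hardware pipeline evaluates all candidate shifts in parallel per clock cycle, which is how the FPGA sustains 125 fps at full HD where this Python loop cannot.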
Full Text Available The development of the PC has opened up many new perspectives in the use of technology for distance learning. Broadband, high-speed telecommunications now make it possible to access, transmit and receive sound, still images, video and other data. This application is normally referred to as videoconferencing (Woodruff and Mosby, 1996). It provides the capability to connect two or more parties separated by distance by means of audio, video and data and allows opportunities for real-time interaction. It is often used by groups of people who gather in a specific setting to communicate with other groups of people who are unable physically to be there. However, the term videoconferencing can be applied to a wide range of situations, such as individual-to-individual discussion and video-lecturing. Lopez and Woodruff (1996) identify four videoconferencing formats: the interview, the virtual meeting, the virtual field trip and the lecture. They state that the least productive of these is normally the lecture which, they suggest, does not promote dialogue or interaction: a lecture is a one-way process where intellectual resources are transmitted, and as a learning environment it does not usually provide opportunities for students to interact with tutors or between themselves. They are unlikely to establish any form of dialogue or to use their own thought processes (King and Honeybone, 1997). Whilst cost-effective in traditional terms, the lecture forum can be a shallow and relatively ineffective learning experience.
Roger W Li
Full Text Available Abnormal visual experience during a sensitive period of development disrupts neuronal circuitry in the visual cortex and results in abnormal spatial vision or amblyopia. Here we examined whether playing video games can induce plasticity in the visual system of adults with amblyopia. Specifically, 20 adults with amblyopia (age 15-61 y; visual acuity: 20/25-20/480), with no manifest ocular disease or nystagmus, were recruited and allocated into three intervention groups: action videogame group (n = 10), non-action videogame group (n = 3), and crossover control group (n = 7). Our experiments show that playing video games (both action and non-action games) for a short period of time (40-80 h, 2 h/d) using the amblyopic eye results in a substantial improvement in a wide range of fundamental visual functions, from low-level to high-level, including visual acuity (33%), positional acuity (16%), spatial attention (37%), and stereopsis (54%). Using a cross-over experimental design (first 20 h: occlusion therapy, and the next 40 h: videogame therapy), we can conclude that the improvement cannot be explained simply by eye patching alone. We quantified the limits and the time course of visual plasticity induced by video-game experience. The recovery in visual acuity that we observed is at least 5-fold faster than would be expected from occlusion therapy in childhood amblyopia. We used positional noise and modelling to reveal the neural mechanisms underlying the visual improvements in terms of decreased spatial distortion (7%) and increased processing efficiency (33%). Our study had several limitations: small sample size, lack of randomization, and differences in numbers between groups. A large-scale randomized clinical study is needed to confirm the therapeutic value of video-game treatment in clinical situations. Nonetheless, taken as a pilot study, this work suggests that video-game play may provide important principles for treating amblyopia.
Joongheon Kim; Eun-Seok Ryu
This paper presents the quality analysis results of high-definition video streaming in two-tiered camera sensor network applications. In the camera-sensing system, multiple cameras sense visual scenes in their target fields and transmit the video streams via IEEE 802.15.3c multigigabit wireless links. However, the wireless transmission introduces interferences to the other links. This paper analyzes the capacity degradation due to the interference impacts from the camera-sensing nodes to the ...
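The capacity degradation the abstract analyzes can be sketched with the Shannon capacity formula, treating co-channel interference from neighboring camera links as additional noise. The 2.16 GHz bandwidth is the nominal IEEE 802.15.3c channel width; the power values are illustrative assumptions.

```python
import math

def shannon_capacity(bandwidth_hz, signal_w, noise_w, interference_w=0.0):
    """Link capacity treating interference as Gaussian noise:
    C = B * log2(1 + S / (N + I))."""
    sinr = signal_w / (noise_w + interference_w)
    return bandwidth_hz * math.log2(1.0 + sinr)

B = 2.16e9                     # nominal 802.15.3c channel bandwidth, Hz
S, N = 1e-6, 1e-9              # hypothetical received signal / noise power, W
clean = shannon_capacity(B, S, N)
interfered = shannon_capacity(B, S, N, interference_w=5e-9)
print(interfered < clean)      # True: interference degrades capacity
```

This is the mechanism behind the paper's observation: each additional camera-sensing node transmitting on a nearby link raises I and lowers the achievable streaming rate of its neighbors.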
Rui Sergio Monteiro de Barros
Full Text Available Abstract The right femoral vessels of 80 rats were identified and dissected. External lengths and diameters of femoral arteries and femoral veins were measured using either a microscope or a video magnification system. Findings were correlated to animals’ weights. Mean length was 14.33 mm for both femoral arteries and femoral veins, mean diameter of arteries was 0.65 mm and diameter of veins was 0.81 mm. In our sample, rats’ body weights were only correlated with the diameter of their femoral veins.
Krüger, Andreas; Edelmann-Nusser, Jürgen
This study aims at determining the accuracy of a full body inertial measurement system in a real skiing environment in comparison with an optical video based system. Recent studies have shown the use of inertial measurement systems for the determination of kinematical parameters in alpine skiing. However, a quantitative validation of a full body inertial measurement system for the application in alpine skiing is so far not available. For the purpose of this study, a skier performed a test-run equipped with a full body inertial measurement system in combination with a DGPS. In addition, one turn of the test-run was analyzed by an optical video based system. With respect to the analyzed angles, a maximum mean difference of 4.9° was measured. No differences in the measured angles between the inertial measurement system and the combined usage with a DGPS were found. Concerning the determination of the skier's trajectory, an additional system (e.g., DGPS) must be used. As opposed to optical methods, the main advantages of the inertial measurement system are the determination of kinematical parameters without the limitation of restricted capture volume, and small time costs for the measurement preparation and data analysis.
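The headline result (a maximum mean angle difference of 4.9°) suggests a simple validation metric: the mean absolute difference between synchronized angle series from the two systems. The angle values below are hypothetical, for illustration only.

```python
def mean_absolute_difference(angles_a, angles_b):
    """Mean absolute difference between two synchronized angle series,
    e.g. inertial vs. optical video-based joint angles (degrees)."""
    assert len(angles_a) == len(angles_b)
    return sum(abs(a - b) for a, b in zip(angles_a, angles_b)) / len(angles_a)

imu   = [12.0, 35.5, 48.2, 30.1]   # hypothetical knee angles, degrees
video = [10.5, 33.0, 50.0, 28.9]
print(round(mean_absolute_difference(imu, video), 2))   # 1.75
```

In practice the two series must first be time-aligned and resampled to a common rate before such a comparison is meaningful.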
Perry, Robin; Yoo, Jane; Spoliansky, Toni; Edelman, Pebbles
This article reports the outcome evaluation findings of an experimental study conducted with families in the child welfare system in Florida. Families were randomly assigned to one of three Family Team Conferencing (FTC) models. In Pathway 1, the comparison model, FTCs were facilitated by case-workers. In Pathway 2, one of two experimental models, FTCs were cofacilitated by caseworkers and a designated/trained facilitator, and included expedited family engagement as well as the provision of FTCs throughout the life of a case. Pathway 3, also an experimental model, had the same components of Pathway 2 but also included family alone time. In approximately three years of the project period, 623 families agreed to participate in the study. Study findings showed no statistically significant change observed for families participating in Pathway 1 FTCs in terms of protective factors, achieving family-defined service and plan-of-care goals, and emotional and behavioral symptomology of children. Cases in Pathway 2 demonstrated significant improvement in family functioning and resiliency, nurturing and attachment, and increasing parents' knowledge about "what to do as a parent." Caregivers and teens in Pathway 3 reported significant improvement in expression of emotional symptomology/problems, conduct problems, hyperactivity, peer problems, and a measure of total difficulties. However, foster care re-entry rates were significantly higher for Pathway 3 than Pathway 2 (but not Pathway 1). Moreover, Pathway 2 and Pathway 3 FTCs had a significant effect on moving the family toward agreed upon service goals. Taken together, these findings suggest that the experimental FTC models in which facilitators were used and family engagement was expedited and sustained through subsequent FTCs demonstrated moderate, yet mixed benefits to children, youth, and families.
Full Text Available A major learning difficulty of Japanese foreign language (JFL) learners is the complex composition of two syllabaries, hiragana and katakana, and kanji characters adopted from logographic Chinese ones. As the number of Japanese language learners increases, computer-assisted Japanese language education gradually gains more attention. This study aimed to adopt a Japanese word segmentation system to help JFL learners overcome literacy problems. This study adopted MeCab, a Japanese morphological analyzer and part-of-speech (POS) tagger, to segment Japanese texts into separate morphemes by adding spaces and to attach POS tags to each morpheme for beginners. The participants were asked to participate in three experimental activities involving watching two Japanese videos with general or segmented Japanese captions and to complete the Nielsen's Attributes of Usability (NAU) survey and the After Scenario Questionnaire (ASQ) to evaluate the usability of the learning activities. The results of the system evaluation showed that the videos with the segmented captions could increase the participants' learning motivation and willingness to adopt the word segmentation system to learn Japanese.
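MeCab performs lattice-based morphological analysis; as a much simpler illustration of inserting spaces between Japanese words, here is a toy greedy longest-match segmenter. The mini-dictionary is a hypothetical stand-in for MeCab's lexicon, not part of the study.

```python
def segment(text, dictionary):
    """Greedy longest-match word segmentation: repeatedly take the
    longest dictionary word at the current position (a toy stand-in
    for MeCab's lattice-based analysis)."""
    words, i = [], 0
    while i < len(text):
        for length in range(len(text) - i, 0, -1):
            if text[i:i+length] in dictionary:
                words.append(text[i:i+length])
                i += length
                break
        else:                       # unknown character: emit it alone
            words.append(text[i])
            i += 1
    return " ".join(words)

# hypothetical mini-dictionary
dic = {"わたし", "は", "がくせい", "です"}
print(segment("わたしはがくせいです", dic))   # わたし は がくせい です
```

Real morphological analyzers resolve ambiguity with a cost model over the whole sentence rather than committing greedily, which is why the study uses MeCab rather than a heuristic like this.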
Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko
The authors proposed a high-quality, small-capacity lecture-video-file creation system for a distance e-learning system. Examining the features of the lecturing scene, the authors employ two kinds of image-capturing equipment with complementary characteristics: one is a digital video camera with a low resolution and a high frame rate, and the other is a digital still camera with a high resolution and a very low frame rate. By managing the two kinds of image-capturing equipment and integrating them with image processing, we can produce course materials with a greatly reduced file capacity: the course materials satisfy the requirements both for the temporal resolution to see the lecturer's point-indicating actions and for the high spatial resolution to read the small written letters. As a result of a comparative experiment, the e-lecture using the proposed system was confirmed to be more effective than an ordinary lecture from the viewpoint of educational effect.
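One ingredient of integrating the two capture streams is aligning each low-resolution video frame with the temporally nearest high-resolution still. The sketch below illustrates that pairing step only; the timestamps and function name are assumptions, not the authors' implementation.

```python
import bisect

def nearest_still(video_time_s, still_times_s):
    """Index of the still-camera capture closest in time to a video
    frame, so high-resolution stills can be associated with the
    low-resolution, high-frame-rate video timeline.
    still_times_s must be sorted ascending."""
    i = bisect.bisect_left(still_times_s, video_time_s)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(still_times_s)]
    return min(candidates, key=lambda j: abs(still_times_s[j] - video_time_s))

stills = [0.0, 10.0, 20.0, 30.0]        # hypothetical: one still every 10 s
print([nearest_still(t, stills) for t in (0.2, 9.0, 14.9, 26.0)])
# [0, 1, 1, 3]
```

The full system would additionally detect when the board content changes, so that a stale still is not shown after the lecturer writes something new.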
Khosla, Deepak; Moore, Christopher K.; Chelian, Suhas
This paper presents a bio-inspired method for spatio-temporal recognition in static and video imagery. It builds upon and extends our previous work on a bio-inspired Visual Attention and object Recognition System (VARS). The VARS approach locates and recognizes objects in a single frame. This work presents two extensions of VARS. The first extension is a Scene Recognition Engine (SCE) that learns to recognize spatial relationships between objects that compose a particular scene category in static imagery. This could be used for recognizing the category of a scene, e.g., office vs. kitchen scene. The second extension is the Event Recognition Engine (ERE) that recognizes spatio-temporal sequences or events in sequences. This extension uses a working memory model to recognize events and behaviors in video imagery by maintaining and recognizing ordered spatio-temporal sequences. The working memory model is based on an ARTSTORE neural network that combines an ART-based neural network with a cascade of sustained temporal order recurrent (STORE) neural networks. A series of Default ARTMAP classifiers ascribes event labels to these sequences. Our preliminary studies have shown that this extension is robust to variations in an object's motion profile. We evaluated the performance of the SCE and ERE on real datasets. The SCE module was tested on a visual scene classification task using the LabelMe dataset. The ERE was tested on real-world video footage of vehicles and pedestrians in a street scene. Our system is able to recognize the events in this footage involving vehicles and pedestrians.
Bruellmann, D D; Tjaden, H; Schwanecke, U; Barth, P
We propose an augmented reality system for the reliable detection of root canals in video sequences based on k-nearest-neighbor color classification, and introduce a simple geometric criterion for teeth. The new software was implemented using C++, Qt, and the image processing library OpenCV. Teeth are detected in video images to restrict the segmentation of the root canal orifices by using a k-nearest-neighbor algorithm. The locations of the root canal orifices were determined using Euclidean distance-based image segmentation. A set of 126 human teeth with known and verified locations of the root canal orifices was used for evaluation. The software detects root canal orifices for automatic classification of the teeth in video images and stores the location and size of the found structures. Overall, 287 of 305 root canals were correctly detected. The overall sensitivity was about 94%. Classification accuracy for molars ranged from 65.0 to 81.2% and from 85.7 to 96.7% for premolars. The implemented software shows that observations made in anatomical studies can be exploited to automate real-time detection of root canal orifices and tooth classification in a software system. Automatic storage of the location, size, and orientation of the found structures with this software can be used for future anatomical studies. Thus, statistical tables with canal locations will be derived, which can improve anatomical knowledge of the teeth to alleviate root canal detection in the future. For this purpose the software is freely available at: http://www.dental-imaging.zahnmedizin.uni-mainz.de/.
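The k-nearest-neighbor color classification step can be sketched generically in NumPy. The training colors and class labels below are invented for illustration (bright enamel vs. dark orifice); the paper's actual training set and color space are not specified here.

```python
import numpy as np

def knn_classify(train_colors, train_labels, pixels, k=3):
    """Classify pixels by k-nearest-neighbor vote in RGB space, a
    generic stand-in for the paper's color classification stage."""
    results = []
    for p in pixels:
        d = np.linalg.norm(train_colors - p, axis=1)   # Euclidean distances
        nearest = train_labels[np.argsort(d)[:k]]
        # majority vote among the k nearest training colors
        values, counts = np.unique(nearest, return_counts=True)
        results.append(str(values[np.argmax(counts)]))
    return results

# hypothetical training data: bright tooth enamel vs. dark canal orifice
colors = np.array([[230, 225, 200], [240, 235, 210], [235, 230, 205],
                   [40, 30, 25], [55, 40, 35], [35, 25, 20]], dtype=float)
labels = np.array(["tooth", "tooth", "tooth", "canal", "canal", "canal"])
print(knn_classify(colors, labels,
                   np.array([[220.0, 215.0, 195.0], [50.0, 38.0, 30.0]])))
# ['tooth', 'canal']
```

Restricting the search to detected tooth regions first, as the paper does, keeps this per-pixel classification fast enough for video rates.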
Mathiak, Krystyna A; Klasen, Martin; Weber, René; Ackermann, Hermann; Shergill, Sukhwinder S; Mathiak, Klaus
Violent content in video games evokes many concerns but there is little research concerning its rewarding aspects. It was demonstrated that playing a video game leads to striatal dopamine release. It is unclear, however, which aspects of the game cause this reward system activation and if violent content contributes to it. We combined functional Magnetic Resonance Imaging (fMRI) with individual affect measures to address the neuronal correlates of violence in a video game. Thirteen male German volunteers played a first-person shooter game (Tactical Ops: Assault on Terror) during fMRI measurement. We defined success as eliminating opponents, and failure as being eliminated themselves. Affect was measured directly before and after game play using the Positive and Negative Affect Schedule (PANAS). Failure and success events evoked increased activity in visual cortex but only failure decreased activity in orbitofrontal cortex and caudate nucleus. A negative correlation between negative affect and responses to failure was evident in the right temporal pole (rTP). The deactivation of the caudate nucleus during failure is in accordance with its role in reward-prediction error: it occurred whenever subject missed an expected reward (being eliminated rather than eliminating the opponent). We found no indication that violence events were directly rewarding for the players. We addressed subjective evaluations of affect change due to gameplay to study the reward system. Subjects reporting greater negative affect after playing the game had less rTP activity associated with failure. The rTP may therefore be involved in evaluating the failure events in a social context, to regulate the players' mood.
Okano, Fumio; Kawakita, Masahiro; Arai, Jun; Sasaki, Hisayuki; Yamashita, Takayuki; Sato, Masahito; Suehiro, Koya; Haino, Yasuyuki
The integral method enables observers to see 3D images like real objects. It requires extremely high resolution for both capture and display stages. We present an experimental 3D television system based on the integral method using an extremely high-resolution video system. The video system has 4,000 scanning lines using the diagonal offset method for two green channels. The number of elemental lenses in the lens array is 140 (vertical) × 182 (horizontal). The viewing zone angle is wider than 20 degrees in practice. This television system can capture 3D objects and provides full color and full parallax 3D images in real time.
Non-intrusive video imaging sensors are commonly used in traffic monitoring and surveillance. For some applications it is necessary to transmit the video data over communication links. However, due to increased bitrate requirements, this means...
Yaser Mohammad Taheri; Alireza Zolghadr–asli; Mehran Yazdi
Video watermarking is usually considered as watermarking of a set of still images. In the frame-by-frame watermarking approach, each video frame is treated as a single watermarked image, so collusion attacks are more critical in video watermarking. If the same or a redundant watermark is used for embedding in every frame of a video, the watermark can be estimated and then removed by a watermark estimation remodulation (WER) attack. Also, if uncorrelated watermarks are used for every frame, these watermarks c...
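The vulnerability of repeating one watermark across frames can be demonstrated numerically: averaging many watermarked frames cancels the independent frame content and leaves an estimate of the watermark. This is a minimal sketch of the estimation step only; frame size, content statistics, and the ±1 watermark are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# A fixed +/-1 watermark embedded identically in every frame
# (the vulnerable frame-by-frame scheme described above).
w = rng.choice([-1.0, 1.0], size=(32, 32))
frames = [rng.normal(0.0, 5.0, (32, 32)) + w for _ in range(200)]

# Collusion-style estimation: the zero-mean frame content averages
# away, leaving an estimate an attacker could remodulate and
# subtract (the WER attack).
w_est = np.mean(frames, axis=0)
corr = np.corrcoef(w.ravel(), w_est.ravel())[0, 1]
print(corr > 0.8)   # True: the fixed watermark is easily estimated
```

With 200 frames the residual content noise shrinks by a factor of about √200, so the estimate correlates strongly with the true watermark; uncorrelated per-frame watermarks defeat this particular averaging but open other attacks, as the abstract notes.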
Arai, Jun; Okui, Makoto; Yamashita, Takayuki; Okano, Fumio
We have developed an integral three-dimensional (3-D) television that uses a 2000-scanning-line video system and can shoot and display 3-D color moving images in real time. We had previously developed an integral 3-D television that used a high-definition television system. The new system uses ~6 times as many elemental images [160 (horizontal) × 118 (vertical) elemental images] arranged at ~1.5 times the density to further improve the picture quality of the reconstructed image. Comparison shows that an image near the lens array can be reconstructed at ~1.9 times the spatial frequency, and that the viewing angle is ~1.5 times as wide.
Takahata, Minoru; Uemori, Akira; Nakano, Hirotaka
This video-on-demand service is constructed of distributed servers, including video servers that supply real-time MPEG-1 video and audio, real-time MPEG-1 encoders, and an application server that supplies additional text information and agents for retrieval. The system has three distinctive features that enable it to provide multi-viewpoint access to real-time visual information: (1) The terminal application uses an agent-oriented approach that allows the system to be easily extended. The agents are implemented using a commercial authoring tool plus additional objects that communicate with the video servers using TCP/IP protocols. (2) The application server manages the agents, automatically processes text information, and is able to handle unexpected alterations of the contents. (3) The distributed system has an economical, flexible architecture for storing long video streams. The real-time MPEG-1 encoder system is based on multi-channel phase-shifting processing. We also describe a practical application of this system, a prototype TV-on-demand service called TVOD, which provides access to broadcast television programs from the previous week.
Yoo, S K; Kim, S H; Kim, N H; Kang, Y T; Kim, K M; Bae, S H; Vannier, M W
During time-critical brain surgery, the detection of developing cerebral ischemia is particularly important because early therapeutic intervention may reduce patient mortality. The purpose of this system is to provide an efficient means of remote teleconsultation for the early detection of ischemia, particularly when subspecialists are unavailable. The hardware and software design architecture for the multimedia brain function teleconsultation system, including the dedicated brain function monitoring system, is described. To comprehensively support remote teleconsultation, the multimedia resources needed for ischemia interpretation were included: EEG signals, CSA, CD-CSA, radiological images, surgical microscope video images, and video conferencing. PC-based system integration with standard interfaces and operability over Ethernet meet cost-effectiveness requirements, while the modular software was customized with the diverse range of data-manipulation and control functions necessary for a shared workspace.
The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities. The instruments may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A, a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This device will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.
R. P. Tsang; H. Y. Chen; J. M. Brandt; J. A. Hutchins
Coded digital video signals are considered to be one of the most difficult data types to transport due to their real-time requirements and high bit rate variability. In this study, the authors discuss the coding mechanisms incorporated by the major compression standards bodies, i.e., JPEG and MPEG, as well as more advanced coding mechanisms such as wavelet and fractal techniques. The relationship between the applications which use these coding schemes and their network requirements are the major focus of this study. Specifically, the authors relate network latency, channel transmission reliability, random access speed, buffering and network bandwidth with the various coding techniques as a function of the applications which use them. Such applications include High-Definition Television, Video Conferencing, Computer-Supported Collaborative Work (CSCW), and Medical Imaging.
Aims: The aims of this study were (1) to investigate the influence of physical movement on near-infrared spectroscopy (NIRS) data, (2) to establish a video-NIRS system which simultaneously records NIRS data and the subject's movement, and (3) to measure the oxygenated hemoglobin (oxy-Hb) concentration change (Δoxy-Hb) during a word fluency (WF) task. Experiment 1: In 5 healthy volunteers, we measured the oxy-Hb and deoxygenated hemoglobin (deoxy-Hb) concentrations during 11 kinds of facial, head, and extremity movements. The probes were set in the bilateral frontal regions. The deoxy-Hb concentration was increased in 85% of the measurements. Experiment 2: Using a pillow on the backrest of the chair, we established the video-NIRS system with data acquisition and video capture software. One hundred and seventy-six elderly people performed the WF task. The deoxy-Hb concentration was decreased in 167 subjects (95%). Experiment 3: Using the video-NIRS system, we measured the Δoxy-Hb and compared it with the results of the WF task. Δoxy-Hb was significantly correlated with the number of words. Conclusion: Like the blood oxygen level-dependent imaging effect in functional MRI, the deoxy-Hb concentration will decrease if the data correctly reflect the change in neural activity. The video-NIRS system might be useful for collecting NIRS data by recording the waveforms and the subject's appearance simultaneously.
With the rapid development of wireless networks and image acquisition technology, wireless video transmission technology has been widely applied in various communication systems. Traditional video monitoring technology is restricted by conditions such as layout, environment, relatively large volume, and cost. In view of this problem, this paper proposes equipping a mobile car with a wireless video monitoring system. The mobile car, which provides functions such as detection, video acquisition, and wireless data transmission, is developed based on an STC89C52 Micro Control Unit (MCU) and a WiFi router. First, information such as image, temperature, and humidity is processed by the MCU, communicated to the router, and then returned by the WiFi router to the host phone. Second, control information issued by the host phone is received by the WiFi router and sent to the MCU, which then issues the relevant instructions. Finally, wireless transmission of video images and remote control of the car are realized. The results show that the system features simple operation, high stability, fast response, low cost, strong flexibility, and wide applicability, and has practical and popularization value.
Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.
Two high-speed video cameras are successfully used to detect the motion of a flying shuttlecock in badminton. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to that position and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, and some spectators move behind a flying shuttlecock; this is a kind of background noise and makes it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by stereo imaging with two high-speed cameras.
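The 3D position measurement described above can be sketched with a simple parallel-camera pinhole model, in which depth follows from the disparity between the two views. This is a minimal illustration, not the paper's actual calibration; the focal length, baseline, and matched coordinates below are made-up values.

```python
def triangulate(xl, xr, y, f, baseline):
    """3D position (camera frame) of a point matched in two parallel,
    horizontally separated cameras.

    xl, xr   -- horizontal image coordinates (pixels) in left/right camera
    y        -- shared vertical image coordinate (pixels)
    f        -- focal length in pixels
    baseline -- camera separation in metres
    """
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    Z = f * baseline / disparity   # depth from disparity
    X = xl * Z / f                 # back-project to metric coordinates
    Y = y * Z / f
    return X, Y, Z

# Illustrative match: 20 px disparity at f = 1000 px, 0.5 m baseline
print(triangulate(xl=120, xr=100, y=50, f=1000, baseline=0.5))  # → (3.0, 1.25, 25.0)
```

Tracking the triangulated positions over successive frames would then give the velocity estimate used to predict the landing point.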
Keramidas, Eystratios G; Maroulis, Dimitris; Iakovidis, Dimitris K
In this paper, we present a computer-aided-diagnosis (CAD) system prototype, named TND (Thyroid Nodule Detector), for the detection of nodular tissue in ultrasound (US) thyroid images and videos acquired during thyroid US examinations. The proposed system incorporates an original methodology that involves a novel algorithm for automatic definition of the boundaries of the thyroid gland, and a novel approach for the extraction of noise-resilient image features effectively representing the textural and echogenic properties of the thyroid tissue. Through extensive experimental evaluation on real thyroid US data, its accuracy in thyroid nodule detection has been estimated to exceed 95%. These results attest to the feasibility of the clinical application of TND for the provision of a second, more objective opinion to radiologists by exploiting image evidence.
The primary purpose of the "modification and validation of an automotive data processing unit (DPU), compressed video system, and communications equipment" cooperative research and development agreement (CRADA) was to modify and validate both hardware and software, developed by Scientific Atlanta, Incorporated (S-A) for defense applications (e.g., rotary-wing airplanes), for the commercial-sector surface transportation domain (i.e., automobiles and trucks). S-A also furnished a state-of-the-art compressed video digital storage and retrieval system (CVDSRS), and off-the-shelf data storage and transmission equipment to support the data acquisition system for crash avoidance research (DASCAR) project conducted by Oak Ridge National Laboratory (ORNL). In turn, S-A received access to hardware and technology related to DASCAR. DASCAR was subsequently removed completely, and installation was repeated a number of times to gain an accurate idea of complete installation, operation, and removal of DASCAR. Upon satisfactory completion of the DASCAR construction and preliminary shakedown, ORNL provided NHTSA with an operational demonstration of DASCAR at its East Liberty, OH test facility. The demonstration included an on-the-road demonstration of the entire data acquisition system using NHTSA's test track. In addition, the demonstration also consisted of a briefing, containing the following: ORNL generated a plan for validating the prototype data acquisition system with regard to: removal of DASCAR from an existing vehicle, and installation and calibration in other vehicles; reliability of the sensors and systems; data collection and transmission process (data integrity); impact on the drivability of the vehicle and obtrusiveness of the system to the driver; data analysis procedures; conspicuousness of the vehicle to other drivers; and DASCAR installation and removal training and documentation. In order to identify any operational problems not captured by the systems
Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng
As quality of life has improved significantly, traditional 2D video technology can no longer meet people's desire for better video quality, which has led to the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To achieve this, we set up a remote stereoscopic video playback platform. The platform consists of a server and clients. The server is used for transmission of different video formats, and the client is responsible for receiving remote video for subsequent decoding and pixel restructuring. We use and extend Live555 as the video transmission server. Live555 is a cross-platform open-source project that provides solutions for streaming media, such as the RTSP protocol, and supports transmission of multiple video formats. At the receiving end, we use our laboratory's own player for Android as the basis for redevelopment; it has all the basic functions of an ordinary player and can play normal 2D video, and RTSP is implemented in this structure for communication. To achieve stereoscopic display, we perform pixel rearrangement in the player's decoding part, which is native code called through the JNI interface so that video frames can be extracted more efficiently. The video formats we process are left-right, top-bottom, and nine-grid. The design employs a number of key technologies from Android application development, including wireless transmission, pixel restructuring, and JNI calls. After updates and optimizations, the video player can play remote 3D video well anytime and anywhere, meeting users' requirements.
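The pixel rearrangement step for the left-right format can be sketched as splitting each decoded frame into its two half-width views and re-interleaving them for the display. A column-interleaved output layout is assumed here purely for illustration; the player's actual arrangement depends on the target stereoscopic screen.

```python
def split_side_by_side(frame):
    """Split a left-right packed frame (a list of pixel rows) into the
    two half-width views."""
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    return left, right

def interleave_columns(left, right):
    """Column-interleave the two views, a common arrangement for
    lenticular autostereoscopic panels (assumed layout, not the
    paper's documented one)."""
    out = []
    for lrow, rrow in zip(left, right):
        row = []
        for l, r in zip(lrow, rrow):
            row += [l, r]
        out.append(row)
    return out

# Toy 2x4 frame with numbered "pixels"
left, right = split_side_by_side([[1, 2, 3, 4], [5, 6, 7, 8]])
print(interleave_columns(left, right))  # → [[1, 3, 2, 4], [5, 7, 6, 8]]
```

Doing this rearrangement in the native decoding path, as the abstract describes, avoids copying full frames back and forth across the JNI boundary.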
Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.
Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)] can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data on real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates a mean re-projection accuracy of (0.7 ± 0.3) pixels and a mean target registration error of (2.3 ± 1.5) mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway, in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.
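The reported re-projection accuracy can be illustrated with a minimal pinhole-model computation: project calibration points into the image and average their pixel distance from the detected positions. The intrinsics and point sets below are made up; the actual system uses a full camera model with lens distortion.

```python
from math import sqrt

def project(point, f, cx, cy):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates
    (f: focal length in pixels; cx, cy: principal point)."""
    X, Y, Z = point
    return (f * X / Z + cx, f * Y / Z + cy)

def mean_reprojection_error(points3d, points2d, f, cx, cy):
    """Mean Euclidean distance in pixels between projected calibration
    points and their detected image positions - the kind of figure the
    paper reports as (0.7 +/- 0.3) pixels."""
    errs = []
    for p3, (u, v) in zip(points3d, points2d):
        pu, pv = project(p3, f, cx, cy)
        errs.append(sqrt((pu - u) ** 2 + (pv - v) ** 2))
    return sum(errs) / len(errs)

# Two illustrative points, each detected 1 px away from its projection
pts3 = [(0.0, 0.0, 2.0), (1.0, 0.0, 2.0)]
pts2 = [(321.0, 320.0), (570.0, 321.0)]
print(mean_reprojection_error(pts3, pts2, 500.0, 320.0, 320.0))  # → 1.0
```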
Tsifouti, Anastasia; Nasralla, Moustafa M.; Razaak, Manzoor; Cope, James; Orwell, James M.; Martini, Maria G.; Sage, Kingsley
The Image Library for Intelligent Detection Systems (i-LIDS) provides benchmark surveillance datasets for analytics systems. This paper proposes a methodology to investigate the effect of compression and frame-rate reduction, and to recommend an appropriate suite of degraded datasets for public release. The library consists of six scenarios, including Sterile Zone (SZ) and Parked Vehicle (PV), which are investigated using two different compression algorithms (H.264 and JPEG) and a number of detection systems. PV has higher spatio-temporal complexity than SZ. Compression performance is dependent on scene content, hence PV will require larger bit-streams than SZ for any given distortion rate. The study includes both industry-standard algorithms (for transmission) and CCTV recorders (for storage). CCTV recorders generally use proprietary formats, which may significantly affect the visual information. Encoding standards such as H.264 and JPEG use the Discrete Cosine Transform (DCT) technique, which introduces blocking artefacts. The H.264 compression algorithm follows a hybrid predictive coding approach to achieve high compression gains, exploiting both spatial and temporal redundancy. The highly predictive approach of H.264 may introduce more artefacts, resulting in a greater effect on the performance of analytics systems than JPEG. The paper describes the two main components of the proposed methodology to measure the effect of degradation on analytics performance. First, standard tests use the f-measure to evaluate performance on a range of degraded video sets. Second, the datasets are characterised by quantifying scene features defined using image processing techniques. This characterisation permits an analysis of the points of failure introduced by the video degradation.
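The f-measure used in the standard tests is the harmonic mean of precision and recall over detection outcomes. A minimal sketch follows; the detection counts in the example are invented, and the actual i-LIDS scoring rules may weight events differently.

```python
def f_measure(tp, fp, fn, beta=1.0):
    """F-measure from true-positive, false-positive and false-negative
    counts; beta = 1 gives the familiar F1 score."""
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical detector output on one degraded clip:
# precision = 80/90 ≈ 0.889, recall = 80/100 = 0.8
print(round(f_measure(tp=80, fp=10, fn=20), 4))  # → 0.8421
```

Running the same counts across each compression level then yields the degradation curve the methodology compares across scenarios.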
Osterman, Linnea; Masson, Isla
This article presents findings from a new qualitative study into female offenders’ experiences of restorative conferencing in England and Wales. It is argued that gendered factors of crime and victimization have a definite impact on the restorative conference process, particularly in the areas of complex and interacting needs, differently natured conference engagements, and risks around shame, mental health, and stereotypical ideals of female behavior. For women to reap the full benefits of r...
Kite, James; Phongsavan, Philayrath
Background Online focus groups have been increasing in use over the last two decades, including in biomedical and health-related research. However, most of this research has made use of text-based services such as email, discussion boards, and chat rooms, which do not replicate the experience of face-to-face focus groups. Web conferencing services have the potential to more closely match the face-to-face focus group experience, including important visual and aural cues. This paper provides critical reflections on using a web conferencing service to conduct online focus groups. Methods As part of a broader study, we conducted both online and face-to-face focus groups with participants. The online groups were conducted in real time using the web conferencing service Blackboard Collaborate™. We used reflective practice to assess how the conduct and content of the groups were similar and how they differed across the two platforms. Results We found that further research using such services is warranted, particularly when working with hard-to-reach or geographically dispersed populations. The level of discussion and the quality of the data obtained were similar to those found in face-to-face groups. However, some issues remain, particularly in relation to managing technical issues experienced by participants and ensuring adequate recording quality to facilitate transcription and analysis. Conclusions Our experience with using web conferencing for online focus groups suggests that they have the potential to offer a realistic and comparable alternative to face-to-face focus groups, especially for geographically dispersed populations such as rural and remote health practitioners. Further testing of these services is warranted, but researchers should carefully consider the service they use to minimise the impact of technical difficulties.
M. M. Blagoveshchenskaya
Summary: The most important operation in granular mixed fodder production is the molding process. The properties of granular mixed fodder are defined during this process; they determine the production process and final product quality. The article analyzes the possibility of using a digital video camera as an intelligent sensor for a production control system. A parametric model of the process of molding bundles from granular fodder mass is presented, and the dynamic characteristics of the molding process were determined. A mathematical model of the motion of a bundle of granular fodder mass after the matrix holes was developed. A mathematical model of the automatic control system (ACS), using a reference video frame as the set point, was built in the MATLAB software environment. As a parameter of the bundle-molding process, it is proposed to use the value of the specific area determined in the mathematical treatment of the video frame. Algorithms were developed to determine changes in the structural and mechanical properties of the feed mass from video frame images. Digital video of various modes of the molding machine was recorded, and after mathematical processing of the video, transfer functions were determined for use as changes of the adjustable specific-area parameter. Structural and functional diagrams of the system regulating the food-bundle molding process with digital camcorders were built and analyzed. Based on the solution of the equations of fluid dynamics, a mathematical model of bundle motion after leaving the matrix hole was obtained; in addition to viscosity, the creep property characteristic of the feed mass was considered. The mathematical model of the ACS for the bundle-molding process, allowing investigation of the transient processes that occur in a control system using a digital video camera as the smart sensor, was developed in Simulink
Brown, Michael A.
With the advent of broadcast television as a constant source of information throughout the NASA manned space flight Mission Control Center (MCC) at the Johnson Space Center (JSC), the current Video Transport System (VTS) provides the ability to visually enhance real-time applications as a broadcast channel that decision-making flight controllers have come to rely on, but it can be difficult to maintain and costly. The Operations Technology Facility (OTF) of the Mission Operations Facility Division (MOFD) has been tasked to provide insight into new, innovative technological solutions for the MCC environment, focusing on alternative architectures for a VTS. New technology will enable sharing of all imagery from one specific computer display, better known as Display Sharing (DS), to other computer displays and display systems such as large projector systems, flight control rooms, and supporting back rooms throughout the facilities and other offsite centers using IP networks. It has been stated that Internet Protocol (IP) applications can readily substitute for the current visual architecture, but quality and speed may need to be forfeited to reduce cost and improve maintainability. Although the IP infrastructure can support many technologies, the simple task of sharing one's computer display can be clumsy and difficult to configure and manage for the many operators and products. The DS process shall invest in collectively automating the sharing of images while focusing on such characteristics as managing bandwidth, encrypting security measures, synchronizing disconnections from loss of signal / loss of acquisition, and performance latency, and shall provide functions such as scalability, multi-sharing, ease of initial integration and sustained configuration, integration with video adjustment packages, collaborative tools, host/recipient controllability, and, the utmost paramount priority, an enterprise solution that provides ownership to the whole
Donnelly, Mark P; Nugent, Chris D; Craig, David; Passmore, Peter; Mulvenna, Maurice
The current paper presents details regarding the early development of a memory prompt solution for persons with early dementia. Using everyday technology, in the form of a cell phone, video reminders are delivered to assist with daily activities. The proposed CPVS system will permit carers to record and schedule video reminders remotely using a standard personal computer and web cam. It is the aim of the three-year project that, through the frequent delivery of helpful video reminders, a 'virtual carer' will be present with the person with dementia at all times. The first prototype of the system has been fully implemented, with the first field trial scheduled to take place in May 2008. Initially, only three patient-carer dyads will be involved; however, the second field trial aims to involve 30 dyads in the study. Details of the first prototype and the methods of evaluation are presented herein.
This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.
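Once per-object depths are available (step ii), the occluded-region detection of step (iii) reduces to checking which bounding boxes overlap and which of each overlapping pair lies farther from the camera. The sketch below assumes depths have already been estimated by the calibration step; the box format and threshold are illustrative.

```python
def overlap(a, b):
    """Intersection area of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def occluded_regions(objects, min_overlap=1):
    """objects: list of (obj_id, box, depth) with depth from calibration.
    Returns (occluded_id, occluder_id) pairs; the deeper object of each
    overlapping pair is flagged as occluded."""
    pairs = []
    for i, (ida, boxa, da) in enumerate(objects):
        for idb, boxb, db in objects[i + 1:]:
            if overlap(boxa, boxb) >= min_overlap:
                pairs.append((ida, idb) if da > db else (idb, ida))
    return pairs

# "A" (depth 5.0) overlaps nearer "B" (depth 2.0), so A is occluded by B
objs = [("A", (0, 0, 10, 10), 5.0), ("B", (5, 5, 15, 15), 2.0)]
print(occluded_regions(objs))  # → [('A', 'B')]
```

A tracker could then down-weight appearance updates for the flagged objects while they remain occluded.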
The paper presents the results of a strength analysis for two types of welded joints of high-strength steel S960QC, made using conventional and laser technologies. The hardness distributions, tensile properties, and fracture toughness were determined for the weld material and the heat-affected zone material of both types of welded joints. Test results showed the advantage of the laser-welded joints over the conventional ones: tensile properties and fracture toughness in all areas of the laser joints are higher than in the conventional ones. The heat-affected zone of the conventional welded joints is a weak area, where the tensile properties are lower than those of the base material. Verification of the tensile tests, carried out using the Aramis video system, confirmed this assumption. The highest level of strain was observed in the HAZ material, and the fracture process also occurred in the HAZ of the conventional welded joint.
Beatty, Ian D
In order to facilitate analyzing video games as learning systems and instructional designs as games, we present a theoretical framework that integrates ideas from a broad range of literature. The framework describes games in terms of four layers, all sharing similar structural elements and dynamics: a micro-level game focused on immediate problem-solving and skill development, a macro-level game focused on the experience of the game world and story and identity development, and two meta-level games focused on building or modifying the game and on social interactions around it. Each layer casts gameplay as a co-construction of the game and the player, and contains three dynamical feedback loops: an exploratory learning loop, an intrinsic motivation loop, and an identity loop.
Saveliev, Alexei V [Chicago, IL; Zelepouga, Serguei A [Hoffman Estates, IL; Rue, David M [Chicago, IL
A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complementary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a Bayer mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complementary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.
An approach is proposed for automatic adaptive subtitle coloring using a fuzzy logic-based algorithm. The system changes the color of the video subtitle/caption to a "pleasant" color according to color harmony and the visual perception of the image background colors. In the fuzzy analyzer unit, using RGB histograms of the background image, the R, G, and B values for the color of the subtitle/caption are computed using fixed fuzzy IF-THEN rules fully derived from color harmony theories to satisfy complementary-color and subtitle-background color harmony conditions. A real-time hardware structure is proposed for implementation of the front-end processing unit as well as the fuzzy analyzer unit.
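The complementary-color condition the rules enforce can be illustrated with a crisp (non-fuzzy) stand-in: take the mean background color from the pixel statistics and emit its RGB complement as the subtitle color. This is a deliberate simplification; the actual system blends several fuzzy IF-THEN rules over histogram memberships rather than computing a single complement.

```python
def mean_rgb(pixels):
    """Mean background colour from a list of (r, g, b) pixels, standing in
    for the RGB-histogram statistics used by the fuzzy analyzer unit."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) // n for c in range(3))

def subtitle_color(pixels):
    """Crisp stand-in for the fuzzy rules: the complement of the mean
    background colour, satisfying the complementary-color condition."""
    r, g, b = mean_rgb(pixels)
    return (255 - r, 255 - g, 255 - b)

# Dark bluish background → light warm subtitle colour
print(subtitle_color([(10, 20, 30)] * 4))  # → (245, 235, 225)
```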
Cho, Jai Wan; Lee, Nam Ho; Choi, Young Soo
There are 760 feederpipes connected to the inlets/outlets of the 380 pressure tube channels on the front of the calandria in the CANDU-type reactor of the Wolsung Nuclear Power Plant. As In-Service Inspection (ISI) and Post-Service Inspection (PSI) requirements, maintenance activities of measuring the thickness of the curvilinear part of the feederpipe and inspecting the feederpipe support area within the calandria are needed to ensure continued reliable operation of the nuclear power plant. An ultrasonic probe is used to measure the thickness of the curvilinear part of the feederpipe; however, workers are exposed to radiation during the measurement period. It is also impossible to inspect the feederpipe support area thoroughly because of its narrow and confined accessibility: the inspection space between the pressure tube channels is less than 100 mm, and the pipes in the feederpipe support area are congested. Workers involved in inspecting the feederpipe support area are likewise at risk of high-level radiation exposure. Concerns about the sliding home (which makes the movement of a feederpipe connected to a pressure tube channel smooth as the pressure tube expands and contracts in its axial direction) becoming stuck to the feederpipe support and some of the structural components have made necessary the development of a video inspection probe system with narrow and confined accessibility to observe and inspect the feederpipe support area more closely. Using the video inspection probe system, it is possible to inspect and repair abnormalities of the feederpipe supports connected to the pressure tube channels of the calandria more accurately and quantitatively than with the naked eye. Therefore, it will do much to ensure the safety of CANDU-type nuclear power plants.
Hwang, Euiseok; Yoon, Pilsang; Kim, Nakyoung; Kang, Byongbok; Kim, Kunyul; Park, Jooyoun; Park, Jongyong
A holographic data storage prototype fully integrated with electronics for video demonstration has been developed. It can record data in several tracks of a photopolymer disk and access them arbitrarily during the retrieval process from the continuously rotating disk. An embedded controller operates all of the opto-mechanical components of the prototype automatically, and the electronics conduct adaptive readout of channel data at up to 55 megabits per second. For real-time video demonstration, a video stream is recorded in four concentric circular tracks of the disk. Each recording spot contains about one hundred pages with angle multiplexing. The eleven-minute video is successfully reconstructed from the prototype.
Wang, C. P.; Bow, R. T.
A laser scanning system capable of scanning at standard video rate has been developed. The scanning mirrors, circuit design and system performance, as well as its applications to video cameras and ultra-violet microscopes, are discussed.
Full Text Available The expansion of Digital Television and the convergence between conventional broadcasting and television over IP contributed to the gradual increase of the number of available channels and on-demand video content. Moreover, the dissemination of the use of mobile devices like laptops, smartphones and tablets in everyday activities resulted in a shift of the traditional television viewing paradigm from the couch to everywhere, anytime, from any device. Although this new scenario enables a great improvement in viewing experiences, it also brings new challenges given the overload of information that the viewer faces. Recommendation systems stand out as a possible solution to help a viewer select the content that best fits his/her preferences. This paper describes a web-based system that helps the user navigate broadcast and online television content by implementing recommendations based on collaborative and content-based filtering. The algorithms developed estimate the similarity between items and users and predict the rating that a user would assign to a particular item (television program, movie, etc.). To enable interoperability between different systems, programs' characteristics (title, genre, actors, etc.) are stored according to the TV-Anytime standard. The set of recommendations produced is presented through a Web Application that allows the user to interact with the system based on the obtained recommendations.
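The collaborative-filtering step described above can be sketched in a few lines: compute item-to-item cosine similarity over the user-rating matrix, then predict an unseen rating as a similarity-weighted average of the user's known ratings. The tiny rating set below is invented for illustration and is not from the paper.

```python
import math

# Toy ratings: user -> {item: rating}. Purely illustrative data.
ratings = {
    "u1": {"news": 5.0, "movie": 4.0, "sports": 1.0},
    "u2": {"news": 4.0, "movie": 5.0},
    "u3": {"news": 1.0, "sports": 5.0},
}

def item_vector(item):
    # Ratings for one item, indexed by user (0.0 if unrated).
    return [ratings[u].get(item, 0.0) for u in sorted(ratings)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def predict(user, item):
    # Similarity-weighted average of the user's known ratings.
    num = den = 0.0
    for other, r in ratings[user].items():
        if other == item:
            continue
        s = cosine(item_vector(item), item_vector(other))
        num += s * r
        den += abs(s)
    return num / den if den else 0.0

p = predict("u2", "sports")
```

A production system would also mean-center ratings and restrict to the top-k most similar items, but the weighted-average core is the same.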
Schuetz, Christopher; Martin, Richard; Dillon, Thomas; Yao, Peng; Mackrides, Daniel; Harrity, Charles; Zablocki, Alicia; Shreve, Kevin; Bonnett, James; Curt, Petersen; Prather, Dennis
Passive imaging using millimeter waves (mmWs) has many advantages and applications in the defense and security markets. All terrestrial bodies emit mmW radiation, and these wavelengths are able to penetrate smoke, fog/clouds/marine layers, and even clothing. One primary obstacle to imaging in this spectrum is that longer wavelengths require larger apertures to achieve the resolutions desired for many applications. Accordingly, lens-based focal-plane systems and scanning systems tend to require large-aperture optics, which increases the size and weight of such systems beyond what many applications can support. To overcome this limitation, a distributed aperture detection scheme is used in which the effective aperture size can be increased without the associated volumetric increase in imager size. This distributed aperture system is realized through conversion of the received mmW energy into sidebands on an optical carrier. This conversion serves, in essence, to scale the mmW sparse aperture array signals onto a complementary optical array. The sidebands are subsequently stripped from the optical carrier and recombined to provide a real-time snapshot of the mmW signal. Using this technique, we have constructed a real-time, video-rate imager operating at 75 GHz. A distributed aperture consisting of 220 upconversion channels is used to realize 2.5k pixels with passive sensitivity. Details of the construction and operation of this imager as well as field testing results will be presented herein.
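The aperture-size obstacle can be made concrete with the standard diffraction-limit relation θ ≈ 1.22 λ/D (a textbook estimate, not a figure from the paper):

```python
import math

c = 3.0e8                  # speed of light, m/s
f = 75e9                   # operating frequency, Hz
wavelength = c / f         # = 4 mm at 75 GHz

def angular_resolution(aperture_m):
    # Diffraction-limited angular resolution of a circular aperture, radians.
    return 1.22 * wavelength / aperture_m

# Even a 1 m aperture at 75 GHz only resolves ~4.9 milliradians,
# which is why a large (distributed) effective aperture is needed.
theta = angular_resolution(1.0)
```

At a 100 m standoff, 4.9 mrad corresponds to roughly half-metre features, illustrating why visible-band resolutions demand far larger mmW apertures.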
Desurmont, Xavier; Wijnhoven, Rob; Jaspers, Egbert; Caignart, Olivier; Barais, Mike; Favoreel, Wouter; Delaigle, Jean-Francois
The CANDELA project aims at realizing a system for real-time image processing in traffic and surveillance applications. The system performs segmentation, labels the extracted blobs and tracks their movements in the scene. Performance evaluation of such a system is a major challenge, since no standard methods exist and the criteria for evaluation are highly subjective. This paper proposes a performance evaluation approach for video content analysis (VCA) systems and identifies the involved research areas. For these areas we give an overview of the state of the art in performance evaluation and introduce a classification into different semantic levels. The proposed evaluation approach compares the results of the VCA algorithm with a ground-truth (GT) counterpart, which contains the desired results. Both the VCA results and the ground truth comprise description files that are formatted in MPEG-7. The evaluation is required to provide an objective performance measure and a means to choose between competing methods. In addition, it enables algorithm developers to measure the progress of their work at the different levels in the design process. From these requirements and the state-of-the-art overview we conclude that standardization is highly desirable, for which many research topics still need to be addressed.
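One common object-level way to compare VCA output against ground truth (a generic sketch, not necessarily CANDELA's exact metric) is to match detected boxes to GT boxes by intersection-over-union and report precision and recall:

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(detections, ground_truth, thresh=0.5):
    # Greedy one-to-one matching: a detection is a true positive if it
    # overlaps an unmatched GT box with IoU >= thresh.
    matched, tp = set(), 0
    for d in detections:
        for i, g in enumerate(ground_truth):
            if i not in matched and iou(d, g) >= thresh:
                matched.add(i)
                tp += 1
                break
    prec = tp / len(detections) if detections else 0.0
    rec = tp / len(ground_truth) if ground_truth else 0.0
    return prec, rec

p, r = precision_recall([(0, 0, 10, 10), (50, 50, 60, 60)],
                        [(1, 1, 10, 10), (80, 80, 90, 90)])
```

Metrics at higher semantic levels (trajectories, events) need additional matching rules, which is part of why the paper argues for standardization.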
Javier I. Portillo
Full Text Available Automatic surveillance of the airport surface is one of the core components of advanced surface movement, guidance, and control systems (A-SMGCS). This function is in charge of the automatic detection, identification, and tracking of all targets of interest (aircraft and relevant ground vehicles) in the airport movement area. This paper presents a novel approach for object tracking based on sequences of video images. A fuzzy system has been developed to weigh update decisions for both the trajectories and the shapes estimated for targets from the regions extracted from the images. The advantages of this approach are robustness, flexibility in the design to adapt to different situations, and efficiency for operation in real time, avoiding combinatorial enumeration. Results obtained in representative ground operations show the system's capability to solve complex scenarios and improve tracking accuracy. Finally, an automatic procedure based on neuro-fuzzy techniques has been applied in order to obtain a set of rules from representative examples. Validation of the learned system shows its capability to learn suitable tracker decisions.
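A single fuzzy update rule of the kind described can be sketched as follows; the triangular membership function and the "size ratio" antecedent are invented for illustration and are not the paper's actual rule base:

```python
def triangular(x, a, b, c):
    # Triangular membership function: 0 outside (a, c), peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def update_position(track_pos, meas_pos, size_ratio):
    # "IF the measured blob size matches the track's expected size,
    #  THEN trust the measurement strongly" - the membership degree
    # becomes the blending weight for the trajectory update.
    trust = triangular(size_ratio, 0.5, 1.0, 1.5)
    return tuple(t + trust * (m - t) for t, m in zip(track_pos, meas_pos))

# Perfect size match (ratio 1.0): the track snaps to the measurement.
new = update_position((10.0, 10.0), (12.0, 14.0), size_ratio=1.0)
```

With many such rules fired and aggregated, the tracker gets graded updates instead of hard associate/ignore decisions, which is what avoids combinatorial enumeration.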
Ballesta, S; Reymond, G; Pozzobon, M; Duhamel, J-R
To date, assessing the solitary and social behaviors of laboratory primate colonies has relied on time-consuming manual scoring methods. Here, we describe a real-time multi-camera 3D tracking system developed to measure the behavior of socially housed primates. Their positions are identified using non-invasive color markers such as plastic collars, which also makes it possible to track colored objects and measure their usage. Compared to traditional manual ethological scoring, we show that this system can reliably evaluate solitary behaviors (foraging, solitary resting, toy usage, locomotion) as well as spatial proximity with peers, which is considered a good proxy of social motivation. Compared to existing video-based commercial systems for measuring animal activity, this system offers many possibilities (real-time data, large volume coverage, multiple-animal tracking) at a lower hardware cost. Quantitative behavioral data on animal groups can now be obtained automatically over very long periods of time, opening new perspectives in particular for studying the neuroethology of social behavior in primates. Copyright © 2014 Elsevier B.V. All rights reserved.
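The spatial-proximity measure reduces to a threshold on pairwise 3D distances between tracked markers; a minimal stdlib sketch (the threshold and positions are assumed, not the authors' values):

```python
import math
from itertools import combinations

def proximity_pairs(positions, threshold_m=0.5):
    # positions: animal name -> (x, y, z) marker position in metres.
    # Returns every pair of animals within threshold_m of each other.
    pairs = []
    for a, b in combinations(sorted(positions), 2):
        if math.dist(positions[a], positions[b]) <= threshold_m:
            pairs.append((a, b))
    return pairs

pairs = proximity_pairs({"m1": (0.0, 0.0, 0.0),
                         "m2": (0.3, 0.0, 0.0),
                         "m3": (5.0, 0.0, 0.0)})
```

Logged per frame over days, the fraction of time each pair spends in proximity becomes the social-motivation proxy described above.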
Full Text Available Object detection with small computation cost and processing time is a necessity in diverse domains such as traffic analysis, security cameras, and video surveillance. With current advances in technology and decreasing prices of image sensors and video cameras, the resolution of captured images exceeds 1 MP at higher frame rates. This implies a considerable data size that needs to be processed in a very short period of time when real-time operation and data processing are needed. Real-time video processing with high performance can be achieved with GPU technology. The aim of this study is to evaluate the influence of different image and video resolutions on processing time, the number of detected objects, and the accuracy of detection. The MOG2 algorithm is used to process the video input data with the GPU module. A fuzzy inference system is used to evaluate the accuracy of the number of detected objects and to show the difference between CPU and GPU computing methods.
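The study uses OpenCV's GPU MOG2 (a mixture-of-Gaussians background subtractor); as a stdlib-only stand-in, here is the simplest member of the same family, a running-average background model with per-pixel foreground thresholding on flattened grayscale frames:

```python
def subtract_background(frames, alpha=0.1, thresh=30.0):
    # frames: list of flat pixel lists; first frame seeds the background.
    background = [float(p) for p in frames[0]]
    masks = []
    for frame in frames[1:]:
        # Foreground where the pixel deviates strongly from the model.
        mask = [abs(p - b) > thresh for p, b in zip(frame, background)]
        # Update the model only where the pixel looks like background.
        background = [b if m else (1 - alpha) * b + alpha * p
                      for p, b, m in zip(frame, background, mask)]
        masks.append(mask)
    return masks

# Toy 4-pixel frames: pixel 2 jumps by 200 in the second frame.
masks = subtract_background([[10, 10, 10, 10], [10, 10, 210, 10]])
```

MOG2 additionally keeps several Gaussians per pixel with adaptive variances (handling flicker and shadows), but the detect-then-update loop is the same.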
... From the Federal Register Online via the Government Publishing Office ] SECURITIES AND EXCHANGE COMMISSION In the Matter of Digital Video Systems, Inc., Geocom Resources, Inc., and GoldMountain Exploration... of Suspension of Trading It appears to the Securities and Exchange Commission that there is a lack of...
McNeal, Thomas, Jr.; Kearns, Landon
Video streaming can be a very useful tool for educators. It is now possible for a school's technical specialist or classroom teacher to create a streaming server with tools that are available in many classrooms. In this article we describe how we created our video streamer using free software, older computers, and borrowed hardware. The system…
I M.O. Widyantara
Full Text Available A video surveillance system (VSS) is a monitoring system based on IP cameras. A VSS is implemented as live streaming and serves to observe and monitor a site remotely. Typically, the IP cameras in a VSS come with a management software application. However, for ad hoc applications, where the user wants to manage the VSS independently, such management software becomes ineffective. When IP cameras are installed across a large area, it is difficult for an administrator to describe the location of each camera, and monitoring an area of IP cameras also becomes harder. Given these flaws in VSSs, this paper proposes a VSS application for easy monitoring of each IP camera. The proposed application integrates a web-based geographic information system with the Google Maps API (Web-GIS). The VSS application is built with smart features including IP-camera maps, live streaming of events, information in the info window, and marker clusters. Test results showed that the application is able to display all the built features well.
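The "marker cluster" feature groups nearby camera markers into one symbol at low zoom. The Google Maps API offers this client-side; the underlying grid-clustering idea can be sketched server-side like this (coordinates and cell size invented for illustration):

```python
from collections import defaultdict

def cluster_markers(cameras, cell_deg=0.01):
    # cameras: name -> (lat, lon). Cameras whose coordinates fall in the
    # same lat/lon grid cell collapse into one cluster.
    clusters = defaultdict(list)
    for name, (lat, lon) in cameras.items():
        key = (round(lat / cell_deg), round(lon / cell_deg))
        clusters[key].append(name)
    return {k: sorted(v) for k, v in clusters.items()}

clusters = cluster_markers({"cam1": (-8.6500, 115.2100),
                            "cam2": (-8.6502, 115.2103),
                            "cam3": (-8.7900, 115.1700)})
```

Shrinking `cell_deg` as the user zooms in splits clusters back into individual camera markers, matching the behavior of the map widget.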
Billah, Mustain; Waheed, Sajjad; Rahman, Mohammad Motiur
Gastrointestinal polyps are considered to be the precursors of cancer development in most cases. Therefore, early detection and removal of polyps can reduce the possibility of cancer. Video endoscopy is the most-used diagnostic modality for gastrointestinal polyps, but because it is an operator-dependent procedure, several human factors can lead to missed polyps. Computer-aided polyp detection can reduce the polyp miss rate and assist doctors in finding the most important regions to pay attention to. In this paper, an automatic system is proposed to support gastrointestinal polyp detection. The system captures the video streams from endoscopic video and, at the output, shows the identified polyps. Color wavelet (CW) features and convolutional neural network (CNN) features of video frames are extracted and combined, and the combination is used to train a linear support vector machine (SVM). Evaluations on standard public databases show that the proposed system outperforms state-of-the-art methods, attaining an accuracy of 98.65%, sensitivity of 98.79%, and specificity of 98.52%.
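The classification step can be sketched as feature fusion followed by a linear SVM decision, sign(w·x + b). The feature values and weights below are made up for illustration; the real system learns w and b from training data:

```python
def svm_decision(cw_features, cnn_features, weights, bias):
    # Fuse the two feature vectors by concatenation, then apply the
    # trained linear SVM's decision function.
    x = list(cw_features) + list(cnn_features)
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return "polyp" if score > 0 else "non-polyp"

# Hypothetical 2-D CW and 2-D CNN features with invented SVM parameters.
label = svm_decision(cw_features=[0.8, 0.1],
                     cnn_features=[0.9, 0.2],
                     weights=[1.0, -0.5, 1.2, -0.3],
                     bias=-1.0)
```

In practice the fused vector has hundreds of dimensions and the SVM is trained with a library such as scikit-learn or libsvm; only the decision rule is shown here.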
Partha Sindu I Gede
Full Text Available The purpose of this study was to determine the effect of instructional media based on a lecture video and slide synchronization system on the Statistics learning achievement of students in the PTI department. The benefit of this research is to help lecturers in the instructional process and to improve students' learning achievements, leading to better learning outcomes. Students can use instructional media created with the lecture video and slide synchronization system to support more interactive self-learning activities, and can conduct learning activities more efficiently and conducively because the synchronized lecture video and slides assist them in the learning process. The population of this research was all students of semester VI (six) majoring in Informatics Engineering Education. The sample was the students of classes VI B and VI D of the academic year 2016/2017. The type of research used in this study was a quasi-experiment, with a posttest-only nonequivalent control group design. The results concluded that there was a significant effect of applying learning media based on the lecture video and slide synchronization system on the statistics learning results in the PTI department.
Kämmerer, P W; Schneider, D; Pacyna, A A; Daubländer, M
The aim of the present study was to evaluate movement during double aspiration with different manual syringes and one computer-controlled local anesthesia delivery system (C-CLAD). With five different devices (two disposable syringes (2 and 5 ml), two aspirating syringes (active and passive), and one C-CLAD), double aspiration was simulated in a phantom model. Two experienced and two inexperienced test persons carried out double aspiration with the injection systems at the right and left phantom mandibles at three different inclination angles (n = 24 × 5 × 2 for each system). 3D divergences of the needle between aspiration procedures (mm) were measured with two video cameras. Average movements of 2.85 mm (SD 1.63) for the 2-ml disposable syringe, 2.36 mm (SD 0.86) for the 5-ml syringe, 2.45 mm (SD 0.9) for the active-aspirating syringe, 2.01 mm (SD 0.7) for the passive-aspirating syringe, and 0.91 mm (SD 0.63) for the C-CLAD were seen. Movement was significantly less for the C-CLAD than for the other systems, as was movement of the needle in the soft tissue, and a clear difference in syringe movement was seen between the manual systems and the C-CLAD. Launching the aspiration with a foot pedal in computer-assisted anesthesia leads to only minor movement. To address the problem of movement during aspiration, with its possibly increased false-negative results, a C-CLAD seems favorable.
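The reported quantity is the 3D divergence of the needle between the two aspiration procedures; once the two-camera reconstruction yields two 3D needle-tip positions (the coordinates below are assumed), the measure is simply their Euclidean distance:

```python
import math

def needle_divergence(p_first, p_second):
    # 3D displacement of the needle tip between the first and second
    # aspiration, in the same units as the input coordinates (mm here).
    return math.dist(p_first, p_second)

# Hypothetical reconstructed positions, mm.
d = needle_divergence((0.0, 0.0, 0.0), (1.2, 2.0, 1.5))
```

A displacement near 2.8 mm matches the scale reported for the 2-ml disposable syringe, while the C-CLAD's foot-pedal aspiration kept the average below 1 mm.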
Bornoe, Nis; Barkhuus, Louise
Microblogging is a recently popular phenomenon, and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.
Alqadoumi, Omar Mohamed
Previous studies in the field of e-tutoring dealt either with asynchronous tutoring or synchronous conferencing as modes for providing e-tutoring services to English learners. This qualitative research study reports the experiences of Arab ESL tutees with both asynchronous tutoring and synchronous conferencing. It also reports the experiences of…
A. A. SHAFIE
Full Text Available Traffic signal lights can be optimized using vehicle-flow statistics obtained by the Smart Video Surveillance Software (SVSS). This research focuses on an efficient traffic control system that detects and counts vehicle numbers at various times and locations. At present, one of the biggest problems in major cities in any country is the traffic jam during office hours and office break hours. Sometimes the green light stays on even though no vehicles are coming; similarly, long queues of vehicles wait even though the crossing road is empty, because the signal timing was selected without proper investigation of vehicle flow. This can be handled by adjusting the vehicle passing time using our developed SVSS. A number of experimental results on vehicle flows are discussed graphically in this research in order to test the feasibility of the developed system. Finally, an adaptive background model is proposed in SVSS in order to successfully detect target objects such as motorbikes, cars, buses, etc.
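One simple way to turn the counted flows into signal timing (an assumed illustration, not SVSS's actual controller) is to split a fixed cycle's green time across approaches in proportion to their vehicle counts, with a minimum green per approach:

```python
def allocate_green(counts, cycle_s=120, min_green_s=10):
    # counts: approach name -> vehicles observed by the video system.
    # Each approach gets min_green_s plus a share of the remaining time
    # proportional to its demand.
    total = sum(counts.values())
    spare = cycle_s - min_green_s * len(counts)
    return {road: min_green_s + (spare * n / total if total
                                 else spare / len(counts))
            for road, n in counts.items()}

greens = allocate_green({"north": 30, "east": 10})
```

With 30 vehicles counted on the north approach and 10 on the east, the north approach receives three times the spare green time, directly addressing the "green light with no vehicles" failure mode described above.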
Full Text Available As audio visual communication technologies are installed in prisons, these spaces of incarceration are networked with courtrooms and other non-contiguous spaces, potentially facilitating a process of permeability. Jurisdictions around the world are embracing video conferencing and the technology is becoming a major interface for prisoners’ interactions with courts and legal advisers. In this paper, I draw on fieldwork interviews with prisoners from two correction centres in New South Wales, Australia, to understand their subjective and sensorial experiences of using video links as a portal to the outside world. These interviews raised many issues including audio permeability: a soundtrack of incarceration sometimes infiltrates into the prison video studio and then the remote courtroom, framing the prisoner in the context of their detention, intruding on legal process, and affecting prisoners’ comprehension and participation.
Full Text Available The scope of this paper is a video surveillance system consisting of three principal modules: segmentation, vehicle classification, and vehicle counting. The segmentation is based on background subtraction using the Codebook method. This step aims to define the regions of interest associated with vehicles. To classify vehicles by type, our system uses histograms of oriented gradients followed by a support vector machine. Counting and tracking vehicles is the last task performed. Partial occlusion decreases the accuracy of vehicle segmentation and classification, which directly impacts the robustness of a video surveillance system. Therefore, a novel method to handle partial occlusions based on the vehicle classification process has been developed. The results achieved show that the accuracy of vehicle counting and classification exceeds that measured in some existing systems.
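The counting stage is commonly implemented as a virtual-line crossing test on tracked centroids (a standard sketch, not necessarily this paper's exact rule): a vehicle is counted once when its centroid crosses the line between consecutive frames.

```python
def count_crossings(tracks, line_y=100):
    # tracks: one list of per-frame (x, y) centroids per tracked vehicle.
    count = 0
    for centroids in tracks:
        for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]):
            if y0 < line_y <= y1:      # crossed the line moving down
                count += 1
                break                  # count each vehicle at most once
    return count

n = count_crossings([
    [(50, 80), (50, 95), (50, 110)],   # crosses y=100: counted
    [(20, 120), (20, 130)],            # already past the line: not counted
])
```

Handling partial occlusion matters precisely here: a merged blob that splits into two vehicles would otherwise be counted once instead of twice.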
Full Text Available Abstract HD video applications can be represented as multiple tasks consisting of tightly coupled threads. Each task requires massive computation, and their communication can be categorized as asynchronous distributed small-data transfers and large streaming-data transfers. In this paper, we propose a high-performance programmable video platform that consists of four processing element (PE) clusters. Each PE cluster runs one task of the video application with RISC cores, a hardware operating system kernel (HOSK), and task-specific accelerators. PE clusters are connected with two separate point-to-point networks: one for asynchronous distributed control and the other for heavy streaming-data transfers among the tasks. Furthermore, we developed an application mapping framework with which parallel executable code can be obtained from a manually developed SystemC model of the target application without knowing the detailed architecture of the video platform. To show the effectiveness of the platform and its mapping framework, we also present mapping results for an H.264/AVC 720p decoder/encoder and a VC-1 720p decoder at 30 fps, assuming that the platform operates at 200 MHz.
Moutakki Zakaria; Ouloul Imad Mohamed; Afdel Karim; Amghar Abdellah
The scope of this paper is a video surveillance system constituted of three principal modules, segmentation module, vehicle classification and vehicle counting. The segmentation is based on a background subtraction by using the Codebooks method. This step aims to define the regions of interest associated with vehicles. To classify vehicles in their type, our system uses the histograms of oriented gradient followed by support vector machine. Counting and tracking vehicles will be the last task...
Full Text Available This report examines three text-based conferencing products: WowBB, Invision Power Board, and vBulletin. Their selection was prompted by a feature-by-feature comparison of the same products on the WowBB website. The comparison chart painted a misleading impression of WowBB's features in relation to the other two products, so the evaluation team undertook a more comprehensive and impartial comparison using the categories and criteria for online software evaluation developed by the American Society for Training and Development (ASTD). The findings are summarised in terms of the products' pricing, common features/functions, and differentiating features.
The objective of this activity was to record video that could be used for controlled : evaluation of video image vehicle detection system (VIVDS) products and software upgrades to : existing products based on a list of conditions that might be diffic...
This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. By far the most informative analog and digital video reference available, it covers the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a one-stop reference guide for the various digital video technologies. The fourth edition is completely updated with all-new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and streaming video (video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats.
Gettings, Sheryl; Franco, Fabia; Santosh, Paramala J
Siblings of children with chronic illness and disabilities are at increased risk of negative psychological effects. Support groups enable them to access psycho-education and social support. Barriers to this can include the distance they have to travel to meet face-to-face. Audio-conferencing, whereby three or more people can connect by telephone in different locations, is an efficient means of groups meeting and warrants exploration in this healthcare context. This study explored the feasibility of audio-conferencing as a method of facilitating sibling support groups. A longitudinal design was adopted. Participants were six siblings (aged eight to thirteen years) and parents of children with complex neurodevelopmental disorders attending the Centre for Interventional Paediatric Psychopharmacology (CIPP). Four of the eight one-hour weekly sessions were held face-to-face and the other four using audio-conferencing. Pre- and post-intervention questionnaires and interviews were completed and three to six month follow-up interviews were carried out. The sessions were audio-recorded, transcribed and thematic analysis was undertaken. Audio-conferencing as a form of telemedicine was acceptable to all six participants and was effective in facilitating sibling support groups. Audio-conferencing can overcome geographical barriers to children being able to receive group therapeutic healthcare interventions such as social support and psycho-education. Psychopathology ratings increased post-intervention in some participants. Siblings reported that communication between siblings and their family members increased and siblings' social network widened. Audio-conferencing is an acceptable, feasible and effective method of facilitating sibling support groups. Siblings' clear accounts of neuropsychiatric symptoms render them reliable informants. Systematic assessment of siblings' needs and strengthened links between Child and Adolescent Mental Health Services, school counsellors and
Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin
Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR uses RGB video sensors, HAR can be greatly improved with depth video sensors, which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn their environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced, which are further used for activity recognition and generating life logs. The life-logging system is divided into two processes. First, the training system includes data collection using a depth camera, feature extraction, and training for each activity via Hidden Markov Models. Second, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates compared with the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.
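The recognition engine's core decision can be sketched with the standard HMM forward algorithm: score the observed (quantized) feature sequence under each activity's model and pick the best. The two tiny two-state models below are invented for illustration, not trained parameters from the paper:

```python
def forward_likelihood(obs, start, trans, emit):
    # Standard forward algorithm for a discrete-observation HMM.
    alpha = [start[s] * emit[s][obs[0]] for s in range(len(start))]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(len(alpha)))
                 * emit[j][o] for j in range(len(start))]
    return sum(alpha)

# Invented models: "walking" mostly emits symbol 0, "sitting" symbol 1.
models = {
    "walking": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]],
                [[0.9, 0.1], [0.7, 0.3]]),
    "sitting": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]],
                [[0.1, 0.9], [0.3, 0.7]]),
}

def recognize(obs):
    return max(models, key=lambda m: forward_likelihood(obs, *models[m]))

activity = recognize([0, 0, 1, 0])
```

Real systems work in log space and with longer observation alphabets derived from the skeleton features, but the maximum-likelihood selection step is the same.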
Full Text Available The design of smart video surveillance systems is an active research field among the computer vision community because of their ability to perform automatic scene analysis by selecting and tracking the objects of interest. In this paper, we present the design and implementation of an FPGA-based standalone working prototype system for real-time tracking of an object of interest in live video streams for such systems. In addition to real-time tracking of the object of interest, the implemented system is also capable of providing purposive automatic camera movement (pan-tilt) in the direction determined by movement of the tracked object. The complete system, including camera interface, DDR2 external memory interface controller, designed object tracking VLSI architecture, camera movement controller and display interface, has been implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA Board. Our proposed, designed and implemented system robustly tracks the target object present in the scene in real time for standard PAL (720 × 576) resolution color video and automatically controls camera movement in the direction determined by the movement of the tracked object.
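The camera-movement rule can be sketched in software (the hardware implements this in VLSI; the dead-zone size is an assumed parameter): compare the tracked object's centroid with the frame centre and issue pan/tilt steps when it drifts too far.

```python
def pan_tilt_command(centroid, frame=(720, 576), dead_zone=40):
    # Returns (pan, tilt) commands that re-centre the tracked object.
    cx, cy = frame[0] / 2, frame[1] / 2
    dx, dy = centroid[0] - cx, centroid[1] - cy
    pan = "right" if dx > dead_zone else "left" if dx < -dead_zone else "hold"
    tilt = "down" if dy > dead_zone else "up" if dy < -dead_zone else "hold"
    return pan, tilt

# Object drifting toward the right edge of a PAL frame.
cmd = pan_tilt_command((600, 300))
```

The dead zone prevents the camera from chattering when the object sits near the centre; tightening it makes tracking more aggressive at the cost of more motor activity.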
In 2009, the Texas Transportation Institute produced for the Texas Department of Transportation a document : called Video over IP Design Guidebook. This report summarizes an implementation of that project in the : form of a workshop. The workshop was...
Gregorio, Massimo De
In this paper we present an intelligent active video surveillance system currently adopted in two different application domains: railway tunnels and outdoor storage areas. The system takes advantage of the integration of Artificial Neural Networks (ANN) and symbolic Artificial Intelligence (AI). This hybrid system is formed by virtual neural sensors (implemented as WiSARD-like systems) and BDI agents. The coupling of virtual neural sensors with symbolic reasoning for interpreting their outputs makes this approach both very light from a computational and hardware point of view, and rather robust in performance. The system works in different scenarios and in difficult light conditions.
Ilgner, Justus; Park, Jonas Jae-Hyun; Labbé, Daniel; Westhofen, Martin
Introduction: While there is an increasing demand for minimally invasive operative techniques in ear, nose and throat surgery, these operations are difficult for junior doctors to learn and demanding for experienced surgeons to supervise. The motivation for this study was to integrate high-definition (HD) stereoscopic video monitoring into microscopic surgery in order to facilitate teaching interaction between senior and junior surgeon. Material and methods: We attached a 1280x1024 HD stereo camera (TrueVision Systems, Inc., Santa Barbara, CA, USA) to an operating microscope (Zeiss ProMagis, Zeiss Co., Oberkochen, Germany), whose images were processed online by a PC workstation with dual Intel® Xeon® CPUs (Intel Co., Santa Clara, CA). The live image was displayed by two LCD projectors at 1280x768 pixels on a 1.25 m rear-projection screen through polarized filters. While the junior surgeon performed the surgical procedure based on the displayed stereoscopic image, all other participants (senior surgeon, nurse and medical students) shared the same stereoscopic image from the screen. Results: With the basic setup performed only once on the day before surgery, fine adjustments required about 10 minutes extra during the operation schedule, which fitted into the interval between patients and thus did not prolong operation times. As all relevant features of the operative field were demonstrated on one large screen, four major effects were obtained: A) Stereoscopy facilitated orientation for the junior surgeon as well as for medical students. B) The stereoscopic image served as an unequivocal guide for the senior surgeon to demonstrate the next surgical steps to the junior colleague. C) The theatre nurse shared the same image, anticipating the next instruments needed. D) Medical students instantly shared the information given by all staff and the image, avoiding the need for an extra teaching session. Conclusion: High definition
Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.
Xie, Ruobing; Li, Li; Jin, Weiqi; Guo, Hong
It is prevalent for low-light night-vision helmets to equip a binocular viewer with image intensifiers. Such equipment not only provides night vision ability but also a sense of stereo vision, enabling better perception and understanding of the visual field. However, since the image intensifier is designed for direct observation, it is difficult to apply modern image processing technology. As a result, developing digital video technology for night vision is of great significance. In this paper, we design a low-light night-vision helmet with a digital imaging device. It consists of three parts: a set of two low-illumination CMOS cameras, a binocular OLED micro display and an image processing PCB. Stereopsis is achieved through the binocular OLED micro display. We choose the Speeded-Up Robust Features (SURF) algorithm for image registration. Based on the image matching information and the cameras' calibration parameters, disparity can be calculated in real time. We then elaborately derive the constraints of binocular stereo display. The sense of stereo vision is obtained by dynamically adjusting the content of the binocular OLED micro display. There is sufficient space for function extensions in our system. The performance of this low-light night-vision helmet can be further enhanced in combination with HDR technology, image fusion technology, etc.
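The disparity-to-depth step that follows feature matching can be sketched numerically. The abstract specifies SURF matching and calibration-derived parameters; the sketch below is a simplified illustration that assumes rectified images, and the focal length and baseline values are hypothetical examples, not taken from the paper.

```python
import numpy as np

def depths_from_matches(x_left, x_right, focal_px, baseline_m):
    """Triangulate depth from horizontal disparities of matched features.

    x_left, x_right: x-coordinates (pixels) of the same features in the
    rectified left/right images; focal_px and baseline_m would come from
    the cameras' calibration parameters.
    """
    disparity = np.asarray(x_left, float) - np.asarray(x_right, float)
    return focal_px * baseline_m / disparity  # Z = f * B / d

# Illustrative values: a feature shifted 32 px between views,
# f = 800 px, baseline = 6 cm  ->  depth = 800 * 0.06 / 32 = 1.5 m
z = depths_from_matches([432.0], [400.0], focal_px=800.0, baseline_m=0.06)
```

In the real system the matched coordinates would come from SURF keypoint pairs, and the computed depths would drive what is shown on the binocular OLED micro display.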
Xu, Huihui; Jiang, Mingyan
Two-dimensional to three-dimensional (3-D) conversion in 3-D video applications has attracted great attention as it can alleviate the problem of stereoscopic content shortage. Depth estimation is an essential part of this conversion since the depth accuracy directly affects the quality of a stereoscopic image. In order to generate a perceptually reasonable depth map, a comprehensive depth estimation algorithm that considers the scenario type is presented. Based on the human visual system mechanism, which is sensitive to a change in the scenario, this study classifies the type of scenario into four classes according to the relationship between the movements of the camera and the object, and then leverages different strategies on the basis of the scenario type. The proposed strategies efficiently extract the depth information from different scenarios. In addition, the depth generation method for a scenario in which there is no motion, neither of the object nor the camera, is also suitable for the single image. Qualitative and quantitative evaluation results demonstrate that the proposed depth estimation algorithm is very effective for generating stereoscopic content and providing a realistic visual experience.
Mitani, Kohji; Sugawara, Masayuki; Shimamoto, Hiroshi; Yamashita, Takayuki; Okano, Fumio
An experimental ultrahigh-definition color video camera system with 7680(H) × 4320(V) pixels has been developed using four 8-million-pixel CCDs. The 8-million-pixel CCD, with a progressive scanning rate of 60 frames per second, has 4046(H) × 2048(V) effective imaging pixels, each 8.4 µm². We applied the four-imager pickup method to increase the camera's resolution. This involves attaching four CCDs to a special color-separation prism. Two CCDs are used for the green image, and the other two are used for red and blue. The spatial image sampling pattern of these CCDs relative to the optical image is equivalent to one with 32 million pixels in the Bayer-pattern color filter. The prototype camera attains a limiting resolution of more than 2700 TV lines both horizontally and vertically, which is higher than that of a single 8-million-pixel CCD. The sensitivity of the camera is 2000 lux, F 2.8 at approx. 50 dB of dark-noise level on the HDTV format. Its other specifications are a dynamic range of 200%, a power consumption of about 600 W and a weight, with lens, of 76 kg.
Clynick, Tony J.
A prototype laser video projector which uses electronic, optical, and mechanical means to project a television picture is described. With the primary goal of commercial viability, the price/performance ratio of the chosen means is critical. The fundamental requirement has been to achieve high-brightness, high-definition images of at least movie-theater size, at a cost comparable with other existing large-screen video projection technologies, while having the opportunity of developing and exploiting the unique properties of the laser-projected image, such as its infinite depth of field. Two argon lasers are used in combination with a dye laser to achieve a range of colors which, despite not being identical to those of a CRT, prove to be subjectively acceptable. Acousto-optic modulation in combination with a rotary polygon scanner, digital video line stores, novel specialized electro-optics, and a galvanometric frame scanner form the basis of the projection technique, achieving a 30 MHz video bandwidth, high-definition scan rates (1125/60 and 1250/50), high contrast ratio, and good optical efficiency. Auditorium projection of HDTV pictures wider than 20 meters is possible. Applications including 360-degree projection and 3-D video provide further scope for exploitation of the HD laser video projector.
We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.
Monica Adams, head librarian at Robinson Secondary in Fairfax County, Virginia, states that librarians should have the technical knowledge to support projects related to digital video editing. The process of digital video editing, along with the cables, storage issues, and the computer system and software required, is described.
Wang, Shuangbao; Kelly, William
In this paper, we present a novel system, inVideo, for video data analytics, and its use in transforming linear videos into interactive learning objects. InVideo is able to analyze video content automatically without the need for initial viewing by a human. Using a highly efficient video indexing engine we developed, the system is able to analyze…
Wilson, Rhoda M.
Human Systems Integration Report. Low back pain (LBP) and work-related musculoskeletal disorders (WMSDs) can lead to employee absenteeism, sick leave, and permanent disability. Over the years, much work has been done in examining physical exposure to ergonomic risks. The current research presents a new approach for assessing WMSD risk during lifting-related tasks that combines traditional observational methods with video recording methods. One particular application area, the Future Com...
Bourgine, Paul; Collet, Pierre
This book contains the proceedings as well as invited papers for the first annual conference of the UNESCO Unitwin Complex System Digital Campus (CSDC), which is an international initiative gathering 120 Universities on four continents, and structured in ten E-Departments. First Complex Systems Digital Campus World E-Conference 2015 features chapters from the latest research results on theoretical questions of complex systems and their experimental domains. The content contained bridges the gap between the individual and the collective within complex systems science and new integrative sciences on topics such as: genes to organisms to ecosystems, atoms to materials to products, and digital media to the Internet. The conference breaks new ground through a dedicated video-conferencing system – a concept at the heart of the international UNESCO UniTwin, embracing scientists from low-income and distant countries. This book promotes an integrated system of research, education, and training. It also aims at contr...
Dual-mode wireless video transmission has two major problems. First, the time-delay difference between the two links causes frame errors when decoding asynchronously received streams; second, the bandwidth mismatch between the two networks causes a scheduling problem. In order to solve these two problems, a TD-SCDMA/CDMA2000 1x dual-mode wireless video transmission design method is proposed. To solve the decoding frame errors, the design adds frame identification and packet preprocessing at the sending end and synchronized combination at the receiving end. To solve the scheduling problem, a cooperative wireless-channel and video data transmission scheduling management algorithm is proposed.
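The frame-identification and receive-side combination idea can be sketched as follows; the packet format and function names are illustrative assumptions, not the paper's actual design.

```python
def packetize(frames):
    # Tag each frame with a sequence number before splitting traffic
    # across the TD-SCDMA and CDMA2000 1x links.
    return [(seq, frame) for seq, frame in enumerate(frames)]

def synchronize(link_a, link_b):
    """Combine packets from the two links at the receiving end: drop
    duplicates and replay frames in sequence order, so differing link
    delays no longer cause decoding frame errors."""
    merged = {}
    for seq, frame in link_a + link_b:
        merged.setdefault(seq, frame)
    return [merged[seq] for seq in sorted(merged)]
```

Even if one link delivers its share of the packets much later than the other, sorting by sequence number restores the original frame order before decoding.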
Valli, D.; Ganesan, K.
Chaos-based cryptosystems are an efficient method to achieve improved speed and highly secure multimedia encryption because of their elegant features, such as randomness, mixing, ergodicity, and sensitivity to initial conditions and control parameters. In this paper, two chaos-based cryptosystems are proposed: one based on a higher-dimensional 12D chaotic map and the other on the Ikeda delay differential equation (DDE), both suitable for designing a real-time secure symmetric video encryption scheme. These encryption schemes employ a substitution box (S-box) to diffuse the relationship between pixels of the plain video and the cipher video, along with diffusion of the current input pixel with the previous cipher pixel, called cipher block chaining (CBC). The proposed method enhances robustness against statistical, differential and chosen/known plain-text attacks. Detailed analysis is carried out in this paper to demonstrate the security and uniqueness of the proposed scheme.
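The CBC-style pixel diffusion can be illustrated with a minimal sketch. The 1-D logistic map below stands in for the paper's 12D map and Ikeda DDE keystream generators, and the S-box stage is omitted for brevity; all parameter values are illustrative, not from the paper.

```python
def logistic_stream(x0, r, n):
    # Chaotic keystream bytes from the logistic map x -> r*x*(1-x).
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return out

def encrypt(plain, x0=0.3141, r=3.99, iv=0x5A):
    ks = logistic_stream(x0, r, len(plain))
    prev, cipher = iv, []
    for p, k in zip(plain, ks):
        # Diffuse with the keystream AND the previous cipher byte (CBC),
        # so one plaintext change propagates through the rest of the stream.
        c = (p ^ k ^ prev) & 0xFF
        cipher.append(c)
        prev = c
    return bytes(cipher)

def decrypt(cipher, x0=0.3141, r=3.99, iv=0x5A):
    ks = logistic_stream(x0, r, len(cipher))
    prev, plain = iv, []
    for c, k in zip(cipher, ks):
        plain.append((c ^ k ^ prev) & 0xFF)
        prev = c
    return bytes(plain)
```

The chaining is what gives the differential-attack resistance the abstract mentions: identical pixel values encrypt to different cipher bytes depending on everything that came before.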
In the past decade, the display format has progressed from HD (High Definition) through Full HD (1920x1080) to UHD (4k x 2k), steering the display industry in two main directions: liquid crystal displays (LCD) from 10 inches to 100 inches and more, and projectors. Although LCDs are popular in the market, their production requires heavy capital investment and gives little consideration to environmental pollution and protection. Projection systems are worth considering due to wider viewing access, flexibility in location, energy saving, and environmental protection. The topic is to design and fabricate a short-throw liquid crystal on silicon (LCoS) projection system for cinema. It provides a projection lens system, including a tele-centric lens fitted to the emitting LCoS panel to collimate light and enlarge the field angle. The optical path is then guided by a symmetric lens. Light from the LCoS panel passes through the lens, hits and reflects off an aspherical mirror, and forms a low-distortion image on a blank wall or screen for home cinema. The throw ratio is less than 0.33.
Naito, Hiromichi; Guyette, Francis X; Martin-Gill, Christian; Callaway, Clifton W
Video laryngoscopy (VL) is a technical adjunct to facilitate endotracheal intubation (ETI). VL also provides objective data for training and quality improvement, allowing evaluation of the technique and airway conditions during ETI. Previous studies of factors associated with ETI success or failure are limited by insufficient nomenclature, individual recall bias and self-report. We tested whether covariates in prehospital VL recorded data were associated with ETI success. We also measured the association between time and clinical variables. A retrospective review was conducted in a non-physician-staffed helicopter emergency medical service system. ETI was typically performed using sedation and neuromuscular blockade under protocolized orders. We obtained process and outcome variables from digitally recorded VL data. Patient characteristics were also obtained from the emergency medical service record and linked to the VL recorded data. The primary outcome was to identify VL covariates associated with successful ETI attempts. Among 304 VL recorded ETI attempts in 268 patients, ETI succeeded for 244 attempts and failed for 60 attempts (first-pass success rate 82%, overall success rate 94%). The laryngoscope blade tip usually moved from a shallow position in the oropharynx to the vallecula. In the multivariable logistic regression analysis, attempt time (p = 0.02; odds ratio [OR] 0.99) and Cormack-Lehane view were associated with outcome: a poorer Cormack-Lehane view and longer ETI attempt time were negatively associated with successful ETI attempts. An initially shallow blade tip position may be associated with longer ETI time. VL is useful for measuring and describing multiple factors of ETI and can provide valuable data.
Coffin, Caroline; Hewings, Ann; North, Sarah
Learning to argue is a key academic purpose for both first and second language students. It has been claimed that computer mediated asynchronous text-based conferencing is a useful medium for developing argumentation skills (Andriessen, Baker, & Suthers, 2003). This paper reports on two research studies which explore this claim. One study focused…
Bales, John W.
The F64 frame grabber is a high-performance video image acquisition and processing board utilizing the TMS320C40 and TMS34020 processors. The hardware is designed for the 16-bit ISA bus and supports multiple digital or analog cameras. It has an acquisition rate of 40 million pixels per second, with a variable sampling frequency of 510 kHz to 40 MHz. The board has a 4 MB frame buffer memory expandable to 32 MB, and supports simultaneous acquisition and processing. It supports both VGA and RGB displays, and accepts all analog and digital video input standards.
Johnson, Don; Johnson, Mike
The process of digital capture, editing, and archiving video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software have become more reasonable to purchase, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk, and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex. Outside video of the procedure is recorded on digital tape with a digital video camera. The camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive. This finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. The techniques of backup and archiving the completed projects and files also are outlined. The uses of digital video for education and the formats that can be used in PowerPoint presentations are discussed.
Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih
Video recording is an essential property of new generation military imaging systems. Playback of the stored video on the same device is also desirable as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storing, decoding, playback and other system functions, on a single programmable chip, such as an FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using FPGA fabric and the Altera NIOS II soft processor. Many subblocks that are used in video encoding are also reused during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed using FPGA fabric, while blocks such as SD card write/read, H.264 syntax decoding and CAVLC decoding are done on the NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested using a 640x480 25 fps thermal camera on a CYCLONE V FPGA, Altera's lowest-power FPGA family, and consumes less than 40% of CYCLONE V 5CEFA7 FPGA resources on average.
Chen, Liang; Zhou, Yipeng; Chiu, Dah Ming
Bandwidth consumption is a significant concern for online video service providers. Practical video streaming systems usually use some form of HTTP streaming (progressive download) to let users download the video at a faster rate than the video bitrate. Since users may quit before viewing the complete video, however, much of the downloaded video will be "wasted". To the extent that users' departure behavior can be predicted, we develop smart streaming that can be used to improve user QoE with ...
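The core trade-off can be sketched under simple assumptions: given a predictor of whether the user is still watching t seconds from now (hypothetical here, e.g. fitted from viewing logs), the streaming side can cap how far ahead it pushes video so that the expected waste stays within a budget. The names and the model below are illustrative, not the paper's method.

```python
def expected_wasted_seconds(buffer_ahead_s, survival):
    """survival(t): probability the user is still watching t seconds from
    now. A prefetched second at offset t is wasted if the user quits
    before consuming it, i.e. with probability 1 - survival(t)."""
    return sum(1.0 - survival(t) for t in range(buffer_ahead_s))

def pick_buffer(survival, waste_budget_s, max_ahead_s=120):
    # Largest prefetch window whose expected waste stays within budget:
    # users likely to quit get a short buffer, committed viewers a long one.
    best = 0
    for ahead in range(1, max_ahead_s + 1):
        if expected_wasted_seconds(ahead, survival) <= waste_budget_s:
            best = ahead
    return best
```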
Darabi, K.; Ghinea, G.
In this paper an expert-based model for generation of personalized video summaries is suggested. The video frames are initially scored and annotated by multiple video experts. Thereafter, the scores for the video segments that have been assigned the higher priorities by end users will be upgraded. Considering the required summary length, the highest scored video frames will be inserted into a personalized final summary. For evaluation purposes, the video summaries generated by our system have...
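The selection step can be sketched as follows, assuming frames already carry mean expert scores; the boost mechanism and names are illustrative assumptions about how user priorities might upgrade segment scores.

```python
def summarize(frame_scores, boosted_frames, boost, max_frames):
    """frame_scores: {frame_index: mean expert score}. Frames inside the
    segments a user prioritized get their score upgraded by `boost`,
    then the highest-scored frames fill the summary up to the required
    length."""
    adjusted = {f: s + (boost if f in boosted_frames else 0.0)
                for f, s in frame_scores.items()}
    chosen = sorted(adjusted, key=adjusted.get, reverse=True)[:max_frames]
    return sorted(chosen)  # restore temporal order for playback
```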
U.S. Geological Survey, Department of the Interior — These data are the trackline from the seafloor photograph and video survey conducted September 2004 using the mini-SeaBOSS sampling system on the R/V Rafael in...
Ridgway, James; Stannett, Mike
Although techniques for separate image and audio steganography are widely known, relatively little has been described concerning the hiding of information within video streams ("video steganography"). In this paper we review the current state of the art in this field, and describe the key issues we have encountered in developing a practical video steganography system. A supporting video is also available online at http://www.youtube.com/watch?v=YhnlHmZolRM
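As a concrete example of the kind of primitive such systems build on, here is a minimal least-significant-bit (LSB) embed/extract for a single frame; the paper does not specify its hiding scheme, so this is a generic illustration, not the authors' method.

```python
import numpy as np

def embed_bits(frame, bits):
    # Hide one payload bit in the least-significant bit of each pixel;
    # the pixel value changes by at most 1, which is visually invisible.
    flat = frame.flatten()                      # copy of the frame
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(frame.shape)

def extract_bits(frame, n):
    # Recover the first n payload bits from the pixel LSBs.
    return frame.flatten()[:n] & 1
```

Note that raw-pixel LSBs do not survive lossy compression such as H.264, which is one reason hiding data in video streams is harder than the still-image case.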
Vision-based monitoring systems using visible-spectrum (regular) video cameras can complement or substitute conventional sensors and provide rich positional and classification data. Although new camera technologies, including thermal video sensors, may improve the performance of digital video-based sensors, their performance under various conditions has rarely been evaluated at multimodal facilities. The purpose of this research is to integrate existing computer vision methods for automated data collection and evaluate the detection, classification, and speed measurement performance of thermal video sensors under varying lighting and temperature conditions. Thermal and regular video data were collected simultaneously under different conditions across multiple sites. Although the regular video sensor narrowly outperformed the thermal sensor during daytime, the performance of the thermal sensor is significantly better for low-visibility and shadow conditions, particularly for pedestrians and cyclists. Retraining the algorithm on thermal data yielded an improvement in the global accuracy of 48%. Thermal speed measurements were consistently more accurate than for the regular video at daytime and nighttime. Thermal video is insensitive to lighting interference and pavement temperature, solves issues associated with visible light cameras for traffic data collection, and offers other benefits such as privacy, insensitivity to glare, storage space, and lower processing requirements.
Squire, Kurt D.
Recently, attention has been paid to computer and video games as a medium for learning. This article provides a way of conceptualizing them as possibility spaces for learning. It provides an overview of two research programs: (1) an after-school program using commercial games to develop deep expertise in game play and game creation, and (2) an…
Jahn, H.; Oertel, D.
The present analysis deals with the influence of the video-channel harmonic response characteristic of a push-broom scanner on the spatial transmission function and the signal-to-noise ratio (SNR). It is shown that when detector noise is prevalent, the video frequency bandwidth influences both the transmission function and the SNR, but it influences only the transmission function when photon noise prevails.
Text-based conferencing can be both asynchronous (i.e., participants log into the conference at separate times) and synchronous (i.e., interaction takes place in real time). It is thus subject to the same wide variation as the online audio- and video-conferencing methods (see the earlier Reports in this series). Synchronous text-based approaches (e.g., online chat groups and instant messaging systems) are highly popular among online users generally, owing to their ability to bring together special-interest groups from around the world without cost. In distance education (DE), however, synchronous chat methods are less widely used, owing in part to the problems of arranging for working adults in different time zones to join a discussion group simultaneously. Instant text messaging is more popular among DE users in view of the choice it provides between responding to a message immediately (synchronous communication) or after a delay (asynchronous). The different synchronous and asynchronous approaches are likely to become more widely used in parallel with one another, as they are integrated in individual product packages.
Sendra, Sandra; Lloret, Jaime; Jimenez, Jose Miguel; Rodrigues, Joel J P C
Video surveillance is needed to control many activities performed in underwater environments. The use of wired media can be a problem since material specially designed for underwater environments is very expensive. In order to transmit images and videos wirelessly under water, three main technologies can be used: acoustic waves, which do not provide high bandwidth; optical signals, which offer high transfer rates but are severely penalized by light dispersion in water, so the maximum distance is very small; and electromagnetic (EM) waves, which can provide enough bandwidth for video delivery. Where the distance between transmitter and receiver is short, EM waves are an interesting option since they provide data transfer rates high enough to transmit videos in high resolution. This paper presents a practical study of the behavior of EM waves at 2.4 GHz in freshwater underwater environments. First, we discuss the minimum requirements of a network to allow video delivery. From these results, we measure the maximum distance between nodes and the round trip time (RTT) value depending on several parameters such as data transfer rate, signal modulation, working frequency, and water temperature. The results are statistically analyzed to determine their relation. Finally, the EM waves' behavior is modeled by a set of equations. The results show that some combinations of working frequency, modulation, transfer rate and temperature offer better results than others. Our work shows that short-distance communication with high data transfer rates is feasible.
In this paper we present an automatic enhanced video display and navigation capability for networked streaming video and networked video playlists. Our proposed method uses Synchronized Multimedia Integration Language (SMIL) as the presentation language and Real Time Streaming Protocol (RTSP) as the network remote control protocol to automatically generate an "enhanced video strip" display for easy navigation. We propose and describe two approaches - a smart client approach and a smart server approach. We also describe a prototype system implementation of our proposed approach.
Racca, Roberto G.; Scotten, Larry N.
This article describes a method that allows the digital recording of sequences of three black and white images at rates of several thousand frames per second using a system consisting of an ordinary CCD camcorder, three flash units with color filters, a PC-based frame grabber board and some additional electronics. The maximum framing rate is determined by the duration of the flashtube emission, and for common photographic flash units lasting about 20 µs it can exceed 10,000 frames per second in actual use. The subject under study is strobe-illuminated using a red, a green and a blue flash unit controlled by a special sequencer, and the three images are captured by a color CCD camera on a single video field. Color is used as the distinguishing parameter that allows the overlaid exposures to be resolved. The video output for that particular field will contain three individual scenes, one for each primary color component, which potentially can be resolved with no crosstalk between them. The output is electronically decoded into the primary color channels, frame grabbed and stored into digital memory, yielding three time-resolved images of the subject. A synchronization pulse provided by the flash sequencer triggers the frame grabbing so that the correct video field is acquired. A scheme involving the use of videotape as intermediate storage allows the frame grabbing to be performed using a monochrome video digitizer. Ideally each flash-illuminated scene would be confined to one color channel, but in practice various factors, both optical and electronic, affect color separation. Correction equations have been derived that counteract these effects in the digitized images and minimize 'ghosting' between frames. Once the appropriate coefficients have been established through a calibration procedure that needs to be performed only once for a given configuration of the equipment, the correction process is carried out transparently in software every time a
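The crosstalk correction amounts to inverting a per-pixel linear mixing model. The sketch below assumes a hypothetical 3x3 crosstalk matrix M (entry [i][j] = fraction of flash j's scene leaking into sensor channel i) obtained from a one-time calibration; the actual coefficients and correction equations in the article may differ.

```python
import numpy as np

# Hypothetical calibration result: rows = sensor channels (R, G, B),
# columns = flash units; the off-diagonal terms are the leakage.
M = np.array([[1.00, 0.08, 0.02],
              [0.06, 1.00, 0.05],
              [0.01, 0.07, 1.00]])
M_inv = np.linalg.inv(M)

def deghost(rgb_frame):
    """rgb_frame: H x W x 3 float array of the three overlaid exposures
    (measured = M @ true, per pixel). Applying M's inverse recovers the
    three time-resolved scenes and suppresses the 'ghosting'."""
    return rgb_frame @ M_inv.T
```

Because M is fixed for a given equipment configuration, its inverse is computed once and the per-pixel correction is a single matrix multiply, cheap enough to run transparently on every digitized field.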
Livingstone, Nuala; Macdonald, Geraldine; Carr, Nicola
Restorative justice is "a process whereby parties with a stake in a specific offence resolve collectively how to deal with the aftermath of the offence and its implications for the future" (Marshall 2003). Despite the increasing use of restorative justice programmes as an alternative to court proceedings, no systematic review has been undertaken of the available evidence on the effectiveness of these programmes with young offenders. Recidivism in young offenders is a particularly worrying problem, as recent surveys have indicated that the frequency of re-offences for young offenders ranged from 40.2% in 2000 to 37.8% in 2007 (Ministry of Justice 2009). The objective was to evaluate the effects of restorative justice conferencing programmes for reducing recidivism in young offenders. We searched the following databases up to May 2012: CENTRAL, 2012 Issue 5, MEDLINE (1978 to current), Bibliography of Nordic Criminology (1999 to current), Index to Theses (1716 to current), PsycINFO (1887 to current), Social Sciences Citation Index (1970 to current), Sociological Abstracts (1952 to current), Social Care Online (1985 to current), Restorative Justice Online (1975 to current), Scopus (1823 to current), Science Direct (1823 to current), LILACS (1982 to current), ERIC (1966 to current), Restorative Justice Online (4 May 2012), WorldCat (9 May 2012), ClinicalTrials.gov (19 May 2012) and ICTRP (19 May 2012). ASSIA, National Criminal Justice Reference Service and Social Services Abstracts were searched up to May 2011. Relevant bibliographies, conference programmes and journals were also searched. We included randomised controlled trials (RCTs) or quasi-RCTs of restorative justice conferencing versus management as usual, in young offenders. Two authors independently assessed the risk of bias of included trials and extracted the data. Where necessary, original investigators were contacted to obtain missing information. Four trials including a total of 1447 young offenders were included in the review. Results
Background: Palliative care planning for nursing home residents with advanced dementia is often suboptimal. This study compared effects of facilitated case conferencing (FCC) with usual care (UC) on end-of-life care. Methods: A two-arm parallel cluster randomised controlled trial was conducted. The sample included people with advanced dementia from 20 Australian nursing homes and their families and professional caregivers. In each intervention nursing home (n = 10), Palliative Care Planning Coordinators (PCPCs) facilitated family case conferences and trained staff in person-centred palliative care for 16 hours per week over 18 months. The primary outcome was family-rated quality of end-of-life care (End-of-Life Dementia [EOLD] Scales). Secondary outcomes included nurse-rated EOLD scales, resident quality of life (Quality of Life in Late-stage Dementia [QUALID]) and quality of care over the last month of life (pharmacological/non-pharmacological palliative strategies, hospitalization or inappropriate interventions). Results: Two hundred eighty-six people with advanced dementia took part but only 131 died (64 in UC and 67 in FCC), which was fewer than anticipated, rendering the primary analysis under-powered, with no group effect seen in EOLD scales. Significant differences in pharmacological (P < 0.01) and non-pharmacological (P < 0.05) palliative management in the last month of life were seen. Intercurrent illness was associated with lower family-rated EOLD Satisfaction with Care (coefficient 2.97, P < 0.05) and lower staff-rated EOLD Comfort Assessment with Dying (coefficient 4.37, P < 0.01). Per-protocol analyses showed positive relationships between EOLD scores and staff-hours-to-bed ratios, the proportion of residents with dementia, and staff attitudes. Conclusion: FCC facilitates a palliative approach to care. Future trials of case conferencing should consider outcomes and processes regarding decision making and planning for anticipated events and acute illness. Trial registration: Australian New Zealand Clinical Trial Registry ACTRN12612001164886 PMID:28786995
Background: Palliative care planning for nursing home residents with advanced dementia is often suboptimal. This study compared the effects of facilitated case conferencing (FCC) with usual care (UC) on end-of-life care. Methods: A two-arm parallel cluster randomised controlled trial was conducted. The sample included people with advanced dementia from 20 Australian nursing homes and their families and professional caregivers. In each intervention nursing home (n = 10), Palliative Care Planning Coordinators (PCPCs) facilitated family case conferences and trained staff in person-centred palliative care for 16 hours per week over 18 months. The primary outcome was family-rated quality of end-of-life care (End-of-Life Dementia [EOLD] Scales). Secondary outcomes included nurse-rated EOLD scales, resident quality of life (Quality of Life in Late-stage Dementia [QUALID]) and quality of care over the last month of life (pharmacological/non-pharmacological palliative strategies, hospitalization or inappropriate interventions). Results: Two hundred and eighty-six people with advanced dementia took part but only 131 died (64 in UC and 67 in FCC), fewer than anticipated, rendering the primary analysis under-powered, with no group effect seen in EOLD scales. Significant differences in pharmacological (P < 0.01) and non-pharmacological (P < 0.05) palliative management in the last month of life were seen. Intercurrent illness was associated with lower family-rated EOLD Satisfaction with Care (coefficient 2.97, P < 0.05) and lower staff-rated EOLD Comfort Assessment with Dying (coefficient 4.37, P < 0.01). Per-protocol analyses showed positive relationships between EOLD scales and staff-hours-to-bed ratios, the proportion of residents with dementia, and staff attitudes. Conclusions: FCC facilitates a palliative approach to care. Future trials of case conferencing should consider outcomes and processes regarding decision making and planning for anticipated events and acute illness. Trial registration: Australian New Zealand Clinical Trial Registry ACTRN12612001164886. PMID:28786995
Agar, Meera; Luckett, Tim; Luscombe, Georgina; Phillips, Jane; Beattie, Elizabeth; Pond, Dimity; Mitchell, Geoffrey; Davidson, Patricia M; Cook, Janet; Brooks, Deborah; Houltram, Jennifer; Goodall, Stephen; Chenoweth, Lynnette
Hayden, Emily M; Navedo, Deborah D; Gordon, James A
A critical barrier to expanding simulation-based instruction in medicine is the availability of clinical instructors. Allowing instructors to remotely observe and debrief simulation sessions may make simulation-based instruction more convenient, thus expanding the pool of instructors available. This study compared the impact of simulation sessions facilitated by in-person (IP) faculty versus those supervised remotely using Web-conferencing software (WebEx®, Cisco [ www.webex.com/ ]). A convenience sample of preclinical medical students volunteered to "care for" patients in a simulation laboratory. Students received either standard IP or Web-conferenced (WC) instruction. WC sessions were facilitated by off-site instructors. A satisfaction survey (5-point Likert scale, where 1 = strongly disagree and 5 = strongly agree) was completed immediately following the sessions. Forty-four surveys were analyzed (WC n = 25, IP n = 19). In response to the question "Was the communication between faculty and students a barrier to understanding the case?", the average student responses were 2.8 (95% confidence interval [CI] 2.4-3.2) for WC and 4.5 (95% CI 4.0-5.0) for IP (p < […]). A further item averaged […] (95% CI 4.0-4.5) for WC and 4.9 (95% CI 4.6-5.2) for IP (p = 0.0003). Both groups agreed that they acquired new skills (4.2 for WC, 4.5 for IP; p = 0.39) and new knowledge (4.6 for WC, 4.7 for IP; p = 0.41). Telecommunication can successfully enhance access to simulation-based instruction. In this study, a Web interface downgraded the quality of student-faculty communication. Future investigation is needed to better understand the impact of such an effect on the learning process and to reduce barriers that impede implementation of technology-facilitated supervision.
This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. First, it analyzes the current academic discussion on this subject and confronts the differing opinions of both supporters and objectors of the idea that video games can be a full-fledged art form. Second, it analyzes the properties that are inherent to video games in order to find the reason why the cultural elite considers video games as i...
In this paper, error resilience is achieved by adaptive, application-layer rateless channel coding, which is used to protect H.264/Advanced Video Coding (AVC) codec data-partitioned videos. A packetization strategy is an effective tool to control error rates and, in this paper, source-coded data partitioning serves to allocate smaller packets to more important compressed video data. The scheme is applied to real-time streaming across a broadband wireless link, and the advantages of rateless code rate adaptivity are demonstrated. Because the data partitions of a video slice are each assigned to different network packets, in congestion-prone wireless networks the increased number of packets per slice and their size disparity may increase the packet loss rate from buffer overflows. As a form of congestion resilience, this paper recommends packet-size-dependent scheduling as a relatively simple way of alleviating the buffer-overflow problem arising from data-partitioned packets. The paper also contributes an analysis of data partitioning and packet sizes as a prelude to considering scheduling regimes. The combination of adaptive channel coding and prioritized packetization for error resilience with packet-size-dependent packet scheduling results in a robust streaming scheme specialized for broadband wireless and real-time streaming applications such as video conferencing, video telephony, and telemedicine.
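The congestion-resilience idea above (when buffer space is scarce, transmit the smaller, more important data-partition packets first) can be sketched as follows; the capacity model and field names are illustrative simplifications, not the paper's actual scheduler:

```python
def schedule(packets, capacity):
    """Packet-size-dependent scheduling sketch: when the send buffer cannot
    hold every packet, transmit the smallest first. With H.264/AVC data
    partitioning, partition A (headers and motion vectors) is both the most
    important and usually the smallest, so size ordering tends to protect it."""
    sent, used = [], 0
    for p in sorted(packets, key=lambda p: p["size"]):
        if used + p["size"] <= capacity:  # skip packets that would overflow
            sent.append(p["id"])
            used += p["size"]
    return sent

# One slice split into partitions A, B, C; the buffer holds 450 bytes,
# so A and C fit while the large partition B is dropped:
slice_packets = [{"id": "A", "size": 100},
                 {"id": "B", "size": 400},
                 {"id": "C", "size": 300}]
print(schedule(slice_packets, 450))
```

Ordering by size is only a proxy for importance; it works here because data partitioning makes importance and smallness coincide for partition A.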
Van Reeth, Frank; Raymaekers, Chris; TREKELS, Peter; VERKOYEN, Stefan; FLERACKERS, Eddy
Conventional educational material is increasingly complemented with computer-based multimedia material. In order to make this material available to teachers and students in a structured manner, we developed a multimedia database and accompanying tools for creating, manipulating and formatting the teaching content. Recently, we expanded this educational multimedia database with the functionality to support streamed video as well. Given the vast amounts of data that need to be stored and transmi...
Tosteberg, Joakim; Axelsson, Thomas
A team of developers from Epsilon AB has developed a lightweight remote controlled quadcopter named Crazyflie. The team wants to allow a pilot to navigate the quadcopter using video from an on-board camera as the only guidance. The master thesis evaluates the feasibility of mounting a camera module on the quadcopter and streaming images from the camera to a computer, using the existing quadcopter radio link. Using theoretical calculations and measurements, a set of requirements that must be fulfill...
In conventional electronic video stabilization, the stabilized frame is obtained by cropping the input frame to cancel camera shake. While a small cropping size results in strong stabilization, it does not provide satisfactory results from the viewpoint of image quality, because it narrows the angle of view. By fusing several frames, we can effectively expand the area of input frames and achieve strong stabilization even with a large cropping size. Several methods for doing so have been s...
Lee, Hyun Jeong; Oh, Se An
Respiratory-gated radiation therapy (RGRT) has been used to minimize the dose to normal tissue in lung-cancer radiotherapy. The present research aims to improve the regularity of respiration in RGRT using a video-coached respiration guiding system. In the study, 16 patients with lung cancer were evaluated. The respiration signals of the patients were measured by a real-time position management (RPM) Respiratory Gating System (Varian, USA), and the patients were trained using the video-coached respiration guiding system. The patients performed free breathing and guided breathing, and the respiratory cycles were acquired for ~5 min. Then, Microsoft Excel 2010 software was used to calculate the mean and standard deviation for each phase. The standard deviation was computed in order to analyze the improvement in respiratory regularity with respect to period and displacement. The standard deviation of the guided breathing decreased to 65.14% in the inhale peak and 71.04% in the exhale peak compared with the...
Chen, Ming; He, Jing; Deng, Rui; Chen, Qinghui; Zhang, Jinlong; Chen, Lin
To further investigate the feasibility of digital signal processing (DSP) algorithms (e.g., symbol timing synchronization, channel estimation and equalization, and sampling clock frequency offset (SCFO) estimation and compensation) for real-time optical orthogonal frequency-division multiplexing (OFDM) systems, 2.97-Gb/s real-time high-definition video signal parallel transmission is experimentally demonstrated in OFDM-based short-reach intensity-modulated direct-detection (IM-DD) systems. The experimental results show that, in the presence of ∼12 ppm SCFO between transmitter and receiver, adaptively modulated OFDM signal transmission over 20 km of standard single-mode fiber with a bit error rate of less than 1 × 10⁻⁹ can be achieved by using only a DSP-based SCFO estimation and compensation method, without utilizing forward error correction. To the best of our knowledge, this is the first demonstration of video signal transmission at a bit rate in excess of 1 Gb/s in a simple real-valued inverse fast Fourier transform and fast Fourier transform based IM-DD optical OFDM system employing a directly modulated laser.
Agnisarman, Sruthy; Narasimha, Shraddhaa; Chalil Madathil, Kapil; Welch, Brandon; Brinda, Fnu; Ashok, Aparna; McElligott, James
Telemedicine is the use of technology to provide and support health care when distance separates the clinical service and the patient. Home-based telemedicine systems involve the use of such technology for medical support and care, connecting patients, from the comfort of their homes, with the clinician. For such a system to be used extensively, it is necessary to understand the issues faced not only by the patients in using it, but also by the clinician. The aim of this study was to conduct a heuristic evaluation of four telemedicine software platforms (Doxy.me, Polycom, Vidyo, and VSee) to assess possible problems and limitations that could affect the usability of the system from the clinician's perspective. Five experts individually evaluated all four systems using Nielsen's list of heuristics, classifying the issues based on a severity rating scale. A total of 46 unique problems were identified by the experts. The heuristics most frequently violated were visibility of system status and error prevention, each amounting to 24% (11/46) of the issues. Esthetic and minimalist design was second, contributing 13% (6/46) of the total errors. Heuristic evaluation coupled with a severity rating scale was found to be an effective method for identifying problems with the systems. Prioritization of these problems based on the rating provides a good starting point for resolving the issues affecting these platforms. There is a need for better transparency and a more streamlined approach to how physicians use telemedicine systems. Visibility of the system status and speaking the users' language are keys to achieving this.
Lee, Sung-Ho; Jang, Bumjoon; Kim, Dong Hee; Park, Chang Hyun; Bae, Gyuri; Park, Seung Woo; Park, Seung-Han
Unlike ordinary laser scanning microscopies of the past, nonlinear optical laser scanning microscopy (SHG and THG microscopy) applies ultrafast laser technology, which delivers high peak powers from relatively inexpensive, low-average-power sources. Its short-pulse nature reduces ionization damage in organic molecules and enables bright, label-free images. In this study, we measured cell division of a zebrafish egg with ultrafast video imaging using a multimodal nonlinear optical microscope. The result shows in-vivo, label-free imaging of cell division with sub-cellular resolution.
Background: Online focus groups have been increasing in use over the last two decades, including in biomedical and health-related research. However, most of this research has made use of text-based services such as email, discussion boards, and chat rooms that do not replicate the experience of face-to-face focus groups. Web conferencing services have the potential to more closely match the face-to-face focus group experience, including important visual and aural cues. This paper provides critical reflections on using a web conferencing service to conduct online focus groups. Methods: We conducted both online and face-to-face focus groups as part of the same study. The online groups were conducted in real time using the web conferencing service Blackboard Collaborate™. We used reflective practice to assess the similarities and differences in the conduct and content of the groups across the two platforms. Results: We found that further research using such services is warranted, particularly when working with hard-to-reach or geographically dispersed populations. The level of discussion and the quality of the data obtained were similar to those found in face-to-face groups. However, some issues remain, particularly in relation to managing technical issues experienced by participants and ensuring adequate recording quality to facilitate transcription and analysis. Conclusions: Our experience with using web conferencing for online focus groups suggests that they have the potential to offer a realistic and comparable alternative to face-to-face focus groups, especially for geographically dispersed populations such as rural and remote health practitioners. Further testing of these services is warranted, but researchers should carefully consider the service they use to minimise the impact of technical difficulties.
Lee, A R; Yang, S; Shin, Y H; Kim, J A; Chung, I S; Cho, H S; Lee, J J
We evaluated the effects of three airway manipulation manoeuvres: (a) conventional (single-handed chin lift); (b) backward, upward and right-sided pressure (BURP) manoeuvre; and (c) modified jaw thrust manoeuvre (two-handed aided by an assistant) on laryngeal view and intubation time using the Clarus Video System in 215 patients undergoing general anaesthesia with orotracheal intubation. In the first part of this study, the laryngeal view was recorded as a modified Cormack-Lehane grade with each manoeuvre. In the second part, intubation was performed using the assigned airway manipulation. The primary outcome was the time to intubation, and the secondary outcomes were the modified Cormack-Lehane grade, the number of attempts and the overall success rate. There were significant differences in modified Cormack-Lehane grade between the three airway manipulations (p < 0.0001). Post-hoc analysis indicated that the modified jaw thrust improved the laryngeal view compared with the conventional (p < 0.0001) and the BURP manoeuvres (p < 0.0001). The BURP worsened the laryngeal view compared with the conventional manoeuvre (p = 0.0132). The time to intubation in the modified jaw thrust group was shorter than with the conventional manoeuvre (p = 0.0004) and the BURP group (p < 0.0001). We conclude that the modified jaw thrust is the most effective manoeuvre at improving the laryngeal view and shortening intubation time with the Clarus Video System. © 2013 The Association of Anaesthetists of Great Britain and Ireland.
Mol, J.J.D.; Pouwelse, J.A.; Meulpolder, M.; Epema, D.H.J.; Sips, H.J.
Centralised solutions for Video-on-Demand (VoD) services, which stream pre-recorded video content to multiple clients who start watching at the moments of their own choosing, are not scalable because of the high bandwidth requirements of the central video servers. Peer-to-peer (P2P) techniques which
Renato Bobsin Machado
OBJECTIVE: To develop a prototype using computer resources to optimize the management process of clinical information and video colonoscopy exams. MATERIALS AND METHODS: Through meetings with medical and computer experts, the following requirements were defined: management of information about medical professionals, patients and exams; video and image capture by video colonoscopes during the exam; and the availability of these videos and images on the Web for further analysis. The technologies used were Java, Flex, JBoss, Red5, JBoss SEAM, MySQL and Flamingo. RESULTS AND DISCUSSION: The prototype contributed to the area of colonoscopy by providing resources to maintain the patients' history, tests and images from video colonoscopies. The web-based application allows greater flexibility for physicians and specialists. The resources for remote analysis of data and tests can help doctors and patients in the examination and diagnosis. CONCLUSION: The implemented prototype has contributed to improving colonoscopy-related processes. Future activities include the prototype's deployment in the Service of Coloproctology and the utilization of this model to allow real-time monitoring of these exams and knowledge extraction from the structured database using artificial intelligence.
Nortvig, Anne Mette; Sørensen, Birgitte Holm
This project's aim was to support and facilitate master's students' preparation and collaboration by making video podcasts of short lectures available on YouTube prior to the students' first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through YouTube analytics, and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively...
In the past few years there has been an explosion in the use of digital video data. Many people have personal computers at home, and with the help of the Internet users can easily share video files. This makes unauthorized use of digital media possible, and without adequate protection systems authors and distributors have no means to prevent it. Digital watermarking techniques can help such systems to be more effective by embedding secret data right into the video stream. This makes minor changes in the frames of the video, but these changes are almost imperceptible to the human visual system. The embedded information can involve copyright data, access control, etc. A robust watermark is resistant to various distortions of the video, so it cannot be removed without affecting the quality of the host medium. In this paper I propose a video watermarking scheme that fulfills the requirements of a robust watermark.
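As a minimal illustration of embedding secret data with only minor, near-imperceptible frame changes, the sketch below hides one bit in a frame's mean luminance via quantization index modulation; this is a toy scheme for intuition, not the robust watermark proposed in the paper:

```python
def embed_bit(pixels, bit, q=4.0):
    """Quantization index modulation (QIM) on mean luminance: shift every
    8-bit pixel by the same small amount so that the frame mean lands on
    the nearest lattice point 2*k*q + bit*q. Even lattice points encode 0,
    odd ones encode 1."""
    m = sum(pixels) / len(pixels)
    k = round((m - bit * q) / (2 * q))
    target = 2 * k * q + bit * q
    return [min(255, max(0, int(p + target - m))) for p in pixels]

def extract_bit(pixels, q=4.0):
    """Recover the bit from the parity of the quantized frame mean."""
    return round(sum(pixels) / len(pixels) / q) % 2

# A flat 4x4 "frame": the embedded bit survives the round trip.
flat = [100] * 16
assert extract_bit(embed_bit(flat, 0)) == 0
assert extract_bit(embed_bit(flat, 1)) == 1
```

Shifting all pixels uniformly keeps the change well below visibility for small q, while averaging over the whole frame gives some resistance to local distortions; a robust scheme would additionally spread the payload across transform coefficients.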
Patriciu, Alexandru; Challacombe, Benjamin; Dasgupta, Prokar; Kavoussi, Louis; Stoianovici, Dan
The paper presents a new telementoring system incorporating audio-video communication and remote robotic control. The system was developed around an off-the-shelf ISDN video conferencing system enhanced with video annotation and remote robot control features. The user can remotely control a robot to perform needle alignment and insertion in a percutaneous access procedure. Particular attention was devoted to ensuring the safety of the procedure. The data connection is continuously monitored, and in the event of a failure the robot control is switched to the local operator. Two series of randomized trials were performed between Baltimore and London. The accuracy and procedure time were evaluated for manual needle placement, local robotic needle placement and remotely controlled robotic needle placement. The tests showed that, while procedure time is not improved by the robotic approach, there is an improvement in the accuracy of the procedure. The study also showed that there is no significant difference between locally controlled and remotely controlled robotic needle placement. Thus, the proposed system can be safely used for remote robotic percutaneous access procedures.
Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul
Compared to open surgery, minimal invasive surgery offers reduced trauma and faster recovery. However, lack of direct view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field-of-view. We describe a novel technology that reconstructs 3D-panoramas from endoscopic video streams providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D-environments from mono- and stereoscopic endoscopy. The resulting 3D-reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery or other remote vision-guided tasks.
As the public education system in Northern Ontario continues to take a downward spiral, a plethora of secondary school students are being placed in an alternative educational environment. Juxtaposing the two educational settings reveals very similar methods and characteristics of educating our youth, as opposed to a truly alternative approach to education. This video reviews the relationship between public education and alternative education in a remote Northern Ontario setting. It is my belief that the traditional methods of teaching are not appropriate for educating at-risk students in alternative schools. Paper-and-pencil worksheets do not motivate these students to learn and succeed. Alternative education should emphasize experiential learning, a just-in-time curriculum based on every unique individual, and the student's true passion for everyday life. Cameron Culbert was born on February 3rd, 1977 in North Bay, Ontario. His teenage years were split between attending public school and his willed curriculum on the ski hill. Culbert spent 10 years (1996-2002 & 2006-2010) competing for Canada as an alpine ski racer. His passion for teaching and coaching began as an athlete and has now transferred into the classroom and the community. As a graduate of Nipissing University (BA, BEd, MEd), Cameron's research interests are alternative education, physical education and technology in the classroom. Currently Cameron is an active educator and coach in Northern Ontario.
He, Ye; Fei, Kevin; Fernandez, Gustavo A.; Delp, Edward J.
Due to increasing user expectations for the watching experience, moving high-quality web video streaming content from the small screen of mobile devices to the larger TV screen has become popular. It is crucial to develop video quality metrics to measure the quality change across various devices and network conditions. In this paper, we propose an automated scoring system to quantify user satisfaction. We compare the quality of local videos with that of videos transmitted to a TV. Four video quality metrics, namely Image Quality, Rendering Quality, Freeze Time Ratio and Rate of Freeze Events, are used to measure video quality change during web content mirroring. To measure image quality and rendering quality, we compare the matched frames between the source video and the destination video using barcode tools. Freeze time ratio and rate of freeze events are measured after extracting video timestamps. Several user studies are conducted to evaluate the impact of each objective video quality metric on the subjective user watching experience.
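The two timestamp-based metrics can be estimated roughly as below; the stall threshold and function signature are assumptions for illustration, not the authors' exact definitions:

```python
def freeze_metrics(timestamps, fps=30.0, stall_factor=2.0):
    """Estimate Freeze Time Ratio and Rate of Freeze Events from frame
    presentation timestamps (in seconds). A gap longer than stall_factor
    times the nominal frame interval counts as one freeze event; the time
    lost beyond a single frame interval counts toward frozen time."""
    nominal = 1.0 / fps
    duration = timestamps[-1] - timestamps[0]
    freeze_time, events = 0.0, 0
    for prev, cur in zip(timestamps, timestamps[1:]):
        gap = cur - prev
        if gap > stall_factor * nominal:   # playback stalled here
            freeze_time += gap - nominal   # time lost beyond one frame
            events += 1
    return freeze_time / duration, events / duration

# A 10 fps clip that stalls once for 0.6 s between t=0.2 and t=0.8:
ratio, rate = freeze_metrics([0.0, 0.1, 0.2, 0.8, 0.9, 1.0], fps=10.0)
```

In the example, half of the 1-second clip is frozen (ratio 0.5) and one freeze event occurs per second of playback.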
Ebe, Kazuyu, E-mail: firstname.lastname@example.org; Tokuyama, Katsuichi; Baba, Ryuta; Ogihara, Yoshisada; Ichikawa, Kosuke; Toyama, Joji [Joetsu General Hospital, 616 Daido-Fukuda, Joetsu-shi, Niigata 943-8507 (Japan); Sugimoto, Satoru [Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo 113-8421 (Japan); Utsunomiya, Satoru; Kagamu, Hiroshi; Aoyama, Hidefumi [Graduate School of Medical and Dental Sciences, Niigata University, Niigata 951-8510 (Japan); Court, Laurence [The University of Texas MD Anderson Cancer Center, Houston, Texas 77030-4009 (United States)
Purpose: To develop and evaluate a new video image-based QA system, including in-house software, that can display a tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Methods: Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio-caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient's tumor motion. A substitute target with the patient's tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated, and positional errors were observed. The authors' QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of the QA system. The patients' tumor motions were evaluated in the wave forms, and the peak-to-peak distances were also measured to verify their reproducibility. Results: Thirteen of sixteen trajectories (81.3%) were successfully reproduced with the Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar; thus, 13 of 16 trajectories were summarized. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviations) ranged from 0.54 to 1.55 mm. The error values differed by less than 1 mm from the 4D modeling function errors
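The per-trajectory error statistic (mean of the absolute per-frame differences plus two standard deviations) amounts to the following; the function and array names are illustrative, and the sample standard deviation is an assumption:

```python
import statistics

def positional_error(target_y, field_y):
    """Summarize Y-direction tracking accuracy as the mean of the absolute
    per-frame differences between exposed-target and exposed-field centers,
    plus two standard deviations of those differences (units: mm)."""
    diffs = [abs(t - f) for t, f in zip(target_y, field_y)]
    return statistics.mean(diffs) + 2 * statistics.stdev(diffs)

# Three frames where the field trails the target by exactly 1 mm each:
# the spread is zero, so the summary equals the 1 mm mean offset.
print(positional_error([1.0, 2.0, 3.0], [0.0, 1.0, 2.0]))  # 1.0
```

Adding two standard deviations makes the statistic sensitive to frame-to-frame tracking jitter, not just the average offset.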
Afiouni, Einar Nour; Øvrelid, Leif Julian
This project aims to examine the possibilities of using game-theoretic concepts and multi-agent systems in modern video games with real-time demands. We have implemented a multi-issue negotiation system for the strategy video game Civilization IV, evaluating different negotiation techniques with a focus on the use of opponent modeling to improve negotiation results.
The aim of this paper is to describe a task-cycling pedagogy for language learning using a technique we have called Stimulated Reflection. This pedagogical approach has been developed in the light of the new technology options available, especially those that facilitate audiovisual forms of interaction among language learners and teachers. In this instance, the pedagogy is implemented in the context of introducing students to audio-conferencing (A-C) tools as a support for their ongoing independent learning. The approach is designed to develop a balance for learners between attention to fluency and meaning on one hand, and form and accuracy on the other. The particular focus here is on the learning of Italian as a foreign language, although the ideas and principles are presented with a view to the teaching and learning of any language. The article is in three parts. The first considers appropriate theoretical frameworks for the use of technology-mediated tools in language learning, with a particular emphasis on the focus-on-form literature and task design (Doughty, 2003; Doughty & Williams, 1998; Skehan, 1998). The second part sets out the approach we have taken in the Italian project and discusses specifically the ideas of task cycling (Willis, 1996) and Stimulated Reflection. The third part presents extracts of stimulated reflection episodes that serve to illustrate the new pedagogic approach.
Saur, Drew D.; Tan, Yap-Peng; Kulkarni, Sanjeev R.; Ramadge, Peter J.
Automated analysis and annotation of video sequences are important for digital video libraries, content-based video browsing and data mining projects. A successful video annotation system should provide users with a useful video content summary in a reasonable processing time. Given the wide variety of video genres available today, automatically extracting meaningful video content for annotation remains hard with currently available techniques. However, a wide range of videos has inherent structure, such that some prior knowledge about the video content can be exploited to improve our understanding of the high-level video semantic content. In this paper, we develop tools and techniques for analyzing structured video by using the low-level information available directly from MPEG compressed video. Being able to work directly in the compressed domain can greatly reduce the processing time and enhance storage efficiency. As a testbed, we have developed a basketball annotation system which combines the low-level information extracted from the MPEG stream with prior knowledge of basketball video structure to provide high-level content analysis, annotation and browsing for events such as wide-angle and close-up views, fast breaks, steals, potential shots, number of possessions and possession times. We expect our approach can also be extended to structured video in other domains.
Yaron, Avi; Bar-Zohar, Meir; Horesh, Nadav
Sophisticated surgeries require the integration of several medical imaging modalities, like MRI and CT, which are three-dimensional. Much effort is invested in providing the surgeon with this information in an intuitive and easy-to-use manner. A notable development, made by Visionsense, enables the surgeon to visualize the scene in 3D using a miniature stereoscopic camera. It also provides real-time 3D measurements that allow registration of navigation systems as well as 3D imaging modalities, overlaying these images on the stereoscopic video image in real-time. The real-time MIS 'see through tissue' fusion solutions enable the development of new MIS procedures in various surgical segments, such as spine, abdomen, cardio-thoracic and brain. This paper describes 3D surface reconstruction and registration methods using the Visionsense camera, as a step toward fully automated multi-modality 3D registration.
Full Text Available Biometrics verification can be efficiently used for intrusion detection and intruder identification in video surveillance systems. Biometrics techniques can be largely divided into traditional and the so-called soft biometrics. Whereas traditional biometrics deals with physical characteristics such as face features, eye iris, and fingerprints, soft biometrics is concerned with such information as gender, national origin, and height. Traditional biometrics is versatile and highly accurate, but it is very difficult to collect traditional biometric data from a distance and without personal cooperation. Soft biometrics, although less accurate, can be collected much more freely. Recently, much research has been conducted on human identification using soft biometrics data collected from a distance. In this paper, we use both traditional and soft biometrics for human identification and propose a framework for solving such problems as lighting, occlusion, and shadowing.
Chandrasekaran, Jeyamala; Thiruvengadam, S J
Owing to the growth of high performance network technologies, multimedia applications over the Internet are increasing exponentially. Applications like video conferencing, video-on-demand, and pay-per-view depend upon encryption algorithms for providing confidentiality. Video communication is characterized by distinct features such as large volume, high redundancy between adjacent frames, video codec compliance, syntax compliance, and application specific requirements. Naive approaches for video encryption encrypt the entire video stream with conventional text based cryptographic algorithms. Although naive approaches are the most secure for video encryption, the computational cost associated with them is very high. This research work aims at enhancing the speed of naive approaches through chaos based S-box design. Chaotic equations are popularly known for randomness, extreme sensitivity to initial conditions, and ergodicity. The proposed methodology employs two-dimensional discrete Henon map for (i) generation of dynamic and key-dependent S-box that could be integrated with symmetric algorithms like Blowfish and Data Encryption Standard (DES) and (ii) generation of one-time keys for simple substitution ciphers. The proposed design is tested for randomness, nonlinearity, avalanche effect, bit independence criterion, and key sensitivity. Experimental results confirm that chaos based S-box design and key generation significantly reduce the computational cost of video encryption with no compromise in security.
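The key-dependent S-box generation described above can be illustrated with a short sketch: iterate the 2-D Hénon map from key-derived initial conditions, discard a transient, then rank-order the chaotic samples to obtain a bijective 8-bit substitution box. The parameter values, the key mapping, and the rank-ordering step are illustrative assumptions, not the paper's exact construction.

```python
def henon_sbox(x0=0.1, y0=0.3, a=1.4, b=0.3, burn_in=1000):
    """Generate a key-dependent 8-bit S-box from the 2-D Henon map.

    The initial conditions (x0, y0) play the role of the secret key;
    a, b are the classical chaotic parameters. Illustrative sketch only.
    """
    x, y = x0, y0
    # Discard transient iterations so the orbit settles on the attractor.
    for _ in range(burn_in):
        x, y = 1 - a * x * x + y, b * x
    samples = []
    for _ in range(256):
        x, y = 1 - a * x * x + y, b * x
        samples.append(x)
    # Rank-ordering the chaotic samples yields a bijective 0..255 permutation,
    # i.e. a valid substitution box.
    return sorted(range(256), key=lambda i: samples[i])
```

Because of the map's sensitivity to initial conditions, a tiny change in the key (here, in `x0`) produces a completely different permutation, which is the key-sensitivity property the abstract tests for.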
Full Text Available The task of human hand trajectory tracking and gesture trajectory recognition based on synchronized color and depth video is considered. Toward this end, in the facet of hand tracking, a joint observation model with the hand cues of skin saliency, motion and depth is integrated into a particle filter in order to move particles to the local peak in the likelihood. The proposed hand tracking method, namely, salient skin, motion, and depth based particle filter (SSMD-PF, is capable of improving the tracking accuracy considerably, in the context of the signer performing the gesture toward the camera device and in front of moving, cluttered backgrounds. In the facet of gesture recognition, a shape-order context descriptor on the basis of shape context is introduced, which can describe the gesture in the spatiotemporal domain. The efficient shape-order context descriptor can reveal the shape relationship and embed gesture sequence order information into the descriptor. Moreover, the shape-order context leads to a robust score for gesture invariance. Our approach is complemented with experimental results on the settings of the challenging hand-signed digits datasets and American sign language dataset, which corroborate the performance of the novel techniques.
Peters, Suzanne M; Pinter, Ilona J; Pothuizen, Helen H J; de Heer, Raymond C; van der Harst, Johanneke E; Spruijt, Berry M
In the past, studies in behavioral neuroscience and drug development have relied on simple and quick readout parameters of animal behavior to assess treatment efficacy or to understand underlying brain mechanisms. The predominant use of classical behavioral tests has been repeatedly criticized during the last decades because of their poor reproducibility, poor translational value and the limited explanatory power in functional terms. We present a new method to monitor social behavior of rats using automated video tracking. The velocity of moving and the distance between two rats were plotted in frequency distributions. In addition, behavior was manually annotated and related to the automatically obtained parameters for a validated interpretation. Inter-individual distance in combination with velocity of movement provided specific behavioral classes, such as moving with high velocity when "in contact" or "in proximity". Human observations showed that these classes coincide with following (chasing) behavior. In addition, when animals are "in contact", but at low velocity, behaviors such as allogrooming and social investigation were observed. Also, low dose treatment with morphine and short isolation increased the time animals spent in contact or in proximity at high velocity. Current methods that involve the investigation of social rat behavior are mostly limited to short and relatively simple manual observations. A new and automated method for analyzing social behavior in a social interaction test is presented here and is shown to be sensitive to drug treatment and housing conditions known to influence social behavior in rats. Copyright © 2016 Elsevier B.V. All rights reserved.
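The distance-plus-velocity classification described above can be sketched as follows. The threshold values (`contact_d`, `proximity_d`, `high_v`) and the centimeter/frame-rate conventions are hypothetical placeholders, not the values used in the study.

```python
import math

def classify_social_frames(track_a, track_b, fps=25,
                           contact_d=5.0, proximity_d=15.0, high_v=20.0):
    """Label each frame by inter-animal distance and movement velocity.

    track_a/track_b: lists of (x, y) positions in cm, one per frame.
    Returns one (zone, speed) label per frame transition.
    """
    labels = []
    for i in range(1, len(track_a)):
        ax, ay = track_a[i]
        bx, by = track_b[i]
        dist = math.hypot(ax - bx, ay - by)       # inter-individual distance
        pax, pay = track_a[i - 1]
        vel = math.hypot(ax - pax, ay - pay) * fps  # velocity of animal A, cm/s
        if dist <= contact_d:
            zone = "contact"
        elif dist <= proximity_d:
            zone = "proximity"
        else:
            zone = "apart"
        speed = "high" if vel >= high_v else "low"
        # e.g. ("contact", "high") would correspond to following/chasing,
        # ("contact", "low") to allogrooming or social investigation.
        labels.append((zone, speed))
    return labels
```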
Magic Lantern and Honeywell FM and T worked together to develop lower-cost, visible light solid-state laser sources to use in laser projector products. Work included a new family of video displays that use lasers as light sources. The displays would project electronic images up to 15 meters across and provide better resolution and clarity than movie film, up to five times the resolution of the best available computer monitors, up to 20 times the resolution of television, and up to six times the resolution of HDTV displays. The products that could be developed as a result of this CRADA could benefit the economy in many ways, such as: (1) Direct economic impact in the local manufacture and marketing of the units. (2) Direct economic impact in exports and foreign distribution. (3) Influencing the development of other elements of display technology that take advantage of the signals that these elements allow. (4) Increased productivity for engineers, FAA controllers, medical practitioners, and military operatives.
Monini, Simonetta; Marinozzi, Franco; Atturo, Francesca; Bini, Fabiano; Marchelletta, Silvia; Barbara, Maurizio
To propose a new objective video-recording procedure to assess and monitor over time the severity of facial nerve palsy. No objective methods for facial palsy (FP) assessment are universally accepted. The face of subjects presenting with different degrees of facial nerve deficit, as measured by the House-Brackmann (HB) grading system, was videotaped after positioning, at specific points, 10 gray circular markers made of a retroreflective material. Video-recording included the resting position and six ordered facial movements. Editing and data elaboration were performed using software programmed to measure marker distances. A score was then extracted from the differences in marker distances between the two sides. The higher the FP degree, the higher the score registered during each movement. The statistical significance differed during the various movements between the different FP degrees, being uniform when closing the eyes gently; whereas when wrinkling the nose, there was no difference between the HB grade III and IV groups and, when smiling, no difference was evidenced between the HB grade IV and V groups. The global range index, which represents the overall degree of FP, was between 6.2 and 7.9 in the normal subjects (HB grade I); between 10.6 and 18.91 in HB grade II; between 22.19 and 33.06 in HB grade III; between 38.61 and 49.75 in HB grade IV; and between 50.97 and 66.88 in HB grade V. The proposed objective methodology could provide numerical data that correspond to the different degrees of FP, as assessed by the subjective HB grading system. These data can in addition be used individually to score selected areas of the paralyzed face when recovery occurs with different timing in the different face regions.
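A side-to-side score of the kind described above might be computed as follows. The percent-difference formula over mirrored marker-pair distances is an illustrative assumption, not the exact score extracted by the authors' software.

```python
def asymmetry_score(left_dists, right_dists):
    """Mean percent asymmetry between mirrored marker-pair distances on
    the two sides of the face; a higher score suggests stronger palsy.

    left_dists/right_dists: distances between corresponding marker pairs
    on each side, measured during one facial movement.
    """
    diffs = [
        # percent difference relative to the mean of the two sides
        abs(l - r) / ((l + r) / 2) * 100
        for l, r in zip(left_dists, right_dists)
    ]
    return sum(diffs) / len(diffs)
```

A perfectly symmetric face scores 0; growing asymmetry during a movement such as smiling or eye closure raises the score, mirroring the trend reported in the abstract.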
This chapter focuses on methodological issues that arise when using (digital) video for research communication, not least online. Video has long been used in research for data collection and research communication. With digitization and the internet, however, new opportunities and challenges have emerged for communicating and distributing research results to different audiences via video. At the same time, classic methodological issues, such as the researcher's positioning in relation to the object of study, remain relevant. Both classic and new issues are discussed in the chapter, which frames the discussion in terms of different possible positionings: communicator, storyteller, or dialogist. These positions relate to genres within 'academic video'. Finally, a methodological toolbox is presented with tools for planning...
Kerofsky, Louis; Jagannath, Abhijith; Reznik, Yuriy
We describe the design of a video streaming system using adaptation to viewing conditions to reduce the bitrate needed for delivery of video content. A visual model is used to determine the sufficient resolution needed under various viewing conditions. Sensors on a mobile device estimate properties of the viewing conditions, particularly the distance to the viewer. We leverage the framework of existing adaptive bitrate streaming systems such as HLS, Smooth Streaming or MPEG-DASH. The client rate selection logic is modified to include a sufficient resolution computed using the visual model and the estimated viewing conditions. Our experiments demonstrate significant bitrate savings compared to conventional streaming methods which do not exploit viewing conditions.
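The "sufficient resolution" idea can be sketched with a simple acuity model: compute the angle the screen subtends at the estimated viewing distance, then sample it at the highest spatial frequency the eye resolves. The 30 cycles-per-degree figure (roughly 20/20 vision) and the Nyquist factor of 2 are textbook assumptions, not the paper's exact visual model.

```python
import math

def sufficient_width_px(screen_width_cm, viewing_distance_cm, acuity_cpd=30.0):
    """Smallest horizontal resolution that still looks sharp at this distance.

    Uses Nyquist sampling (2 pixels per cycle) of the highest spatial
    frequency a viewer with the given acuity can resolve.
    """
    # Horizontal field of view subtended by the screen, in degrees.
    fov_deg = 2 * math.degrees(
        math.atan(screen_width_cm / (2 * viewing_distance_cm)))
    # 2 pixels per cycle at the peak resolvable frequency.
    return math.ceil(fov_deg * acuity_cpd * 2)
```

A modified rate-selection loop would then pick the cheapest rendition whose width meets this bound: a viewer holding a 7 cm-wide phone at arm's length needs far fewer pixels than the same viewer at half the distance, which is where the bitrate savings come from.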
This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real World Videos. The workshops were run on December 4, 2016, in Cancun in Mexico. The two workshops together received 13 papers. Each paper was then reviewed by at least two expert reviewers in the field. In all, 11 papers were accepted to be presented at the workshops. The topics covered in the papers...
Carnaz, Letícia; Moriguchi, Cristiane S; de Oliveira, Ana Beatriz; Santiago, Paulo R P; Caurin, Glauco A P; Hansson, Gert-Åke; Coury, Helenice J C Gil
This study compared neck range of movement recording using three different methods: goniometers (EGM), inclinometers (INC), and a three-dimensional video analysis system (IMG) in simultaneous and synchronized data collection. Twelve females performed neck flexion-extension, lateral flexion, rotation and circumduction. The differences between EGM, INC, and IMG were calculated sample by sample. For flexion-extension movement, IMG underestimated the amplitude by 13%; moreover, EGM showed a crosstalk of about 20% for lateral flexion and rotation axes. In lateral flexion movement, all systems showed similar amplitude and the inter-system differences were moderate (4-7%). For rotation movement, EGM showed a high crosstalk (13%) for the flexion-extension axis. During the circumduction movement, IMG underestimated the amplitude of flexion-extension movements by about 11%, and the inter-system differences were high (about 17%) except for INC-IMG regarding lateral flexion (7%) and EGM-INC regarding flexion-extension (10%). For application in the workplace, INC presents good results compared to IMG and EGM, though INC cannot record rotation. EGM should be improved in order to reduce its crosstalk errors and allow recording of the full neck range of movement. Due to non-optimal positioning of the cameras for recording flexion-extension, IMG underestimated the amplitude of these movements. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
Full Text Available Various studies have discussed the pedagogical potential of video game play in the classroom but resistance to such texts remains high. The study presented here discusses the case study of one young boy who, having failed to learn to read in the public school system was able to learn in a private Sudbury model school where video games were not only allowed but considered important learning tools. Findings suggest that the incorporation of such new texts in today’s public schools have the potential to motivate and enhance the learning of children.
Emerging video applications are being developed where multiple views of a scene are captured. Two central issues in the deployment of future multiview video (MVV) systems are compression efficiency and interactive video experience, which makes it necessary to develop advanced technologies on multiview video coding (MVC) and interactive multiview video streaming (IMVS). The former aims at efficient compression of all MVV data in a rate-distortion (RD) optimal manner by exploiting both temporal ...
Cihak, David F.; Smith, Catherine C.; Cornett, Ashlee; Coleman, Mari Beth
The use of video modeling (VM) procedures in conjunction with the picture exchange communication system (PECS) to increase independent communicative initiations in preschool-age students was evaluated in this study. The four participants were 3-year-old children with limited communication skills prior to the intervention. Two of the students had…
Takeda, Naohito; Takeuchi, Isao; Haruna, Mitsumasa
In order to develop an e-learning system that promotes self-learning, lectures and basic operations in laboratory practice of chemistry were recorded and edited on DVD media, consisting of 8 streaming videos as learning materials. Twenty-six students wanted to watch the DVD, and answered the following questions after they had watched it: "Do you think the video would serve to encourage you to study independently in the laboratory practice?" Almost all students (95%) approved of its usefulness, and more than 60% of them watched the videos repeatedly in order to acquire deeper knowledge and skill of the experimental operations. More than 60% answered that the demonstration-experiment should be continued in the laboratory practice, in spite of distribution of the DVD media.
Jensen, Karsten; Juhl, Jens
There is an increasing demand for assessing foot malpositions and an interest in monitoring the effect of treatment. In the last decades, several different motion capture systems have been used. This abstract describes a new low-cost motion capture system.
Gonzalez, J.; Pomares, H.; Damas, M.; Garcia-Sanchez,P.; Rodriguez-Alvarez, M.; Palomares, J. M.
As embedded systems are becoming prevalent in everyday life, many universities are incorporating embedded systems-related courses in their undergraduate curricula. However, it is not easy to motivate students in such courses since they conceive of embedded systems as bizarre computing elements, different from the personal computers with which they…
Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.
Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced Wavelet algorithms, storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras acquiring high resolution video images up to 4 Mpixels @ 60 fps or high frame rate video images up to about 1000 fps @ 512x512 pixels.
Mahmood Rajpoot, Qasim; Jensen, Christian D.
Pervasive usage of video surveillance is rapidly increasing in developed countries. Continuous security threats to public safety demand the use of such systems. Contemporary video surveillance systems offer advanced functionalities which threaten the privacy of those recorded in the video. There is a need to balance the usage of video surveillance against its negative impact on privacy. This chapter aims to highlight the privacy issues in video surveillance and provides a model to help identify the privacy requirements in a video surveillance system. The authors make a step in the direction of investigating the existing legal infrastructure for ensuring privacy in video surveillance and suggest guidelines to help those who want to deploy video surveillance while least compromising the privacy of people and complying with the legal infrastructure.
Vision is only a part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. These mechanisms provide reliable recognition if the object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to the target recognition problems are possible only within the solution of a more generic Image Understanding Problem. The brain reduces informational and computational complexities, using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. Network-Symbolic Transformations derive abstract structures, which allows for invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, separation of figure from ground and perceptual grouping are special kinds of network-symbolic transformations. Such Image/Video Understanding Systems will reliably recognize targets.
Vision is only a part of a larger system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. This mechanism provides reliable recognition if the target is occluded or cannot be recognized. It is hard to split the entire system apart, and reliable solutions to the target recognition problems are possible only within the solution of a more generic Image Understanding Problem. The brain reduces informational and computational complexities, using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. The logic of visual scenes can be captured in Network-Symbolic models and used for disambiguation of visual information. Network-Symbolic Transformations derive abstract structures, which allow for invariant recognition of an object as an exemplar of a class. Active vision helps build consistent, unambiguous models. Such Image/Video Understanding Systems will be able to reliably recognize targets in real-world conditions.
Very low bit rate video coding has received considerable attention in academia and industry in terms of both coding algorithms and standards activities. In addition to the earlier ITU-T efforts on H.320 standardization for video conferencing from 64 kbps to 1.544 Mbps in the ISDN environment, the ITU-T/SG15 has formed an expert group on low bitrate coding (LBC) for visual telephony below 64 kbps. The ITU-T/SG15/LBC work consists of two phases: near-term and long-term. The near-term standard H.32P/N, based on existing compression technologies, mainly addresses the issues related to visual telephony at below 28.8 kbps, the V.34 modem rate used in the existing Public Switched Telephone Network (PSTN). H.32P/N will be technically frozen in January '95. The long-term standard H.32P/L, relying on fundamentally new compression technologies with much improved performance, will address video telephony in both PSTN and mobile environments. The ISO/SC29/WG11, after its highly visible and successful MPEG 1/2 work, is starting to focus on the next-generation audiovisual multimedia coding standard MPEG 4. With the recent change of direction, MPEG 4 intends to provide an audiovisual coding standard allowing for interactivity, high compression, and/or universal accessibility, with a high degree of flexibility and extensibility. This paper briefly summarizes these ongoing standards activities undertaken by ITU-T/LBC and ISO/MPEG 4 as of December 1994.
Madachy, Raymond J.
Naval Postgraduate School Graduate School of Engineering & Applied Sciences, Total Ownership Cost Modeling presented by Raymond J. Madachy, Associate Professor of Systems Engineering at the Naval Postgraduate School. Total Ownership Cost (TOC) is the sum cost of system acquisition, development, and operations including direct and indirect costs. In the DoD, cost modeling is needed to enable tradespace analysis of affordability with other system ilities. Parametric cost models will be overv...
Video monitoring of visible atmospheric emissions: from a manual device to a new fully automatic detection and classification device; Video surveillance des rejets atmospheriques d'un site siderurgique: d'un systeme manuel a la detection automatique
Bardet, I.; Ryckelynck, F.; Desmonts, T. [Sollac, 59 - Dunkerque (France)
Complete text of publication follows: the context of strong local sensitivity to dust emissions from an integrated steel plant justifies the monitoring of emissions of abnormally coloured smoke from this plant. In a first step, the watch was done 'visually' by screening and counting puff emissions through a set of seven cameras and video recorders. The development of a new device performing automatic picture analysis made it possible to automate the inspection. The new system detects and counts incidents and sends an alarm to the process operator. After some tests, this automatic detection approach can be extended to other uses in the environmental field. (authors)
Full Text Available Video streaming over the Internet has gained significant popularity during the last years, and academia and industry have made a great research effort in this direction. In this scenario, scalable video coding (SVC) has emerged as an important video standard to provide more functionality to video transmission and storage applications. This paper proposes and evaluates two strategies based on scalable video coding for P2P video streaming services. In the first strategy, SVC is used to offer differentiated quality video to peers with heterogeneous capacities. The second strategy uses SVC to reach a homogeneous video quality between different videos from different sources. The obtained results show that our proposed strategies enable a system to improve its performance and introduce benefits such as differentiated video quality for clients with heterogeneous capacities and variable network conditions.
Full Text Available Background and objective In recent years, the Da Vinci robot system applied in the surgical treatment of intrathoracic mediastinal diseases has become more mature. The aim of this study is to summarize the clinical data about mediastinal lesions of the General Hospital of Shenyang Military Region in the past 4 years, and to analyze the treatment effect and promising applications of the da Vinci robot system in the surgical treatment of mediastinal lesions. Methods 203 cases of mediastinal lesions were collected from the General Hospital of Shenyang Military Region between 2010 and 2013. These patients were divided into two groups, da Vinci and video-assisted thoracoscopic surgery (VATS), according to the selection of the treatments. The operation time, intraoperative blood loss, postoperative drainage amount within three days after surgery, the period of bearing drainage tubes, hospital stay and hospitalization expense were then compared. Results All patients were successfully operated, postoperative recovery was good and there was no perioperative death. The operation time was 82 (20-320) min in the robot group and 89 (35-360) min in the thoracoscopic group (P>0.05). The intraoperative blood loss was 10 (1-100) mL in the robot group and 50 (3-1,500) mL in the thoracoscopic group. The postoperative drainage amount within three days after surgery was 215 (0-2,220) mL in the robot group and 350 (50-1,810) mL in the thoracoscopic group. The period of bearing drainage tubes after surgery was 3 (0-10) d in the robot group and 5 (1-18) d in the thoracoscopic group. The hospital stay was 7 (2-15) d in the robot group and 9 (2-50) d in the thoracoscopic group. The hospitalization expense was (18,983.6±4,461.2) RMB in the robot group and (9,351.9±2,076.3) RMB in the thoracoscopic group (all P<0.001). Conclusion The da Vinci robot system is safe and efficient in the treatment of mediastinal lesions compared with video
van Houten, Ynze; Schuurman, Jan Gerrit; Verhagen, Pleunes Willem; Enser, Peter; Kompatsiaris, Yiannis; O’Connor, Noel E.; Smeaton, Alan F.; Smeulders, Arnold W.M.
With information systems, the real design problem is not increased access to information, but greater efficiency in finding useful information. In our approach to video content browsing, we try to match the browsing environment with human information processing structures by applying ideas from
Beheshti, Mobina; Taspolat, Ata; Kaya, Omer Sami; Sapanca, Hamza Fatih
Nowadays, video plays a significant role in education in terms of its integration into traditional classes, its use as the principal delivery system of information, particularly in online courses, as well as its serving as a foundation of many blended classes. Hence, education is adopting a modern approach of instruction with the target of moving away…
De Laat, PB
According to David Teece, only strong and integrated firms can successfully innovate in a systemic fashion. Looser coalitions consisting of joint ventures, alliances, or virtual partners will not be able to create a systemic innovation, let alone to set standards for it, or to control its further
Maryland State Dept. of Education, Baltimore. School Facilities Branch.
Telecommunications infrastructure has the dual challenges of maintaining quality while accommodating change, issues that have long been met through a series of implementation standards. This document is designed to ensure that telecommunications systems within the Maryland public school system are also capable of meeting both challenges and…
... by computer simulations, with/without supplementary gyro and GPS. How various system parameters impact the achievable precision of panoramic system in 3-D terrain feature localization and UAV motion estimation is determined for the A=0.5-2 KM...
Structural health monitoring (SHM) has become a viable tool to provide owners of structures and mechanical systems with quantitative and objective data for maintenance and repair. Traditionally, discrete contact sensors such as strain gages or accelerometers have been used for SHM. However, distributed remote sensors could be advantageous since they don't require cabling and can cover an area rather than a limited number of discrete points. Along this line, we propose a novel monitoring methodology based on video analysis. By employing commercially available digital cameras combined with efficient signal processing methods, we can measure and compute the fundamental frequency of vibration of structural systems. The basic concept is that small changes in the intensity value of a monitored pixel with fixed coordinates, caused by the vibration of structures, can be captured by employing techniques such as the Fast Fourier Transform (FFT). In this paper we introduce the basic concept and mathematical theory of this proposed so-called virtual visual sensor (VVS), present a set of initial laboratory experiments to demonstrate the accuracy of the approach, and provide a practical monitoring example of an in-service bridge. Finally, we discuss further work to improve the current methodology.
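The VVS concept above reduces to tracking one pixel's intensity over time and picking the dominant spectral peak. A minimal sketch of this idea in Python (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def dominant_frequency(intensity, fs):
    """Estimate the dominant vibration frequency (Hz) from a pixel's
    intensity time series sampled at fs frames per second."""
    x = np.asarray(intensity, dtype=float)
    x = x - x.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero-frequency bin

# Synthetic example: a pixel flickering at 5 Hz, sampled at 60 fps
fs = 60.0
t = np.arange(0, 10, 1.0 / fs)
pixel = 128 + 10 * np.sin(2 * np.pi * 5.0 * t)
print(dominant_frequency(pixel, fs))  # 5.0
```

In practice the camera frame rate bounds the highest detectable frequency (Nyquist), which is why the paper targets the fundamental mode of large structures.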
Michael B. McCamy
Our eyes are in continuous motion. Even when we attempt to fix our gaze, we produce so-called "fixational eye movements", which include microsaccades, drift, and ocular microtremor (OMT). Microsaccades, the largest and fastest type of fixational eye movement, shift the retinal image from several dozen to several hundred photoreceptors and have physical characteristics equivalent to saccades, only on a smaller scale (Martinez-Conde, Otero-Millan & Macknik, 2013). OMT occurs simultaneously with drift and is the smallest of the fixational eye movements (∼1 photoreceptor width, >0.5 arcmin), with dominant frequencies ranging from 70 Hz to 103 Hz (Martinez-Conde, Macknik & Hubel, 2004). Due to OMT's small amplitude and high frequency, the most accurate and stringent way to record it is the piezoelectric transduction method. Thus, OMT studies are far rarer than those focusing on microsaccades or drift. Here we conducted simultaneous recordings of OMT and microsaccades with a piezoelectric device and a commercial infrared video tracking system. We set out to determine whether OMT could help to restore perceptually faded targets during attempted fixation, and we also wondered whether the piezoelectric sensor could affect the characteristics of microsaccades. Our results showed that microsaccades, but not OMT, counteracted perceptual fading. We moreover found that the piezoelectric sensor affected microsaccades in a complex way, and that the oculomotor system adjusted to the stress brought on by the sensor by adjusting the magnitudes of microsaccades.
This paper describes a master-slave visual surveillance system that uses stationary-dynamic camera assemblies to achieve a wide field of view and selective focus of interest. In this system, the fish-eye panoramic camera is capable of monitoring a large area, while the PTZ dome camera has high mobility and zoom ability. In order to achieve precise interaction, prior spatial calibration between these two cameras is required. This paper introduces a novel calibration approach that automatically calculates a transformation matrix model between the two coordinate systems by matching feature points. In addition, a distortion correction method based on the Midpoint Circle Algorithm is proposed to handle the obvious horizontal distortion in the captured panoramic image. Experimental results using realistic scenes have demonstrated the efficiency and applicability of the system for real-time surveillance.
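The calibration step amounts to estimating a transformation between the two cameras' coordinate systems from matched feature points. As a simplified stand-in for the paper's matrix model (which must also handle fish-eye distortion), here is a least-squares affine fit; all function names are my own:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src -> dst.
    src, dst: (N, 2) arrays of matched feature points (N >= 3)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])  # homogeneous [x y 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)   # (3, 2) parameter block
    return M.T                                     # (2, 3) affine matrix

def apply_affine(M, pts):
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]

# Synthetic correspondences: a rotation plus translation between views
rng = np.random.default_rng(42)
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
src = rng.random((20, 2)) * 100
dst = src @ R.T + np.array([5.0, -3.0])
M = fit_affine(src, dst)
print(np.allclose(apply_affine(M, src), dst))  # True
```

A full panoramic-to-PTZ mapping would typically use a homography or a lens-specific model rather than a plain affine transform; the least-squares structure is the same.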
[…] implementation. The system currently has a bug in that there is no synchronisation between the input frames and the tracked objects reported for each frame (due to a bug in the third-party MPEG decoder). It was therefore necessary to synchronise the reporting with the input frames by hand […] algorithms for our VMTI system. References: 1. S. Ali and M. Shah, "COCOA - tracking in aerial imagery", Proc. Int. Conf. on Computer Vision, Beijing, China.
This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real-World Videos. Topics include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition.
The highly efficient and robust stitching of aerial video captured by unmanned aerial vehicles (UAVs) is a challenging problem in the field of robot vision. Existing commercial image stitching systems have seen success with offline stitching tasks, but they cannot guarantee high-speed performance when dealing with online aerial video sequences. In this paper, we present a novel system with the unique ability to stitch high-frame-rate aerial video at a speed of 150 frames per second (FPS). In addition, rather than using a high-speed vision platform such as an FPGA or CUDA, our system runs on a normal personal computer. To achieve this, after careful comparison of existing invariant features, we choose the FAST corner and a binary descriptor for efficient feature extraction and representation, and present a spatial and temporal coherent filter that fuses UAV motion information into the feature matching. The proposed filter can remove the majority of feature correspondence outliers and increase the speed of robust feature matching by up to 20 times. To achieve a balance between robustness and efficiency, a dynamic key-frame-based stitching framework is used to reduce accumulation errors. Extensive experiments on challenging UAV datasets demonstrate that our approach can break through the speed limitation and generate accurate stitched images for aerial video stitching tasks.
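The spatial-temporal coherent filter is not specified in detail here, but its effect, rejecting matches inconsistent with the predicted UAV motion, can be sketched as a simple distance gate (a simplification of the paper's method; names are illustrative):

```python
import numpy as np

def motion_coherent_filter(prev_pts, curr_pts, predicted_shift, radius=5.0):
    """Keep only feature matches consistent with the predicted inter-frame
    motion (e.g. extrapolated from the previous frame): a match survives if
    the displacement it implies lies within `radius` pixels of the prediction."""
    disp = np.asarray(curr_pts, float) - np.asarray(prev_pts, float)
    err = np.linalg.norm(disp - np.asarray(predicted_shift, float), axis=1)
    return err < radius  # boolean inlier mask

# Example: camera translating ~(10, 0) px/frame, two bad matches injected
rng = np.random.default_rng(0)
prev_pts = rng.random((50, 2)) * 640
curr_pts = prev_pts + np.array([10.0, 0.0]) + rng.normal(0, 0.5, (50, 2))
curr_pts[:2] += np.array([80.0, -40.0])  # outlier correspondences
mask = motion_coherent_filter(prev_pts, curr_pts, [10.0, 0.0])
print(mask.sum())  # 48 inliers out of 50
```

Gating matches this way before robust estimation (e.g. RANSAC) is what makes the large speedup plausible: most outliers never reach the expensive stage.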
Helen Gail Prosser
Northern Lakes College in north-central Alberta is the first post-secondary institution in Canada to use the Media on Demand digital video system to stream large video files between dispersed locations (Karlsen). Staff and students at distant locations of Northern Lakes College are now viewing more than 350 videos using video streaming technology. This has been made possible by SuperNet, a high capacity broadband network that connects schools, hospitals, libraries and government offices thr...
Shooting free throws plays an important role in basketball. The major problem in performing a correct free throw seems to be inappropriate training: training is performed offline and is often not persistent. The aim of this paper is to consciously modify and control the free throw using biofeedback. Elbow and shoulder dynamics are calculated by an image processing technique coupled with a video image acquisition system. The proposed setup, named the learning control system, is able to quantify these parameters and provide feedback in real time as audio signals, enabling correct learning and conscious control of shooting. Experimental results showed improvements in free throw shooting style, including the shot pocket and locked position. The mean values of the elbow and shoulder angles were controlled at approximately 89° and 26° for the shot pocket, and these angles were tuned to approximately 180° and 47°, respectively, for the locked position (close to the desired pattern of the free throw based on valid FIBA references). Not only did the mean values improve, but the standard deviations of these angles also decreased meaningfully, which shows shooting-style convergence and uniformity. Also, in training conditions, the average percentage of successful free throws increased from about 64% to 87% after using this setup, and in competition conditions the average percentage of successful free throws increased by about 20%, although using the learning control system may not be the only reason for these outcomes. The proposed system is easy to use, inexpensive, portable and applicable in real time.
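Extracting an elbow or shoulder angle from tracked 2-D joint coordinates is a straightforward vector computation; a minimal sketch (not the paper's implementation):

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by points a-b-c, e.g. the elbow
    angle from shoulder (a), elbow (b) and wrist (c) image coordinates."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# Fully extended arm (the locked position should approach 180 degrees)
print(joint_angle((0, 0), (1, 0), (2, 0)))  # 180.0
# Right-angle bend
print(joint_angle((0, 1), (0, 0), (1, 0)))  # 90.0
```

The audio biofeedback described in the paper would then map the deviation of these angles from the target values (e.g. 180° and 47° at lock) onto a sound cue.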
Lee, Hyun Jeong; Yea, Ji Woon; Oh, Se An
Respiratory-gated radiation therapy (RGRT) has been used to minimize the dose to normal tissue in lung-cancer radiotherapy. The present research aims to improve the regularity of respiration in RGRT by using a video-coached respiration guiding system. In the study, 16 patients with lung cancer were evaluated. The respiration signals of the patients were measured using a real-time position management (RPM) respiratory gating system (Varian, USA), and the patients were trained using the video-coaching respiration guiding system. The patients performed free breathing and guided breathing, and the respiratory cycles were acquired for ~5 min. Microsoft Excel 2010 was then used to calculate the mean and the standard deviation for each phase. The standard deviation was computed in order to analyze the improvement in respiratory regularity with respect to the period and the displacement. For patient 6, the standard deviation of the guided breathing decreased to 48.8% in the inhale peak and 24.2% in the exhale peak compared with the values for free breathing. The standard deviation of the respiratory cycle was found to decrease when the respiratory guiding system was used. The respiratory regularity was significantly improved when using the video-coaching respiration guiding system. Therefore, the system is useful for improving the accuracy and the efficiency of RGRT.
Recently there has been growing interest in the study of creative writing, and a number of approaches for teaching it have been investigated. However, studies investigating creative writing, particularly for primary school students, are hard to find. The aim of the present research is to examine how the conferencing approach is applied to teach poetry writing and to find out the impact of this approach on students' writing skills. The study used classroom action research with 30 sixth-grade students as participants. To ensure the present approach effectively improves learning achievement, the study used three cycles of teaching steps: classical, group, and individual. Various media and sources to support the learning activities were also used. The results of the study show a significant improvement in students' writing skills, with the average score of the third cycle twice as high as that of the first cycle. This suggests that conferencing instruction was successful in improving students' writing skills. The process of interaction, both among students and between students and teachers, was also emphasized. In addition, the teachers gained experience in assessing poetry writing analytically using four aspects: creative idea, diction, information, and imagination.
Roth, Susan King
In winter of 1993, a design research project was conducted in the Department of Interior Design at Ohio State University by interdisciplinary teams of graduate students from Industrial Design, Industrial Systems and Engineering, Marketing, and Communication. It was, in effect, a course which aimed to apply knowledge from the students' diverse…
Vision is part of an information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. It is hard to split the entire system apart, and vision mechanisms cannot be completely understood separately from informational processes related to knowledge and intelligence. The brain reduces informational and computational complexity using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Vision is a component of situation awareness, motion and planning systems. Foveal vision provides semantic analysis, recognizing objects in the scene. Peripheral vision guides the fovea to salient objects and provides scene context. Biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, converts visual information into relational Network-Symbolic structures, avoiding precise artificial computations of 3-D models. Network-Symbolic transformations derive more abstract structures that allow for invariant recognition of an object as an exemplar of a class and for reliable identification even if the object is occluded. Systems with such smart vision will be able to navigate in real environments and understand real-world situations.
Vision evolved as a sensory system for reaching, grasping and other motion activities. In advanced creatures, it has become a vital component of situation awareness, navigation and planning systems. Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. It is hard to split such a system apart. Biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible for natural processing of visual information. It converts visual information into relational Network-Symbolic models, avoiding artificial precise computations of 3-dimensional models. The logic of visual scenes can be captured in such models and used for disambiguation of visual information. Network-Symbolic transformations derive abstract structures, which allow for invariant recognition of an object as an exemplar of a class. Active vision helps create unambiguous network-symbolic models. This approach is consistent with NIST RCS. A UGV equipped with such smart vision will be able to plan paths and navigate in a real environment, perceive and understand complex real-world situations and act accordingly.
The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264-encoded video for all content types, combining parameters from both the physical and application layers over Universal Mobile Telecommunication System (UMTS) networks. In order to characterize the Quality of Service (QoS) level, a learning model based on an Adaptive Neural Fuzzy Inference System (ANFIS) and a second model based on non-linear regression analysis are proposed to predict video quality in terms of the Mean Opinion Score (MOS). The objective of the paper is two-fold: first, to find the impact of QoS parameters on end-to-end video quality for H.264-encoded video; second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks by considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both models are trained with a combination of physical- and application-layer parameters and validated with an unseen dataset. Preliminary results show that good prediction accuracy was obtained from both models. The work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.
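A non-linear regression model of this kind can be prototyped as an ordinary least-squares fit over transformed QoS features; the features and coefficients below are synthetic illustrations, not the paper's actual parameter set:

```python
import numpy as np

# Hypothetical QoS features (not the paper's exact parameters):
# x1 = sender bitrate (kbps), x2 = block error rate, x3 = frame rate (fps).
# Fit a model nonlinear in the raw features: MOS ~ a + b*ln(x1) + c*x2 + d*x3.
rng = np.random.default_rng(1)
x1 = rng.uniform(64, 512, 200)
x2 = rng.uniform(0.0, 0.2, 200)
x3 = rng.uniform(5, 30, 200)
mos = 1.0 + 0.55 * np.log(x1) - 8.0 * x2 + 0.02 * x3 + rng.normal(0, 0.05, 200)

# Design matrix with the log-transformed bitrate column
X = np.column_stack([np.ones_like(x1), np.log(x1), x2, x3])
coef, *_ = np.linalg.lstsq(X, mos, rcond=None)
pred = X @ coef
print(np.round(coef, 2))                     # ≈ [1.0, 0.55, -8.0, 0.02]
print(np.corrcoef(mos, pred)[0, 1] > 0.99)   # True
```

A model like this is "reference-free" in the sense the paper intends: it predicts MOS from transmission parameters alone, without access to the original video.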
Duncan, Susan Hanley; Dickie, Ida
The notion that everyone who is impacted by a crime has an investment in the process of how the offender is dealt with is gaining acceptance in diverse contexts around the world. This notion, called restorative justice, is an approach that brings together the offender and individuals impacted by the offender's behavior in a problem-solving process…
Spin-Neto, Rubens; Matzen, Louise H; Schropp, Lars; Gotfredsen, Erik; Wenzel, Ann
To compare video observation (VO) with a novel three-dimensional registration method, based on an accelerometer-gyroscope (AG) system, to detect patient movement during CBCT examination. The movements were further analyzed according to complexity and patient age. In 181 patients (118 females/63 males; age average 30 years, range: 9-84 years), 206 CBCT examinations were performed, which were video-recorded during examination. An AG was, at the same time, attached to the patient head to track head position in three dimensions. Three observers scored patient movement (yes/no) by VO. AG provided movement data on the x-, y- and z-axes. Thresholds for AG-based registration were defined at 0.5, 1, 2, 3 and 4 mm (movement distance). Movement detected by VO was compared with that registered by AG, according to movement complexity (uniplanar vs multiplanar, as defined by AG) and patient age (≤15, 16-30 and ≥31 years). According to AG, movement ≥0.5 mm was present in 160 (77.7%) examinations. According to VO, movement was present in 46 (22.3%) examinations. One VO-detected movement was not registered by AG. Overall, VO did not detect 71.9% of the movements registered by AG at the 0.5-mm threshold. At a movement distance ≥4 mm, 20% of the AG-registered movements were not detected by VO. Multiplanar movements such as lateral head rotation (72.1%) and nodding/swallowing (52.6%) were more often detected by VO in comparison with uniplanar movements, such as head lifting (33.6%) and anteroposterior translation (35.6%), at the 0.5-mm threshold. The prevalence of patients who move was highest in patients younger than 16 years (64.3% for VO and 92.3% for AG-based registration at the 0.5-mm threshold). AG-based movement registration resulted in a higher prevalence of patient movement during CBCT examination than VO-based registration. Also, AG-registered multiplanar movements were more frequently detected by VO than uniplanar movements. The prevalence of patients who move
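The AG-based registration applies distance thresholds to a 3-D movement track; a minimal sketch of that thresholding (the movement-distance definition is my assumption, not taken from the paper):

```python
import numpy as np

def detect_movement(xyz, thresholds=(0.5, 1, 2, 3, 4)):
    """Given head-displacement samples (N, 3) in mm from an
    accelerometer-gyroscope track, report for each threshold whether
    any sample's 3-D movement distance reaches it."""
    dist = np.linalg.norm(np.asarray(xyz, float), axis=1)
    peak = dist.max()
    return {t: bool(peak >= t) for t in thresholds}

# A track with a brief ~1.3 mm multiplanar movement and tiny jitter elsewhere
track = np.zeros((100, 3))
track[40:45] = [0.0, 0.8, 0.9]   # short nodding/swallowing-like excursion
track += 0.05                    # sub-threshold baseline offset
print(detect_movement(track))
# {0.5: True, 1: True, 2: False, 3: False, 4: False}
```

This mirrors the study's finding qualitatively: small movements trip the 0.5-mm threshold long before they become visible to a human observer on video.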
This paper presents the transmission of a Digital Video Broadcasting system with streaming video at 640x480 resolution under different IQ rates and modulations. In video transmission, distortion often occurs, so the received video has bad quality. Key-frame selection algorithms are flexible with respect to changes in the video, but with these methods the temporal information of a video sequence is omitted. To minimize distortion between the original video and the received video, we added a methodology using a sequential distortion minimization algorithm. Its aim was to create a new video, better than the received video and without significant loss of content relative to the original, corrected sequentially. The reliability of video transmission was observed based on a constellation diagram, with the best result at an IQ rate of 2 MHz and 8-QAM modulation. Video transmission was also investigated with and without SEDIM (Sequential Distortion Minimization). The experimental results showed that the average PSNR (Peak Signal-to-Noise Ratio) of video transmission using SEDIM increased from 19.855 dB to 48.386 dB and the average SSIM (Structural Similarity) increased by 10.49%. The experimental results and comparison of the proposed method showed good performance. A USRP board was used as the RF front-end at 2.2 GHz.
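PSNR, the quality metric reported above, compares received frames against the originals; a standard implementation (the SEDIM algorithm itself is not reproduced here):

```python
import numpy as np

def psnr(original, received, peak=255.0):
    """Peak Signal-to-Noise Ratio (dB) between two frames of equal shape."""
    original = np.asarray(original, float)
    received = np.asarray(received, float)
    mse = np.mean((original - received) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# A frame and a mildly corrupted copy
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 640)).astype(float)
noisy = np.clip(frame + rng.normal(0, 2.0, frame.shape), 0, 255)
print(round(psnr(frame, noisy), 1))  # ≈ 42 dB
```

The jump the paper reports (≈20 dB to ≈48 dB) corresponds to the mean squared error dropping by several orders of magnitude, since PSNR is logarithmic in MSE.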
Wax, David B; Hill, Bryan; Levin, Matthew A
Medical hardware and software device interoperability standards are not uniform. The result of this lack of standardization is that information available on clinical devices may not be readily or freely available for import into other systems for research, decision support, or other purposes. We developed a novel system to import discrete data from an anesthesia machine ventilator by capturing images of the graphical display screen and using image processing to extract the data with off-the-shelf hardware and open-source software. We were able to successfully capture and verify live ventilator data from anesthesia machines in multiple operating rooms and store the discrete data in a relational database at a substantially lower cost than vendor-sourced solutions.
In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of a road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios.
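The paper does not spell out its background update algorithm; a common adaptive scheme it could resemble is an exponential running average with differencing against the current frame (a hedged sketch, names my own):

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Adaptive background update: exponential running average of frames.
    alpha controls how quickly the model absorbs scene changes."""
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=25.0):
    """Pixels differing from the background by more than `threshold`
    are flagged as moving objects (e.g. vehicles crossing a virtual loop)."""
    return np.abs(np.asarray(frame, float) - background) > threshold

# Static road, then a bright "vehicle" patch enters the scene
bg = np.full((120, 160), 80.0)
frame = bg.copy()
frame[50:70, 40:80] = 200.0                # 20 x 40 pixel vehicle
mask = foreground_mask(bg, frame)
print(mask.sum())                          # 800 pixels flagged
bg = update_background(bg, frame)          # background slowly absorbs change
```

Virtual-loop counting then amounts to watching the mask inside a fixed region of interest and incrementing a counter on each entry/exit transition.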
Abstract: We have developed an automated physiological data-organizing and information-summary system (Critical Care Air
Frisoli, M; Edelhoff, J M; Gersdorff, N; Nicolet, J; Braidot, A; Engelke, W
This study provides a direct comparison between two registration systems used in quantifying mandibular opening movements: two-dimensional videography and electronic axiography, which is used as a reference. A total of 32 volunteers (age: 27.2 ± 6.8 years; gender: 17 F, 15 M) participated in the study and repeated a characteristic movement, the frontal Posselt, used in the clinical evaluation of the temporomandibular joint. Frontal Posselt diagrams were reconstructed from the data gathered with both systems, which yielded acceptably similar results. Three commonly assessed parameters were obtained from each diagram and compared: maximum opening, right laterotrusion and left laterotrusion. Both descriptive statistics and the ANOVA test suggested that there was no significant difference between the estimated maximum opening parameter and the reference system (p = 0.217, 95% confidence). Laterotrusion values, on the other hand, appear to be overestimated by the videography system and to show greater variability. Two-dimensional videography appears to be a suitable tool with resolution that is adequate for tracing mandibular movements, and opening values in particular, for screening purposes, long-term observation, and as a quick check for dysfunction as far as frontal-plane trajectories are concerned. The reliability and acceptable quality of the 2D videography data acquired in this work show that it has clear advantages for wide application in the dental office, due to its simplicity and low cost for maximum opening measurement, given the usefulness of this parameter in the detection of temporomandibular disorders.
Boozer, G. A.; Mckibbin, D. D.; Haas, M. R.; Erickson, E. F.
This simulator was created so that C-141 Kuiper Airborne Observatory investigators could test their Airborne Data Acquisition and Management System software on a system which is generally more accessible than the ADAMS on the plane. An investigator can currently test most of his data acquisition program using the data computer simulator in the Cave. (The Cave refers to the ground-based computer facilities for the KAO and the associated support personnel.) The main Cave computer is interfaced to the data computer simulator in order to simulate the data-Exec computer communications. Until now, however, there has been no way to test the data computer interface to the tracker. The simulator described here simulates both the KAO Exec and tracker computers with software which runs on the same Hewlett-Packard (HP) computer as the investigator's data acquisition program. A simulator control box is hardwired to the computer to provide monitoring of tracker functions, to provide an operator panel similar to the real tracker, and to simulate the 180 deg phase shifting of the chopper square-wave reference with beam switching. If run in the Cave, one can use the Cave's Exec simulator and this tracker simulator.
Micro-expressions play an essential part in understanding non-verbal communication and deceit detection. They are involuntary, brief facial movements shown when a person is trying to conceal something. Automatic analysis of micro-expressions is challenging due to their low amplitude and short duration (they occur as fast as 1/15 to 1/25 of a second). We propose a full micro-expression analysis system consisting of a high-speed image acquisition setup and a software framework which can detect the frames in which micro-expressions occurred as well as determine the type of the emerged expression. The detection and classification methods use fast and simple motion descriptors based on absolute image differences. The recognition module only involves the computation of several 2D Gaussian probabilities. The software framework was tested on two publicly available high-speed micro-expression databases, and the whole system was used to acquire new data. The experiments we performed show that our solution outperforms state-of-the-art works which use more complex and computationally intensive descriptors.
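An absolute-image-difference motion descriptor of the kind described can be sketched as per-cell sums of inter-frame differences (an illustrative simplification, not the authors' exact descriptor):

```python
import numpy as np

def abs_diff_descriptor(frames, grid=(4, 4)):
    """Motion descriptor from absolute inter-frame differences: for each
    consecutive frame pair, sum |f[t+1] - f[t]| inside each cell of a
    coarse grid, yielding one low-dimensional vector per transition."""
    frames = np.asarray(frames, float)
    diffs = np.abs(np.diff(frames, axis=0))            # (T-1, H, W)
    gh, gw = grid
    t, h, w = diffs.shape
    cells = diffs[:, : h - h % gh, : w - w % gw]       # crop to grid multiple
    cells = cells.reshape(t, gh, h // gh, gw, w // gw)
    return cells.sum(axis=(2, 4)).reshape(t, gh * gw)  # (T-1, cells)

# 10 frames of a static face with a brief twitch in the top-left cell
frames = np.zeros((10, 64, 64))
frames[4:6, 2:10, 2:10] = 50.0   # expression onset and offset
desc = abs_diff_descriptor(frames)
peaks = np.argsort(desc[:, 0])[-2:]
print(sorted(peaks.tolist()))    # [3, 5], the onset and offset transitions
```

In the paper, vectors like these would then feed the 2D-Gaussian-based detection/classification stage; the point of the design is that differencing and summing are cheap enough for high-speed video.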
Buhl, Mie; Ørngreen, Rikke; Levinsen, Karin
Teaching performance in performative arts – video conference at the highest level of music education. Mie Buhl, Rikke Ørngreen, Karin Levinsen, Aalborg University, KILD – Communication, IT and Learning Design & ILD – IT and Learning Design. Video Conferencing (VC) is becoming an increasingly common teaching practice in Danish higher education. As the use of VC becomes more common, challenges emerge that affect the participants' experience of space and time – also called telepresence (Draper 1998). The notion of telepresence exposes how the spatial and temporal processes of which the teaching […] in a virtual room set apart in physical rooms (what we identify as the third room). The music teacher must find new ways of facilitating the performative aspects of practising music. A teaching practice of narration, metaphors and dramatization appears to be an effective mode of helping the student to play…
Asscher, Jessica J.; Dijkstra, Sharon; Stams, Geert Jan J. M.; Dekovic, Maja; Creemers, Hanneke E.
Background: The model of Family group-conferencing (FG-c) for decision making in child welfare has rapidly spread over the world during the past decades. Its popularity is likely to be caused by its philosophy, emphasizing participation and autonomy of families, rather than based on positive
Metze, R.N.; Kwekkeboom, R.H.; Abma, T.A.
Aim: Family Group Conferencing (FGC), a model in which a person and his or her social network make their own 'care' plan, is used in youth care and might also be useful in elderly care to support older persons living at home. In Amsterdam, the Netherlands, FGC was implemented for older adults but
Russo, Paolo; Gualdi-Russo, Emanuela; Pellegrinelli, Alberto; Balboni, Juri; Furini, Alessio
Using an interdisciplinary approach, the authors demonstrate the possibility of obtaining reliable anthropometric data on a subject by means of a new video surveillance system. In general, the use of current video surveillance systems provides law enforcement with useful data to solve many crimes. Unfortunately, the quality of the images and the way in which they are taken often make it very difficult to judge the compatibility between suspect and perpetrator. In this paper, the authors present the results obtained with a low-cost photogrammetric video surveillance system based on a pair of common surveillance cameras synchronized with each other. The innovative aspect of the system is that it allows estimation with considerable accuracy not only of body height (error 0.1-3.1 cm, SD 1.8-4.5 cm) but also of other anthropometric characters of the subject, consequently allowing better determination of the biological profile and greatly increased effectiveness of the judgment of compatibility.
Arriaga, Patrícia; Esteves, Francisco; Carneiro, Paula; Monteiro, Maria Benedicta
This study was conducted to analyze the short-term effects of violent electronic games, played with or without a virtual reality (VR) device, on the instigation of aggressive behavior. Physiological arousal (heart rate, HR), priming of aggressive thoughts, and state hostility were also measured to test their possible mediation of the relationship between playing the violent game (VG) and aggression. The participants, 148 undergraduate students, were randomly assigned to four treatment conditions: two groups played a violent computer game (Unreal Tournament) and the other two a non-violent game (Motocross Madness), half with a VR device and the remaining participants on the computer screen. In order to assess the game effects, the following instruments were used: a BIOPAC System MP100 to measure HR, an Emotional Stroop task to analyze the priming of aggressive and fear thoughts, a self-report State Hostility Scale to measure hostility, and a competitive reaction-time task to assess aggressive behavior. The main results indicated that the violent computer game had effects on state hostility and aggression. Although no significant mediation effect could be detected, regression analyses showed an indirect effect of state hostility between playing a VG and aggression.
Saavedra M., Antonio; Pezoa, Jorge E.; Zarkesh-Ha, Payman; Figueroa, Miguel
We propose a platform for robust face recognition in infrared (IR) images using Compressive Sensing (CS). In line with CS theory, the classification problem is solved using a sparse representation framework, in which test images are modeled by means of a linear combination of the training set. Because the training set constitutes an over-complete dictionary, we identify new images by finding their sparsest representation based on the training set, using standard l1-minimization algorithms. Unlike conventional face-recognition algorithms, feature extraction is performed using random projections with a precomputed binary matrix, as proposed in the CS literature. This random sampling reduces the effects of noise and occlusions such as facial hair, eyeglasses, and disguises, which are notoriously challenging in IR images. Thus, the performance of our framework is robust to these noise and occlusion factors, achieving an average accuracy of approximately 90% when the UCHThermalFace database is used for training and testing purposes. We implemented our framework on a high-performance embedded digital system, where the computation of the sparse representation of IR images was performed by dedicated hardware using a deeply pipelined architecture on a Field-Programmable Gate Array (FPGA).
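Sparse-representation classification with random binary projections can be sketched as follows; note that the l1-minimization step is approximated here by greedy Orthogonal Matching Pursuit, and all data and names are synthetic illustrations rather than the authors' pipeline:

```python
import numpy as np

def omp(D, y, n_nonzero=5):
    """Greedy Orthogonal Matching Pursuit: sparse code x with D @ x ~= y."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def src_classify(train, labels, test, proj, n_nonzero=5):
    """Sparse-representation classification: project images with a random
    binary matrix, code the test sample over the training dictionary, and
    pick the class whose atoms yield the smallest reconstruction residual."""
    D = proj @ train                       # compressed dictionary (columns = images)
    D = D / np.linalg.norm(D, axis=0)
    y = proj @ test
    y = y / np.linalg.norm(y)
    x = omp(D, y, n_nonzero)
    residuals = {}
    for c in set(labels):
        xc = np.where(np.array(labels) == c, x, 0.0)  # keep class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ xc)
    return min(residuals, key=residuals.get)

# Toy data: two "classes" of 100-pixel images, random binary projection to 30-D
rng = np.random.default_rng(0)
base = {0: rng.normal(size=100), 1: rng.normal(size=100)}
train = np.column_stack([base[c] + 0.1 * rng.normal(size=100)
                         for c in [0, 0, 0, 1, 1, 1]])
labels = [0, 0, 0, 1, 1, 1]
proj = rng.integers(0, 2, size=(30, 100)).astype(float)
test = base[1] + 0.1 * rng.normal(size=100)
print(src_classify(train, labels, test, proj))  # 1
```

The residual-per-class decision rule is what makes the scheme tolerant of occlusion: atoms from the wrong class simply fail to reconstruct the projected test image well.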