WorldWideScience

Sample records for modeling scheme video

  1. Effective Quality-of-Service Renegotiating Schemes for Streaming Video

    Directory of Open Access Journals (Sweden)

    Song Hwangjun

    2004-01-01

    Full Text Available This paper presents effective quality-of-service renegotiating schemes for streaming video. A conventional network supporting quality of service generally allows negotiation only at call setup. However, this is inefficient for video applications, since compressed video traffic is statistically nonstationary. Thus, we consider a network supporting quality-of-service renegotiation during data transmission and study effective quality-of-service renegotiating schemes for streaming video. The token bucket model, whose parameters are the token filling rate and the token bucket size, is adopted as the video traffic model. The renegotiating time instants and the parameters are determined by analyzing the statistical information of the compressed video traffic. Two renegotiating approaches are examined: the fixed renegotiating interval case and the variable renegotiating interval case. Finally, experimental results are provided to show the performance of the proposed schemes.
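
    The token bucket model named in this abstract can be sketched as a simple conformance check: tokens accumulate at the filling rate up to the bucket size, and each frame must be covered by the tokens on hand. The frame interval and parameter values below are illustrative, not taken from the paper.

    ```python
    def conforms(frame_bits, rate, bucket_size, interval=1 / 30):
        """Check a sequence of frame sizes against a token bucket.

        Tokens accumulate at `rate` bits/s, capped at `bucket_size`;
        a stream conforms if every frame fits in the available tokens.
        """
        tokens = bucket_size
        for bits in frame_bits:
            tokens = min(bucket_size, tokens + rate * interval)
            if bits > tokens:
                return False
            tokens -= bits
        return True
    ```

    A renegotiation scheme would re-run such a check with candidate (rate, bucket size) pairs over each interval of the traffic trace.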

  2. VBR video traffic models

    CERN Document Server

    Tanwir, Savera

    2014-01-01

    There has been a phenomenal growth in video applications over the past few years. An accurate traffic model of Variable Bit Rate (VBR) video is necessary for performance evaluation of a network design and for generating synthetic traffic that can be used for benchmarking a network. A large number of models for VBR video traffic have been proposed in the literature for different types of video in the past 20 years. Here, the authors have classified and surveyed these models and have also evaluated the models for H.264 AVC and MVC encoded video and discussed their findings.

  3. Efficient Video Streaming Scheme for Next Generations of Mobile Networks

    Directory of Open Access Journals (Sweden)

    Majdi Ashibani

    2005-04-01

    Full Text Available Video streaming over next generations of mobile networks has undergone enormous development recently due to the continuing growth in wireless communication, especially since the emergence of 3G wireless networks. The new generations of wireless networks pose many challenges, including supporting quality of service over wireless communication links. This is due to the time-varying characteristics of wireless channel. Therefore, a more flexible and efficient bandwidth allocation scheme is needed. This paper is a part of ongoing work to come up with a more robust scheme that is capable of rapidly adapting to changes in network conditions. The proposed scheme focuses on the wireless part of the network, providing high quality video service and better utilization of network resources.

  4. Rate control scheme for consistent video quality in scalable video codec.

    Science.gov (United States)

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since scalable video codec provides various scalabilities to adapt the bitstream for the channel conditions and terminal types, scalable codec is one of the useful codecs for wired or wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation degrades the visual perception significantly. It is important to efficiently use the target bits in order to maintain a consistent video quality or achieve a small distortion variation throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes have been proposed to control video quality in the H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame.

  5. AN EFFICIENT WATERMARKING SCHEME FOR H.264/AVC COMPRESSED VIDEO

    Directory of Open Access Journals (Sweden)

    Dawen Xu

    2014-08-01

    Full Text Available Since H.264/AVC is the most widely deployed video coding standard, the necessity of copyright protection and authentication appropriate for this standard is unquestionable. Based on the specific codec architecture of H.264/AVC, an efficient watermarking scheme for H.264/AVC video is proposed. The watermark information is embedded into quantized residual coefficients by slightly modulating the coefficients with a specific symbol encoding, instead of directly adding the watermark to the quantized coefficients. It is not necessary to fully decode the H.264/AVC compressed stream in either the embedding or the extraction process. Experimental results show that the proposed scheme preserves high imperceptibility while achieving sufficient robustness against various attacks such as requantization, transcoding, AWGN, and brightness and contrast adjustment.

  6. A scheme for racquet sports video analysis with the combination of audio-visual information

    Science.gov (United States)

    Xing, Liyuan; Ye, Qixiang; Zhang, Weigang; Huang, Qingming; Yu, Hua

    2005-07-01

    As a very important category of sports video, racquet sports video, e.g. table tennis, tennis, and badminton, has received little attention in past years. Considering the characteristics of this kind of sports video, we propose a new scheme for structure indexing and highlight generation based on the combination of audio and visual information. Firstly, a supervised classification method is employed to detect important audio symbols including impacts (ball hits), audience cheers, commentator speech, etc. Meanwhile, an unsupervised algorithm is proposed to group video shots into various clusters. Then, by taking advantage of the temporal relationship between audio and visual signals, we can label the scene clusters with semantic labels including rally scenes and break scenes. Thirdly, a refinement procedure is developed to reduce false rally scenes by further audio analysis. Finally, an excitement model is proposed to rank the detected rally scenes, from which many exciting video clips such as game (match) points can be correctly retrieved. Experiments on two types of representative racquet sports video, table tennis video and tennis video, demonstrate encouraging results.

  7. An Adaptive Motion Estimation Scheme for Video Coding

    Directory of Open Access Journals (Sweden)

    Pengyu Liu

    2014-01-01

    Full Text Available The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the computational redundancy of UMHexagonS. Firstly, new motion estimation search patterns are designed according to the statistics of motion vector (MV) distribution information. Then, an MV distribution prediction method is designed, covering both the magnitude and the direction of the MV. Finally, guided by the MV distribution prediction results, self-adaptive subregional searching is performed with the new search patterns. Experimental results show that more than 50% of the total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% without compromising rate-distortion performance.
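
    Pattern-based ME searches of this family evaluate a small set of candidate displacements instead of every position. A minimal greedy small-diamond sketch (not the UMHexagonS algorithm itself) on a luminance plane, with sum of absolute differences as the matching criterion:

    ```python
    import numpy as np

    def sad(a, b):
        """Sum of absolute differences, the matching criterion."""
        return int(np.abs(a.astype(int) - b.astype(int)).sum())

    def diamond_search(ref, cur, bx, by, bs=8, max_iter=16):
        """Greedy small-diamond block-matching search: find the motion
        vector for the bs x bs block of `cur` at (bx, by) inside `ref`."""
        block = cur[by:by + bs, bx:bx + bs]
        H, W = ref.shape

        def cost(dx, dy):
            x, y = bx + dx, by + dy
            if 0 <= x <= W - bs and 0 <= y <= H - bs:
                return sad(block, ref[y:y + bs, x:x + bs])
            return float('inf')  # candidate falls outside the frame

        mvx, mvy, best = 0, 0, cost(0, 0)
        for _ in range(max_iter):
            # move to the cheapest 4-neighbour until the centre is best
            cands = [(cost(mvx + dx, mvy + dy), mvx + dx, mvy + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            c, x, y = min(cands)
            if c < best:
                best, mvx, mvy = c, x, y
            else:
                break
        return (mvx, mvy), best
    ```

    An adaptive scheme in the spirit of the abstract would swap in different candidate patterns depending on the predicted MV magnitude and direction.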

  8. A Novel Rate Control Scheme for Constant Bit Rate Video Streaming

    Directory of Open Access Journals (Sweden)

    Venkata Phani Kumar M

    2015-08-01

    Full Text Available In this paper, a novel rate control mechanism is proposed for constant bit rate video streaming. The initial quantization parameter used for encoding a video sequence is determined using the average spatio-temporal complexity of the sequence, its resolution and the target bit rate. Simple linear estimation models are then used to predict the number of bits that would be necessary to encode a frame for a given complexity and quantization parameter. The experimental results demonstrate that our proposed rate control mechanism significantly outperforms the existing rate control scheme in the Joint Model (JM reference software in terms of Peak Signal to Noise Ratio (PSNR and consistent perceptual visual quality while achieving the target bit rate. Furthermore, the proposed scheme is validated through implementation on a miniature test-bed.
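
    The "simple linear estimation models" mentioned in this abstract are not specified; one plausible sketch maps complexity and quantization step to a predicted bit count and inverts it to pick a quantization parameter. The coefficients are made up for illustration; only the Qstep formula is the standard H.264 relation.

    ```python
    def estimate_bits(complexity, qp, a=1200.0, b=0.0):
        """Toy linear model: predicted bits fall as the quantization
        step grows. Coefficients a, b are illustrative, not fitted."""
        qstep = 2 ** ((qp - 4) / 6)  # standard H.264 QP-to-Qstep mapping
        return complexity * (a / qstep + b)

    def pick_qp(complexity, target_bits, qp_range=range(0, 52)):
        """Choose the QP whose predicted frame size is closest to target."""
        return min(qp_range,
                   key=lambda qp: abs(estimate_bits(complexity, qp) - target_bits))
    ```

    In a real rate controller the coefficients would be re-fitted from encoded frames as the sequence progresses.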

  9. QIM blind video watermarking scheme based on Wavelet transform and principal component analysis

    Directory of Open Access Journals (Sweden)

    Nisreen I. Yassin

    2014-12-01

    Full Text Available In this paper, a blind scheme for digital video watermarking is proposed. The security of the scheme is established by using one secret key in the retrieval of the watermark. The Discrete Wavelet Transform (DWT) is applied on each video frame, decomposing it into a number of sub-bands. Maximum entropy blocks are selected and transformed using Principal Component Analysis (PCA). Quantization Index Modulation (QIM) is used to quantize the maximum coefficient of the PCA blocks of each sub-band. Then, the watermark is embedded into the selected suitable quantizer values. The proposed scheme is tested using a number of video sequences. Experimental results show high imperceptibility. The computed average PSNR exceeds 45 dB. Finally, the scheme is applied on two medical videos. The proposed scheme shows high robustness against several attacks such as JPEG coding, Gaussian noise addition, histogram equalization, gamma correction, and contrast adjustment in both cases of regular videos and medical videos.
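
    Quantization Index Modulation, as used above, hides a bit by snapping a coefficient to one of two interleaved quantizer lattices (even multiples of the step carry 0, odd multiples carry 1). A minimal sketch with an illustrative step size:

    ```python
    def qim_embed(coeff, bit, delta=8.0):
        """Quantize `coeff` to the nearest lattice point carrying `bit`."""
        q = round(coeff / delta)
        if q % 2 != bit:
            # move to the adjacent lattice point with the right parity
            q += 1 if coeff / delta >= q else -1
        return q * delta

    def qim_extract(coeff, delta=8.0):
        """Recover the bit from the parity of the nearest lattice point."""
        return int(round(coeff / delta)) % 2
    ```

    Extraction survives any perturbation smaller than delta / 2, which is the source of QIM's robustness against mild noise and recompression.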

  10. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    André Thomas

    2007-01-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. Full compatibility with Motion JPEG2000, which is becoming a serious candidate for the compression of high-definition video sequences, is ensured.

  11. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    Thomas André

    2007-03-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. Full compatibility with Motion JPEG2000, which is becoming a serious candidate for the compression of high-definition video sequences, is ensured.

  12. Video Waterscrambling: Towards a Video Protection Scheme Based on the Disturbance of Motion Vectors

    Science.gov (United States)

    Bodo, Yann; Laurent, Nathalie; Laurent, Christophe; Dugelay, Jean-Luc

    2004-12-01

    With the popularity of high-bandwidth modems and peer-to-peer networks, the contents of videos must be highly protected from piracy. Traditionally, the models utilized to protect this kind of content are scrambling and watermarking. While the former protects the content against eavesdropping (a priori protection), the latter aims at providing a protection against illegal mass distribution (a posteriori protection). Today, researchers agree that both models must be used conjointly to reach a sufficient level of security. However, scrambling works generally by encryption resulting in an unintelligible content for the end-user. At the moment, some applications (such as e-commerce) may require a slight degradation of content so that the user has an idea of the content before buying it. In this paper, we propose a new video protection model, called waterscrambling, whose aim is to give such a quality degradation-based security model. This model works in the compressed domain and disturbs the motion vectors, degrading the video quality. It also allows embedding of a classical invisible watermark enabling protection against mass distribution. In fact, our model can be seen as an intermediary solution to scrambling and watermarking.

  13. Video Waterscrambling: Towards a Video Protection Scheme Based on the Disturbance of Motion Vectors

    Directory of Open Access Journals (Sweden)

    Yann Bodo

    2004-10-01

    Full Text Available With the popularity of high-bandwidth modems and peer-to-peer networks, the contents of videos must be highly protected from piracy. Traditionally, the models utilized to protect this kind of content are scrambling and watermarking. While the former protects the content against eavesdropping (a priori protection), the latter aims at providing a protection against illegal mass distribution (a posteriori protection). Today, researchers agree that both models must be used conjointly to reach a sufficient level of security. However, scrambling works generally by encryption resulting in an unintelligible content for the end-user. At the moment, some applications (such as e-commerce) may require a slight degradation of content so that the user has an idea of the content before buying it. In this paper, we propose a new video protection model, called waterscrambling, whose aim is to give such a quality degradation-based security model. This model works in the compressed domain and disturbs the motion vectors, degrading the video quality. It also allows embedding of a classical invisible watermark enabling protection against mass distribution. In fact, our model can be seen as an intermediary solution to scrambling and watermarking.

  14. A Novel Mobile Video Community Discovery Scheme Using Ontology-Based Semantical Interest Capture

    Directory of Open Access Journals (Sweden)

    Ruiling Zhang

    2016-01-01

    Full Text Available Leveraging network virtualization technologies, community-based video systems rely on the measurement of common interests to define a steady relationship between community members, which promotes video sharing performance and improves the scalability of the community structure. In this paper, we propose a novel mobile Video Community discovery scheme using Ontology-based Semantical Interest capture (VCOSI). An ontology-based semantical extension approach is proposed, which describes video content and measures video similarity according to video keyword selection methods. In order to reduce the computational load of video similarity, VCOSI designs a prefix-filtering-based estimation algorithm to decrease the energy consumption of mobile nodes. VCOSI further proposes a member relationship estimation method to construct scalable and resilient node communities, which promotes the video sharing capacity of video systems with flexible and economic community maintenance. Extensive tests show that VCOSI obtains better performance results in comparison with other state-of-the-art solutions.

  15. A blind video watermarking scheme resistant to rotation and collusion attacks

    Directory of Open Access Journals (Sweden)

    Amlan Karmakar

    2016-04-01

    Full Text Available In this paper, a Discrete Cosine Transform (DCT) based blind video watermarking algorithm is proposed, which is perceptually invisible and robust against rotation and collusion attacks. To make the scheme resistant against rotation, the watermark is embedded within square blocks placed at the middle position of every luminance channel. Then Zernike moments of those square blocks are calculated. The rotation invariance property of the complex Zernike moments is exploited to predict the rotation angle of the video at the time of extraction of the watermark bits. To make the scheme robust against collusion, the scheme is designed in such a way that the embedding blocks vary for the successive frames of the video. A Pseudo Random Number (PRN) generator and a permutation vector are used to achieve this goal. The experimental results show that the scheme is robust against conventional video attacks, rotation attacks, and collusion attacks.
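
    The frame-varying choice of embedding blocks can be sketched with a keyed pseudo-random sample, so that successive frames embed in different positions; the key value, block counts, and seeding formula below are placeholders, not details from the paper.

    ```python
    import random

    def embedding_blocks(frame_idx, n_blocks, n_embed, key=48879):
        """Derive a frame-dependent subset of embedding block indices
        from a shared secret key (a collusion countermeasure in the
        spirit of the scheme above)."""
        rng = random.Random(key * 1_000_003 + frame_idx)  # per-frame seed
        return sorted(rng.sample(range(n_blocks), n_embed))
    ```

    The extractor, holding the same key, regenerates the identical subset per frame without any side information.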

  16. Design and Smartphone-Based Implementation of a Chaotic Video Communication Scheme via WAN Remote Transmission

    Science.gov (United States)

    Lin, Zhuosheng; Yu, Simin; Li, Chengqing; Lü, Jinhu; Wang, Qianxue

    This paper proposes a chaotic secure video remote communication scheme that can perform on real WAN networks, and implements it on a smartphone hardware platform. First, a joint encryption and compression scheme is designed by embedding a chaotic encryption scheme into the MJPG-Streamer source code. Then, multiuser smartphone communications between the sender and the receiver are implemented via WAN remote transmission. Finally, the transmitted video data are received at the given IP address and port on an Android smartphone. It should be noted that this is the first time that chaotic video encryption schemes have been implemented on such a hardware platform. The experimental results demonstrate that the technical challenges of hardware implementation of secure video communication are successfully solved, reaching a balance among a sufficient security level, real-time processing of massive video data, and utilization of available resources in the hardware environment. The proposed scheme can serve as a good application example of chaotic secure communications for smartphones and other mobile facilities in the future.

  17. Research on Matrix-type Packet Loss Compensation Scheme for Wireless Video Transmission on Subway

    Directory of Open Access Journals (Sweden)

    Fan Qing-Wu

    2017-01-01

    Full Text Available As the mainstream wireless LAN technology, Wi-Fi can achieve fast data transfer. With the subway moving at high speed, video data transmission between the train and the ground is achieved through Wi-Fi technology. This paper aims at solving the video stuttering problem caused by handover packet loss while playing real-time video on the train terminal, and proposes a matrix-type packet loss compensation scheme. Finally, the feasibility of the scheme is verified by experiments.

  18. Adaptive rate selection scheme for video transmission to resolve IEEE 802.11 performance anomaly

    Science.gov (United States)

    Tang, Guijin; Zhu, Xiuchang

    2011-10-01

    Multi-rate transmission may lead to a performance anomaly in an IEEE 802.11 network: it decreases the throughput of all the higher-rate stations. This paper proposes an adaptive rate selection scheme for video service when the performance anomaly occurs. Considering that video is tolerant to packet loss, we actively drop several packets so as to select rates as high as possible for transmitting the remaining packets. Experiments show that our algorithm can decrease the delay and jitter of video and improve the system throughput as well.

  19. Block-classified motion compensation scheme for digital video

    Energy Technology Data Exchange (ETDEWEB)

    Zafar, S. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.; Zhang, Ya-Qin [David Sarnoff Research Center, Princeton, NJ (United States); Jabbari, B. [George Mason Univ., Fairfax, VA (United States). Dept. of Electrical and Computer Engineering

    1996-03-01

    A novel scheme for block-based motion compensation is introduced in which a block is classified according to the energy that is directly related to the motion activity it represents. This classification allows more flexibility in controlling the bit rate and the signal-to-noise ratio, and results in a reduction in motion search complexity. The method introduced does not depend on the particular type of motion search algorithm implemented and can thus be used with any method, assuming that the underlying matching criterion is the minimum absolute difference. It is shown that the method is superior to a simple motion compensation algorithm in which all blocks are motion compensated regardless of the energy of the resulting displaced difference.
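
    A minimal version of energy-based block classification might threshold the displaced-difference energy (here simplified to the plain frame difference, with an arbitrary threshold) and motion-search only the active blocks:

    ```python
    import numpy as np

    def classify_blocks(prev, cur, bs=8, thresh=500):
        """Split the frame into bs x bs blocks and mark as active those
        whose frame-difference energy (SAD) exceeds the threshold; only
        active blocks would then be motion compensated."""
        H, W = cur.shape
        active = []
        for y in range(0, H - bs + 1, bs):
            for x in range(0, W - bs + 1, bs):
                e = np.abs(cur[y:y + bs, x:x + bs].astype(int)
                           - prev[y:y + bs, x:x + bs].astype(int)).sum()
                if e > thresh:
                    active.append((y, x))
        return active
    ```

    Skipping the low-energy blocks is what yields the reduction in motion search complexity the abstract describes; the threshold also becomes a knob for trading bit rate against signal-to-noise ratio.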

  20. Video Quality Prediction Models Based on Video Content Dynamics for H.264 Video over UMTS Networks

    Directory of Open Access Journals (Sweden)

    Asiya Khan

    2010-01-01

    Full Text Available The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264 encoded video for all content types, combining parameters in both the physical and application layers over Universal Mobile Telecommunication System (UMTS) networks. In order to characterize the Quality of Service (QoS) level, a learning model based on an Adaptive Neural Fuzzy Inference System (ANFIS) and a second model based on non-linear regression analysis are proposed to predict the video quality in terms of the Mean Opinion Score (MOS). The objective of the paper is two-fold: first, to find the impact of QoS parameters on end-to-end video quality for H.264 encoded video; second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks by considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both models are trained with a combination of physical and application layer parameters and validated with an unseen dataset. Preliminary results show that good prediction accuracy was obtained from both models. This work should help in the development of reference-free video prediction models and QoS control methods for video over UMTS networks.
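
    The 2-state Markov loss model referred to above is commonly realized as a Gilbert model: packets sent in the Bad state are lost, and the channel flips between Good and Bad with fixed transition probabilities. A sketch with illustrative probabilities (not values from the paper):

    ```python
    import random

    def gilbert_losses(n, p_gb=0.05, p_bg=0.5, seed=1):
        """Simulate a 2-state (Good/Bad) Markov packet-loss pattern.

        p_gb: P(Good -> Bad); p_bg: P(Bad -> Good). Returns one boolean
        per packet, True meaning the packet was lost."""
        rng = random.Random(seed)
        state, losses = 'G', []
        for _ in range(n):
            losses.append(state == 'B')
            if state == 'G' and rng.random() < p_gb:
                state = 'B'
            elif state == 'B' and rng.random() < p_bg:
                state = 'G'
        return losses
    ```

    The long-run loss rate tends to p_gb / (p_gb + p_bg), and the Bad-state dwell time controls how bursty the losses are, which is what distinguishes this model from independent random loss.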

  1. Intelligent Model for Video Surveillance Security System

    Directory of Open Access Journals (Sweden)

    J. Vidhya

    2013-12-01

    Full Text Available A video surveillance system senses and tracks threatening events in a real-time environment. It guards against security threats with the help of visual devices that gather video information, such as CCTV and IP (Internet Protocol) cameras. Video surveillance has become key to addressing problems in public security. These systems are mostly deployed on IP-based networks, so all of the security threats that exist in IP-based applications may also threaten video surveillance applications, potentially leading to cybercrime, illegal video access, mishandling of videos, and so on. Hence, this paper proposes an intelligent model to secure video surveillance systems, ensuring safety and providing secured access to video.

  2. A Robust H.264/AVC Video Watermarking Scheme with Drift Compensation

    Directory of Open Access Journals (Sweden)

    Xinghao Jiang

    2014-01-01

    Full Text Available A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of watermark to the most extent. Besides, discrete cosine transform (DCT) with energy compact property is applied to the motion vector residual group, which can ensure robustness against intentional attacks. According to the experimental results, this scheme gains excellent imperceptibility and low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.

  3. A robust H.264/AVC video watermarking scheme with drift compensation.

    Science.gov (United States)

    Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of watermark to the most extent. Besides, discrete cosine transform (DCT) with energy compact property is applied to the motion vector residual group, which can ensure robustness against intentional attacks. According to the experimental results, this scheme gains excellent imperceptibility and low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.

  4. Joint source-channel distortion modeling for MPEG-4 video.

    Science.gov (United States)

    Sabir, Muhammad Farooq; Heath, Robert W; Bovik, Alan Conrad

    2009-01-01

    Multimedia communication has become one of the main applications in commercial wireless systems. Multimedia sources, mainly consisting of digital images and videos, have high bandwidth requirements. Since bandwidth is a valuable resource, it is important that its use should be optimized for image and video communication. Therefore, interest in developing new joint source-channel coding (JSCC) methods for image and video communication is increasing. Design of any JSCC scheme requires an estimate of the distortion at different source coding rates and under different channel conditions. The common approach to obtain this estimate is via simulations or operational rate-distortion curves. These approaches, however, are computationally intensive and, hence, not feasible for real-time coding and transmission applications. A more feasible approach to estimate distortion is to develop models that predict distortion at different source coding rates and under different channel conditions. Based on this idea, we present a distortion model for estimating the distortion due to quantization and channel errors in MPEG-4 compressed video streams at different source coding rates and channel bit error rates. This model takes into account important aspects of video compression such as transform coding, motion compensation, and variable length coding. Results show that our model estimates distortion within 1.5 dB of actual simulation values in terms of peak-signal-to-noise ratio.
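
    The peak-signal-to-noise-ratio figure used for evaluation above is computed with the standard definition (an 8-bit peak of 255 is assumed here):

    ```python
    import numpy as np

    def psnr(ref, dist, peak=255.0):
        """Peak signal-to-noise ratio in dB between two frames."""
        mse = np.mean((ref.astype(float) - dist.astype(float)) ** 2)
        return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)
    ```

    A distortion model like the one described is judged by how closely its predicted PSNR tracks the value this function computes on actual decoded frames.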

  5. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Full Text Available Wireless and Internet video applications are inherently subject to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video, with compression and channel errors, is optimized.

  6. Block-classified bidirectional motion compensation scheme for wavelet-decomposed digital video

    Energy Technology Data Exchange (ETDEWEB)

    Zafar, S. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.; Zhang, Y.Q. [David Sarnoff Research Center, Princeton, NJ (United States); Jabbari, B. [George Mason Univ., Fairfax, VA (United States)

    1997-08-01

    In this paper the authors introduce a block-classified bidirectional motion compensation scheme for the previously developed wavelet-based video codec, where multiresolution motion estimation is performed in the wavelet domain. The frame classification structure described in this paper is similar to that used in the MPEG standard. Specifically, the I-frames are intraframe coded, the P-frames are interpolated from a previous I- or a P-frame, and the B-frames are bidirectional interpolated frames. They apply this frame classification structure to the wavelet domain with variable block sizes and multiresolution representation. They use a symmetric bidirectional scheme for the B-frames and classify the motion blocks as intraframe, compensated either from the preceding or the following frame, or bidirectional (i.e., compensated based on which type yields the minimum energy). They also introduce the concept of F-frames, which are analogous to P-frames but are predicted from the following frame only. This improves the overall quality of the reconstruction in a group of pictures (GOP) but at the expense of extra buffering. They also study the effect of quantization of the I-frames on the reconstruction of a GOP, and they provide intuitive explanation for the results. In addition, the authors study a variety of wavelet filter-banks to be used in a multiresolution motion-compensated hierarchical video codec.

  7. A Hybrid Scheme Based on Pipelining and Multitasking in Mobile Application Processors for Advanced Video Coding

    Directory of Open Access Journals (Sweden)

    Muhammad Asif

    2015-01-01

    Full Text Available One of the key requirements for mobile devices is to provide high-performance computing at lower power consumption. The processors used in these devices provide specific hardware resources to handle computationally intensive video processing and interactive graphical applications. Moreover, processors designed for low-power applications may introduce limitations on the availability and usage of resources, which present additional challenges to the system designers. Owing to the specific design of the JZ47x series of mobile application processors, a hybrid software-hardware implementation scheme for the H.264/AVC encoder is proposed in this work. The proposed scheme distributes the encoding tasks among hardware and software modules. A series of optimization techniques are developed to speed up memory access and data transfer among memories. Moreover, an efficient data reuse design is proposed for the deblocking filter video processing unit to reduce memory accesses. Furthermore, fine-grained macroblock (MB) level parallelism is effectively exploited and a pipelined approach is proposed for efficient utilization of the hardware processing cores. Finally, based on the parallelism in the proposed design, encoding tasks are distributed between two processing cores. Experiments show that the hybrid encoder is 12 times faster than a highly optimized sequential encoder due to the proposed techniques.

  8. No Reference Video-Quality-Assessment Model for Monitoring Video Quality of IPTV Services

    Science.gov (United States)

    Yamagishi, Kazuhisa; Okamoto, Jun; Hayashi, Takanori; Takahashi, Akira

    Service providers should monitor the quality of experience of a communication service in real time to confirm its status. To do this, we previously proposed a packet-layer model that can be used for monitoring the average video quality of typical Internet protocol television content using parameters derived from transmitted packet headers. However, it is difficult to monitor the video quality per user using the average video quality, because video quality depends on the video content. To accurately monitor the video quality per user, a model that estimates the video quality per video content, rather than the average video quality, should be developed. Therefore, to take into account the impact of video content on video quality, we propose a model that calculates the difference between the video quality of the estimation-target video and the average video quality estimated using a packet-layer model. We first conducted extensive subjective quality assessments for different codecs and video sequences. We then modeled their characteristics based on parameters related to compression and packet loss. Finally, we verified the performance of the proposed model by applying it to unknown data sets different from the training data sets used for developing the model.

  9. Cross-band noise model refinement for transform domain Wyner–Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2012-01-01

    TDWZ video coding trails that of conventional video coding solutions, mainly due to the quality of side information, inaccurate noise modeling and loss in the final coding step. The major goal of this paper is to enhance the accuracy of the noise modeling, which is one of the most important aspects influencing the coding performance of DVC. A TDWZ video decoder with a novel cross-band based adaptive noise model is proposed, and a noise residue refinement scheme is introduced to successively update the estimated noise residue for noise modeling after each bit-plane. Experimental results show that the proposed noise model and noise residue refinement scheme can improve the rate-distortion (RD) performance of TDWZ video coding significantly. The quality of the side information modeling is also evaluated by a measure of the ideal code length.

  10. Two schemes for rapid generation of digital video holograms using PC cluster

    Science.gov (United States)

    Park, Hanhoon; Song, Joongseok; Kim, Changseob; Park, Jong-Il

    2017-12-01

    Computer-generated holography (CGH), which is a process of generating digital holograms, is computationally expensive. Recently, several methods/systems of parallelizing the process using graphics processing units (GPUs) have been proposed. Indeed, the use of multiple GPUs or a personal computer (PC) cluster (each PC with GPUs) enabled great improvements in the process speed. However, few studies have explored systems for rapidly generating multiple digital holograms, or systems specialized for the rapid generation of a digital video hologram. This study proposes a system that uses a PC cluster and is able to generate a video hologram more efficiently. The proposed system is designed to simultaneously generate multiple frames and accelerate the generation by parallelizing the CGH computations across a number of frames, as opposed to separately generating each individual frame while parallelizing the CGH computations within each frame. The proposed system also enables the subprocesses for generating each frame to execute in parallel through multithreading. With these two schemes, the proposed system significantly reduced the data communication time for generating a digital hologram when compared with that of the state-of-the-art system.
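    The core idea of parallelising across frames rather than within a frame can be sketched as follows. The per-frame "CGH" function here is a trivial stand-in (a sum), purely to show the structure: each worker receives whole frames, and ordered `map` keeps the output video correctly sequenced.

```python
# Frame-level parallelism sketch: each worker computes entire frames of a
# video hologram, instead of many workers splitting a single frame.
from concurrent.futures import ThreadPoolExecutor

def compute_hologram(frame):
    # Stand-in for the expensive CGH computation; a real system would
    # accumulate point-source fringe patterns here.
    return sum(frame)

def generate_video_hologram(frames, workers=4):
    # map() preserves frame order, so no reassembly step is needed
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(compute_hologram, frames))

frames = [[1, 2], [3, 4], [5, 6]]
print(generate_video_hologram(frames))  # [3, 7, 11]
```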

  11. Subjective quality assessment of an adaptive video streaming model

    Science.gov (United States)

    Tavakoli, Samira; Brunnström, Kjell; Wang, Kun; Andrén, Börje; Shahid, Muhammad; Garcia, Narciso

    2014-01-01

    With the recent increase in popularity and usage of HTTP Adaptive Streaming (HAS) techniques, various studies have been carried out in this area, generally focused on the technical enhancement of HAS technology and applications. However, the lack of a common HAS standard led to multiple proprietary approaches developed by major Internet companies. The emerging MPEG-DASH standard specifies the packaging of the video content and the HTTP syntax, but all details of the adaptation behavior are left to the client implementation. Nevertheless, to design an adaptation algorithm that optimizes the viewing experience of the end user, multimedia service providers need to know the Quality of Experience (QoE) of different adaptation schemes. Taking this into account, the objective of this experiment was to study the QoE of a HAS-based video broadcast model. The experiment was carried out through a subjective study of the end-user response to various possible client behaviors for changing the video quality, taking different QoE-influencing factors into account. The experimental conclusions give good insight into the QoE of different adaptation schemes, which can be exploited by HAS clients for designing adaptation algorithms.

  12. Attention modeling for video quality assessment

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Perkis, Andrew

    2010-01-01

    averaged spatiotemporal pooling. The local quality is derived from visual attention modeling and quality variations over frames. Saliency, motion, and contrast information are taken into account in modeling visual attention, which is then integrated into IQMs to calculate the local quality of a video frame...

  13. Player behavioural modelling for video games

    NARCIS (Netherlands)

    van Lankveld, G.; Spronck, P.H.M.; Bakkes, S.C.J.

    2012-01-01

    Player behavioural modelling has grown from a means to improve the playing strength of computer programs that play classic games (e.g., chess), to a means for impacting the player experience and satisfaction in video games, as well as in cross-domain applications such as interactive storytelling. In

  14. Learning to Swim Using Video Modelling and Video Feedback within a Self-Management Program

    Science.gov (United States)

    Lao, So-An; Furlonger, Brett E.; Moore, Dennis W.; Busacca, Margherita

    2016-01-01

    Although many adults who cannot swim are primarily interested in learning by direct coaching, there are options that focus on self-directed learning. As an alternative, a self-management program combined with video modelling, video feedback, and affordable high-quality video technology was used to assess its effectiveness in assisting an…

  15. Joint Optimized CPU and Networking Control Scheme for Improved Energy Efficiency in Video Streaming on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Sung-Woong Jo

    2017-01-01

    Full Text Available Video streaming is one of the most popular applications for mobile users. However, mobile video streaming services consume a lot of energy, resulting in reduced battery life. This is a critical problem that degrades the user's quality of experience (QoE). Therefore, in this paper, a joint optimization scheme that controls both the central processing unit (CPU) and the wireless networking of the video streaming process for improved energy efficiency on mobile devices is proposed. For this purpose, the energy consumption of the network interface and CPU is analyzed, and based on the energy consumption profile, a joint optimization problem is formulated to maximize the energy efficiency of the mobile device. The proposed algorithm adaptively adjusts the number of chunks to be downloaded and decoded in each packet. Simulation results show that the proposed algorithm can effectively improve the energy efficiency when compared with existing algorithms.

  16. 4K Video Traffic Prediction using Seasonal Autoregressive Modeling

    Directory of Open Access Journals (Sweden)

    D. R. Marković

    2017-06-01

    Full Text Available From the perspective of the average viewer, high-definition video streams such as HD (High Definition) and UHD (Ultra HD) are increasing their Internet presence year over year. This is not surprising, given the expansion of HD streaming services such as YouTube, Netflix, etc. High-definition video streams are therefore starting to challenge network resource allocation with their bandwidth requirements and statistical characteristics. Analysis and modeling of this demanding video traffic are essential for better quality-of-service and quality-of-experience support. In this paper we use an easy-to-apply statistical model for prediction of 4K video traffic. Namely, seasonal autoregressive modeling is applied to the prediction of 4K video traffic encoded with HEVC (High Efficiency Video Coding). Analysis and modeling were performed within the R programming environment using over 17,000 high-definition video frames. It is shown that the proposed methodology provides good accuracy in high-definition video traffic modeling.
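    A seasonal autoregressive predictor in the spirit of the abstract can be sketched in pure Python (the paper used R; the two-term model, the season length, and the synthetic frame-size trace below are assumptions for illustration). Each frame size is regressed on the previous frame and on the frame one season earlier, which captures the periodic I/P/B size pattern of a GOP.

```python
# Seasonal AR sketch: fit x[t] ~ a*x[t-1] + b*x[t-s] by least squares
# (2x2 normal equations) and predict the next frame size.

def fit_seasonal_ar(x, s):
    rows = [(x[t - 1], x[t - s], x[t]) for t in range(s, len(x))]
    s11 = sum(u * u for u, v, y in rows)
    s12 = sum(u * v for u, v, y in rows)
    s22 = sum(v * v for u, v, y in rows)
    r1 = sum(u * y for u, v, y in rows)
    r2 = sum(v * y for u, v, y in rows)
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det
    b = (s11 * r2 - s12 * r1) / det
    return a, b

def predict_next(x, s, a, b):
    return a * x[-1] + b * x[-s]

# Synthetic frame sizes with period-4 seasonality (I-frames bigger than P/B)
trace = [100, 40, 50, 45] * 8
a, b = fit_seasonal_ar(trace, s=4)
print(round(predict_next(trace, 4, a, b)))  # 100: the next slot is an I-frame
```

On this perfectly periodic trace the fit recovers a ≈ 0, b ≈ 1, i.e. "repeat the value from one season ago"; real traces would yield a blend of both terms.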

  17. Efficient Hybrid Watermarking Scheme for Security and Transmission Bit Rate Enhancement of 3D Color-Plus-Depth Video Communication

    Science.gov (United States)

    El-Shafai, W.; El-Rabaie, S.; El-Halawany, M.; Abd El-Samie, F. E.

    2018-03-01

    Three-Dimensional Video-plus-Depth (3DV + D) comprises diverse video streams captured by different cameras around an object. Therefore, there is a great need for efficient compression to transmit and store the 3DV + D content in compressed form within future resource bounds whilst preserving a decisive reception quality. Also, the security of the transmitted 3DV + D is a critical issue for protecting its copyrighted content. This paper proposes an efficient hybrid watermarking scheme for securing 3DV + D transmission, based on the homomorphic transform and Singular Value Decomposition (SVD) in the Discrete Wavelet Transform (DWT) domain. The objective of the proposed watermarking scheme is to increase the immunity of the watermarked 3DV + D to attacks and achieve adequate perceptual quality. Moreover, the proposed watermarking scheme reduces the transmission-bandwidth requirements for transmitting the color-plus-depth 3DV over limited-bandwidth wireless networks by embedding the depth frames into the color frames of the transmitted 3DV + D. Thus, it saves transmission bit rate and subsequently enhances the channel bandwidth efficiency. The performance of the proposed watermarking scheme is compared with those of the state-of-the-art hybrid watermarking schemes. The comparisons depend on both subjective visual results and objective results: the Peak Signal-to-Noise Ratio (PSNR) of the watermarked frames and the Normalized Correlation (NC) of the extracted watermark frames. Extensive simulations on standard 3DV + D sequences have been conducted in the presence of attacks. The obtained results confirm that the proposed hybrid watermarking scheme is robust in the presence of attacks. It achieves not only very good perceptual quality, appreciated PSNR values, and savings in transmission bit rate, but also high correlation coefficient values in the presence of attacks compared to the existing hybrid watermarking schemes.
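    The bandwidth-saving idea of carrying the depth frame inside the color frame can be illustrated in a heavily simplified form. The paper's scheme is a homomorphic SVD watermark in the DWT domain; the sketch below substitutes naive least-significant-bit embedding on 8-bit samples purely to show the embed/extract round trip, and is not robust to attacks the way the paper's scheme is.

```python
# Simplified "depth inside color" embedding: one depth bit is hidden in the
# LSB of each 8-bit color sample, so only a single stream needs transmitting.

def embed_depth(color, depth_bits):
    """Replace the LSB of each color sample with one depth bit."""
    return [(c & ~1) | b for c, b in zip(color, depth_bits)]

def extract_depth(watermarked):
    """Recover the hidden depth bits at the receiver."""
    return [c & 1 for c in watermarked]

color = [200, 131, 54, 77]
depth_bits = [1, 0, 1, 1]
wm = embed_depth(color, depth_bits)
print(extract_depth(wm))                           # [1, 0, 1, 1]
print(max(abs(c - w) for c, w in zip(color, wm)))  # 1: distortion of at most one level
```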

  18. Managed Video as a Service for a Video Surveillance Model

    Directory of Open Access Journals (Sweden)

    Dan Benta

    2009-01-01

    Full Text Available The increasing demand for security systems has resulted in rapid development of video surveillance, and video surveillance has turned into a major area of interest and a management challenge. Personal experience in specialized companies helped me to adapt the demands of users of video security systems to system performance. It is known that people wish to obtain maximum profit with minimum effort, but security is not neglected. Surveillance systems and video monitoring should provide only necessary information and record only when there is activity. Via IP, video surveillance services provide more safety in this sector, being able to record information on servers located in locations other than the IP cameras. Also, these systems allow real-time monitoring of goods or activities that take place in supervised perimeters. Live viewing and recording can be done via the Internet from any computer, using a web browser. Access to the surveillance system is granted after user and password authentication.

  19. No-Reference Video Quality Assessment Model for Distortion Caused by Packet Loss in the Real-Time Mobile Video Services

    Directory of Open Access Journals (Sweden)

    Jiarun Song

    2014-01-01

    Full Text Available Packet loss causes severe errors due to the corruption of related video data. For most video streams, because predictive coding structures are employed, transmission errors in one frame will not only cause decoding failure of that frame at the receiver side, but also propagate to its subsequent frames along the motion prediction path, bringing a significant degradation of end-to-end video quality. To quantify the effects of packet loss on video quality, a no-reference objective quality assessment model is presented in this paper. Considering the fact that the degradation of video quality significantly relies on the video content, the temporal complexity is estimated to reflect the varying characteristics of video content, using the macroblocks with different motion activities in each frame. Then, the quality of a frame affected by reference frame loss, by error propagation, or by both is evaluated, respectively. Utilizing a two-level temporal pooling scheme, the video quality is finally obtained. Extensive experimental results show that the video quality estimated by the proposed method matches well with the subjective quality.
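    A two-level temporal pooling step might look like the sketch below. The abstract does not give the exact pooling formulas, so the window size, the windowed mean at level 1, and the worst-window weighting at level 2 are all assumptions; the weighting reflects the common observation that brief severe degradations dominate perceived quality.

```python
# Hypothetical two-level temporal pooling: frame scores -> windowed means ->
# blend of the overall mean with the worst window.

def two_level_pooling(frame_scores, window=4, worst_weight=0.5):
    # level 1: mean score per short temporal window
    windows = [frame_scores[i:i + window]
               for i in range(0, len(frame_scores), window)]
    means = [sum(w) / len(w) for w in windows]
    # level 2: penalise sequences containing a badly degraded window
    overall = sum(means) / len(means)
    return worst_weight * min(means) + (1 - worst_weight) * overall

good = [4.0] * 8
glitch = [4.0] * 4 + [1.0, 1.0, 4.0, 4.0]   # packet loss hits the second window
print(two_level_pooling(good))    # 4.0
print(two_level_pooling(glitch))  # 2.875, below the plain frame mean of 3.25
```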

  20. Robust Adaptable Video Copy Detection

    DEFF Research Database (Denmark)

    Assent, Ira; Kremer, Hardy

    2009-01-01

    Video copy detection should be capable of identifying video copies subject to alterations e.g. in video contrast or frame rates. We propose a video copy detection scheme that allows for adaptable detection of videos that are altered temporally (e.g. frame rate change) and/or visually (e.g. change in contrast). Our query processing combines filtering and indexing structures for efficient multistep computation of video copies under this model. We show that our model successfully identifies altered video copies and does so more reliably than existing models.

  1. Two adaptive radiative transfer schemes for numerical weather prediction models

    Directory of Open Access Journals (Sweden)

    V. Venema

    2007-11-01

    Full Text Available Radiative transfer calculations in atmospheric models are computationally expensive, even if based on simplifications such as the δ-two-stream approximation. In most weather prediction models these parameterisation schemes are therefore called infrequently, accepting additional model error due to the persistence assumption between calls. This paper presents two so-called adaptive parameterisation schemes for radiative transfer in a limited area model: A perturbation scheme that exploits temporal correlations and a local-search scheme that mainly takes advantage of spatial correlations. Utilising these correlations and with similar computational resources, the schemes are able to predict the surface net radiative fluxes more accurately than a scheme based on the persistence assumption. An important property of these adaptive schemes is that their accuracy does not decrease much in case of strong reductions in the number of calls to the δ-two-stream scheme. It is hypothesised that the core idea can also be employed in parameterisation schemes for other processes and in other dynamical models.
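    The adaptive idea of exploiting temporal correlation can be sketched as follows. This is a toy, not the paper's perturbation or local-search scheme: the change threshold, the one-variable "column state", and the linear stand-in for the δ-two-stream solver are all illustrative assumptions. The point is that the expensive call is only paid when the state has changed appreciably; otherwise the cached flux is updated cheaply instead of merely persisted.

```python
# Toy adaptive radiation scheme: full radiative transfer call only when the
# column state changed beyond a tolerance, cheap scaling update otherwise.

def expensive_flux(state):
    # Stand-in for a delta-two-stream radiative transfer solver
    return 2.0 * state

class AdaptiveRadiation:
    def __init__(self, tol=0.05):
        self.tol = tol
        self.last_state = None
        self.last_flux = None
        self.full_calls = 0

    def flux(self, state):
        changed = (self.last_state is None or
                   abs(state - self.last_state) / abs(self.last_state) > self.tol)
        if changed:
            self.last_flux = expensive_flux(state)   # full calculation
            self.full_calls += 1
        else:
            # cheap update: scale the cached flux by the state change,
            # instead of assuming pure persistence
            self.last_flux *= state / self.last_state
        self.last_state = state
        return self.last_flux

rad = AdaptiveRadiation()
states = [1.00, 1.01, 1.02, 1.20, 1.21]
fluxes = [rad.flux(s) for s in states]
print(rad.full_calls)  # 2: only two of five timesteps paid the full cost
```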

  2. SYNTHESIS OF VISCOELASTIC MATERIAL MODELS (SCHEMES)

    Directory of Open Access Journals (Sweden)

    V. Bogomolov

    2014-10-01

    Full Text Available The principles of constructing structural viscoelastic schemes for materials with linear viscoelastic properties, in accordance with given experimental creep-test data, are analyzed. It is shown that there can be only four types of materials with linear viscoelastic properties.

  3. Learning from Video Modeling Examples: Does Gender Matter?

    Science.gov (United States)

    Hoogerheide, Vincent; Loyens, Sofie M. M.; van Gog, Tamara

    2016-01-01

    Online learning from video modeling examples, in which a human model demonstrates and explains how to perform a learning task, is an effective instructional method that is increasingly used nowadays. However, model characteristics such as gender tend to differ across videos, and the model-observer similarity hypothesis suggests that such…

  4. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    Science.gov (United States)

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

    As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as that of the surgical workflow model (SWM) to support intuitive cooperation with surgeons. Generating a robust and reliable SWM requires a large amount of training data. However, training data collected by physically recording surgical operations is often limited, and data collection is time-consuming and labor-intensive, severely limiting the knowledge scalability of surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost, labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for robotic cholecystectomy. The generated workflow was evaluated with 4 web-retrieved videos and 4 operating-room-recorded videos, respectively. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. The satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising for scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. VIDEO '78: International Experience and Models.

    Science.gov (United States)

    Prix Jeunesse Foundation, Munich (Germany).

    The use of video technology as an alternative form of communications media for artists, teachers, students, and community groups in the United States, Europe, and Third World nations is described. Specific examples of video use in the various countries are outlined and individual characteristics of program organization, production, and…

  6. Comparing Video Modeling and Graduated Guidance Together and Video Modeling Alone for Teaching Role Playing Skills to Children with Autism

    Science.gov (United States)

    Akmanoglu, Nurgul; Yanardag, Mehmet; Batu, E. Sema

    2014-01-01

    Teaching play skills is important for children with autism. The purpose of the present study was to compare effectiveness and efficiency of providing video modeling and graduated guidance together and video modeling alone for teaching role playing skills to children with autism. The study was conducted with four students. The study was conducted…

  7. Sub-component modeling for face image reconstruction in video communications

    Science.gov (United States)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired systems, such as cable or ethernet, and wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel and requires concealment on the receiving end. We demonstrate a generative-model-based transmission scheme to compress human face images in video, which has the advantage of a potentially higher compression ratio while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM that models the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using a weighted and a non-weighted version of the sub-component AAM.

  8. Constructing Self-Modeling Videos: Procedures and Technology

    Science.gov (United States)

    Collier-Meek, Melissa A.; Fallon, Lindsay M.; Johnson, Austin H.; Sanetti, Lisa M. H.; Delcampo, Marisa A.

    2012-01-01

    Although widely recommended, evidence-based interventions are not regularly utilized by school practitioners. Video self-modeling is an effective and efficient evidence-based intervention for a variety of student problem behaviors. However, like many other evidence-based interventions, it is not frequently used in schools. As video creation…

  9. Recognizing Strokes in Tennis Videos Using Hidden Markov Models

    NARCIS (Netherlands)

    Petkovic, M.; Jonker, Willem; Zivkovic, Z.

    This paper addresses content-based video retrieval with an emphasis on recognizing events in tennis game videos. In particular, we aim at recognizing different classes of tennis strokes using automatic learning capability of Hidden Markov Models. Driven by our domain knowledge, a robust player

  10. Video Modeling and Prompting in Practice: Teaching Cooking Skills

    Science.gov (United States)

    Kellems, Ryan O.; Mourra, Kjerstin; Morgan, Robert L.; Riesen, Tim; Glasgow, Malinda; Huddleston, Robin

    2016-01-01

    This article discusses the creation of video modeling (VM) and video prompting (VP) interventions for teaching novel multi-step tasks to individuals with disabilities. This article reviews factors to consider when selecting skills to teach, and students for whom VM/VP may be successful, as well as the difference between VM and VP and circumstances…

  11. Streaming Video Modeling for Robotics Teleoperation

    Science.gov (United States)

    2011-08-11

    When creating a context, an RGBA pixel format is used, and the format of the destination is copied from the codec object. The frame object requires...stream, configuration, buffers, and a header. A video format object is created with default settings, and a video stream that uses the codec id from...the stream, and a copy of the codec context is made for use in the decoding process. Decoding and Displaying the Frame: decoding and displaying

  12. Optimal modulation and coding scheme allocation of scalable video multicast over IEEE 802.16e networks

    Directory of Open Access Journals (Sweden)

    Tsai Chia-Tai

    2011-01-01

    Full Text Available With the rapid development of wireless communication technology and the rapid increase in demand for network bandwidth, IEEE 802.16e is an emerging network technique that has been deployed in many metropolises. In addition to the features of high data rate and large coverage, it also enables scalable video multicasting, which is a potentially promising application, over an IEEE 802.16e network. How to optimally assign the modulation and coding scheme (MCS) of the scalable video stream for the mobile subscriber stations to improve spectral efficiency and maximize utility is a crucial task. We formulate this MCS assignment problem as an optimization problem, called the total utility maximization problem (TUMP). This article transforms the TUMP into a precedence constraint knapsack problem, which is NP-complete. Then, a branch and bound method, based on two dominance rules and a lower bound, is presented to solve the TUMP. The simulation results show that the proposed branch and bound method can find the optimal solution efficiently.
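    A precedence constraint knapsack solved by branch and bound can be sketched as below. The utilities, weights, and precedence chain are made-up toy data, and the bound is a simple "take everything remaining" upper bound rather than the paper's two dominance rules; items are assumed to be indexed in topological order (each parent before its children), which matches the layered structure of a scalable video stream.

```python
# Branch and bound for a knapsack where item i may be chosen only if its
# parent item is also chosen (e.g. an SVC enhancement layer needs its base).

def solve(utility, weight, parent, capacity):
    n = len(utility)
    suffix = [0] * (n + 1)              # optimistic bound: take all remaining
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + utility[i]
    best = [0]

    def dfs(i, used, profit, chosen):
        if profit > best[0]:
            best[0] = profit
        if i == n or profit + suffix[i] <= best[0]:
            return                       # prune: bound cannot beat incumbent
        dfs(i + 1, used, profit, chosen)           # branch 1: skip item i
        feasible = parent[i] is None or parent[i] in chosen
        if feasible and used + weight[i] <= capacity:
            dfs(i + 1, used + weight[i],           # branch 2: take item i
                profit + utility[i], chosen | {i})

    dfs(0, 0, 0, frozenset())
    return best[0]

# item 1 requires item 0; item 2 requires item 1 (a chain of video layers)
utility = [5, 4, 3]
weight  = [2, 2, 2]
parent  = [None, 0, 1]
print(solve(utility, weight, parent, capacity=4))  # 9: items 0 and 1 fit
```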

  13. An operator model-based filtering scheme

    International Nuclear Information System (INIS)

    Sawhney, R.S.; Dodds, H.L.; Schryer, J.C.

    1990-01-01

    This paper presents a diagnostic model developed at Oak Ridge National Laboratory (ORNL) for off-normal nuclear power plant events. The diagnostic model is intended to serve as an embedded module of a cognitive model of the human operator, one application of which could be to assist control room operators in correctly responding to off-normal events by providing a rapid and accurate assessment of alarm patterns and parameter trends. The sequential filter model comprises two distinct subsystems: an alarm analysis followed by an analysis of interpreted plant signals. During the alarm analysis phase, the alarm pattern is evaluated to generate hypotheses of possible initiating events in order of likelihood of occurrence. Each hypothesis is further evaluated through analysis of the current trends of state variables in order to validate or reject (in the form of an increased or decreased certainty factor) the given hypothesis. 7 refs., 4 figs

  14. Multiple Adaptations and Content-Adaptive FEC Using Parameterized RD Model for Embedded Wavelet Video

    Directory of Open Access Journals (Sweden)

    Yu Ya-Huei

    2007-01-01

    Full Text Available Scalable video coding (SVC) has been an active research topic for the past decade. In the past, most SVC technologies were based on a coarse-granularity scalable model which puts many scalability constraints on the encoded bitstreams. As a result, the application scenario of adapting a pre-encoded bitstream multiple times along the distribution chain has not been seriously investigated before. In this paper, a model-based multiple-adaptation framework based on a wavelet video codec, MC-EZBC, is proposed. The proposed technology allows multiple adaptations of both the video data and the content-adaptive FEC protection codes. For multiple adaptations of video data, rate-distortion information must be embedded within the video bitstream in order to allow rate-distortion optimized operations for each adaptation. Experimental results show that the proposed method reduces the amount of side information by more than 50% on average when compared to the existing technique. It also reduces the number of iterations required to perform the tier-2 entropy coding by more than 64% on average. In addition, due to the nondiscrete nature of the rate-distortion model, the proposed framework also enables multiple adaptations of the content-adaptive FEC protection scheme for more flexible error-resilient transmission of bitstreams.

  15. Iteration schemes for parallelizing models of superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Gray, P.A. [Michigan State Univ., East Lansing, MI (United States)

    1996-12-31

    The time dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-Tc superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.

  16. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    Science.gov (United States)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme 'Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the 'error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the 'error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
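    The multiplicative structure of an SPPT-style perturbation can be sketched in a few lines. The AR(1) parameters, amplitude, and scalar (single-point, single-variable) setting below are illustrative assumptions, not the ECMWF configuration: the net parametrised tendency P is multiplied by (1 + e), where e is a red-noise pattern evolved in time, so the perturbation vanishes exactly where the parametrised tendency does.

```python
# Minimal multiplicative SPPT-style perturbation of a tendency time series.
import random

def sppt_tendencies(parametrised, phi=0.9, sigma=0.3, seed=1):
    rng = random.Random(seed)
    e = 0.0
    out = []
    for p in parametrised:
        # AR(1) evolution keeps perturbations correlated between timesteps,
        # with stationary standard deviation sigma
        e = phi * e + sigma * (1 - phi ** 2) ** 0.5 * rng.gauss(0.0, 1.0)
        out.append((1.0 + e) * p)
    return out

tend = [2.0, 2.0, 2.0, 2.0]
print(sppt_tendencies(tend))  # perturbed versions of the same tendency
```

Note the multiplicative property the paper examines: a zero parametrised tendency is left untouched, whereas an additive scheme would perturb it anyway.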

  17. Multi-model ensemble schemes for predicting northeast monsoon ...

    Indian Academy of Sciences (India)

    An attempt has been made to improve the accuracy of predicted rainfall using three different multi-model ensemble (MME) schemes, viz., simple arithmetic mean of models (EM), principal component regression (PCR) and singular value decomposition based multiple linear regression (SVD). It is found that among ...

  18. Inflationary gravitational waves in collapse scheme models

    Energy Technology Data Exchange (ETDEWEB)

    Mariani, Mauro, E-mail: mariani@carina.fcaglp.unlp.edu.ar [Facultad de Ciencias Astronómicas y Geofísicas, Universidad Nacional de La Plata, Paseo del Bosque S/N, 1900 La Plata (Argentina); Bengochea, Gabriel R., E-mail: gabriel@iafe.uba.ar [Instituto de Astronomía y Física del Espacio (IAFE), UBA-CONICET, CC 67, Suc. 28, 1428 Buenos Aires (Argentina); León, Gabriel, E-mail: gleon@df.uba.ar [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Universitaria – Pab. I, 1428 Buenos Aires (Argentina)

    2016-01-10

    The inflationary paradigm is an important cornerstone of the concordance cosmological model. However, standard inflation cannot fully address the transition from an early homogeneous and isotropic stage to a later one lacking such symmetries, corresponding to our present universe. In previous works, a self-induced collapse of the wave function has been suggested as the missing ingredient of inflation. Most of the analysis regarding the collapse hypothesis has focused solely on the characteristics of the spectrum associated with scalar perturbations, and within a semiclassical gravity framework. In this Letter, working in terms of a joint metric-matter quantization for inflation, we calculate, for the first time, the tensor power spectrum and the tensor-to-scalar ratio corresponding to the amplitude of primordial gravitational waves resulting from considering a generic self-induced collapse.

  19. Inflationary gravitational waves in collapse scheme models

    Directory of Open Access Journals (Sweden)

    Mauro Mariani

    2016-01-01

    Full Text Available The inflationary paradigm is an important cornerstone of the concordance cosmological model. However, standard inflation cannot fully address the transition from an early homogeneous and isotropic stage to a later one lacking such symmetries, corresponding to our present universe. In previous works, a self-induced collapse of the wave function has been suggested as the missing ingredient of inflation. Most of the analysis regarding the collapse hypothesis has focused solely on the characteristics of the spectrum associated with scalar perturbations, and within a semiclassical gravity framework. In this Letter, working in terms of a joint metric-matter quantization for inflation, we calculate, for the first time, the tensor power spectrum and the tensor-to-scalar ratio corresponding to the amplitude of primordial gravitational waves resulting from considering a generic self-induced collapse.

  20. Inflationary gravitational waves in collapse scheme models

    International Nuclear Information System (INIS)

    Mariani, Mauro; Bengochea, Gabriel R.; León, Gabriel

    2016-01-01

    The inflationary paradigm is an important cornerstone of the concordance cosmological model. However, standard inflation cannot fully address the transition from an early homogeneous and isotropic stage to a later one lacking such symmetries, corresponding to our present universe. In previous works, a self-induced collapse of the wave function has been suggested as the missing ingredient of inflation. Most of the analysis regarding the collapse hypothesis has focused solely on the characteristics of the spectrum associated with scalar perturbations, and within a semiclassical gravity framework. In this Letter, working in terms of a joint metric-matter quantization for inflation, we calculate, for the first time, the tensor power spectrum and the tensor-to-scalar ratio corresponding to the amplitude of primordial gravitational waves resulting from considering a generic self-induced collapse.

  1. Introducing a moisture scheme to a nonhydrostatic sigma coordinate model

    CSIR Research Space (South Africa)

    Bopape, Mary-Jane M

    2011-09-01

    Full Text Available and precipitation in mid-latitude cyclones. VII: A model for the 'seeder-feeder' process in warm-frontal rainbands. Journal of the Atmospheric Sciences, 40, 1185-1206. Stensrud DJ, 2007: Parameterization schemes. Keys to understanding numerical weather...

  2. Acharya Nachiketa Multi-model ensemble schemes for predicting ...

    Indian Academy of Sciences (India)

    AUTHOR INDEX. Acharya Nachiketa. Multi-model ensemble schemes for predicting northeast monsoon rainfall over peninsular India. 795. Agarwal Neeraj see Shahi Naveen R. 337. Aggarwal Neha see Jha Neerja. 663. Ahmed Shakeel see Sarah S. 399. Alavi Amir Hossein see Mousavi Seyyed Mohammad. 1001.

  3. Efficient Delivery of Scalable Video Using a Streaming Class Model

    Directory of Open Access Journals (Sweden)

    Jason J. Quinlan

    2018-03-01

    Full Text Available When we couple the rise in video streaming with the growing number of portable devices (smart phones, tablets, laptops), we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide graceful changes in video quality, all while respecting viewing satisfaction. In this context, the use of well-known scalable/layered media streaming techniques, commonly known as scalable video coding (SVC), is an attractive solution. SVC encodes a number of video quality levels within a single media stream. This has been shown to be an especially effective and efficient solution, but it fares badly in the presence of datagram losses. While multiple description coding (MDC) can reduce the effects of packet loss on scalable video delivery, the increased delivery cost is counterproductive for constrained networks. This situation is accentuated in cases where only the lower quality level is required. In this paper, we assess these issues and propose a new approach called Streaming Classes (SC), through which we can define a key set of quality levels, each of which can be delivered in a self-contained manner. This facilitates efficient delivery, yielding reduced transmission byte-cost for devices requiring lower quality, relative to MDC and Adaptive Layer Distribution (ALD) (42% and 76% respective reductions for layer 2), while also maintaining high levels of consistent quality. We also illustrate how a selective packetisation technique can further reduce the effects of packet loss on viewable quality by

  4. A new parallelization algorithm of ocean model with explicit scheme

    Science.gov (United States)

    Fu, X. D.

    2017-08-01

    This paper focuses on the parallelization of an ocean model with an explicit scheme, one of the most commonly used schemes in the discretization of the governing equations of ocean models. The characteristic of the explicit scheme is that the calculation is simple and that the value at a given grid point depends only on grid points at the previous time step, which means that one does not need to solve sparse linear equations when solving the governing equations of the ocean model. Exploiting these characteristics, this paper designs a parallel algorithm, named halo cell update, that requires only tiny modifications to the original ocean model and little change to its space and time steps, and that parallelizes the model through a transmission module between sub-domains. This paper takes GRGO (Global Reduced Gravity Ocean model) as an example to implement the parallelization with halo update. The results demonstrate that higher speedups can be achieved at different problem sizes.
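The halo cell update can be illustrated with plain Python lists standing in for sub-domain buffers; a real implementation would perform the same copies with message passing between processes (an illustrative sketch, not the GRGO code):

```python
def exchange_halos(subdomains):
    """Update one-cell halo regions between neighbouring 1D sub-domains.

    Each sub-domain is a list: [left_halo] + interior cells + [right_halo].
    After the exchange, each halo holds a copy of the neighbour's nearest
    interior cell, so the next explicit time step needs no global data.
    """
    for i, sub in enumerate(subdomains):
        if i > 0:                        # copy left neighbour's last interior cell
            sub[0] = subdomains[i - 1][-2]
        if i < len(subdomains) - 1:      # copy right neighbour's first interior cell
            sub[-1] = subdomains[i + 1][1]
    return subdomains
```

Interior cells are never written here, so the exchange order does not matter, which is what makes the scheme easy to parallelize.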

  5. Stochastic modeling of soundtrack for efficient segmentation and indexing of video

    Science.gov (United States)

    Naphade, Milind R.; Huang, Thomas S.

    1999-12-01

    Tools for efficient and intelligent management of digital content are essential for digital video data management. An extremely challenging research area in this context is that of multimedia analysis and understanding. The capabilities of audio analysis in particular for video data management are yet to be fully exploited. We present a novel scheme for indexing and segmentation of video by analyzing the audio track. This analysis is then applied to the segmentation and indexing of movies. We build models for some interesting events in the motion picture soundtrack, including music, human speech and silence. We propose the use of hidden Markov models to model the dynamics of the soundtrack and detect audio events. Using these models we segment and index the soundtrack. A practical problem with motion picture soundtracks is that the audio is of a composite nature, corresponding to the mixing of sounds from different sources; speech in the foreground over music in the background is a common example. The coexistence of multiple individual audio sources forces us to model such composite events explicitly. Experiments reveal that explicit modeling gives better results than modeling individual audio events separately.
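Detecting audio events with an HMM reduces to finding the most likely hidden state sequence for the observed features. A compact Viterbi decoder, with toy music/speech states and quantised observations standing in for real soundtrack features, might look like:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence (e.g. music vs. speech) for an observation
    sequence, computed in log space to avoid underflow."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # best predecessor for landing in state s while emitting o
            prob, prev = max((V[-2][p] + math.log(trans_p[p][s])
                              + math.log(emit_p[s][o]), p) for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]
```

Segment boundaries then fall wherever the decoded state sequence changes label.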

  6. Using Video-Based Modeling to Promote Acquisition of Fundamental Motor Skills

    Science.gov (United States)

    Obrusnikova, Iva; Rattigan, Peter J.

    2016-01-01

    Video-based modeling is becoming increasingly popular for teaching fundamental motor skills to children in physical education. Two frequently used video-based instructional strategies that incorporate modeling are video prompting (VP) and video modeling (VM). Both strategies have been used across multiple disciplines and populations to teach a…

  7. Corruption model of loss propagation for relative prioritized packet video

    Science.gov (United States)

    Kim, Jin-Gyeong; Kim, JongWon; Kuo, C.-C. Jay

    2000-12-01

    Several analytical models have recently been introduced to estimate the impact of the error propagation effect on the source video caused by lossy transmission channels. However, previous work either focused on statistical aspects of the whole sequence or had a high computational complexity. In this work, we concentrate on estimating the distortion caused by the loss of a packet with a moderate computational complexity. The proposed model considers both the spatial filtering effect and the temporal dependency that affect the error propagation behavior. To verify this model, a real loss propagation effect is measured and compared with the expected distortion level derived by the model. Also, its applicability to the quality of service (QoS) of transmitted video is demonstrated through packet video evaluation over a simulated differentiated service (DiffServ) forwarding mechanism.
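The temporal side of such a model is often approximated as a geometric decay: spatial (loop) filtering attenuates a propagated error by a roughly constant factor in each predicted frame. A toy sketch with an assumed leakage factor, not the paper's calibrated model:

```python
def propagated_distortion(d0, frames, leak=0.07):
    """Distortion of an initial packet-loss error as it propagates through
    motion-compensated frames; `leak` is the per-frame attenuation due to
    spatial filtering (illustrative value)."""
    return [d0 * (1.0 - leak) ** k for k in range(frames)]
```

Summing the returned list gives the total distortion contributed by one lost packet until the next intra refresh.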

  8. An integrated urban drainage system model for assessing renovation scheme.

    Science.gov (United States)

    Dong, X; Zeng, S; Chen, J; Zhao, D

    2012-01-01

    Due to sustained economic growth in China over the last three decades, urbanization has been on a rapidly expanding track. In recent years, regional industrial relocations were also accelerated across the country from the east coast to the west inland. These changes have led to a large-scale redesign of urban infrastructures, including the drainage system. To help the reconstructed infrastructures towards a better sustainability, a tool is required for assessing the efficiency and environmental performance of different renovation schemes. This paper developed an integrated dynamic modeling tool, which consisted of three models for describing the sewer, the wastewater treatment plant (WWTP) and the receiving water body respectively. Three auxiliary modules were also incorporated to conceptualize the model, calibrate the simulations, and analyze the results. The developed integrated modeling tool was applied to a case study in Shenzhen City, which is one of the most dynamic cities and facing considerable challenges for environmental degradation. The renovation scheme proposed to improve the environmental performance of Shenzhen City's urban drainage system was modeled and evaluated. The simulation results supplied some suggestions for the further improvement of the renovation scheme.

  9. Adaptive Packet Combining Scheme in Three State Channel Model

    Science.gov (United States)

    Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak

    2018-01-01

    The two popular techniques of packet-combining-based error correction are the Packet Combining (PC) scheme and the Aggressive Packet Combining (APC) scheme. Each has its own merits and demerits: PC has better throughput than APC, but suffers from a higher packet error rate. Moreover, the wireless channel state changes all the time, and because of this random, time-varying nature, individual application of the SR ARQ, PC or APC scheme cannot give the desired level of throughput. Better throughput can be achieved if the appropriate transmission scheme is chosen based on the condition of the channel. Based on this approach, an adaptive packet combining scheme has been proposed: it adapts to the channel condition, carrying out transmission with the PC, APC or SR ARQ scheme as appropriate. Experimentally, the error correction capability and throughput of the proposed scheme were observed to be significantly better than those of the SR ARQ, PC and APC schemes individually.
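The APC-style combining step and the channel-adaptive switch can both be sketched simply; the bit-error-rate thresholds below are invented for illustration, not taken from the paper:

```python
def majority_combine(copies):
    """Bitwise majority vote across erroneous copies of the same packet
    (APC-style combining over bit strings)."""
    n = len(copies)
    bits = len(copies[0])
    return ''.join('1' if sum(c[i] == '1' for c in copies) > n // 2 else '0'
                   for i in range(bits))

def choose_scheme(bit_error_rate):
    """Pick a transmission scheme from the estimated channel state
    (hypothetical thresholds for illustration only)."""
    if bit_error_rate < 1e-4:
        return 'SR-ARQ'   # good channel: plain retransmission suffices
    if bit_error_rate < 1e-2:
        return 'PC'       # moderate channel: packet combining
    return 'APC'          # bad channel: aggressive combining
```

With three copies, any bit corrupted in only one copy is recovered by the vote.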

  10. Model building by Coset Space Dimensional Reduction scheme

    Science.gov (United States)

    Jittoh, Toshifumi; Koike, Masafumi; Nomura, Takaaki; Sato, Joe; Shimomura, Takashi

    2009-04-01

    We investigate gauge-Higgs unification models within the scheme of coset space dimensional reduction, beginning with a gauge theory in a fourteen-dimensional spacetime whose extra-dimensional space has the structure of a ten-dimensional compact coset space. We found seventeen phenomenologically acceptable models through an exhaustive search over the candidate coset spaces, gauge groups in fourteen dimensions, and fermion representations. Of the seventeen, ten models led to SO(10)(×U(1)) GUT-like models after dimensional reduction, three led to SU(5)×U(1) GUT-like models, and four to SU(3)×SU(2)×U(1)×U(1) Standard-Model-like models. The combinations of the coset space, the gauge group in the fourteen-dimensional spacetime, and the fermion representations of such models are listed.

  11. An Industrial Model Based Disturbance Feedback Control Scheme

    DEFF Research Database (Denmark)

    Kawai, Fukiko; Nakazawa, Chikashi; Vinther, Kasper

    2014-01-01

    This paper presents a model-based disturbance feedback control scheme. Industrial process systems have traditionally been controlled using relay and PID controllers. However, these controllers are affected by disturbances and model errors, and these effects degrade control performance. The authors... propose a new control method that can decrease the negative impact of disturbances and model errors. The control method is motivated by industrial practice at Fuji Electric. Simulation tests are examined with a conventional PID controller and the disturbance feedback control. The simulation results... demonstrate the effectiveness of the proposed method compared with the conventional PID controller...

  12. Generalized Roe's numerical scheme for a two-fluid model

    International Nuclear Information System (INIS)

    Toumi, I.; Raymond, P.

    1993-01-01

    This paper is devoted to a mathematical and numerical study of a six-equation two-fluid model. We prove that the model is strictly hyperbolic due to the inclusion of the virtual mass force term in the phasic momentum equations. The two-fluid model is naturally written in a nonconservative form. To solve the nonlinear Riemann problem for this nonconservative hyperbolic system, a generalized Roe approximate Riemann solver is used, based on a linearization of the nonconservative terms. A Godunov-type numerical scheme is built using this approximate Riemann solver. 10 refs., 5 figs.
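The paper's six-equation Roe solver is too involved to reproduce here, but the structure of a Roe flux, a central flux plus upwinding on a Roe-averaged wave speed, already appears in the scalar Burgers equation (an illustrative sketch, not the paper's scheme):

```python
def roe_flux(uL, uR):
    """Roe's approximate Riemann flux for scalar Burgers' equation, f(u) = u^2/2.

    The Roe average a = (uL + uR) / 2 satisfies f(uR) - f(uL) = a * (uR - uL),
    so |a| provides the upwind dissipation at the cell interface.
    """
    a = 0.5 * (uL + uR)                   # Roe-averaged wave speed
    fL, fR = 0.5 * uL * uL, 0.5 * uR * uR
    return 0.5 * (fL + fR) - 0.5 * abs(a) * (uR - uL)
```

For a right-moving jump (uL = 2, uR = 0) the flux reduces to the left state's flux, as upwinding requires.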

  13. When Video Games Tell Stories: A Model of Video Game Narrative Architectures

    Directory of Open Access Journals (Sweden)

    Marcello Arnaldo Picucci

    2014-11-01

    Full Text Available In the present study a model is proposed offering a comprehensive categorization of video game narrative structures, intended as the methods and techniques used by game designers, and allowed by the medium, to deliver the story content throughout the gameplay in collaboration with the players. A case is first made for the presence of narrative in video games and its growing importance as a central component in game design. An in-depth analysis ensues, focusing on how games tell stories, guided by the criteria of linearity/nonlinearity, interactivity and randomness. Light is shed upon the fundamental architectures through which stories are told, as well as the essential boundaries posed by the close link between narrative and game AI.

  14. A collaborative video sketching model in the making

    DEFF Research Database (Denmark)

    Gundersen, Peter; Ørngreen, Rikke; Henningsen, Birgitte

    2018-01-01

    The literature on design research emphasizes working in iterative cycles that investigate and explore many ideas and alternative designs. However, these cycles are seldom applied or documented in educational research papers. In this paper, we illustrate the development process of a video sketching... model, where we explore the relation between the educational research design team, their sketching and video sketching activities. The results show how sketching can be done in different modes and how it supports thinking, communication, reflection and distributed cognition in design teams when

  15. The Video Role Model as an Enterprise Teaching Aid

    Science.gov (United States)

    Robertson, Martyn; Collins, Amanda

    2003-01-01

    This article examines the need to develop a more enterprising approach to learning by adopting an experiential approach. It specifically examines the use of video case studies of entrepreneurial role models within an enterprise module at Leeds Metropolitan University. The exercise enables students to act as a consultant or counsellor and apply…

  16. Learning from video modeling examples : Effects of seeing the human model's face

    NARCIS (Netherlands)

    Van Gog, Tamara; Verveer, Ilse; Verveer, Lise

    2014-01-01

    Video modeling examples in which a human(-like) model shows learners how to perform a task are increasingly used in education, as they have become very easy to create and distribute in e-learning environments. However, little is known about design guidelines to optimize learning from video modeling

  17. Digital Video as a Personalized Learning Assignment: A Qualitative Study of Student Authored Video Using the ICSDR Model

    Science.gov (United States)

    Campbell, Laurie O.; Cox, Thomas D.

    2018-01-01

    Students within this study followed the ICSDR (Identify, Conceptualize/Connect, Storyboard, Develop, Review/Reflect/Revise) development model to create digital video, as a personalized and active learning assignment. The participants, graduate students in education, indicated that following the ICSDR framework for student-authored video guided…

  18. An integration scheme for stiff solid-gas reactor models

    Directory of Open Access Journals (Sweden)

    Bjarne A. Foss

    2001-04-01

    Full Text Available Many dynamic models encounter numerical integration problems because of a large span in the dynamic modes. In this paper we develop a numerical integration scheme for systems that include a gas phase and solid and liquid phases, such as a gas-solid reactor. The method is based on neglecting fast dynamic modes and exploiting the structure of the algebraic equations. The integration method is suitable for a large class of industrially relevant systems. The methodology has proven remarkably efficient: in practice it has performed excellently and has been a key factor in the success of the industrial simulator for electrochemical furnaces for ferro-alloy production.
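The idea of neglecting fast modes can be shown on a toy stiff system: if a fast variable z relaxes as eps * dz/dt = 1 - z with tiny eps, we replace it by its quasi-steady state z = 1 and integrate only the slow variable (an illustrative sketch, not the paper's industrial scheme):

```python
def integrate_qssa(y0, dt, steps):
    """Integrate the slow mode dy/dt = -y * z with the fast mode eliminated.

    The stiff fast equation eps * dz/dt = 1 - z is replaced by its algebraic
    quasi-steady state z = 1, so an explicit Euler step stays stable at a
    step size set by the slow dynamics alone.
    """
    y = y0
    for _ in range(steps):
        z = 1.0                 # fast mode at quasi-steady state
        y += dt * (-y * z)      # explicit Euler on the slow mode only
    return y
```

With dt = 0.01 over 100 steps the result approximates exp(-1) of the initial value, without ever resolving the fast transient.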

  19. An Automatic Bleeding Frame and Region Detection Scheme for Wireless Capsule Endoscopy Videos Based on Interplane Intensity Variation Profile in Normalized RGB Color Space

    Directory of Open Access Journals (Sweden)

    Amit Kumar Kundu

    2018-01-01

    Full Text Available Wireless capsule endoscopy (WCE) is an effective video technology for diagnosing gastrointestinal (GI) disease, such as bleeding. To avoid the conventional tedious and risky manual review of long-duration WCE videos, automatic bleeding detection schemes are gaining importance. In this paper, to investigate bleeding, the analysis of WCE images is carried out in normalized RGB color space, as human perception of bleeding is associated with different shades of red. In the proposed method, an efficient region of interest (ROI) is first extracted from the WCE image frame based on the interplane intensity variation profile in normalized RGB space. Next, the variation in the normalized green plane of the extracted ROI is presented with the help of a histogram, and features are extracted from the proposed normalized-green-plane histograms. For classification, the K-nearest neighbors classifier is employed. Moreover, bleeding zones in a bleeding image are extracted using some morphological operations. For performance evaluation, 2300 WCE images obtained from 30 publicly available WCE videos are used in a tenfold cross-validation scheme, and the proposed method outperforms four previously reported methods, with an accuracy of 97.86%, a sensitivity of 95.20%, and a specificity of 98.32%.
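The colour normalization and the KNN vote at the core of such a scheme can be sketched as follows; the feature vectors and labels here are invented toy data, not the paper's histogram features:

```python
def normalize_rgb(pixel):
    """Map (R, G, B) to the normalized chromaticity plane, where r + g + b = 1,
    removing most of the illumination dependence."""
    r, g, b = pixel
    s = float(r + g + b) or 1.0   # guard against an all-zero pixel
    return (r / s, g / s, b / s)

def knn_classify(sample, labelled, k=3):
    """K-nearest-neighbour vote over (feature_vector, label) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(labelled, key=lambda pair: dist(sample, pair[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)
```

In the paper the feature vectors are histogram statistics of the normalized green plane; the toy 2D vectors below only illustrate the voting step.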

  20. Fast Proton Titration Scheme for Multiscale Modeling of Protein Solutions.

    Science.gov (United States)

    Teixeira, Andre Azevedo Reis; Lund, Mikael; da Silva, Fernando Luís Barroso

    2010-10-12

    Proton exchange between titratable amino acid residues and the surrounding solution gives rise to exciting electric processes in proteins. We present a proton titration scheme for studying acid-base equilibria in Metropolis Monte Carlo simulations where salt is treated at the Debye-Hückel level. The method, rooted in the Kirkwood model of impenetrable spheres, is applied on the three milk proteins α-lactalbumin, β-lactoglobulin, and lactoferrin, for which we investigate the net-charge, molecular dipole moment, and charge capacitance. Over a wide range of pH and salt conditions, excellent agreement is found with more elaborate simulations where salt is explicitly included. The implicit salt scheme is orders of magnitude faster than the explicit analog and allows for transparent interpretation of physical mechanisms. It is shown how the method can be expanded to multiscale modeling of aqueous salt solutions of many biomolecules with nonstatic charge distributions. Important examples are protein-protein aggregation, protein-polyelectrolyte complexation, and protein-membrane association.
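The titration move itself is a standard Metropolis step: flip the protonation state of a random site and accept with probability exp(-dU), where dU contains the intrinsic ln(10)*(pH - pKa) term plus the electrostatic contribution. An illustrative sketch in which the Debye-Hückel electrostatics is reduced to a constant placeholder:

```python
import math
import random

def titrate(sites, pH, pKa, beta_elec=0.0, steps=2000, seed=1):
    """Metropolis Monte Carlo titration of independent sites.

    `beta_elec` is a placeholder for the (state-dependent) screened
    electrostatic energy change that a real simulation would compute.
    """
    rng = random.Random(seed)
    state = [0] * sites                      # 0 = deprotonated, 1 = protonated
    for _ in range(steps):
        i = rng.randrange(sites)
        direction = 1 if state[i] == 0 else -1   # +1: protonate, -1: deprotonate
        dU = direction * (math.log(10.0) * (pH - pKa[i]) + beta_elec)
        if dU <= 0 or rng.random() < math.exp(-dU):
            state[i] ^= 1                    # accept the flip
    return state
```

At pH well below the pKa the sites equilibrate to the protonated state, and well above it to the deprotonated state, matching the Henderson-Hasselbalch limits.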

  1. Study on noise prediction model and control schemes for substation.

    Science.gov (United States)

    Chen, Chuanmin; Gao, Yang; Liu, Songtao

    2014-01-01

    With the government's emphasis on the environmental issues of power transmission and transformation projects, noise pollution has become a prominent problem. The noise from working transformers, reactors, and other electrical equipment in a substation has a negative effect on the ambient environment. This paper focuses on using acoustic simulation software to control substation noise. According to the characteristics of substation noise and the available noise reduction techniques, a substation acoustic field model was established with the SoundPLAN software to predict the extent of substation noise. On this basis, four noise control schemes were proposed to provide helpful references for noise control during the design and construction of new substations. The feasibility and effect of these control schemes were verified through simulation modeling. The simulation results show that the substation always has a problem of excessive noise at its boundary under conventional measures; the excess noise can be efficiently reduced by taking the corresponding noise reduction measures.

  2. Study on Noise Prediction Model and Control Schemes for Substation

    Science.gov (United States)

    Gao, Yang; Liu, Songtao

    2014-01-01

    With the government's emphasis on the environmental issues of power transmission and transformation projects, noise pollution has become a prominent problem. The noise from working transformers, reactors, and other electrical equipment in a substation has a negative effect on the ambient environment. This paper focuses on using acoustic simulation software to control substation noise. According to the characteristics of substation noise and the available noise reduction techniques, a substation acoustic field model was established with the SoundPLAN software to predict the extent of substation noise. On this basis, four noise control schemes were proposed to provide helpful references for noise control during the design and construction of new substations. The feasibility and effect of these control schemes were verified through simulation modeling. The simulation results show that the substation always has a problem of excessive noise at its boundary under conventional measures; the excess noise can be efficiently reduced by taking the corresponding noise reduction measures. PMID:24672356

  3. Cross modality registration of video and magnetic tracker data for 3D appearance and structure modeling

    Science.gov (United States)

    Sargent, Dusty; Chen, Chao-I.; Wang, Yuan-Fang

    2010-02-01

    The paper reports a fully automated, cross-modality sensor data registration scheme between video and magnetic tracker data. This registration scheme is intended for use in computerized imaging systems to model the appearance, structure, and dimension of human anatomy in three dimensions (3D) from endoscopic videos, particularly colonoscopic videos, for cancer research and clinical practice. The proposed cross-modality calibration procedure operates as follows: before a colonoscopic procedure, the surgeon inserts a magnetic tracker into the working channel of the endoscope or otherwise fixes the tracker's position on the scope. The surgeon then maneuvers the scope-tracker assembly to view a checkerboard calibration pattern from a few different viewpoints for a few seconds. The calibration procedure is then complete, and the relative pose (translation and rotation) between the reference frames of the magnetic tracker and the scope is determined. During the colonoscopic procedure, the readings from the magnetic tracker are used to automatically deduce the pose (both position and orientation) of the scope's reference frame over time, without complicated image analysis. Knowing the scope movement over time then allows us to infer the 3D appearance and structure of the organs and tissues in the scene. While there are other well-established mechanisms for inferring the movement of the camera (scope) from images, they are often sensitive to mistakes in image analysis, error accumulation, and structure deformation. The proposed method of using a magnetic tracker to establish the camera motion parameters thus provides a robust and efficient alternative for 3D model construction. Furthermore, the calibration procedure requires neither special training nor expensive calibration equipment (except for a camera calibration pattern, a checkerboard that can be printed on any laser or inkjet printer).

  4. Video Modeling and Word Identification in Adolescents with Autism Spectrum Disorder

    Science.gov (United States)

    Morlock, Larissa; Reynolds, Jennifer L.; Fisher, Sycarah; Comer, Ronald J.

    2015-01-01

    Video modeling involves the learner viewing videos of a model demonstrating a target skill. According to the National Professional Development Center on Autism Spectrum Disorders (2011), video modeling is an evidenced-based intervention for individuals with Autism Spectrum Disorder (ASD) in elementary through middle school. Little research exists…

  5. An intracloud lightning parameterization scheme for a storm electrification model

    Science.gov (United States)

    Helsdon, John H., Jr.; Wu, Gang; Farley, Richard D.

    1992-01-01

    The parameterization of an intracloud lightning discharge has been implemented in the present storm electrification model. The initiation, propagation direction, and termination of the discharge are computed using the magnitude and direction of the electric field vector as the determining criteria. The charge redistribution due to the lightning is approximated assuming the channel to be an isolated conductor with zero net charge over its entire length. Various simulations involving differing amounts of charge transferred and distribution of charges have been done. Values of charge transfer, dipole moment change, and electrical energy dissipation computed in the model are consistent with observations. The effects of the lightning-produced ions on the hydrometeor charges and electric field components depend strongly on the amount of charge transferred. A comparison between the measured electric field change of an actual intracloud flash and the field change due to the simulated discharge shows favorable agreement. Limitations of the parameterization scheme are discussed.

  6. Dynamics Model Abstraction Scheme Using Radial Basis Functions

    Directory of Open Access Journals (Sweden)

    Silvia Tolu

    2012-01-01

    Full Text Available This paper presents a control model for object manipulation. Properties of objects and environmental conditions influence motor control and learning. System dynamics depend on an unobserved external context, for example, the work load of a robot manipulator. The dynamics of a robot arm change as it manipulates objects with different physical properties, for example, mass, shape, or mass distribution. We address active sensing strategies to acquire object dynamical models with a radial basis function neural network (RBF). Experiments are done using a real robot arm, and trajectory data are gathered during various trials manipulating different objects. Biped robots do not have high-force joint servos, and the control system can hardly compensate for all the inertia variation of the adjacent joints and disturbance torque in dynamic gait control. In order to achieve smoother control and more reliable sensorimotor complexes, we evaluate and compare a sparse velocity-driven versus a dense position-driven control scheme.
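An RBF model of the kind used here maps an input through Gaussian basis activations and a linear readout; a minimal scalar sketch (the centers, weights, and width are toy values, and the training step that fits the weights is omitted):

```python
import math

def rbf_features(x, centers, gamma=1.0):
    """Gaussian radial basis activations phi_i(x) = exp(-gamma * (x - c_i)^2)."""
    return [math.exp(-gamma * (x - c) ** 2) for c in centers]

def rbf_predict(x, centers, weights, gamma=1.0):
    """Linear readout over the activations: y = sum_i w_i * phi_i(x)."""
    return sum(w * f for w, f in zip(weights, rbf_features(x, centers, gamma)))
```

In the dynamics-abstraction setting, x would be the (multi-dimensional) arm state and the readout would predict joint torques; the weights are typically fitted by least squares on gathered trajectory data.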

  7. Algorithms for Optimal Model Distributions in Adaptive Switching Control Schemes

    Directory of Open Access Journals (Sweden)

    Debarghya Ghosh

    2016-03-01

    Full Text Available Several multiple model adaptive control architectures have been proposed in the literature. Despite many advances in theory, the crucial question of how to synthesize the pairs model/controller in a structurally optimal way is to a large extent unaddressed. In particular, it is not clear how to place the pairs model/controller in such a way that the properties of the switching algorithm (e.g., number of switches, learning transient, final performance) are optimal with respect to some criteria. In this work, we focus on the so-called multi-model unfalsified adaptive supervisory switching control (MUASSC) scheme; we define a suitable structural optimality criterion and develop algorithms for synthesizing the pairs model/controller in such a way that they are optimal with respect to the structural optimality criterion we defined. The peculiarity of the proposed optimality criterion and algorithms is that the optimization is carried out so as to optimize the entire behavior of the adaptive algorithm, i.e., both the learning transient and the steady-state response. A comparison is made with respect to the model distribution of the robust multiple model adaptive control (RMMAC), where the optimization considers only the steady-state ideal response and neglects any learning transient.

  8. A fully scalable motion model for scalable video coding.

    Science.gov (United States)

    Kao, Meng-Ping; Nguyen, Truong

    2008-06-01

    Motion information scalability is an important requirement for a fully scalable video codec, especially for decoding scenarios of low bit rate or small image size. So far, several scalable coding techniques for motion information have been proposed, including progressive motion vector precision coding and motion vector field layered coding. However, it remains unclear which functionalities motion scalability must provide and how it can collaborate seamlessly with other scalabilities, such as spatial, temporal, and quality, in a scalable video codec. In this paper, we first define the functionalities required for motion scalability. Based on these requirements, a fully scalable motion model is proposed along with tailored encoding techniques to minimize the coding overhead of scalability. Moreover, an associated rate-distortion-optimized motion estimation algorithm is provided to achieve better efficiency across various decoding scenarios. Simulation results are presented to verify the superiority of the proposed scalable motion model over nonscalable ones.

  9. Operation quality assessment model for video conference system

    Science.gov (United States)

    Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian

    2018-01-01

    The video conference system has become an important support platform for smart grid operation and management, and its operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business, and operation-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers fast convergence and high prediction accuracy in contrast with a plain regularized BP neural network, and its generalization ability is superior to LM-BP and Bayesian BP neural networks.

  10. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    Science.gov (United States)

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

    Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). To improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model, before the amount of precipitation is estimated separately on wet days. This process reproduces precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in a daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by the different interpolation schemes. Differences appear in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin, and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error than the directly interpolated inputs. © 2011 Springer-Verlag.
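The two-step process described above (a logistic model for precipitation occurrence, then a separate regression for amounts on wet days) can be sketched on synthetic data. The predictors, coefficients, and the Newton/least-squares fitting below are illustrative, not the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "station" predictors (e.g., elevation, a neighbor-distance index).
X = rng.normal(size=(500, 2))

# Step 1 ground truth: occurrence follows a logistic model.
logit = 0.8 * X[:, 0] - 0.5 * X[:, 1]
wet = rng.random(500) < 1.0 / (1.0 + np.exp(-logit))

# Step 2 ground truth: amount on wet days follows a linear model in log space.
amount = np.where(wet, np.exp(0.3 + 0.6 * X[:, 0] + rng.normal(0, 0.1, 500)), 0.0)

# Fit step 1 by Newton's method (plain maximum-likelihood logistic regression).
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    Wd = p * (1 - p)
    beta += np.linalg.solve(X.T @ (Wd[:, None] * X), X.T @ (wet - p))

# Fit step 2 by least squares on wet days only, in log space.
Xi = np.column_stack([np.ones(wet.sum()), X[wet, 0]])
coef = np.linalg.lstsq(Xi, np.log(amount[wet]), rcond=None)[0]

print(np.round(beta, 1), np.round(coef, 1))
```

Both true coefficient sets are recovered closely, which is the point of separating occurrence from amount: each sub-model is fit only on the data it actually describes.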

  11. Gas leak detection in infrared video with background modeling

    Science.gov (United States)

    Zeng, Xiaoxia; Huang, Likun

    2018-03-01

    Background modeling plays an important role in the task of gas detection based on infrared video. The ViBe algorithm has been a widely used background modeling algorithm in recent years. However, its processing speed sometimes cannot meet the requirements of real-time detection applications. Therefore, based on the traditional ViBe algorithm, we propose a fast foreground model and optimize the results by combining the connected-domain algorithm and the nine-spaces algorithm in the subsequent processing steps. Experiments show the effectiveness of the proposed method.
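For context, the sample-based background model that ViBe builds on can be sketched as follows. This is a generic illustration of the classic ViBe classification/update rule, not the paper's fast variant; the grid size, radius, and match counts are toy values:

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, N = 8, 8, 10          # frame size and samples per pixel (toy values)
RADIUS, MIN_MATCH = 15, 2   # classic ViBe-style thresholds

# Initialize each pixel's sample set from a static background near value 100.
samples = 100 + rng.integers(-5, 6, size=(N, H, W))

def classify(frame, samples):
    """A pixel is background iff at least MIN_MATCH samples lie within RADIUS."""
    matches = (np.abs(samples - frame[None]) < RADIUS).sum(axis=0)
    return matches >= MIN_MATCH

frame = np.full((H, W), 100, dtype=int)
frame[2:5, 2:5] = 200       # a bright "gas plume" enters the scene
bg = classify(frame, samples)
print(int((~bg).sum()))     # number of detected foreground (plume) pixels

# Conservative, subsampled update: refresh one random sample, background only.
rows, cols = np.arange(H)[:, None], np.arange(W)[None, :]
idx = rng.integers(0, N, size=(H, W))
upd = bg & (rng.random((H, W)) < 1 / 16)
samples[idx, rows, cols] = np.where(upd, frame, samples[idx, rows, cols])
```

The per-pixel sample comparison is what dominates the cost in practice, which is why a faster foreground model and post-processing (connected components) are worthwhile.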

  12. Two nonlinear control schemes contrasted on a hydrodynamiclike model

    Science.gov (United States)

    Keefe, Laurence R.

    1993-01-01

    The principles of two flow control strategies, those of Huebler (Luescher and Huebler, 1989) and of Ott et al. (1990) are discussed, and the two schemes are compared for their ability to control shear flow, using fully developed and transitional solutions of the Ginzburg-Landau equation as models for such flows. It was found that the effectiveness of both methods in obtaining control of fully developed flows depended strongly on the 'distance' in state space between the uncontrolled flow and goal dynamics. There were conceptual difficulties in applying the Ott et al. method to transitional convectively unstable flows. On the other hand, the Huebler method worked well, within certain limitations, although at a large cost in energy terms.

  13. Radiolytic oxidation of propane: computer modeling of the reaction scheme

    International Nuclear Information System (INIS)

    Gupta, A.K.; Hanrahan, R.J.

    1991-01-01

    The oxidation of gaseous propane under gamma radiolysis was studied at 100 torr pressure and 25 °C, at oxygen pressures from 1 to 15 torr. Major oxygen-containing products and their G-values with 10% added oxygen are as follows: acetone, 0.98; i-propyl alcohol, 0.86; propionaldehyde, 0.43; n-propyl alcohol, 0.11; acrolein, 0.14; and allyl alcohol, 0.038. The formation of major oxygen-containing products was explained on the basis that the alkyl radicals combine with molecular oxygen to give peroxyl radicals; the peroxyl radicals react with one another to give alkoxyl radicals, which in turn react with one another to form carbonyl compounds and alcohols. The reaction scheme for the formation of major products was examined using computer modeling based on a mechanism involving 28 reactions. Yields could be brought into agreement with the data within experimental error in nearly all cases. (author)

  14. Video Modeling and Observational Learning to Teach Gaming Access to Students with ASD

    Science.gov (United States)

    Spriggs, Amy D.; Gast, David L.; Knight, Victoria F.

    2016-01-01

    The purpose of this study was to evaluate both video modeling and observational learning to teach age-appropriate recreation and leisure skills (i.e., accessing video games) to students with autism spectrum disorder. Effects of video modeling were evaluated via a multiple probe design across participants and criteria for mastery were based on…

  15. An Overview of Data Models and Query Languages for Content-based Video Retrieval

    NARCIS (Netherlands)

    Petkovic, M.; Jonker, Willem

    As a large amount of video data becomes publicly available, the need to model and query this data efficiently becomes significant. Consequently, content-based retrieval of video data turns out to be a challenging and important problem addressing areas such as video modelling, indexing, querying,

  16. Effectiveness of Video Modeling Provided by Mothers in Teaching Play Skills to Children with Autism

    Science.gov (United States)

    Besler, Fatma; Kurt, Onur

    2016-01-01

    Video modeling is an evidence-based practice that can be used to provide instruction to individuals with autism. Studies show that this instructional practice is effective in teaching many types of skills such as self-help skills, social skills, and academic skills. However, in previous studies, videos used in the video modeling process were…

  17. Examining the Effects of Video Modeling and Prompts to Teach Activities of Daily Living Skills.

    Science.gov (United States)

    Aldi, Catarina; Crigler, Alexandra; Kates-McElrath, Kelly; Long, Brian; Smith, Hillary; Rehak, Kim; Wilkinson, Lisa

    2016-12-01

    Video modeling has been shown to be effective in teaching a number of skills to learners diagnosed with autism spectrum disorders (ASD). In this study, we taught two young men diagnosed with ASD three different activities of daily living skills (ADLS) using point-of-view video modeling. Results indicated that both participants met criterion for all ADLS. Participants did not maintain mastery criterion at a 1-month follow-up, but did score above baseline at maintenance with and without video modeling. • Point-of-view video models may be an effective intervention to teach daily living skills. • Video modeling with handheld portable devices (Apple iPod or iPad) can be just as effective as video modeling with stationary viewing devices (television or computer). • The use of handheld portable devices (Apple iPod and iPad) makes video modeling accessible and possible in a wide variety of environments.

  18. Comparison of three ice cloud optical schemes in climate simulations with community atmospheric model version 5

    Science.gov (United States)

    Zhao, Wenjie; Peng, Yiran; Wang, Bin; Yi, Bingqi; Lin, Yanluan; Li, Jiangnan

    2018-05-01

    A newly implemented Baum-Yang scheme for simulating ice cloud optical properties is compared with existing schemes (the Mitchell and Fu schemes) in a standalone radiative transfer model and in the global climate model (GCM) Community Atmospheric Model Version 5 (CAM5). This study systematically analyzes the effect of different ice cloud optical schemes on global radiation and climate through a series of simulations with a simplified standalone radiative transfer model, the atmospheric GCM CAM5, and a comprehensive coupled climate model. Results from the standalone radiative model show that the Baum-Yang scheme yields generally weaker effects of ice cloud on temperature profiles in both the shortwave and longwave spectrum. CAM5 simulations indicate that the Baum-Yang scheme, in place of the Mitchell/Fu scheme, tends to cool the upper atmosphere and strengthen the thermodynamic instability in low and mid-latitudes, which could intensify the Hadley circulation and dehydrate the subtropics. When CAM5 is coupled with a slab ocean model to include simplified air-sea interaction, the reduced downward longwave flux to the surface in the Baum-Yang scheme mitigates the ice-albedo feedback in the Arctic as well as the water vapor and cloud feedbacks in low and mid-latitudes, resulting in an overall temperature decrease of 3.0/1.4 °C globally compared with the Mitchell/Fu schemes. The radiative effects and climate feedbacks of the three ice cloud optical schemes documented in this study can serve as a reference for future improvements of ice cloud simulation in CAM5.

  19. Noise Residual Learning for Noise Modeling in Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The noise model is one of the inherently difficult challenges in DVC. This paper considers Transform Domain Wyner-Ziv (TDWZ) coding and proposes...... noise residual learning techniques that take residues from previously decoded frames into account to estimate the decoding residue more precisely. Moreover, the techniques calculate a number of candidate noise residual distributions within a frame to adaptively optimize the soft side information during...

  20. The Global Classroom Video Conferencing Model and First Evaluations

    DEFF Research Database (Denmark)

    Weitze, Charlotte Lærke; Ørngreen, Rikke; Levinsen, Karin

    2013-01-01

    pedagogical innovativeness, including collaborative and technological issues. The research is based on the Global Classroom Model as it is implemented and used at an adult learning center in Denmark (VUC Storstrøm). VUC Storstrøms (VUC) Global Classroom Model is an approach to video conferencing and e...... are present on campus in the classroom, while other students are participating simultaneously from their home using laptops. Although the Global Classroom Model is pedagogically flexible, the students are required to attend according to regulations from the Ministry of Children and Education to pass....... All these matters need to be taken into consideration when implementing the Global Classroom Model. Through the start-up period of a PhD study and through a research-based competence development project with senior researchers, we have gained knowledge about the experiences, challenges, and potentials...

  1. Radiolytic oxidation of propane: Computer modeling of the reaction scheme

    Science.gov (United States)

    Gupta, Avinash K.; Hanrahan, Robert J.

    The oxidation of gaseous propane under gamma radiolysis was studied at 100 torr pressure and 25°C, at oxygen pressures from 1 to 15 torr. Major oxygen-containing products and their G-values with 10% added oxygen are as follows: acetone, 0.98; i-propyl alcohol, 0.86; propionaldehyde, 0.43; n-propyl alcohol, 0.11; acrolein, 0.14; and allyl alcohol, 0.038. Minor products include i-butyl alcohol, t-amyl alcohol, n-butyl alcohol, n-amyl alcohol, and i-amyl alcohol. Small yields of i-hexyl alcohol and n-hexyl alcohol were also observed. There was no apparent difference in the G-values at pressures of 50, 100 and 150 torr. When the oxygen concentration was decreased below 5%, the yields of acetone, i-propyl alcohol, and n-propyl alcohol increased, the propionaldehyde yield decreased, and the yields of other products remained constant. The formation of major oxygen-containing products was explained on the basis that the alkyl radicals combine with molecular oxygen to give peroxyl radicals; the peroxyl radicals react with one another to give alkoxyl radicals, which in turn react with one another to form carbonyl compounds and alcohols. The reaction scheme for the formation of major products was examined using computer modeling based on a mechanism involving 28 reactions. Yields could be brought into agreement with the data within experimental error in nearly all cases.
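The kind of computer modeling described, integrating a reaction mechanism forward in time, can be sketched with a toy three-reaction subset of the peroxyl/alkoxyl chemistry. The rate constants and initial concentrations below are invented, and explicit Euler stands in for whatever integrator the authors used on the full 28-reaction mechanism:

```python
# Toy subset of the peroxyl/alkoxyl mechanism (rate constants and initial
# concentrations are invented; the paper's mechanism has 28 reactions):
#   R   + O2  -> RO2                    (k1)
#   RO2 + RO2 -> 2 RO                   (k2)
#   RO  + RO  -> carbonyl + alcohol     (k3)
k1, k2, k3 = 1.0, 0.5, 0.5
y = {"R": 1.0, "O2": 5.0, "RO2": 0.0, "RO": 0.0, "carbonyl": 0.0, "alcohol": 0.0}

dt, steps = 1e-3, 20000          # explicit Euler integration to t = 20
for _ in range(steps):
    r1 = k1 * y["R"] * y["O2"]
    r2 = k2 * y["RO2"] ** 2
    r3 = k3 * y["RO"] ** 2
    y["R"] -= dt * r1
    y["O2"] -= dt * r1
    y["RO2"] += dt * (r1 - 2 * r2)
    y["RO"] += dt * (2 * r2 - 2 * r3)
    y["carbonyl"] += dt * r3
    y["alcohol"] += dt * r3

# Carbon is conserved, and the two product channels stay exactly balanced.
total_c = y["R"] + y["RO2"] + y["RO"] + y["carbonyl"] + y["alcohol"]
print(round(total_c, 6), round(y["carbonyl"], 3))
```

Comparing the simulated product ratios against measured G-values, reaction by reaction, is how such a mechanism is validated and refined.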

  2. VIDEO SEGMENTATION USING A NOVEL LBP DESCRIPTOR

    Directory of Open Access Journals (Sweden)

    Zhongkun He

    2014-08-01

    Full Text Available Video segmentation is the basis for content-based video retrieval, object recognition, object tracking, and video compression. This paper proposes a novel, simple spatial-temporal LBP coding method that uses the spatial-temporal 2 × 2 × 2 neighborhood clique to encode the changes in a video. Based on this coding method, a video segmentation scheme is developed. Compared to traditional segmentation methods, its distinguishing advantage is that it needs no background model and is computationally simple. Experimental results indicate that the new algorithm gives satisfying segmentation results.
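A hypothetical variant of the 2 × 2 × 2 spatio-temporal clique coding can be sketched as follows; the thresholding rule (against the clique mean) is an assumption for illustration, not necessarily the paper's exact code:

```python
import numpy as np

def st_lbp(prev, curr):
    """8-bit code per 2x2 block from the 2x2x2 spatio-temporal clique.
    Thresholding against the clique mean is a hypothetical coding rule."""
    H, W = curr.shape
    codes = np.zeros((H // 2, W // 2), dtype=np.uint8)
    for i in range(0, H - 1, 2):
        for j in range(0, W - 1, 2):
            clique = np.concatenate([prev[i:i + 2, j:j + 2].ravel(),
                                     curr[i:i + 2, j:j + 2].ravel()])
            bits = (clique >= clique.mean()).astype(np.uint8)
            codes[i // 2, j // 2] = np.packbits(bits)[0]
    return codes

prev = np.full((4, 4), 50, dtype=np.uint8)
curr = prev.copy()
curr[0:2, 0:2] = 200                      # object enters the top-left block
changed = st_lbp(prev, curr) != st_lbp(prev, prev)
print(changed.tolist())                   # only the top-left block changes
```

Because the code depends only on local intensity ordering across two frames, no explicit background model has to be built or maintained.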

  3. Energy saving approaches for video streaming on smartphone based on QoE modeling

    DEFF Research Database (Denmark)

    Ballesteros, Luis Guillermo Martinez; Ickin, Selim; Fiedler, Markus

    2016-01-01

    In this paper, we study the influence of video stalling on QoE. We provide QoE models that are obtained in realistic scenarios on the smartphone, and provide energy-saving approaches for smartphone by leveraging the proposed QoE models in relation to energy. Results show that approximately 5J...... is saved in a 3 minutes video clip with an acceptable Mean Opinion Score (MOS) level when the video frames are skipped. If the video frames are not skipped, then it is suggested to avoid freezes during a video stream as the freezes highly increase the energy waste on the smartphones....

  4. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    Science.gov (United States)

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

    In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our built large-scale eye tracking database and one other database (DML-ITRACK-3D).
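The fusion of spatial and temporal saliency with uncertainty weighting can be illustrated generically. The entropy-based uncertainty proxy below is an invented stand-in for the paper's Gestalt-law-derived weights:

```python
import numpy as np

def fuse(spatial, temporal):
    """Uncertainty-weighted fusion of two saliency maps: each map is weighted
    by the inverse of an uncertainty proxy (here the entropy of the map,
    an invented stand-in for the paper's Gestalt-based weighting)."""
    def uncertainty(s):
        p = s / s.sum()
        return float(-(p[p > 0] * np.log(p[p > 0])).sum())
    ws = 1.0 / (uncertainty(spatial) + 1e-9)
    wt = 1.0 / (uncertainty(temporal) + 1e-9)
    return (ws * spatial + wt * temporal) / (ws + wt)

spatial = np.zeros((8, 8))
spatial[2, 2] = 1.0                  # confident, sharply peaked spatial map
temporal = np.full((8, 8), 1 / 64)   # diffuse, uncertain temporal map
fused = fuse(spatial, temporal)
print(round(float(fused[2, 2]), 3))  # the confident map dominates the fusion
```

The design choice is the same as in the paper: whichever cue is less ambiguous about where attention should go receives the larger weight.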

  5. 3 Lectures: "Lagrangian Models", "Numerical Transport Schemes", and "Chemical and Transport Models"

    Science.gov (United States)

    Douglass, A.

    2005-01-01

    The topics for the three lectures for the Canadian Summer School are Lagrangian Models, numerical transport schemes, and chemical and transport models. In the first lecture I will explain the basic components of the Lagrangian model (a trajectory code and a photochemical code), the difficulties in using such a model (initialization) and show some applications in interpretation of aircraft and satellite data. If time permits I will show some results concerning inverse modeling which is being used to evaluate sources of tropospheric pollutants. In the second lecture I will discuss one of the core components of any grid point model, the numerical transport scheme. I will explain the basics of shock capturing schemes, and performance criteria. I will include an example of the importance of horizontal resolution to polar processes. We have learned from NASA's global modeling initiative that horizontal resolution matters for predictions of the future evolution of the ozone hole. The numerical scheme will be evaluated using performance metrics based on satellite observations of long-lived tracers. The final lecture will discuss the evolution of chemical transport models over the last decade. Some of the problems with assimilated winds will be demonstrated, using satellite data to evaluate the simulations.
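The conservation and monotonicity criteria used to judge numerical transport schemes can be demonstrated with the simplest finite-volume scheme, first-order upwind advection of a tracer on a periodic 1-D domain (grid size and CFL number below are chosen arbitrarily):

```python
import numpy as np

# First-order upwind finite-volume scheme for q_t + u q_x = 0 with u > 0,
# on a periodic 1-D domain. The conservative flux form keeps the total tracer
# mass exactly constant, and with CFL <= 1 each update is a convex combination
# of neighboring values (monotone: no new over- or undershoots).
nx = 100
u = 1.0
dx = 1.0 / nx
dt = 0.5 * dx / u                      # CFL number 0.5
x = (np.arange(nx) + 0.5) * dx
q = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)   # square-wave tracer
mass0 = q.sum() * dx

for _ in range(100):                   # advect for t = 0.5
    flux = u * q                       # upwind flux leaving each cell
    q = q - (dt / dx) * (flux - np.roll(flux, 1))

mass_error = q.sum() * dx - mass0
print(abs(mass_error), float(q.max()))
```

The mass error stays at round-off level while the square wave is smeared by numerical diffusion, exactly the trade-off that higher-order shock-capturing schemes are designed to improve.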

  6. Nitrogen and Phosphorus Biomass-Kinetic Model for Chlorella vulgaris in a Biofuel Production Scheme

    Science.gov (United States)

    2010-03-01

    Thesis: Nitrogen and Phosphorus Biomass-Kinetic Model for Chlorella vulgaris in a Biofuel Production Scheme. William M. Rowley, BS, Major, USMC. Report no. AFIT/GES/ENV/10-M04.

  7. Sensitivity experiments of a regional climate model to the different convective schemes over Central Africa

    Science.gov (United States)

    Armand J, K. M.

    2017-12-01

    In this study, version 4 of the regional climate model (RegCM4) is used to perform a 6-year simulation, including one year of spin-up (from January 2001 to December 2006), over Central Africa using four convective schemes: the Emanuel scheme (MIT), the Grell scheme with the Arakawa-Schubert closure assumption (GAS), the Grell scheme with the Fritsch-Chappell closure assumption (GFC), and the Anthes-Kuo scheme (Kuo). We have investigated the ability of the model to simulate precipitation, surface temperature, wind, and aerosol optical depth. Emphasis in the model results is placed on the December-January-February (DJF) and July-August-September (JAS) periods. Two subregions have been identified for more specific analysis, namely: zone 1, which corresponds to the Sahel region, mainly classified as desert and steppe, and zone 2, a region spanning the tropical rain forest and characterised by a bimodal rain regime. We found that regardless of the period or simulated parameter, the MIT scheme generally has a tendency to overestimate. The GAS scheme is more suitable for simulating the aforementioned parameters, as well as the diurnal cycle of precipitation, everywhere over the study domain irrespective of the season. In JAS, model results are similar in the representation of regional wind circulation. Apart from the MIT scheme, all the convective schemes give the same trends in aerosol optical depth simulations. An additional experiment reveals that using BATS instead of the Zeng scheme to calculate ocean fluxes appears to improve the quality of the model simulations.

  8. On usage of CABARET scheme for tracer transport in INM ocean model

    International Nuclear Information System (INIS)

    Diansky, Nikolay; Kostrykin, Sergey; Gusev, Anatoly; Salnikov, Nikolay

    2010-01-01

    The contemporary state of ocean numerical modelling sets some requirements for the numerical advection schemes used in ocean general circulation models (OGCMs). The most important requirements are conservation, monotonicity, and numerical efficiency, including good parallelization properties. Investigation of several advection schemes shows that one of the best schemes satisfying these criteria is the CABARET scheme. A 3D modification of the CABARET scheme was used to develop a new transport module (for temperature and salinity) for the Institute of Numerical Mathematics ocean model (INMOM). Testing of this module on some common benchmarks shows high accuracy in comparison with the second-order advection scheme used in the INMOM. The new module was incorporated into the INMOM, and experiments with the modified model showed a better simulation of oceanic circulation than its previous version.

  9. Universal block diagram based modeling and simulation schemes for fractional-order control systems.

    Science.gov (United States)

    Bai, Lu; Xue, Dingyü

    2017-05-08

    Universal block diagram based schemes are proposed for modeling and simulating fractional-order control systems in this paper. A fractional operator block in Simulink is designed to evaluate the fractional-order derivative and integral. Based on this block, fractional-order control systems with zero initial conditions can be modeled conveniently. For modeling a system with nonzero initial conditions, an auxiliary signal is constructed in the compensation scheme. Since the compensation scheme is very complicated, the integrator chain scheme is further proposed to simplify the modeling procedures. The accuracy and effectiveness of the schemes are assessed in the examples; the computational results confirm that the block diagram scheme is efficient for all Caputo fractional-order ordinary differential equations (FODEs) of any complexity, including implicit Caputo FODEs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
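A fractional-order derivative of the kind such operator blocks evaluate can be approximated numerically with the Grünwald-Letnikov formula. The sketch below is a generic illustration, not the paper's Simulink block:

```python
import math

def gl_frac_deriv(f, alpha, t, h=1e-3):
    """Grünwald-Letnikov approximation of the order-alpha derivative at t.
    For f with f(0) = 0 it agrees with the Caputo definition used for FODEs."""
    n = int(round(t / h))
    w, acc = 1.0, f(t)
    for j in range(1, n + 1):
        w *= (j - 1 - alpha) / j      # builds (-1)^j * binom(alpha, j)
        acc += w * f(t - j * h)
    return acc / h ** alpha

# Check against the analytic result D^0.5 [t^2] = 2 t^1.5 / Gamma(2.5).
approx = gl_frac_deriv(lambda t: t * t, 0.5, 1.0)
exact = 2.0 / math.gamma(2.5)
print(round(approx, 3), round(exact, 3))
```

The first-order accuracy of this truncated sum is what makes dedicated operator blocks (and the compensation or integrator-chain constructions for nonzero initial conditions) worth engineering carefully.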

  10. BOT schemes as financial model of hydro power projects

    International Nuclear Information System (INIS)

    Grausam, A.

    1997-01-01

    Build-operate-transfer (BOT) schemes are among the latest methods adopted in developing infrastructure projects. This paper outlines project financing through BOT schemes and briefly focuses on the factors particularly relevant to hydro power projects. Hydro power development not only provides one of the best ways to produce electricity; it can also solve problems in other fields, such as navigation in the case of run-of-the-river plants, ground water management, and flood control. This does not make hydro power plant (HPP) projects cheaper, but hydro energy is a clean and renewable energy, and the worldwide hydro potential will play a major role in meeting increased future demand. 5 figs

  11. Non-intrusive Packet-Layer Model for Monitoring Video Quality of IPTV Services

    Science.gov (United States)

    Yamagishi, Kazuhisa; Hayashi, Takanori

    Developing a non-intrusive packet-layer model is required to passively monitor the quality of experience (QoE) during service. We propose a packet-layer model that can be used to estimate the video quality of IPTV using quality parameters derived from transmitted packet headers. The computational load of the model is lighter than that of the model that takes video signals and/or video-related bitstream information such as motion vectors as input. This model is applicable even if the transmitted bitstream information is encrypted because it uses transmitted packet headers rather than bitstream information. For developing the model, we conducted three extensive subjective quality assessments for different encoders and decoders (codecs), and video content. Then, we modeled the subjective video quality assessment characteristics based on objective features affected by coding and packet loss. Finally, we verified the model's validity by applying our model to unknown data sets different from training data sets used above.

  12. Modeling and Simulation of Downlink Subcarrier Allocation Schemes in LTE

    DEFF Research Database (Denmark)

    Popovska Avramova, Andrijana; Yan, Ying; Dittmann, Lars

    2012-01-01

    The efficient utilization of the air interface in the LTE standard is achieved through a combination of subcarrier allocation schemes, adaptive modulation and coding, and transmission power allotment. The scheduler in the base station has a major role in achieving the required QoS and the overall...

  13. Analyses of models for promotion schemes and ownership arrangements

    DEFF Research Database (Denmark)

    Hansen, Lise-Lotte Pade; Schröder, Sascha Thorsten; Münster, Marie

    2011-01-01

    as increase the national competitiveness. The stationary fuel cell technology is still in a rather early stage of development and faces a long list of challenges and barriers of which some are linked directly to the technology through the need of cost decrease and reliability improvements. Others are linked...... countries should opt to support stationary fuel cells, we find that in Denmark it would be promising to apply the net metering based support scheme for households with an electricity consumption exceeding the electricity production from the fuel cell. In France and Portugal the most promising support scheme...... is price premium when the fuel cell is run as a part of a virtual power plant. From a system perspective, it appears that it is more important which kind of energy system (represented by country) the FC’s are implemented in, rather than which operation strategy is used. In an energy system with lots...

  14. EFFICIENT USE OF VIDEO FOR 3D MODELLING OF CULTURAL HERITAGE OBJECTS

    Directory of Open Access Journals (Sweden)

    B. Alsadik

    2015-03-01

    Full Text Available Currently, there is rapid development in the techniques of automated image based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architectures and monuments. Practically, video imaging is much easier to apply than still image shooting in IBM techniques because the latter requires thorough planning and proficiency. However, three main problems arise when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images decreases the processing time and yields a reliable textured 3D model comparable with models produced by still imaging. Two experiments, modelling a building and a monument, are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find the final predicted accuracy and the model level of detail. Depending on the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.
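The blur-aware frame selection idea can be sketched with a common heuristic, the variance of a Laplacian response; the paper's actual blur metric and coverage criterion may differ, and the frame data here is synthetic:

```python
import numpy as np

def sharpness(gray):
    """Variance of a 5-point Laplacian response; low values suggest blur.
    (A common heuristic; the paper's actual selection criterion may differ.)"""
    lap = (-4 * gray[1:-1, 1:-1] + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

rng = np.random.default_rng(3)
sharp = rng.random((32, 32))                  # textured, in-focus frame
blurred = np.full((32, 32), sharp.mean())     # flat frame ~ heavy blur
frames = [sharp, blurred, sharp, blurred, sharp]

# Subsample for baseline/coverage, then drop frames failing the blur test.
keep = [i for i in range(0, len(frames), 2) if sharpness(frames[i]) > 0.1]
print(keep)
```

Subsampling lengthens the baselines between retained frames while the sharpness gate removes shake-blurred frames, addressing two of the three problems the abstract identifies.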

  15. Unconditionally energy stable numerical schemes for phase-field vesicle membrane model

    Science.gov (United States)

    Guillén-González, F.; Tierra, G.

    2018-02-01

    Numerical schemes to simulate the deformation of vesicle membranes by minimizing the bending energy have been widely studied in recent times due to their connection with many biologically motivated problems. In this work we propose a new unconditionally energy stable numerical scheme for a vesicle membrane model that satisfies exactly the conservation of volume constraint and penalizes the surface area constraint. Moreover, we extend these ideas to present an unconditionally energy stable splitting scheme decoupling the interaction of the vesicle with a surrounding fluid. Finally, the good behavior of the proposed schemes is illustrated through several computational experiments.

  16. A New Key Predistribution Scheme for Multiphase Sensor Networks Using a New Deployment Model

    Directory of Open Access Journals (Sweden)

    Boqing Zhou

    2014-01-01

    Full Text Available During the lifecycle of sensor networks, a new challenge arises when the existing key predistribution schemes that use deployment knowledge are applied for pairwise key establishment and authentication between nodes: either the resilience against node capture attacks or the global connectivity significantly decreases with time. In this paper, a new deployment model is developed for multiphase deployment sensor networks, and a new key management scheme is then proposed. Compared with the existing schemes using deployment knowledge, our scheme has better performance in global connectivity and resilience against node capture attacks throughout the network lifecycle.
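For background, the classic random key predistribution baseline (the Eschenauer-Gligor scheme) that deployment-knowledge variants build on can be sketched as follows; the pool, ring, and network sizes are illustrative:

```python
import random
from math import comb

random.seed(4)
POOL, RING, N_NODES = 1000, 20, 200   # illustrative sizes

def key_ring():
    """Each node draws RING distinct keys from the global pool."""
    return set(random.sample(range(POOL), RING))

nodes = [key_ring() for _ in range(N_NODES)]

# Two nodes can establish a pairwise key iff their rings intersect.
links = sum(1 for i in range(N_NODES) for j in range(i + 1, N_NODES)
            if nodes[i] & nodes[j])
connectivity = links / (N_NODES * (N_NODES - 1) / 2)

# Theory: P(share >= 1 key) = 1 - C(POOL - RING, RING) / C(POOL, RING).
p_theory = 1 - comb(POOL - RING, RING) / comb(POOL, RING)
print(round(connectivity, 2), round(p_theory, 2))
```

The trade-off the abstract refers to is visible here: enlarging RING raises connectivity, but every captured node then exposes more of the shared pool, weakening resilience over the network lifetime.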

  17. New Identity-Based Blind Signature and Blind Decryption Scheme in the Standard Model

    Science.gov (United States)

    Phong, Le Trieu; Ogata, Wakaha

    We explicitly describe and analyse blind hierarchical identity-based encryption (blind HIBE) schemes, which are natural generalizations of blind IBE schemes [20]. We then use the blind HIBE schemes to construct: (1) an identity-based blind signature scheme secure in the standard model, under the computational Diffie-Hellman (CDH) assumption, with a much shorter signature size and lower communication cost compared to existing proposals; and (2) a new mechanism supporting a user to buy digital information over the Internet without revealing what he/she has bought, while protecting the providers from cheating users.

  18. Evaluation of Online Video Usage and Learning Satisfaction: An Extension of the Technology Acceptance Model

    Science.gov (United States)

    Nagy, Judit T.

    2018-01-01

    The aim of the study was to examine the determining factors of students' video usage and their learning satisfaction relating to the supplementary application of educational videos, accessible in a Moodle environment in a Business Mathematics Course. The research model is based on the extension of "Technology Acceptance Model" (TAM), in…

  19. Reviewing Instructional Studies Conducted Using Video Modeling to Children with Autism

    Science.gov (United States)

    Acar, Cimen; Diken, Ibrahim H.

    2012-01-01

    This study explored 31 instructional research articles written using video modeling to children with autism and published in peer-reviewed journals. The studies in this research have been reached by searching EBSCO, Academic Search Complete, ERIC and other Anadolu University online search engines and using keywords such as "autism, video modeling,…

  20. An improved snow scheme for the ECMWF land surface model: Description and offline validation

    Science.gov (United States)

    Emanuel Dutra; Gianpaolo Balsamo; Pedro Viterbo; Pedro M. A. Miranda; Anton Beljaars; Christoph Schar; Kelly Elder

    2010-01-01

    A new snow scheme for the European Centre for Medium-Range Weather Forecasts (ECMWF) land surface model has been tested and validated. The scheme includes a new parameterization of snow density, incorporating a liquid water reservoir, and revised formulations for the subgrid snow cover fraction and snow albedo. Offline validation (covering a wide range of spatial and...

  1. Scheme for calculation of multi-layer cloudiness and precipitation for climate models of intermediate complexity

    NARCIS (Netherlands)

    Eliseev, A. V.; Coumou, D.; Chernokulsky, A. V.; Petoukhov, V.; Petri, S.

    2013-01-01

    In this study we present a scheme for calculating the characteristics of multi-layer cloudiness and precipitation for Earth system models of intermediate complexity (EMICs). This scheme considers three-layer stratiform cloudiness and single-column convective clouds. It distinguishes between ice and

  2. Enhanced Physics-Based Numerical Schemes for Two Classes of Turbulence Models

    Directory of Open Access Journals (Sweden)

    Leo G. Rebholz

    2009-01-01

    Full Text Available We present enhanced physics-based finite element schemes for two families of turbulence models, the NS-alpha models and the Stolz-Adams approximate deconvolution models. These schemes are delicate extensions of a method created for the Navier-Stokes equations in Rebholz (2007); they achieve high physical fidelity by admitting balances of both energy and helicity that match the true physics. The schemes' development requires carefully chosen discrete curl, discrete Laplacian, and discrete filtering operators, in order to permit the necessary differential operator commutations.

  3. Godunov-type schemes for hydrodynamic and magnetohydrodynamic modeling

    International Nuclear Information System (INIS)

    Vides-Higueros, Jeaniffer

    2014-01-01

    The main objective of this thesis concerns the study, design and numerical implementation of finite volume schemes based on the so-called Godunov-type solvers for hyperbolic systems of nonlinear conservation laws, with special attention given to the Euler equations and ideal MHD equations. First, we derive a simple and genuinely two-dimensional Riemann solver for general conservation laws that can be regarded as an actual 2D generalization of the HLL approach, relying heavily on the consistency with the integral formulation and on the proper use of Rankine-Hugoniot relations to yield expressions that are simple enough to be applied in the structured and unstructured contexts. Then, a comparison between two methods aiming to numerically maintain the divergence constraint of the magnetic field for the ideal MHD equations is performed, and we show how the 2D Riemann solver can be employed to obtain robust divergence-free simulations. Next, we derive a relaxation scheme that incorporates gravity source terms derived from a potential into the hydrodynamic equations, an important problem in astrophysics. Finally, we review the design of finite volume approximations in curvilinear coordinates, providing a fresher view on an alternative discretization approach. Throughout this thesis, numerous numerical results are shown. (author)
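The 2D solver described in this thesis generalizes the classical 1D HLL flux. As a point of reference, here is a minimal sketch of the standard 1D HLL flux for a scalar conservation law, with Burgers' flux and the usual simple wave-speed bounds used purely for illustration (this is not the thesis's 2D construction):

```python
def hll_flux(uL, uR, f, s_min, s_max):
    """Standard 1D HLL numerical flux: pure upwinding when all waves
    move one way; otherwise the intermediate flux obtained from
    consistency with the integral form (Rankine-Hugoniot relations)."""
    if s_min >= 0.0:
        return f(uL)          # all waves move right: take the left flux
    if s_max <= 0.0:
        return f(uR)          # all waves move left: take the right flux
    return (s_max * f(uL) - s_min * f(uR)
            + s_min * s_max * (uR - uL)) / (s_max - s_min)

# Burgers' equation f(u) = u^2/2, wave speeds bounded by the two states
f = lambda u: 0.5 * u * u
flux = hll_flux(-1.0, 1.0, f, -1.0, 1.0)   # transonic rarefaction case
```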

  4. A Memory Hierarchy Model Based on Data Reuse for Full-Search Motion Estimation on High-Definition Digital Videos

    Directory of Open Access Journals (Sweden)

    Alba Sandyra Bezerra Lopes

    2012-01-01

    Full Text Available The motion estimation is the most complex module in a video encoder, requiring a high processing throughput and high memory bandwidth, mainly when the focus is high-definition videos. The throughput problem can be solved by increasing the parallelism of the internal operations. The external memory bandwidth may be reduced using a memory hierarchy. This work presents a memory hierarchy model for a full-search motion estimation core. The proposed memory hierarchy model is based on a data reuse scheme that considers the features of the full-search algorithm. The proposed memory hierarchy significantly reduces the external memory bandwidth required for the motion estimation process, and it provides a very high data throughput for the ME core. This throughput is necessary to achieve real time when processing high-definition videos. In the worst bandwidth scenario, this memory hierarchy reduces the external memory bandwidth by a factor of 578. A case study for the proposed hierarchy, using a 32×32 search window and an 8×8 block size, was implemented and prototyped on a Virtex 4 FPGA. The results show that it is possible to reach 38 frames per second when processing full HD frames (1920×1080 pixels) using nearly 299 Mbytes per second of external memory bandwidth.
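To see why data reuse matters here, consider what full search actually does: for an 8×8 block and a ±12-pixel window it evaluates 625 candidate SADs, each reading 64 reference pixels that overlap heavily with neighboring candidates. A minimal sketch of the full-search kernel follows; the frame and window sizes are toy values, not the paper's hardware configuration:

```python
import numpy as np

def full_search_me(ref, cur_block, top, left, search_range):
    """Full-search block matching: exhaustively test every candidate in a
    (2R+1)x(2R+1) window around (top, left), returning the motion vector
    and SAD of the best match."""
    B = cur_block.shape[0]
    best = (None, float("inf"))
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + B > ref.shape[0] or x + B > ref.shape[1]:
                continue   # candidate falls outside the reference frame
            # cast to int to avoid uint8 wraparound in the subtraction
            sad = np.abs(ref[y:y+B, x:x+B].astype(int)
                         - cur_block.astype(int)).sum()
            if sad < best[1]:
                best = ((dy, dx), sad)
    return best

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur_block = ref[20:28, 22:30]            # block displaced (+4, +6) from (16, 16)
mv, sad = full_search_me(ref, cur_block, 16, 16, 12)
```

Every candidate window above re-reads pixels that its neighbors already read; the memory hierarchy in the paper exploits exactly this overlap to cut external memory traffic.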

  5. An Efficient Code-Based Threshold Ring Signature Scheme with a Leader-Participant Model

    Directory of Open Access Journals (Sweden)

    Guomin Zhou

    2017-01-01

    Full Text Available Digital signature schemes with additional properties have broad applications, such as protecting the identity of signers by allowing a signer to anonymously sign a message within a group of signers (also known as a ring). While the number-theoretic problems underlying conventional schemes are still secure at the time of this research, the situation could change with advances in quantum computing. There is a pressing need to design PKC schemes that are secure against quantum attacks. In this paper, we propose a novel code-based threshold ring signature scheme with a leader-participant model. A leader is appointed, who chooses some shared parameters for other signers to participate in the signing process. This leader-participant model enhances the performance because every participant, including the leader, can execute the decoding algorithm (as a part of the signing process) upon receiving the shared parameters from the leader. The time complexity of our scheme is close to that of Courtois et al.'s (2001) scheme. The latter is often used as a basis to construct other types of code-based signature schemes. Moreover, as a threshold ring signature scheme, our scheme is as efficient as the normal code-based ring signature.

  6. SEMPATH Ontology: modeling multidisciplinary treatment schemes utilizing semantics.

    Science.gov (United States)

    Alexandrou, Dimitrios Al; Pardalis, Konstantinos V; Bouras, Thanassis D; Karakitsos, Petros; Mentzas, Gregoris N

    2012-03-01

    A dramatic increase in the demand for treatment quality has occurred during recent decades. The main challenge to be confronted, so as to increase treatment quality, is the personalization of treatment, since each patient constitutes a unique case. Healthcare provision encloses a complex environment, since healthcare provision organizations are highly multidisciplinary. In this paper, we present the conceptualization of the domain of clinical pathways (CP). The SEMPATH (SEMantic PATHways) Ontology comprises three main parts: 1) the CP part; 2) the business and finance part; and 3) the quality assurance part. Our implementation achieves the conceptualization of the multidisciplinary domain of healthcare provision, to be further utilized for the implementation of a Semantic Web Rule Language (SWRL) rules repository. Finally, the SEMPATH Ontology is utilized for the definition of a set of SWRL rules for the human papillomavirus disease and its treatment scheme. © 2012 IEEE

  7. Soft rotator model and {sup 246}Cm low-lying level scheme

    Energy Technology Data Exchange (ETDEWEB)

    Porodzinskij, Yu.V.; Sukhovitskij, E.Sh. [Radiation Physics and Chemistry Problems Inst., Minsk-Sosny (Belarus)

    1997-03-01

    A non-axial soft rotator nuclear model is suggested as a self-consistent approach for the interpretation of level schemes, {gamma}-transition probabilities and neutron interaction with even-even nuclei. (author)

  8. Experimental validation of convection-diffusion discretisation scheme employed for computational modelling of biological mass transport

    Directory of Open Access Journals (Sweden)

    Ku David N

    2010-07-01

    Full Text Available Abstract Background The finite volume solver Fluent (Lebanon, NH, USA) is a computational fluid dynamics software package employed to analyse biological mass-transport in the vasculature. A principal consideration for computational modelling of blood-side mass-transport is the selection of the convection-diffusion discretisation scheme. Due to the numerous discretisation schemes available when developing a mass-transport numerical model, the results obtained should be validated against either benchmark theoretical solutions or experimentally obtained results. Methods An idealised aneurysm model was selected for the experimental and computational mass-transport analysis of species concentration due to its well-defined recirculation region within the aneurysmal sac, allowing species concentration to vary slowly with time. The experimental results were obtained from fluid samples extracted from a glass aneurysm model, using the direct spectrophotometric concentration measurement technique. The computational analysis was conducted using the four convection-diffusion discretisation schemes available to the Fluent user: the First-Order Upwind, the Power Law, the Second-Order Upwind and the Quadratic Upstream Interpolation for Convective Kinetics (QUICK) schemes. The fluid has a diffusivity of 3.125 × 10⁻¹⁰ m²/s in water, resulting in a Peclet number of 2,560,000, indicating strongly convection-dominated flow. Results The discretisation scheme applied to the solution of the convection-diffusion equation, for blood-side mass-transport within the vasculature, has a significant influence on the resultant species concentration field. The First-Order Upwind and the Power Law schemes produce similar results. The Second-Order Upwind and QUICK schemes also correlate well but differ considerably from the concentration contour plots of the First-Order Upwind and Power Law schemes. The computational results were then compared to the experimental findings.
An average error of 140% and 116% was demonstrated between the experimental and computational results.

  9. Experimental validation of convection-diffusion discretisation scheme employed for computational modelling of biological mass transport.

    Science.gov (United States)

    Carroll, Gráinne T; Devereux, Paul D; Ku, David N; McGloughlin, Timothy M; Walsh, Michael T

    2010-07-19

    The finite volume solver Fluent (Lebanon, NH, USA) is a computational fluid dynamics software package employed to analyse biological mass-transport in the vasculature. A principal consideration for computational modelling of blood-side mass-transport is the selection of the convection-diffusion discretisation scheme. Due to the numerous discretisation schemes available when developing a mass-transport numerical model, the results obtained should be validated against either benchmark theoretical solutions or experimentally obtained results. An idealised aneurysm model was selected for the experimental and computational mass-transport analysis of species concentration due to its well-defined recirculation region within the aneurysmal sac, allowing species concentration to vary slowly with time. The experimental results were obtained from fluid samples extracted from a glass aneurysm model, using the direct spectrophotometric concentration measurement technique. The computational analysis was conducted using the four convection-diffusion discretisation schemes available to the Fluent user: the First-Order Upwind, the Power Law, the Second-Order Upwind and the Quadratic Upstream Interpolation for Convective Kinetics (QUICK) schemes. The fluid has a diffusivity of 3.125 × 10⁻¹⁰ m²/s in water, resulting in a Peclet number of 2,560,000, indicating strongly convection-dominated flow. The discretisation scheme applied to the solution of the convection-diffusion equation, for blood-side mass-transport within the vasculature, has a significant influence on the resultant species concentration field. The First-Order Upwind and the Power Law schemes produce similar results. The Second-Order Upwind and QUICK schemes also correlate well but differ considerably from the concentration contour plots of the First-Order Upwind and Power Law schemes. The computational results were then compared to the experimental findings. An average error of 140% and 116% was demonstrated between the experimental and computational results.
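The quoted Peclet number follows directly from Pe = UL/D. A quick check, where the characteristic velocity and length scale are illustrative assumptions chosen to reproduce the abstract's value:

```python
def peclet(velocity, length, diffusivity):
    """Peclet number Pe = U*L/D: the ratio of convective to diffusive
    transport; Pe >> 1 indicates convection-dominated flow."""
    return velocity * length / diffusivity

D = 3.125e-10        # species diffusivity in water, m^2/s (from the abstract)
U, L = 0.08, 0.01    # assumed characteristic velocity (m/s) and length (m)
Pe = peclet(U, L, D) # reproduces the quoted Pe of 2,560,000
```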

  10. Learning a Continuous-Time Streaming Video QoE Model.

    Science.gov (United States)

    Ghadiyaram, Deepti; Pan, Janice; Bovik, Alan C

    2018-05-01

    Over-the-top adaptive video streaming services are frequently impacted by fluctuating network conditions that can lead to rebuffering events (stalling events) and sudden bitrate changes. These events visually impact video consumers' quality of experience (QoE) and can lead to consumer churn. The development of models that can accurately predict viewers' instantaneous subjective QoE under such volatile network conditions could potentially enable the more efficient design of quality-control protocols for media-driven services, such as YouTube, Amazon, Netflix, and so on. However, most existing models only predict a single overall QoE score on a given video and are based on simple global video features, without accounting for relevant aspects of human perception and behavior. We have created a QoE evaluator, called the time-varying QoE Indexer, that accounts for interactions between stalling events, analyzes the spatial and temporal content of a video, predicts the perceptual video quality, models the state of the client-side data buffer, and consequently predicts continuous-time quality scores that agree quite well with human opinion scores. The new QoE predictor also embeds the impact of relevant human cognitive factors, such as memory and recency, and their complex interactions with the video content being viewed. We evaluated the proposed model on three different video databases and attained standout QoE prediction performance.
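The memory and recency effects the model embeds can be illustrated with a toy exponential-smoothing QoE tracker. This is only a sketch of the hysteresis idea, not the paper's time-varying QoE Indexer, and the recency factor is an assumed constant:

```python
def continuous_qoe(instant_quality, recency=0.8):
    """Toy continuous-time QoE: exponentially smooth instantaneous quality
    so that a recent impairment (a stall scored as 0) drags the predicted
    QoE down, and the score recovers only gradually afterwards."""
    qoe, scores = None, []
    for q in instant_quality:
        qoe = q if qoe is None else recency * qoe + (1 - recency) * q
        scores.append(qoe)
    return scores

# quality 80 throughout, with a stall (quality 0) at samples 3-4
trace = [80, 80, 80, 0, 0, 80, 80, 80]
scores = continuous_qoe(trace)
```

Note the asymmetry: the score drops sharply during the stall but climbs back slowly, which is the hysteresis behavior described above.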

  11. Using of Video Modeling in Teaching a Simple Meal Preparation Skill for Pupils of Down Syndrome

    Science.gov (United States)

    AL-Salahat, Mohammad Mousa

    2016-01-01

    The current study aimed to identify the impact of video modeling upon teaching three pupils with Down syndrome the skill of preparing a simple meal (sandwich), where the training was conducted in a separate classroom in schools of normal students. The training consisted of (i) watching the video of an intellectually disabled pupil, who is…

  12. Model-free 3D face shape reconstruction from video sequences

    NARCIS (Netherlands)

    van Dam, C.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    In forensic comparison of facial video data, often only the best quality frontal face frames are selected, and hence much video data is ignored. To improve 2D facial comparison for law enforcement and forensic investigation, we introduce a model-free 3D shape reconstruction algorithm based on 2D

  13. Landmark-based model-free 3D face shape reconstruction from video sequences

    NARCIS (Netherlands)

    van Dam, C.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan; Broemme, A.; Busch, C.

    2013-01-01

    In forensic comparison of facial video data, often only the best quality frontal face frames are selected, and hence potentially useful video data is ignored. To improve 2D facial comparison for law enforcement and forensic investigation, we introduce a model-free 3D shape reconstruction algorithm

  14. Playing with Process: Video Game Choice as a Model of Behavior

    Science.gov (United States)

    Waelchli, Paul

    2010-01-01

    Popular culture experience in video games creates avenues to practice information literacy skills and model research in a real-world setting. Video games create a unique popular culture experience where players can invest dozens of hours on one game, create characters to identify with, organize skill sets and plot points, collaborate with people…

  15. Modeling the video distribution link in the Next Generation Optical Access Networks

    DEFF Research Database (Denmark)

    Amaya, F.; Cárdenas, A.; Tafur Monroy, Idelfonso

    2011-01-01

    In this work we present a model for the design and optimization of the video distribution link in the next generation optical access network. We analyze the video distribution performance in a SCM-WDM link, including the noise, the distortion and the fiber optic nonlinearities. Additionally, we...

  16. Two Variations of Video Modeling Interventions for Teaching Play Skills to Children with Autism

    Science.gov (United States)

    Sancho, Kimberly; Sidener, Tina M.; Reeve, Sharon A.; Sidener, David W.

    2010-01-01

    The current study employed an adapted alternating treatments design with reversal and multiple probe across participants components to compare the effects of traditional video priming and simultaneous video modeling on the acquisition of play skills in two children diagnosed with autism. Generalization was programmed across play sets, instructors,…

  17. The Effects of Using a Model-Reinforced Video on Information-Seeking Behaviour

    Science.gov (United States)

    McHugh, Elizabeth A.; Lenz, Janet G.; Reardon, Robert C.; Peterson, Gary W.

    2012-01-01

    This study examined the effects of viewing a ten-minute model-reinforced video on careers information-seeking behaviour of 280 students in ten sections of a university careers course randomly assigned to treatment or control conditions. The video portrayed an undergraduate student seeking careers counselling services and a counsellor using…

  18. Impact of WRF model PBL schemes on air quality simulations over Catalonia, Spain.

    Science.gov (United States)

    Banks, R F; Baldasano, J M

    2016-12-01

    Here we analyze the impact of four planetary boundary-layer (PBL) parametrization schemes from the Weather Research and Forecasting (WRF) numerical weather prediction model on simulations of meteorological variables and predicted pollutant concentrations from an air quality forecast system (AQFS). The current setup of the Spanish operational AQFS, CALIOPE, is composed of the WRF-ARW V3.5.1 meteorological model tied to the Yonsei University (YSU) PBL scheme, the HERMES v2 emissions model, the CMAQ V5.0.2 chemical transport model, and dust outputs from BSC-DREAM8bv2. We test the performance of the YSU scheme against the Asymmetric Convective Model Version 2 (ACM2), Mellor-Yamada-Janjic (MYJ), and Bougeault-Lacarrère (BouLac) schemes. The one-day diagnostic case study is selected to represent the most frequent synoptic condition in the northeast Iberian Peninsula during spring 2015: regional recirculations. It is shown that the ACM2 PBL scheme performs well for daytime PBL height, as validated against estimates retrieved using a micro-pulse lidar system (mean bias = -0.11 km). In turn, the BouLac scheme showed WRF-simulated air and dew point temperatures closer to METAR surface meteorological observations. Results are more ambiguous when simulated pollutant concentrations from CMAQ are validated against network urban, suburban, and rural background stations. The ACM2 scheme showed the lowest mean bias (-0.96 μg m⁻³) with respect to surface ozone at urban stations, while the YSU scheme performed best for simulated nitrogen dioxide (-6.48 μg m⁻³). The poorest results were for simulated particulate matter, with similar results found for all schemes tested. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
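Mean bias, the validation statistic quoted throughout this abstract, is simply the average signed difference between simulation and observation. A minimal sketch with hypothetical PBL-height pairs (the values below are invented to mimic the quoted -0.11 km, not data from the study):

```python
def mean_bias(simulated, observed):
    """Mean bias = average(simulated - observed); a negative value means
    the model underestimates the observations on average."""
    assert len(simulated) == len(observed)
    return sum(s - o for s, o in zip(simulated, observed)) / len(simulated)

# hypothetical daytime PBL heights (km): model output vs lidar retrievals
sim = [1.10, 1.25, 0.95, 1.40]
obs = [1.20, 1.35, 1.10, 1.49]
mb = mean_bias(sim, obs)   # -0.11 km for these illustrative values
```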

  19. A time-varying subjective quality model for mobile streaming videos with stalling events

    Science.gov (United States)

    Ghadiyaram, Deepti; Pan, Janice; Bovik, Alan C.

    2015-09-01

    Over-the-top mobile video streaming is invariably influenced by volatile network conditions which cause playback interruptions (stalling events), thereby impairing users' quality of experience (QoE). Developing models that can accurately predict users' QoE could enable the more efficient design of quality-control protocols for video streaming networks that reduce network operational costs while still delivering high-quality video content to the customers. Existing objective models that predict QoE are based on global video features, such as the number of stall events and their lengths, and are trained and validated on a small pool of ad hoc video datasets, most of which are not publicly available. The model we propose in this work goes beyond previous models as it also accounts for the fundamental effect that a viewer's recent level of satisfaction or dissatisfaction has on their overall viewing experience. In other words, the proposed model accounts for and adapts to the recency, or hysteresis effect caused by a stall event in addition to accounting for the lengths, frequency of occurrence, and the positions of stall events - factors that interact in a complex way to affect a user's QoE. On the recently introduced LIVE-Avvasi Mobile Video Database, which consists of 180 distorted videos of varied content that are afflicted solely with over 25 unique realistic stalling events, we trained and validated our model to accurately predict the QoE, attaining standout QoE prediction performance.

  20. A hybrid convection scheme for use in non-hydrostatic numerical weather prediction models

    Directory of Open Access Journals (Sweden)

    Volker Kuell

    2008-12-01

    Full Text Available The correct representation of convection in numerical weather prediction (NWP) models is essential for quantitative precipitation forecasts. Due to its small horizontal scale, convection usually has to be parameterized, e.g. by mass flux convection schemes. Classical schemes, originally developed for use in coarse-grid NWP models, assume zero net convective mass flux, because the whole circulation of a convective cell is confined to the local grid column and all convective mass fluxes cancel out. However, in contemporary NWP models with grid sizes of a few kilometers this assumption becomes questionable, because here convection is partially resolved on the grid. To overcome this conceptual problem we propose a hybrid mass flux convection scheme (HYMACS) in which only the convective updrafts and downdrafts are parameterized. The generation of the larger-scale environmental subsidence, which may cover several grid columns, is transferred to the grid-scale equations. This means that the convection scheme now has to generate a net convective mass flux exerting a direct dynamical forcing on the grid-scale model via pressure gradient forces. The hybrid convection scheme, implemented into the COSMO model of the Deutscher Wetterdienst (DWD), is tested in an idealized simulation of a sea breeze circulation initiating convection in a realistic manner. The results are compared with analogous simulations using the classical Tiedtke and Kain-Fritsch convection schemes.

  1. Improved virtual channel noise model for transform domain Wyner-Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2009-01-01

    Distributed video coding (DVC) has been proposed as a new video coding paradigm to deal with lossy source coding using side information, exploiting the statistics at the decoder to reduce the computational demands at the encoder. A virtual channel noise model is utilized at the decoder to estimate the noise distribution between the side information frame and the original frame. This is one of the most important aspects influencing the coding performance of DVC. Noise models with different granularity have been proposed. In this paper, an improved noise model for transform domain Wyner-Ziv video...

  2. Visual saliency models for summarization of diagnostic hysteroscopy videos in healthcare systems.

    Science.gov (United States)

    Muhammad, Khan; Ahmad, Jamil; Sajjad, Muhammad; Baik, Sung Wook

    2016-01-01

    In clinical practice, diagnostic hysteroscopy (DH) videos are recorded in full and stored in long-term video libraries for later inspection of previous diagnoses, for research and training, and as evidence for patients' complaints. However, only a limited number of frames are required for the actual diagnosis, and these can be extracted using video summarization (VS). Unfortunately, general-purpose VS methods are not very effective for DH videos due to their significant level of similarity in terms of color and texture, their unedited contents, and their lack of shot boundaries. Therefore, in this paper, we investigate visual saliency models for effective abstraction of DH videos by extracting the diagnostically important frames. The objective of this study is to analyze the performance of various visual saliency models with consideration of domain knowledge and to nominate the best saliency model for DH video summarization in healthcare systems. Our experimental results indicate that a hybrid saliency model, comprising motion, contrast, texture, and curvature saliency, is the most suitable saliency model for summarization of DH videos in terms of extracted keyframes and accuracy.
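A hybrid saliency model of the kind described can be sketched as a weighted combination of per-frame feature scores followed by threshold-based keyframe selection. The weights, feature values, and threshold below are illustrative assumptions, not the paper's tuned model:

```python
def hybrid_saliency(motion, contrast, texture, curvature,
                    weights=(0.4, 0.3, 0.2, 0.1)):
    """Per-frame hybrid saliency: a weighted sum of four normalized
    feature scores in [0, 1]; the weights are assumed, not from the paper."""
    feats = (motion, contrast, texture, curvature)
    return sum(w * f for w, f in zip(weights, feats))

def select_keyframes(frames, threshold=0.5):
    """Summarize: keep indices of frames whose hybrid saliency exceeds
    the threshold."""
    return [i for i, f in enumerate(frames) if hybrid_saliency(*f) > threshold]

# three frames, each as (motion, contrast, texture, curvature) scores
frames = [(0.9, 0.8, 0.7, 0.6), (0.1, 0.2, 0.1, 0.1), (0.7, 0.9, 0.5, 0.4)]
keys = select_keyframes(frames)   # the two salient frames survive
```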

  3. Examining human behavior in video games: The development of a computational model to measure aggression.

    Science.gov (United States)

    Lamb, Richard; Annetta, Leonard; Hoston, Douglas; Shapiro, Marina; Matthews, Benjamin

    2018-06-01

    Video games with violent content have raised considerable concern in popular media and within academia. Recently, there has been considerable attention regarding the claimed relationship between aggression and video game play. The authors of this study propose the use of a new class of tools, developed via computational models, to allow examination of the question of whether there is a relationship between violent video games and aggression. The purpose of this study is to computationally model and compare the General Aggression Model with the Diathesis Model of Aggression in relation to the play of violent content in video games. A secondary purpose is to provide a method of measuring and examining individual aggression arising from video game play. The total number of participants examined in this study was N = 1065. The study occurred in three phases. Phase 1 was the development and quantification of the profile combination of traits via latent class profile analysis. Phase 2 was the training of the artificial neural network. Phase 3 was the comparison of each model as a computational model with and without the presence of video game violence. Results suggest that a combination of environmental factors and genetic predispositions triggers aggression related to video games.

  4. Transfer Scheme Evaluation Model for a Transportation Hub based on Vectorial Angle Cosine

    Directory of Open Access Journals (Sweden)

    Li-Ya Yao

    2014-07-01

    Full Text Available As the most important node in a public transport network, the efficiency of a transport hub determines the efficiency of the whole transport network. In order to put forward effective transfer schemes, a comprehensive evaluation index system for urban transport hubs' transfer efficiency was built, the evaluation indexes were quantified, and an evaluation model for multi-objective decisions on hub transfer schemes was established based on the vectorial angle cosine. A qualitative and quantitative analysis of the factors affecting transfer efficiency is conducted, covering passenger satisfaction, transfer coordination, transfer efficiency, smoothness, economy, etc. Thus, a new approach to transfer scheme evaluation is proposed.
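The vectorial angle cosine reduces ranking transfer schemes to comparing the angle each scheme's quantified index vector makes with an ideal vector. A hedged sketch with hypothetical index values (the indexes and their scores below are invented for illustration):

```python
import math

def cosine(a, b):
    """Cosine of the angle between two index vectors; 1.0 means the
    evaluated scheme points in exactly the ideal direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# hypothetical quantified indexes:
# satisfaction, coordination, efficiency, smoothness, economy
ideal    = [1.0, 1.0, 1.0, 1.0, 1.0]
scheme_a = [0.9, 0.8, 0.85, 0.7, 0.9]
scheme_b = [0.6, 0.9, 0.5, 0.8, 0.4]
schemes = {"A": scheme_a, "B": scheme_b}
rank = sorted(schemes, key=lambda s: -cosine(schemes[s], ideal))
```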

  5. Stable explicit coupling of the Yee scheme with a linear current model in fluctuating magnetized plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Filipe da, E-mail: tanatos@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal); Pinto, Martin Campos, E-mail: campos@ann.jussieu.fr [CNRS, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); Sorbonne Universités, UPMC Univ Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); Després, Bruno, E-mail: despres@ann.jussieu.fr [Sorbonne Universités, UPMC Univ Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); CNRS, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); Heuraux, Stéphane, E-mail: stephane.heuraux@univ-lorraine.fr [Institut Jean Lamour, UMR 7198, CNRS – University Lorraine, Vandoeuvre (France)

    2015-08-15

    This work analyzes the stability of the Yee scheme for non-stationary Maxwell's equations coupled with a linear current model with density fluctuations. We show that the usual procedure may yield an unstable scheme for physical situations that correspond to strongly magnetized plasmas in X-mode (TE) polarization. We propose to use a first-order clustered discretization of the vectorial product that restores a stable coupling. We validate the schemes on test cases representative of direct numerical simulations of X-mode in a magnetic fusion plasma including turbulence.

  6. Modeling stable orographic precipitation at small scales. The impact of the autoconversion scheme

    Energy Technology Data Exchange (ETDEWEB)

    Zaengl, Guenther; Seifert, Axel [Deutscher Wetterdienst, Offenbach (Germany); Wobrock, Wolfram [Clermont Univ., Univ. Blaise Pascal, Lab. de Meteorologie Physique, Clermont-Ferrand (France); CNRS, INSU, UMR, LaMP, Aubiere (France)

    2010-10-15

    This study presents idealized numerical simulations of moist airflow over a narrow isolated mountain in order to investigate the impact of the autoconversion scheme on simulated precipitation. The default setup generates an isolated water cloud over the mountain, implying that autoconversion of cloud water into rain is the only process capable of initiating precipitation. For comparison, a set of sensitivity experiments considers the classical seeder-feeder configuration, in which ambient precipitation generated by large-scale lifting is intensified within the orographic cloud. Most simulations have been performed with the nonhydrostatic COSMO model developed at the German Weather Service (DWD), comparing three different autoconversion schemes of varying sophistication. For reference, a subset of experiments has also been performed with a spectral (bin) microphysics model. While precipitation enhancement via the seeder-feeder mechanism turns out to be relatively insensitive to the autoconversion scheme, because accretion is the leading process in this case, simulated precipitation amounts can vary by 1-2 orders of magnitude for purely orographic precipitation. By comparison with the reference experiments conducted with the bin model, the Seifert-Beheng autoconversion scheme (the default in the COSMO model) and the Berry-Reinhardt scheme are found to represent the nonlinear behaviour of orographic precipitation reasonably well, whereas the linear approach of the Kessler scheme appears to be less adequate. (orig.)
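The "linear approach of the Kessler scheme" refers to an autoconversion rate that grows linearly with cloud water above a threshold. A minimal sketch with typical textbook parameter values (assumed for illustration, not the study's exact settings):

```python
def kessler_autoconversion(qc, k=1e-3, qc0=5e-4):
    """Kessler-type autoconversion: cloud water qc (kg/kg) converts to
    rain at a rate linear in the excess over the threshold qc0; k is the
    conversion rate coefficient (1/s). Parameter values are illustrative."""
    return k * max(qc - qc0, 0.0)

dt = 20.0                       # model time step, s
qc = 1.2e-3                     # cloud water mixing ratio, kg/kg
rate = kessler_autoconversion(qc)
dqr = rate * dt                 # rain water produced this step, kg/kg
```

This linearity is exactly what the bin-model comparison above finds inadequate for the strongly nonlinear onset of purely orographic rain.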

  7. Using video modeling with substitutable loops to teach varied play to children with autism.

    Science.gov (United States)

    Dupere, Sally; MacDonald, Rebecca P F; Ahearn, William H

    2013-01-01

    Children with autism often engage in repetitive play with little variation in the actions performed or items used. This study examined the use of video modeling with scripted substitutable loops on children's pretend play with trained and untrained characters. Three young children with autism were shown a video model of scripted toy play that included a substitutable loop that allowed various characters to perform the same actions and vocalizations. Three characters were modeled with the substitutable loop during training sessions, and 3 additional characters were present in the video but never modeled. Following video modeling, all the participants incorporated untrained characters into their play, but the extent to which they did so varied. © Society for the Experimental Analysis of Behavior.

  8. Post-processing scheme for modelling the lithospheric magnetic field

    Directory of Open Access Journals (Sweden)

    V. Lesur

    2013-03-01

    Full Text Available We investigated how the noise in satellite magnetic data affects magnetic lithospheric field models derived from these data in the special case where this noise is correlated along satellite orbit tracks. For this we describe the satellite data noise as a perturbation magnetic field scaled independently for each orbit, where the scaling factor is a random variable, normally distributed with zero mean. Under this assumption, we have been able to derive a model for errors in lithospheric models generated by the correlated satellite data noise. Unless the perturbation field is known, estimating the noise in the lithospheric field model is a non-linear inverse problem. We therefore proposed an iterative post-processing technique to estimate both the lithospheric field model and its associated noise model. The technique has been successfully applied to derive a lithospheric field model from CHAMP satellite data up to spherical harmonic degree 120. The model is in agreement with other existing models. The technique can, in principle, be extended to all sorts of potential field data with "along-track" correlated errors.

  9. No-reference pixel based video quality assessment for HEVC decoded video

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2017-01-01

    This paper proposes a No-Reference (NR) Video Quality Assessment (VQA) method for videos subject to the distortion given by the High Efficiency Video Coding (HEVC) scheme. The assessment is performed without access to the bitstream. The proposed analysis is based on the transform coefficients estimated from the decoded video pixels, which are used to estimate the level of quantization. The information from this analysis is exploited to assess the video quality. HEVC transform coefficients are modeled with a joint-Cauchy probability density function in the proposed method. To generate VQA features, the quantization step used in the Intra coding is estimated. We map the obtained HEVC features using an Elastic Net to predict subjective video quality scores, Mean Opinion Scores (MOS). The performance is verified on a dataset consisting of HEVC coded 4K UHD (resolution equal to 3840 x 2160) video sequences.
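
    A toy version of one ingredient, recovering a quantization step from ideally quantized coefficient magnitudes, might look like this (an illustration of the general idea only, not the paper's estimator, which works from a joint-Cauchy model of decoded-pixel transform coefficients):

    ```python
    from math import gcd

    # Ideal quantized transform coefficients are integer multiples of the
    # quantization step, so the greatest common divisor of their rounded
    # magnitudes reveals the step. Hypothetical helper for illustration.

    def estimate_step(coeffs):
        step = 0
        for c in coeffs:
            if abs(c) > 1e-9:               # skip zeroed coefficients
                step = gcd(step, abs(round(c)))
        return step
    ```

    Mapping features of this kind to MOS with an Elastic Net, as the abstract describes, would sit on top of such feature extractors.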

  10. Univariate time series modeling and an application to future claims amount in SOCSO's invalidity pension scheme

    Science.gov (United States)

    Chek, Mohd Zaki Awang; Ahmad, Abu Bakar; Ridzwan, Ahmad Nur Azam Ahmad; Jelas, Imran Md.; Jamal, Nur Faezah; Ismail, Isma Liana; Zulkifli, Faiz; Noor, Syamsul Ikram Mohd

    2012-09-01

    The main objective of this study is to forecast the future claims amount of the Invalidity Pension Scheme (IPS). All data were derived from SOCSO annual reports from 1972 to 2010. These claims comprise the amounts from the 7 benefits offered by SOCSO: Invalidity Pension, Invalidity Grant, Survivors' Pension, Constant Attendance Allowance, Rehabilitation, Funeral and Education. Future claims of the Invalidity Pension Scheme among the workforce in Malaysia will be predicted using univariate forecasting models.
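
    A minimal univariate forecaster of the kind such studies compare is Holt's linear-trend exponential smoothing; the sketch below uses illustrative smoothing constants and synthetic data, not the SOCSO claims series:

    ```python
    # Holt's linear-trend exponential smoothing: maintain a level and a trend,
    # then extrapolate. Smoothing constants alpha/beta are illustrative.

    def holt_forecast(series, horizon, alpha=0.5, beta=0.3):
        level = series[0]
        trend = series[1] - series[0]
        for y in series[1:]:
            prev_level = level
            level = alpha * y + (1 - alpha) * (level + trend)
            trend = beta * (level - prev_level) + (1 - beta) * trend
        return [level + (h + 1) * trend for h in range(horizon)]
    ```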

  11. Adaptive multiresolution WENO schemes for multi-species kinematic flow models

    International Nuclear Information System (INIS)

    Buerger, Raimund; Kozakevicius, Alice

    2007-01-01

    Multi-species kinematic flow models lead to strongly coupled, nonlinear systems of first-order, spatially one-dimensional conservation laws. The number of unknowns (the concentrations of the species) may be arbitrarily high. Models of this class include a multi-species generalization of the Lighthill-Whitham-Richards traffic model and a model for the sedimentation of polydisperse suspensions. Their solutions typically involve kinematic shocks separating areas of constancy, and should be approximated by high resolution schemes. A fifth-order weighted essentially non-oscillatory (WENO) scheme is combined with a multiresolution technique that adaptively generates a sparse point representation (SPR) of the evolving numerical solution. Thus, computational effort is concentrated on zones of strong variation near shocks. Numerical examples from the traffic and sedimentation models demonstrate the effectiveness of the resulting WENO multiresolution (WENO-MRS) scheme
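
    The sparse point representation (SPR) idea, keeping only points whose value is poorly predicted from neighbours so that effort clusters near shocks, can be caricatured as follows (the midpoint predictor and threshold are illustrative stand-ins for the full multiresolution transform):

    ```python
    # Sparse point representation in caricature: a point survives only where
    # its interpolation detail from the coarser level is large.

    def sparse_points(f, eps=1e-3):
        keep = []
        for i in range(1, len(f) - 1, 2):
            detail = abs(f[i] - 0.5 * (f[i - 1] + f[i + 1]))
            if detail > eps:
                keep.append(i)
        return keep
    ```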

  12. Compositional Models for Video Event Detection: A Multiple Kernel Learning Latent Variable Approach (Open Access)

    Science.gov (United States)

    2014-03-03

    segments that correspond to scenes observed within the event category (e.g., for wedding ceremony videos, outdoor park scenes or people dancing) ... several subcategories (e.g., a wedding ceremony at a church, house, or park). Further, it is assumed that a particular video corresponds to only one subcategory.

  13. A Lattice-Based Identity-Based Proxy Blind Signature Scheme in the Standard Model

    Directory of Open Access Journals (Sweden)

    Lili Zhang

    2014-01-01

    Full Text Available A proxy blind signature scheme is a special form of blind signature which allows a designated person, called the proxy signer, to sign on behalf of original signers without knowing the content of the message. It combines the advantages of proxy signatures and blind signatures. To date, most proxy blind signature schemes rely on hard number-theoretic problems, discrete logarithms, and bilinear pairings. Unfortunately, these underlying problems will be solvable in the postquantum era. Lattice-based cryptography is enjoying great interest these days, due to implementation simplicity and provable security reductions. Moreover, lattice-based cryptography is believed to be hard even for quantum computers. In this paper, we present a new identity-based proxy blind signature scheme from lattices without random oracles. The new scheme is proven to be strongly unforgeable under the standard hardness assumptions of the short integer solution problem (SIS) and the inhomogeneous small integer solution problem (ISIS). Furthermore, the secret key size and the signature length of our scheme are invariant and much shorter than those of previous lattice-based proxy blind signature schemes. To the best of our knowledge, our construction is the first short lattice-based identity-based proxy blind signature scheme in the standard model.

  14. A Scratchpad Memory Allocation Scheme for Dataflow Models

    Science.gov (United States)

    2008-08-25

    perform via static analysis of C/C++. We use the heterochronous dataflow (HDF) model of computation [16, 39] in Ptolemy II [11] as a means to specify the ... (buffer data) as the key memory requirements [9]. We use Ptolemy II's graphical interface and the HDF domain to specify the ... algorithm. The allocation algorithm was implemented in Ptolemy II [11], a Java-based framework for studying modeling, simulation and design of concurrent

  15. Bayesian Modeling of Temporal Coherence in Videos for Entity Discovery and Summarization.

    Science.gov (United States)

    Mitra, Adway; Biswas, Soma; Bhattacharyya, Chiranjib

    2017-03-01

    A video is understood by users in terms of entities present in it. Entity Discovery is the task of building an appearance model for each entity (e.g., a person) and finding all its occurrences in the video. We represent a video as a sequence of tracklets, each spanning 10-20 frames and associated with one entity. We pose Entity Discovery as tracklet clustering, and approach it by leveraging Temporal Coherence (TC): the property that temporally neighboring tracklets are likely to be associated with the same entity. Our major contributions are the first Bayesian nonparametric models for TC at the tracklet level. We extend the Chinese Restaurant Process (CRP) to TC-CRP, and further to the Temporally Coherent Chinese Restaurant Franchise (TC-CRF), to jointly model entities and temporal segments using mixture components and sparse distributions. For discovering persons in TV serial videos without meta-data like scripts, these methods show considerable improvement over state-of-the-art approaches to tracklet clustering in terms of clustering accuracy, cluster purity and entity coverage. The proposed methods can perform online tracklet clustering on streaming videos, unlike existing approaches, and can automatically reject false tracklets. Finally, we discuss entity-driven video summarization, where temporal segments of the video are selected based on the discovered entities to create a semantically meaningful summary.
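
    The temporal-coherence bias on CRP seating probabilities can be sketched as follows (the alpha and coherence weights are illustrative; the paper's TC-CRP/TC-CRF are full Bayesian nonparametric models, not this caricature):

    ```python
    # Seating probabilities of a Chinese Restaurant Process with an extra
    # coherence bonus for the cluster of the temporally previous tracklet.

    def tc_crp_probs(cluster_sizes, prev_cluster, alpha=1.0, coherence=5.0):
        weights = [n + (coherence if k == prev_cluster else 0.0)
                   for k, n in enumerate(cluster_sizes)]
        weights.append(alpha)                 # option to open a new cluster
        total = sum(weights)
        return [w / total for w in weights]
    ```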

  16. A seawater desalination scheme for global hydrological models

    Science.gov (United States)

    Hanasaki, Naota; Yoshikawa, Sayaka; Kakinuma, Kaoru; Kanae, Shinjiro

    2016-10-01

    Seawater desalination is a practical technology for providing fresh water to coastal arid regions. Indeed, the use of desalination is rapidly increasing due to growing water demand in these areas and decreases in production costs due to technological advances. In this study, we developed a model to estimate the areas where seawater desalination is likely to be used as a major water source and the likely volume of production. The model was designed to be incorporated into global hydrological models (GHMs) that explicitly include human water usage. The model requires spatially detailed information on climate, income levels, and industrial and municipal water use, which represent standard input/output data in GHMs. The model was applied to a specific historical year (2005) and showed fairly good reproduction of the present geographical distribution and national production of desalinated water in the world. The model was applied globally to two periods in the future (2011-2040 and 2041-2070) under three distinct socioeconomic conditions, i.e., SSP (shared socioeconomic pathway) 1, SSP2, and SSP3. The results indicate that the usage of seawater desalination will have expanded considerably in geographical extent, and that production will have increased by 1.4-2.1-fold in 2011-2040 compared to the present (from 2.8 × 10^9 m^3 yr^-1 in 2005 to 4.0-6.0 × 10^9 m^3 yr^-1), and 6.7-17.3-fold in 2041-2070 (from 18.7 to 48.6 × 10^9 m^3 yr^-1). The estimated global costs for production for each period are USD 1.1-10.6 × 10^9 (0.002-0.019 % of the total global GDP), USD 1.6-22.8 × 10^9 (0.001-0.020 %), and USD 7.5-183.9 × 10^9 (0.002-0.100 %), respectively. The large spreads in these projections are primarily attributable to variations within the socioeconomic scenarios.
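
    The gating logic of such a scheme, desalination switching on where water is scarce, income is sufficient, and the coast is near, can be caricatured with invented thresholds (the actual model derives its criteria from GHM input data):

    ```python
    # Rule-based caricature of the desalination gating. All thresholds are
    # invented for illustration only.

    def uses_desalination(aridity_index, gdp_per_capita_usd, km_to_coast):
        return (aridity_index < 0.2
                and gdp_per_capita_usd > 14000
                and km_to_coast < 100)
    ```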

  17. Extension of the time-average model to Candu refueling schemes involving reshuffling

    International Nuclear Information System (INIS)

    Rouben, Benjamin; Nichita, Eleodor

    2008-01-01

    Candu reactors consist of a horizontal non-pressurized heavy-water-filled vessel penetrated axially by fuel channels, each containing twelve 50-cm-long fuel bundles cooled by pressurized heavy water. Candu reactors are refueled on-line and, as a consequence, the core flux and power distributions change continuously. For design purposes, a 'time-average' model was developed in the 1970s to calculate the average over time of the flux and power distribution and to study the effects of different refueling schemes. The original time-average model only allows treatment of simple push-through refueling schemes whereby fresh fuel is inserted at one end of the channel and irradiated fuel is removed from the other end. With the advent of advanced fuel cycles and new Candu designs, novel refueling schemes may be considered, such as reshuffling discharged fuel from some channels into other channels, to achieve better overall discharge burnup. Such reshuffling schemes cannot be handled by the original time-average model. This paper presents an extension of the time-average model to allow for the treatment of refueling schemes with reshuffling. Equations for the extended model are presented, together with sample results for a simple demonstration case. (authors)

  18. A Reconfiguration Control Scheme for a Quadrotor Helicopter via Combined Multiple Models

    Directory of Open Access Journals (Sweden)

    Fuyang Chen

    2014-08-01

    Full Text Available In this paper, an optimal reconfiguration control scheme is proposed for a quadrotor helicopter with actuator faults, via adaptive control and combined multiple models. The combined model set contains several fixed models, an adaptive model, and a reinitialized adaptive model. The fixed models and the adaptive model describe the failed system under different fault conditions. Moreover, the proposed reinitialized adaptive model tracks the model closest to the current system and can improve the speed of convergence effectively. In addition, the reference model is designed in consideration of an optimal control performance index and the principle of minimum cost to achieve perfect tracking performance. Finally, simulation results demonstrate the effectiveness of the proposed reconfiguration control scheme for faulty cases.

  19. Developing model-making and model-breaking skills using direct measurement video-based activities

    Science.gov (United States)

    Vonk, Matthew; Bohacek, Peter; Militello, Cheryl; Iverson, Ellen

    2017-12-01

    This study focuses on student development of two important laboratory skills in the context of introductory college-level physics. The first skill, which we call model making, is the ability to analyze a phenomenon in a way that produces a quantitative multimodal model. The second skill, which we call model breaking, is the ability to critically evaluate if the behavior of a system is consistent with a given model. This study involved 116 introductory physics students in four different sections, each taught by a different instructor. All of the students within a given class section participated in the same instruction (including labs) with the exception of five activities performed throughout the semester. For those five activities, each class section was split into two groups; one group was scaffolded to focus on model-making skills and the other was scaffolded to focus on model-breaking skills. Both conditions involved direct measurement videos. In some cases, students could vary important experimental parameters within the video like mass, frequency, and tension. Data collected at the end of the semester indicate that students in the model-making treatment group significantly outperformed the other group on the model-making skill despite the fact that both groups shared a common physical lab experience. Likewise, the model-breaking treatment group significantly outperformed the other group on the model-breaking skill. This is important because it shows that direct measurement video-based instruction can help students acquire science-process skills, which are critical for scientists, and which are a key part of current science education approaches such as the Next Generation Science Standards and the Advanced Placement Physics 1 course.

  20. Developing model-making and model-breaking skills using direct measurement video-based activities

    Directory of Open Access Journals (Sweden)

    Matthew Vonk

    2017-08-01

    Full Text Available This study focuses on student development of two important laboratory skills in the context of introductory college-level physics. The first skill, which we call model making, is the ability to analyze a phenomenon in a way that produces a quantitative multimodal model. The second skill, which we call model breaking, is the ability to critically evaluate if the behavior of a system is consistent with a given model. This study involved 116 introductory physics students in four different sections, each taught by a different instructor. All of the students within a given class section participated in the same instruction (including labs) with the exception of five activities performed throughout the semester. For those five activities, each class section was split into two groups; one group was scaffolded to focus on model-making skills and the other was scaffolded to focus on model-breaking skills. Both conditions involved direct measurement videos. In some cases, students could vary important experimental parameters within the video like mass, frequency, and tension. Data collected at the end of the semester indicate that students in the model-making treatment group significantly outperformed the other group on the model-making skill despite the fact that both groups shared a common physical lab experience. Likewise, the model-breaking treatment group significantly outperformed the other group on the model-breaking skill. This is important because it shows that direct measurement video-based instruction can help students acquire science-process skills, which are critical for scientists, and which are a key part of current science education approaches such as the Next Generation Science Standards and the Advanced Placement Physics 1 course.

  1. A novel interacting multiple model based network intrusion detection scheme

    Science.gov (United States)

    Xin, Ruichi; Venkatasubramanian, Vijay; Leung, Henry

    2006-04-01

    In today's information age, information and network security are of primary importance to any organization. Network intrusion is a serious threat to the security of computers and data networks. In an internet protocol (IP) based network, intrusions originate in different kinds of packets/messages contained in the open system interconnection (OSI) layer 3 or higher layers. Network intrusion detection and prevention systems observe the layer 3 packets (or layer 4 to 7 messages) to screen for intrusions and security threats. Signature based methods use a pre-existing database that documents intrusion patterns as perceived in the layer 3 to 7 protocol traffic, and match the incoming traffic against known intrusion attacks. Alternately, network traffic data can be modeled and any large anomaly relative to the established traffic pattern can be detected as a network intrusion. The latter method, also known as anomaly based detection, is gaining popularity for its versatility in learning new patterns and discovering new attacks. It is apparent that for reliable performance, an accurate model of the network data needs to be established. In this paper, we illustrate using collected data that network traffic is seldom stationary. We propose the use of multiple models to accurately represent the traffic data. The improvement in reliability of the proposed model is verified by measuring the detection and false alarm rates on several datasets.
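
    A toy multiple-model anomaly detector in this spirit, with each traffic regime summarized by a Gaussian and a sample flagged only when it is far from every model, can be sketched as follows (the regimes and the 3-sigma rule are illustrative, not the paper's interacting-multiple-model machinery):

    ```python
    import statistics

    # Each regime (e.g., daytime vs nighttime traffic) is fit as a Gaussian;
    # a sample is anomalous when it lies many sigmas from every model's mean.

    def fit_models(regimes):
        return [(statistics.fmean(r), statistics.stdev(r)) for r in regimes]

    def is_anomaly(x, models, n_sigma=3.0):
        return all(abs(x - mean) > n_sigma * sd for mean, sd in models)
    ```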

  2. Combining modelling tools to evaluate a goose management scheme

    NARCIS (Netherlands)

    Baveco, Hans; Bergjord, Anne Kari; Bjerke, Jarle W.; Chudzińska, Magda E.; Pellissier, Loïc; Simonsen, Caroline E.; Madsen, Jesper; Tombre, Ingunn M.; Nolet, Bart A.

    2017-01-01

    Many goose species feed on agricultural land, and with growing goose numbers, conflicts with agriculture are increasing. One possible solution is to designate refuge areas where farmers are paid to leave geese undisturbed. Here, we present a generic modelling tool that can be used to designate the

  3. Combining modelling tools to evaluate a goose management scheme.

    NARCIS (Netherlands)

    Baveco, J.M.; Bergjord, A.K.; Bjerke, J.W.; Chudzińska, M.E.; Pellissier, L.; Simonsen, C.E.; Madsen, J.; Tombre, Ingunn M.; Nolet, B.A.

    2017-01-01

    Many goose species feed on agricultural land, and with growing goose numbers, conflicts with agriculture are increasing. One possible solution is to designate refuge areas where farmers are paid to leave geese undisturbed. Here, we present a generic modelling tool that can be used to designate the

  4. Ensemble-based data assimilation schemes for atmospheric chemistry models

    NARCIS (Netherlands)

    Barbu, A.L.

    2010-01-01

    The atmosphere is a complex system which includes physical, chemical and biological processes. Many of these processes affecting the atmosphere are subject to various interactions and can be highly nonlinear. This complexity makes it necessary to apply computer models in order to understand the

  5. Multi-model ensemble schemes for predicting northeast monsoon ...

    Indian Academy of Sciences (India)

    Northeast monsoon; multi-model ensemble; rainfall; prediction; principal component regression; singular value decomposition. J. Earth Syst. Sci. 120, No. 5, October 2011, pp. 795-805. © Indian Academy of Sciences.

  6. A seawater desalination scheme for global hydrological models

    Directory of Open Access Journals (Sweden)

    N. Hanasaki

    2016-10-01

    Full Text Available Seawater desalination is a practical technology for providing fresh water to coastal arid regions. Indeed, the use of desalination is rapidly increasing due to growing water demand in these areas and decreases in production costs due to technological advances. In this study, we developed a model to estimate the areas where seawater desalination is likely to be used as a major water source and the likely volume of production. The model was designed to be incorporated into global hydrological models (GHMs) that explicitly include human water usage. The model requires spatially detailed information on climate, income levels, and industrial and municipal water use, which represent standard input/output data in GHMs. The model was applied to a specific historical year (2005) and showed fairly good reproduction of the present geographical distribution and national production of desalinated water in the world. The model was applied globally to two periods in the future (2011–2040 and 2041–2070) under three distinct socioeconomic conditions, i.e., SSP (shared socioeconomic pathway) 1, SSP2, and SSP3. The results indicate that the usage of seawater desalination will have expanded considerably in geographical extent, and that production will have increased by 1.4–2.1-fold in 2011–2040 compared to the present (from 2.8 × 10^9 m^3 yr^-1 in 2005 to 4.0–6.0 × 10^9 m^3 yr^-1), and 6.7–17.3-fold in 2041–2070 (from 18.7 to 48.6 × 10^9 m^3 yr^-1). The estimated global costs for production for each period are USD 1.1–10.6 × 10^9 (0.002–0.019 % of the total global GDP), USD 1.6–22.8 × 10^9 (0.001–0.020 %), and USD 7.5–183.9 × 10^9 (0.002–0.100 %), respectively. The large spreads in these projections are primarily attributable to variations within the socioeconomic scenarios.

  7. Central upwind scheme for a compressible two-phase flow model.

    Science.gov (United States)

    Ahmed, Munshoor; Saleem, M Rehan; Zia, Saqib; Qamar, Shamsul

    2015-01-01

    In this article, a compressible two-phase reduced five-equation flow model is numerically investigated. The model is non-conservative and the governing equations consist of two equations describing the conservation of mass, one for overall momentum and one for total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side which represents the energy exchange between the two fluids in the form of mechanical and thermodynamical work. For the numerical approximation of the model, a high-resolution central upwind scheme is implemented. This is a non-oscillatory upwind-biased finite volume scheme which does not require a Riemann solver at each time step. A few numerical case studies of two-phase flows are presented. For validation and comparison, the same model is also solved by using the kinetic flux-vector splitting (KFVS) and staggered central schemes. It was found that the central upwind scheme produces results comparable to the KFVS scheme.
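
    A first-order central-upwind (Kurganov-type) flux, which needs only one-sided local speeds rather than a Riemann solver, can be illustrated on the scalar Burgers equation as a stand-in for the five-equation model:

    ```python
    # Central-upwind step for u_t + (u^2/2)_x = 0 on a periodic grid.
    # First-order in space and time; a minimal sketch, not the paper's solver.

    def f(u):
        return 0.5 * u * u

    def cu_flux(ul, ur):
        ap, am = max(ul, ur, 0.0), min(ul, ur, 0.0)   # one-sided local speeds
        if ap == am:
            return f(ul)
        return (ap * f(ul) - am * f(ur) + ap * am * (ur - ul)) / (ap - am)

    def step(u, dt_dx):
        n = len(u)
        flux = [cu_flux(u[i], u[(i + 1) % n]) for i in range(n)]   # periodic
        return [u[i] - dt_dx * (flux[i] - flux[i - 1]) for i in range(n)]
    ```

    The update is conservative by construction: the flux differences telescope, so the cell average sum is preserved.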

  8. Central upwind scheme for a compressible two-phase flow model.

    Directory of Open Access Journals (Sweden)

    Munshoor Ahmed

    Full Text Available In this article, a compressible two-phase reduced five-equation flow model is numerically investigated. The model is non-conservative and the governing equations consist of two equations describing the conservation of mass, one for overall momentum and one for total energy. The fifth equation is the energy equation for one of the two phases and it includes source term on the right-hand side which represents the energy exchange between two fluids in the form of mechanical and thermodynamical work. For the numerical approximation of the model a high resolution central upwind scheme is implemented. This is a non-oscillatory upwind biased finite volume scheme which does not require a Riemann solver at each time step. Few numerical case studies of two-phase flows are presented. For validation and comparison, the same model is also solved by using kinetic flux-vector splitting (KFVS and staggered central schemes. It was found that central upwind scheme produces comparable results to the KFVS scheme.

  9. Application of discriminative models for interactive query refinement in video retrieval

    Science.gov (United States)

    Srivastava, Amit; Khanwalkar, Saurabh; Kumar, Anoop

    2013-12-01

    The ability to quickly search large volumes of video for specific actions or events can provide a dramatic new capability to intelligence agencies. Example-based queries from video are a form of content-based information retrieval (CBIR) where the objective is to retrieve clips from a video corpus, or stream, using a representative query sample to find "more like this". Often, the accuracy of video retrieval is largely limited by the gap between the available video descriptors and the underlying query concept, and such exemplar queries return many irrelevant results along with the relevant ones. In this paper, we present an Interactive Query Refinement (IQR) system which acts as a powerful tool to leverage human feedback and allows intelligence analysts to iteratively refine search queries for improved precision in the retrieved results. In our approach to IQR, we leverage discriminative models that operate on high dimensional features derived from low-level video descriptors in an iterative framework. Our IQR model solicits relevance feedback on examples selected from the region of uncertainty and updates the discriminating boundary to produce a relevance-ranked results list. We achieved a 358% relative improvement in Mean Average Precision (MAP) over the initial retrieval list at a rank cutoff of 100 over 4 iterations. We compare our discriminative IQR model approach to a naïve IQR and show that our model-based approach yields a 49% relative improvement over the no-model naïve system.

  10. Internal validation of risk models in clustered data: a comparison of bootstrap schemes

    NARCIS (Netherlands)

    Bouwmeester, W.; Moons, K.G.M.; Kappen, T.H.; van Klei, W.A.; Twisk, J.W.R.; Eijkemans, M.J.C.; Vergouwe, Y.

    2013-01-01

    Internal validity of a risk model can be studied efficiently with bootstrapping to assess possible optimism in model performance. Assumptions of the regular bootstrap are violated when the development data are clustered. We compared alternative resampling schemes in clustered data for the estimation
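
    The key difference between resampling schemes can be shown on toy data: a cluster bootstrap resamples whole clusters with replacement, so within-cluster correlation survives in each replicate, whereas a naive bootstrap resamples individual rows and breaks that structure. A minimal sketch (toy data, not the study's setup):

    ```python
    import random

    # Cluster bootstrap: draw cluster ids with replacement, then keep each
    # chosen cluster's rows intact.

    def cluster_bootstrap(data, rng):
        """data: dict cluster_id -> list of rows; returns one resampled dataset."""
        ids = list(data)
        chosen = [rng.choice(ids) for _ in ids]
        return [row for cid in chosen for row in data[cid]]
    ```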

  11. The two-dimensional Godunov scheme and what it means for macroscopic pedestrian flow models

    NARCIS (Netherlands)

    Van Wageningen-Kessels, F.L.M.; Daamen, W.; Hoogendoorn, S.P.

    2015-01-01

    An efficient simulation method for two-dimensional continuum pedestrian flow models is introduced. It is a two-dimensional and multi-class extension of the Godunov scheme for one-dimensional road traffic flow models introduced in the mid-1990s. The method can be applied to continuum pedestrian

  12. A low-bias simulation scheme for the SABR stochastic volatility model

    NARCIS (Netherlands)

    B. Chen (Bin); C.W. Oosterlee (Cornelis); J.A.M. van der Weide

    2012-01-01

    The Stochastic Alpha Beta Rho Stochastic Volatility (SABR-SV) model is widely used in the financial industry for the pricing of fixed income instruments. In this paper we develop a low-bias simulation scheme for the SABR-SV model, which deals efficiently with (undesired)
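
    For contrast with such low-bias schemes, the naive Euler / log-Euler discretization of SABR can be sketched as follows (beta = 1 for simplicity, arbitrary parameters; this is the biased baseline that low-bias schemes improve on):

    ```python
    import math
    import random

    # Naive SABR path with beta = 1:
    #   dF = sigma * F dW,   dsigma = nu * sigma dZ,   corr(dW, dZ) = rho.
    # The log-Euler step for sigma keeps the volatility strictly positive.

    def sabr_path(f0=100.0, sigma0=0.2, nu=0.4, rho=-0.3, t=1.0, n=252, seed=1):
        rng = random.Random(seed)
        dt = t / n
        f, sigma = f0, sigma0
        for _ in range(n):
            z1 = rng.gauss(0.0, 1.0)
            z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
            f += sigma * f * math.sqrt(dt) * z1        # Euler step for F
            sigma *= math.exp(nu * math.sqrt(dt) * z2 - 0.5 * nu * nu * dt)
        return f, sigma
    ```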

  13. A dynamic neutral fluid model for the PIC scheme

    Science.gov (United States)

    Wu, Alan; Lieberman, Michael; Verboncoeur, John

    2010-11-01

    Fluid diffusion is an important aspect of plasma simulation. A new dynamic model is implemented using the continuity and boundary equations in OOPD1, an object oriented one-dimensional particle-in-cell code developed at UC Berkeley. The model is described and compared with analytical methods given in [1]. A boundary absorption parameter can be adjusted from ideal absorption to ideal reflection. Simulations exhibit good agreement with analytic time dependent solutions for the two ideal cases, as well as steady state solutions for mixed cases. For the next step, fluid sources and sinks due to particle-particle or particle-fluid collisions within the simulation volume and to surface reactions resulting in emission or absorption of fluid species will be implemented. The resulting dynamic interaction between particle and fluid species will be an improvement to the static fluid in the existing code. As the final step in the development, diffusion for multiple fluid species will be implemented. [1] M.A. Lieberman and A.J. Lichtenberg, Principles of Plasma Discharges and Materials Processing, 2nd Ed, Wiley, 2005.

  14. Modelling of Substrate Noise and Mitigation Schemes for UWB Systems

    DEFF Research Database (Denmark)

    Shen, Ming; Mikkelsen, Jan H.; Larsen, Torben

    2012-01-01

    In mixed-mode designs, digital switching noise is an ever-present problem that needs to be taken into consideration. This is of particular importance when low cost implementation technologies, e.g. lightly doped substrates, are aimed for. For traditional narrow-band designs much of the issue can be mitigated using tuned elements in the signal paths. However, for UWB designs this is not a viable option and other means are therefore required. Moreover, owing to the ultra-wideband nature and low power spectral density of the signal, UWB mixed-signal integrated circuits are more sensitive to substrate noise compared with narrow-band circuits. This chapter presents a study on the modeling and mitigation of substrate noise in mixed-signal integrated circuits (ICs), focusing on UWB system/circuit designs. Experimental impact evaluation of substrate noise on UWB circuits is presented. It shows how a wide-band circuit can

  15. A gradient stable scheme for a phase field model for the moving contact line problem

    KAUST Repository

    Gao, Min

    2012-02-01

    In this paper, an efficient numerical scheme is designed for a phase field model for the moving contact line problem, which consists of a coupled system of the Cahn-Hilliard and Navier-Stokes equations with the generalized Navier boundary condition [1,2,4]. The nonlinear version of the scheme is semi-implicit in time and is based on a convex splitting of the Cahn-Hilliard free energy (including the boundary energy) together with a projection method for the Navier-Stokes equations. We show, under certain conditions, that the scheme has the total energy decaying property and is unconditionally stable. The linearized scheme is easy to implement and introduces only a mild CFL time constraint. Numerical tests are carried out to verify the accuracy and stability of the scheme. The behavior of the solution near the contact line is examined. It is verified that, when the interface intersects with the boundary, the consistent splitting scheme [21,22] for the Navier-Stokes equations gives better accuracy for the pressure. © 2011 Elsevier Inc.
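
    The convex-splitting idea at the heart of such schemes can be sketched for the standard double-well bulk energy. This is an illustrative form only; the paper's scheme additionally treats the boundary energy and the coupling to the Navier-Stokes equations:

```latex
% Double-well bulk potential, split into convex and concave parts:
F(\phi) = \tfrac{1}{4}\left(\phi^2 - 1\right)^2
        = \underbrace{\tfrac{1}{4}\left(\phi^4 + 1\right)}_{F_c\ \text{(convex)}}
          \underbrace{-\,\tfrac{1}{2}\phi^2}_{F_e\ \text{(concave)}}
% Semi-implicit Cahn--Hilliard update: convex part implicit, concave part
% explicit, which is what yields the unconditional energy decay:
\frac{\phi^{n+1} - \phi^n}{\Delta t}
  = M\,\Delta\!\left[ \left(\phi^{n+1}\right)^3 - \phi^{n}
      - \varepsilon^2\,\Delta\phi^{n+1} \right]
```

    Here \(M\) is the mobility and \(\varepsilon\) the interface thickness; treating only the concave part explicitly keeps the implicit problem convex and uniquely solvable at every step.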

  16. A method of LED free-form tilted lens rapid modeling based on scheme language

    Science.gov (United States)

    Dai, Yidan

    2017-10-01

    According to nonimaging optical principles and the traditional LED free-form surface lens, a new kind of LED free-form tilted lens was designed, and a method of rapid modeling based on the Scheme language was proposed. The mesh division method was applied to obtain the corresponding surface configuration according to the character of the light source and the desired energy distribution on the illumination plane. Then 3D modeling software and Scheme language programming are used to generate the lens model respectively. With the help of optical simulation software, a light source of 1 mm × 1 mm × 1 mm in volume is used in the experiment, the lateral migration distance of the illumination area is 0.5 m, and a total of one million rays are traced. We could thus acquire simulated results for both models. The simulated output shows that the Scheme language can prevent the model deformation problems caused by the model transfer process, the illumination uniformity reaches 82%, and the offset angle is 26°. Also, the efficiency of the modeling process is greatly increased by using the Scheme language.

  17. Global-constrained hidden Markov model applied on wireless capsule endoscopy video segmentation

    Science.gov (United States)

    Wan, Yiwen; Duraisamy, Prakash; Alam, Mohammad S.; Buckles, Bill

    2012-06-01

    Accurate analysis of wireless capsule endoscopy (WCE) videos is vital but tedious. Automatic image analysis can expedite this task. Video segmentation of WCE into the four parts of the gastrointestinal tract is one way to assist a physician. The segmentation approach described in this paper integrates pattern recognition with statistical analysis. Initially, a support vector machine is applied to classify video frames into four classes using a combination of multiple color and texture features as the feature vector. A Poisson cumulative distribution, whose parameter depends on the length of segments, models the prior knowledge. This prior knowledge, together with inter-frame differences, serves as the global constraints driven by the underlying observation of each WCE video, which is fitted by a Gaussian distribution to constrain the transition probability of the hidden Markov model. Experimental results demonstrated the effectiveness of the approach.

  18. Video Modeling and Prompting: A Comparison of Two Strategies for Teaching Cooking Skills to Students with Mild Intellectual Disabilities

    Science.gov (United States)

    Taber-Doughty, Teresa; Bouck, Emily C.; Tom, Kinsey; Jasper, Andrea D.; Flanagan, Sara M.; Bassette, Laura

    2011-01-01

    Self-operated video prompting and video modeling were compared when used by three secondary students with mild intellectual disabilities as they completed novel recipes during cooking activities. Alternating between video systems, students completed twelve recipes within their classroom kitchen. An alternating treatment design with a follow-up and…

  19. Evaluation of nourishment schemes based on long-term morphological modeling

    DEFF Research Database (Denmark)

    Grunnet, Nicholas; Kristensen, Sten Esbjørn; Drønen, Nils

    2012-01-01

    A recently developed long-term morphological modeling concept is applied to evaluate the impact of nourishment schemes. The concept combines detailed two-dimensional morphological models and simple one-line models for the coastline evolution and is particularly well suited for long-term simulatio...... site. This study strongly indicates that the hybrid model may be used as an engineering tool to predict shoreline response following the implementation of a nourishment project....

  20. Difference schemes for numerical solutions of lagging models of heat conduction

    OpenAIRE

    Cabrera Sánchez, Jesús; Castro López, María Ángeles; Rodríguez Mateo, Francisco; Martín Alustiza, José Antonio

    2013-01-01

    Non-Fourier models of heat conduction are increasingly being considered in the modeling of microscale heat transfer in engineering and biomedical heat transfer problems. The dual-phase-lagging model, incorporating time lags in the heat flux and the temperature gradient, and some of its particular cases and approximations, result in heat conduction modeling equations in the form of delayed or hyperbolic partial differential equations. In this work, the application of difference schemes for the...

  1. SOLVING FRACTIONAL-ORDER COMPETITIVE LOTKA-VOLTERRA MODEL BY NSFD SCHEMES

    Directory of Open Access Journals (Sweden)

    S.ZIBAEI

    2016-12-01

    Full Text Available In this paper, we introduce fractional order into a competitive Lotka-Volterra prey-predator model. We discuss the stability analysis of this fractional system. The non-standard finite difference (NSFD) scheme is implemented to study the dynamic behaviors in the fractional-order Lotka-Volterra system. The proposed non-standard numerical scheme is compared with the forward Euler and fourth-order Runge-Kutta methods. Numerical results show that the NSFD approach is easy to implement and accurate when applied to the fractional-order Lotka-Volterra model.
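
    The positivity-preserving idea behind NSFD discretizations can be sketched for the classical (integer-order) competitive Lotka-Volterra system. The Mickens-style denominator functions and the parameter values below are illustrative assumptions, not those of the cited paper:

```python
import math

# Positivity-preserving NSFD sketch for the classical competitive
# Lotka-Volterra system
#   x' = x (a1 - b1 x - c1 y),   y' = y (a2 - b2 y - c2 x).
# Loss terms go in the denominator, so iterates stay positive for any
# step size h. Parameter values are illustrative.

def nsfd_step(x, y, h, a1, b1, c1, a2, b2, c2):
    phi1 = math.expm1(a1 * h) / a1   # denominator function, ~h for small h
    phi2 = math.expm1(a2 * h) / a2
    xn = x * (1 + phi1 * a1) / (1 + phi1 * (b1 * x + c1 * y))
    yn = y * (1 + phi2 * a2) / (1 + phi2 * (b2 * y + c2 * x))
    return xn, yn

x, y = 0.1, 0.1
for _ in range(2000):
    x, y = nsfd_step(x, y, 0.1, 1.0, 1.0, 0.5, 1.0, 1.0, 0.5)
    assert x > 0 and y > 0           # positivity preserved at every step
print(round(x, 3), round(y, 3))      # converges to the coexistence
                                     # equilibrium (2/3, 2/3)
```

    The fixed points of the map coincide with the equilibria of the continuous system (set xn = x to recover a1 = b1 x + c1 y), which is the defining feature of a well-constructed NSFD scheme.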

  2. Alternating Direction Implicit (ADI) schemes for a PDE-based image osmosis model

    Science.gov (United States)

    Calatroni, L.; Estatico, C.; Garibaldi, N.; Parisotto, S.

    2017-10-01

    We consider Alternating Direction Implicit (ADI) splitting schemes to compute efficiently the numerical solution of the PDE osmosis model considered by Weickert et al. in [10] for several imaging applications. The discretised scheme is shown to preserve properties analogous to those of the continuous model. The dimensional splitting strategy translates numerically into the solution of simple tridiagonal systems for which standard matrix factorisation techniques can be used to improve upon the performance of classical implicit methods, even for large time steps. Applications to the shadow removal problem are presented.
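
    The tridiagonal structure that dimensional splitting produces can be illustrated on the plain 2D heat equation (the osmosis model adds drift terms, omitted here). This Peaceman-Rachford-style sketch solves one tridiagonal system per row or column in each half step; grid size and step values are illustrative:

```python
# Peaceman-Rachford ADI sketch for u_t = u_xx + u_yy with zero Dirichlet
# boundaries. Each half step is implicit in one direction only, so it
# reduces to tridiagonal solves (Thomas algorithm), stable for large r.

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub, b = diag, c = super, d = rhs."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i-1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i-1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i+1]
    return x

def adi_step(u, r):
    """One ADI step on square grid u (list of rows); r = dt / dx**2."""
    n = len(u)
    sub, diag, sup = [-0.5*r]*(n-2), [1 + r]*(n-2), [-0.5*r]*(n-2)
    half = [row[:] for row in u]
    for j in range(1, n-1):          # x-implicit, y-explicit half step
        d = [u[j][i] + 0.5*r*(u[j-1][i] - 2*u[j][i] + u[j+1][i])
             for i in range(1, n-1)]
        sol = thomas(sub, diag, sup, d)
        for i in range(1, n-1):
            half[j][i] = sol[i-1]
    out = [row[:] for row in half]
    for i in range(1, n-1):          # y-implicit, x-explicit half step
        d = [half[j][i] + 0.5*r*(half[j][i-1] - 2*half[j][i] + half[j][i+1])
             for j in range(1, n-1)]
        sol = thomas(sub, diag, sup, d)
        for j in range(1, n-1):
            out[j][i] = sol[j-1]
    return out

n = 11
u = [[0.0]*n for _ in range(n)]
u[5][5] = 1.0                        # point heat source
for _ in range(50):
    u = adi_step(u, 2.0)             # r = 2: beyond the explicit limit
print(max(max(row) for row in u) < 1.0)   # heat spreads and decays stably
```

    The point of the splitting is visible in `adi_step`: instead of one large sparse 2D solve per step, each half step costs only O(n) tridiagonal solves, and the scheme remains stable for time steps far beyond the explicit limit.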

  3. Recurrent and Dynamic Models for Predicting Streaming Video Quality of Experience.

    Science.gov (United States)

    Bampis, Christos G; Li, Zhi; Katsavounidis, Ioannis; Bovik, Alan C

    2018-07-01

    Streaming video services represent a very large fraction of global bandwidth consumption. Due to the exploding demands of mobile video streaming services, coupled with limited bandwidth availability, video streams are often transmitted through unreliable, low-bandwidth networks. This unavoidably leads to two types of major streaming-related impairments: compression artifacts and/or rebuffering events. In streaming video applications, the end-user is a human observer; hence being able to predict the subjective Quality of Experience (QoE) associated with streamed videos could lead to the creation of perceptually optimized resource allocation strategies driving higher quality video streaming services. We propose a variety of recurrent dynamic neural networks that conduct continuous-time subjective QoE prediction. By formulating the problem as one of time-series forecasting, we train a variety of recurrent neural networks and non-linear autoregressive models to predict QoE using several recently developed subjective QoE databases. These models combine multiple, diverse neural network inputs, such as predicted video quality scores, rebuffering measurements, and data related to memory and its effects on human behavioral responses, using them to predict QoE on video streams impaired by both compression artifacts and rebuffering events. Instead of finding a single time-series prediction model, we propose and evaluate ways of aggregating different models into a forecasting ensemble that delivers improved results with reduced forecasting variance. We also deploy appropriate new evaluation metrics for comparing time-series predictions in streaming applications. Our experimental results demonstrate improved prediction performance that approaches human performance. An implementation of this work can be found at https://github.com/christosbampis/NARX_QoE_release.
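
    A minimal NARX-flavoured sketch of continuous QoE prediction: a linear autoregressive model with exogenous quality and rebuffering inputs, fit by online gradient descent on a synthetic session. The real models in the paper are recurrent neural networks trained on subjective databases; everything below (features, trace, learning rate) is an illustrative stand-in:

```python
# NARX-style sketch: predict a QoE trace from its own lag plus exogenous
# inputs (a per-second quality score and a rebuffering flag) with a linear
# autoregressive model fit by online SGD. Synthetic data, illustrative only.

T = 200
quality = [80.0 - (20.0 if 80 <= t < 120 else 0.0) for t in range(T)]
rebuf = [1.0 if 100 <= t < 105 else 0.0 for t in range(T)]

# Toy "subjective" QoE with memory: exponential smoothing of instantaneous
# quality, with a penalty during rebuffering.
qoe = [0.0] * T
for t in range(1, T):
    qoe[t] = 0.9 * qoe[t-1] + 0.1 * (quality[t] - 50.0 * rebuf[t])

w = [0.0, 0.0, 0.0]        # weights for [QoE lag-1, quality, rebuffering]
lr = 1e-5
for _ in range(200):       # epochs of online SGD
    for t in range(1, T):
        x = [qoe[t-1], quality[t], rebuf[t]]
        err = sum(wi * xi for wi, xi in zip(w, x)) - qoe[t]
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]

pred = [sum(wi * xi for wi, xi in zip(w, [qoe[t-1], quality[t], rebuf[t]]))
        for t in range(1, T)]
mae = sum(abs(p - q) for p, q in zip(pred, qoe[1:])) / (T - 1)
print(mae < 5.0)           # the learned model tracks the toy QoE trace
```

    The lagged-QoE input is what lets even this linear model capture the memory effect (hysteresis) that the paper identifies as central to subjective QoE.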

  4. Evaluating the effect of Tikhonov regularization schemes on predictions in a variable‐density groundwater model

    Science.gov (United States)

    White, Jeremy T.; Langevin, Christian D.; Hughes, Joseph D.

    2010-01-01

    Calibration of highly parameterized numerical models typically requires explicit Tikhonov-type regularization to stabilize the inversion process. This regularization can take the form of a preferred-parameter-values scheme or preferred relations between parameters, such as the preferred-equality scheme. The resulting parameter distributions calibrate the model to a user-defined acceptable level of model-to-measurement misfit, and also minimize regularization penalties on the total objective function. To evaluate the potential impact of these two regularization schemes on model predictive ability, a dataset generated from a synthetic model was used to calibrate a highly parameterized variable-density SEAWAT model. The key prediction is the length of time a synthetic pumping well will produce potable water. A bi-objective Pareto analysis was used to explicitly characterize the relation between two competing objective function components: measurement error and regularization error. Results of the Pareto analysis indicate that both types of regularization schemes affect the predictive ability of the calibrated model.
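
    A minimal sketch of the preferred-value flavour of Tikhonov regularization, on a hypothetical two-parameter linear inverse problem. All coefficients are invented for illustration; real calibration tools (e.g. PEST) operate on far larger parameterizations:

```python
# Preferred-value Tikhonov regularization sketch:
#   minimize ||G m - d||^2 + lam^2 ||m - m_pref||^2
# via the normal equations (G^T G + lam^2 I) m = G^T d + lam^2 m_pref.
# Tiny 2-parameter toy problem; values are illustrative only.

def solve2(A, b):
    """Cramer's rule for a 2x2 linear system."""
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [(b[0]*A[1][1] - b[1]*A[0][1]) / det,
            (A[0][0]*b[1] - A[1][0]*b[0]) / det]

def tikhonov(G, d, m_pref, lam):
    # Normal-equation matrix G^T G + lam^2 I and rhs G^T d + lam^2 m_pref.
    A = [[sum(G[k][i]*G[k][j] for k in range(len(G)))
          + (lam**2 if i == j else 0.0) for j in range(2)] for i in range(2)]
    b = [sum(G[k][i]*d[k] for k in range(len(G))) + lam**2 * m_pref[i]
         for i in range(2)]
    return solve2(A, b)

G = [[1.0, 1.0], [1.0, 1.001]]      # nearly rank-deficient: ill-posed
d = [2.0, 2.0]
m_pref = [1.0, 1.0]
print(tikhonov(G, d, m_pref, 0.0))  # unregularized fit, near [2, 0]
print(tikhonov(G, d, m_pref, 0.1))  # regularized: pulled toward m_pref
```

    With `lam = 0` the near-singular system is fitted exactly but the estimate sits far from the preferred values; a modest `lam` trades a little misfit for parameters drawn toward `m_pref`, which is precisely the measurement-error/regularization-error trade-off the Pareto analysis explores.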

  5. An efficient numerical progressive diagonalization scheme for the quantum Rabi model revisited

    International Nuclear Information System (INIS)

    Pan, Feng; Bao, Lina; Dai, Lianrong; Draayer, Jerry P

    2017-01-01

    An efficient numerical progressive diagonalization scheme for the quantum Rabi model is revisited. The advantage of the scheme lies in the fact that the quantum Rabi model can be solved almost exactly using a scheme that involves only a finite set of one-variable polynomial equations. The scheme is especially efficient for a specified eigenstate of the model, for example, the ground state. Some low-lying level energies of the model for several sets of parameters are calculated, of which one set of results is compared to that obtained from Braak's exact solution proposed recently. It is shown that the derivative of the entanglement measure, defined in terms of the reduced von Neumann entropy, with respect to the coupling parameter does reach its maximum near the critical point deduced from the classical limit of the Dicke model, which may provide a probe of the critical point of the crossover in finite quantum many-body systems, such as the quantum Rabi model. (paper)

  6. On the representation of immersion and condensation freezing in cloud models using different nucleation schemes

    Directory of Open Access Journals (Sweden)

    B. Ervens

    2012-07-01

    Full Text Available Ice nucleation in clouds is often observed at temperatures >235 K, pointing to heterogeneous freezing as a predominant mechanism. Many models deterministically predict the number concentration of ice particles as a function of temperature and/or supersaturation. Several laboratory experiments, at constant temperature and/or supersaturation, report heterogeneous freezing as a stochastic, time-dependent process that follows classical nucleation theory; this might appear to contradict deterministic models that predict singular freezing behavior.

    We explore the extent to which the choice of nucleation scheme (deterministic/stochastic, single/multiple contact angles θ) affects the prediction of the fraction of frozen ice nuclei (IN) and cloud evolution for a predetermined maximum IN concentration. A box model with constant temperature and supersaturation is used to mimic published laboratory experiments of immersion freezing of monodisperse (800 nm) kaolinite particles (~243 K) and to assess the fitness of different nucleation schemes. Sensitivity studies show that agreement of all five schemes is restricted to the narrow parameter range (time, temperature, IN diameter) in the original laboratory studies, and that model results diverge for a wider range of conditions.

    The schemes are implemented in an adiabatic parcel model that includes feedbacks of the formation and growth of drops and ice particles on supersaturation during ascent. Model results for the monodisperse IN population (800 nm) show that these feedbacks limit ice nucleation events, often leading to smaller differences in the number concentration of ice particles and ice water content (IWC) between stochastic and deterministic approaches than expected from the box model studies. However, because the different parameterizations of θ distributions and time-dependencies are highly sensitive to IN size, simulations using polydisperse IN result in great differences in predicted ice number
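
    The stochastic/deterministic contrast can be sketched numerically: under classical nucleation theory, freezing is a Poisson process whose frozen fraction grows with time at constant temperature, whereas a singular (deterministic) scheme depends on temperature alone. The nucleation rate, surface area, and logistic fit parameters below are all illustrative, not fitted values:

```python
import math

# Frozen-fraction sketch at fixed temperature: stochastic (time-dependent)
# vs. singular (deterministic) nucleation schemes. All numbers illustrative.

def frozen_fraction_stochastic(J, area, t):
    """CNT view: freezing as a Poisson process, f(t) = 1 - exp(-J * A * t),
    with J a heterogeneous nucleation rate per unit IN surface area."""
    return 1.0 - math.exp(-J * area * t)

def frozen_fraction_singular(T, T50=-30.0, k=0.5):
    """Deterministic view: the frozen fraction depends on temperature only
    (a hypothetical logistic fit; no time dependence at constant T)."""
    return 1.0 / (1.0 + math.exp(k * (T - T50)))

# At constant T, the stochastic fraction keeps growing with residence time,
# while the singular fraction is fixed once T is fixed.
print(frozen_fraction_stochastic(1e4, 2e-9, 60))    # after 1 minute
print(frozen_fraction_stochastic(1e4, 2e-9, 600))   # after 10 minutes: larger
print(frozen_fraction_singular(-30.0))              # time-independent
```

    This is exactly the distinction the box-model comparison above probes: only within a narrow window of time, temperature, and IN size do the two families of schemes give indistinguishable frozen fractions.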

  7. Video modelling and reducing anxiety related to dental injections - a randomised clinical trial.

    Science.gov (United States)

    Al-Namankany, A; Petrie, A; Ashley, P

    2014-06-01

    This study was part of a successfully completed PhD and was presented at the IADR/AADR General Session (2013) in Seattle, Washington, USA. The report of this clinical trial conforms to the CONSORT statement. A randomised controlled trial was conducted to investigate whether video modelling can influence a child's anxiety before the administration of local anaesthesia (LA). A sample of 180 (6- to 12-year-old) children due to have dental treatment under LA were randomly allocated to the modelling video or the control video (oral hygiene instruction). The level of anxiety was recorded before and after watching the video on the Abeer Children Dental Anxiety Scale (ACDAS), and the child's ability to cope with the subsequent procedure was assessed on the visual analogue scale (VAS). A two-group chi-square test was used as the basis for the sample size calculation; a significance level of 0.025 was chosen rather than the conventional 0.05 to avoid spurious results arising from multiple testing. Children in the test group had significantly less anxiety after watching the video than children in the control group throughout the subsequent dental procedure, in particular at the time of the LA administration. Video modelling appeared to be effective at reducing dental anxiety and had a significant impact on needle phobia in children.

  8. Change in Farm Production Structure Within Different CAP Schemes – an LP Modelling Approach

    Directory of Open Access Journals (Sweden)

    Jaka ŽGAJNAR

    2008-01-01

    Full Text Available After accession to the European Union in 2004, direct payments became a very important income source also for farmers in Slovenia. But the agricultural policy in place at accession changed significantly in 2007 as a result of CAP reform implementation. The objective of this study was to evaluate the decision-making impacts of the direct payments scheme implemented with the reform: regional or, more likely, hybrid scheme. The change in farm production structure was simulated with a model, applying gross margin maximisation, based on a static linear programming approach. The model has been developed in a spreadsheet framework on the MS Excel platform. A hypothetical farm has been chosen to analyse different scenarios and specializations. The focus of the analysis was on the cattle sector, since it is expected that decoupling is going to have a significant influence on its optimal production structure. The reason is the high level of direct payments, which could in the pre-reform scheme rise up to 70% of total gross margin. Model results confirm that the reform should have unfavourable impacts on cattle farms with intensive production practice. The results show that the hybrid scheme has minor negative impacts in all cattle specializations, while the regional scheme would be a better option for a sheep-specialized farm. The analysis has also shown the growing importance of CAP pillar II payments, among them particularly agri-environmental measures. In all three schemes budgetary payments enable farmers to improve financial results, and in both reform schemes they alleviate the economic impacts of the CAP reform.
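
    A toy sketch of gross-margin maximisation by linear programming for a hypothetical two-activity farm. The constraint set and gross margins (including any coupled payment) are invented for illustration, and the LP is solved by brute-force vertex enumeration rather than a production solver:

```python
from itertools import combinations

# Toy farm LP: choose hectares x1, x2 of two activities to
#   maximise gm1*x1 + gm2*x2
#   s.t.   x1 + x2 <= land,  l1*x1 + l2*x2 <= labour,  x1, x2 >= 0.
# For two variables the optimum lies at a vertex of the feasible polygon,
# so enumerating constraint intersections suffices. Numbers illustrative.

def vertices(cons):
    """Feasible intersection points of constraint lines a*x + b*y <= c."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1*b2 - a2*b1
        if abs(det) < 1e-12:
            continue
        x = (c1*b2 - c2*b1) / det
        y = (a1*c2 - a2*c1) / det
        if all(a*x + b*y <= c + 1e-9 for a, b, c in cons):
            pts.append((x, y))
    return pts

cons = [(1, 1, 40),       # land:   x1 + x2 <= 40 ha
        (30, 10, 800),    # labour: 30 x1 + 10 x2 <= 800 h
        (-1, 0, 0),       # x1 >= 0
        (0, -1, 0)]       # x2 >= 0
gm = (500, 220)           # gross margin per ha, incl. a coupled payment
best = max(vertices(cons), key=lambda p: gm[0]*p[0] + gm[1]*p[1])
print(best, gm[0]*best[0] + gm[1]*best[1])   # → (20.0, 20.0) 14400.0
```

    Decoupling a payment can be mimicked by moving part of `gm` into a lump sum independent of `x1`, `x2`: the optimal activity mix then shifts, which is the mechanism behind the simulated structural change.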

  9. Performance of the Goddard multiscale modeling framework with Goddard ice microphysical schemes

    Science.gov (United States)

    Chern, Jiun-Dar; Tao, Wei-Kuo; Lang, Stephen E.; Matsui, Toshihisa; Li, J.-L. F.; Mohr, Karen I.; Skofronick-Jackson, Gail M.; Peters-Lidard, Christa D.

    2016-03-01

    The multiscale modeling framework (MMF), which replaces traditional cloud parameterizations with cloud-resolving models (CRMs) within a host atmospheric general circulation model (GCM), has become a new approach for climate modeling. The embedded CRMs make it possible to apply CRM-based cloud microphysics directly within a GCM. However, most such schemes have never been tested in a global environment for long-term climate simulation. The benefits of using an MMF to evaluate rigorously and improve microphysics schemes are here demonstrated. Four one-moment microphysical schemes are implemented into the Goddard MMF and their results validated against three CloudSat/CALIPSO cloud ice products and other satellite data. The new four-class (cloud ice, snow, graupel, and frozen drops/hail) ice scheme produces a better overall spatial distribution of cloud ice amount, total cloud fractions, net radiation, and total cloud radiative forcing than earlier three-class ice schemes, with biases within the observational uncertainties. Sensitivity experiments are conducted to examine the impact of recently upgraded microphysical processes on global hydrometeor distributions. Five processes dominate the global distributions of cloud ice and snow amount in long-term simulations: (1) allowing for ice supersaturation in the saturation adjustment, (2) three additional correction terms in the depositional growth of cloud ice to snow, (3) accounting for cloud ice fall speeds, (4) limiting cloud ice particle size, and (5) new size-mapping schemes for snow and graupel. Despite the cloud microphysics improvements, systematic errors associated with subgrid processes, cyclic lateral boundaries in the embedded CRMs, and momentum transport remain and will require future improvement.

  10. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  11. Evaluating the Performance of the Goddard Multi-Scale Modeling Framework with Different Cloud Microphysical Schemes

    Science.gov (United States)

    Chern, J.; Tao, W.; Lang, S. E.; Matsui, T.

    2012-12-01

    The accurate representation of clouds and cloud processes in atmospheric general circulation models (GCMs) with relatively coarse resolution (~100 km) has been a long-standing challenge. With the rapid advancement in computational technology, a new breed of GCMs capable of explicitly resolving clouds has been developed. Though still computationally very expensive, global cloud-resolving models (GCRMs) with horizontal resolutions of 3.5 to 14 km are already being run in an exploratory manner. Another less computationally demanding approach is the multi-scale modeling framework (MMF), which replaces conventional cloud parameterizations with a cloud-resolving model (CRM) in each grid column of a GCM. The Goddard MMF is based on the coupling of the Goddard Cumulus Ensemble (GCE), a CRM, and the GEOS global model. In recent years a few new and improved microphysical schemes have been developed and implemented in the GCE based on observations from field campaigns. It is important to evaluate these microphysical schemes for global applications such as the MMFs and GCRMs. Two-year (2007-2008) MMF sensitivity experiments have been carried out with different cloud microphysical schemes. The model-simulated mean and variability of surface precipitation, cloud types, and cloud properties such as cloud amount, hydrometeor vertical profiles, and cloud water contents in different geographic locations and climate regimes are evaluated against TRMM, CloudSat, and CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF-simulated radiances and backscattering can be directly compared with satellite observations to evaluate the performance of different cloud microphysical schemes. We will assess the strengths and/or deficiencies of these microphysics schemes and provide guidance on how to improve them.

  12. Fuzzy Multiple Criteria Decision Making Model with Fuzzy Time Weight Scheme

    OpenAIRE

    Chin-Yao Low; Sung-Nung Lin

    2013-01-01

    In this study, we propose a common fuzzy multiple criteria decision making model. A new concept, a fuzzy time-weighted scheme, is adopted in the model to establish a fuzzy multiple criteria decision making with time weight (FMCDMTW) model. A real case of a fuzzy multiple criteria decision making (FMCDM) problem is considered in this study: the performance evaluation of auction websites based on the criteria proposed in the related literature. Obviously, the problem under in...

  13. Modelling Detailed-Chemistry Effects on Turbulent Diffusion Flames using a Parallel Solution-Adaptive Scheme

    Science.gov (United States)

    Jha, Pradeep Kumar

    Capturing the effects of detailed chemistry on turbulent combustion processes is a central challenge faced by the numerical combustion community. However, the inherent complexity and non-linear nature of both turbulence and chemistry require that combustion models rely heavily on engineering approximations to remain computationally tractable. This thesis proposes a computationally efficient algorithm for modelling detailed-chemistry effects in turbulent diffusion flames and numerically predicting the associated flame properties. The cornerstone of this combustion modelling tool is the use of a parallel Adaptive Mesh Refinement (AMR) scheme with the recently proposed Flame Prolongation of Intrinsic low-dimensional manifold (FPI) tabulated-chemistry approach for modelling complex chemistry. The effect of turbulence on the mean chemistry is incorporated using a Presumed Conditional Moment (PCM) approach based on a beta-probability density function (PDF). The two-equation k-ω turbulence model is used for modelling the effects of the unresolved turbulence on the mean flow field. The finite-rate chemistry of methane-air combustion is represented here using the GRI-Mech 3.0 scheme. This detailed mechanism is used to build the FPI tables. A state-of-the-art numerical scheme based on a parallel block-based solution-adaptive algorithm has been developed to solve the Favre-averaged Navier-Stokes (FANS) and other governing partial-differential equations using a second-order accurate, fully-coupled finite-volume formulation on body-fitted, multi-block, quadrilateral/hexahedral meshes for two-dimensional and three-dimensional flow geometries, respectively. A standard fourth-order Runge-Kutta time-marching scheme is used for time-accurate temporal discretizations. Numerical predictions of three different diffusion flame configurations are considered in the present work: a laminar counter-flow flame; a laminar co-flow diffusion flame; and a Sydney bluff-body turbulent reacting flow
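
    The presumed-PDF (PCM) step can be sketched in isolation: the mean of a tabulated quantity is obtained by weighting a laminar "table" with a beta PDF parameterised by the mean and variance of the conditioning variable (here, mixture fraction Z). The table function and all values below are illustrative stand-ins for an FPI lookup:

```python
import math

# Presumed beta-PDF sketch: mean of a tabulated quantity phi(Z), where Z is
# the mixture fraction, weighted by a beta PDF with given mean and variance.
# The "table" phi below is an illustrative stand-in for an FPI lookup.

def beta_pdf(z, mean, var):
    g = mean * (1 - mean) / var - 1.0       # shape factor (requires var <
    a, b = mean * g, (1 - mean) * g         #   mean*(1-mean))
    # Work in log space so large shape parameters do not overflow.
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(z) + (b - 1) * math.log(1 - z) - log_B)

def presumed_mean(phi, mean, var, n=2000):
    """Midpoint-rule integral of phi(z) * p(z) over z in (0, 1)."""
    s = 0.0
    for k in range(n):
        z = (k + 0.5) / n
        s += phi(z) * beta_pdf(z, mean, var) / n
    return s

phi = lambda z: 4 * z * (1 - z)             # toy table, peaks mid-range
print(round(presumed_mean(phi, 0.5, 1e-4), 3))   # low variance: near phi(0.5)=1
print(presumed_mean(phi, 0.5, 0.05) < 1.0)       # mixing variance lowers mean
```

    In a flamelet/FPI solver this integration (or a pre-integrated version of it) is what converts the laminar table into Favre-mean source terms as a function of the resolved mean and variance of mixture fraction.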

  14. End-point parametrization and guaranteed stability for a model predictive control scheme

    NARCIS (Netherlands)

    Weiland, Siep; Stoorvogel, Antonie Arij; Tiagounov, Andrei A.

    2001-01-01

    In this paper we consider the closed-loop asymptotic stability of the model predictive control scheme which involves the minimization of a quadratic criterion with a varying weight on the end-point state. In particular, we investigate the stability properties of the (MPC-) controlled system as

  15. Efficient and stable model reduction scheme for the numerical simulation of broadband acoustic metamaterials

    DEFF Research Database (Denmark)

    Hyun, Jaeyub; Kook, Junghwan; Wang, Semyung

    2015-01-01

    and basis vectors for use according to the target system. The proposed model reduction scheme is applied to the numerical simulation of the simple mass-damping-spring system and the acoustic metamaterial systems (i.e., acoustic lens and acoustic cloaking device) for the first time. Through these numerical...

  16. RELAP5 two-phase fluid model and numerical scheme for economic LWR system simulation

    International Nuclear Information System (INIS)

    Ransom, V.H.; Wagner, R.J.; Trapp, J.A.

    1981-01-01

    The RELAP5 two-phase fluid model and the associated numerical scheme are summarized. The experience accrued in development of a fast running light water reactor system transient analysis code is reviewed and example of the code application are given

  17. Using video self- and peer modeling to facilitate reading fluency in children with learning disabilities.

    Science.gov (United States)

    Decker, Martha M; Buggey, Tom

    2014-01-01

    The authors compared the effects of video self-modeling and video peer modeling on oral reading fluency of elementary students with learning disabilities. A control group was also included to gauge general improvement due to reading instruction and familiarity with researchers. The results indicated that both interventions resulted in improved fluency. Students in both experimental groups improved their reading fluency. Two students in the self-modeling group made substantial and immediate gains beyond any of the other students. Discussion is included that focuses on the importance that positive imagery can have on student performance and the possible applications of both forms of video modeling with students who have had negative experiences in reading.

  18. A hybrid scheme for absorbing edge reflections in numerical modeling of wave propagation

    KAUST Repository

    Liu, Yang

    2010-03-01

    We propose an efficient scheme to absorb reflections from the model boundaries in numerical solutions of wave equations. This scheme divides the computational domain into boundary, transition, and inner areas. The wavefields within the inner and boundary areas are computed by the wave equation and the one-way wave equation, respectively. The wavefields within the transition area are determined by a weighted combination of the wavefields computed by the wave equation and the one-way wave equation to obtain a smooth variation from the inner area to the boundary via the transition zone. The results from our finite-difference numerical modeling tests of the 2D acoustic wave equation show that the absorption enforced by this scheme gradually increases with increasing width of the transition area. We obtain equally good performance using pseudospectral and finite-element modeling with the same scheme. Our numerical experiments demonstrate that use of 10 grid points for absorbing edge reflections attains nearly perfect absorption. © 2010 Society of Exploration Geophysicists.
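
    A 1D sketch of the hybrid idea, assuming a linear blending weight across the transition zone; the cited scheme is demonstrated in 2D with finite-difference, pseudospectral, and finite-element solvers, and the widths and weights below are illustrative choices:

```python
import math

# 1D hybrid absorbing-edge sketch: the full (two-way) wave equation in the
# inner area, the one-way wave equation at the edges, and a linear weight
# blending the two updates across a transition zone of W cells.

N, c, dt, dx, W = 200, 1.0, 0.5, 1.0, 20   # W = transition width in cells
u_prev = [math.exp(-0.05 * (i - N // 2) ** 2) for i in range(N)]
u = u_prev[:]                              # pulse initially at rest
nu = c * dt / dx                           # Courant number (0.5: stable)
for _ in range(400):
    nxt = [0.0] * N
    for i in range(1, N - 1):
        two_way = 2*u[i] - u_prev[i] + nu**2 * (u[i-1] - 2*u[i] + u[i+1])
        if i >= N - 1 - W:                 # right edge: one-way, rightward
            w = (i - (N - 1 - W)) / W
            one_way = u[i] - nu * (u[i] - u[i-1])
        elif i <= W:                       # left edge: one-way, leftward
            w = (W - i) / W
            one_way = u[i] + nu * (u[i+1] - u[i])
        else:                              # inner area: full wave equation
            w, one_way = 0.0, 0.0
        nxt[i] = (1 - w) * two_way + w * one_way
    u_prev, u = u, nxt
print(max(abs(v) for v in u) < 0.2)        # pulse absorbed, weak reflection
```

    The weight `w` grows linearly from 0 at the inner edge of the transition zone to 1 at the boundary, so the wavefield changes smoothly from the two-way to the one-way solution; widening `W` further reduces the residual reflection, mirroring the paper's observation that absorption improves with transition width.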

  19. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks

    DEFF Research Database (Denmark)

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L

    2016-01-01

    With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical...... and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely...... on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network...

  20. A model linking video gaming, sleep quality, sweet drinks consumption and obesity among children and youth.

    Science.gov (United States)

    Turel, O; Romashkin, A; Morrison, K M

    2017-08-01

    There is a growing need to curb paediatric obesity. The aim of this study is to untangle associations between video-game-use attributes and obesity as a first step towards identifying and examining possible interventions. A cross-sectional time-lagged cohort study was employed using parent-child surveys (t1) and objective physical activity and physiological measures (t2) from 125 children/adolescents (mean age = 13.06, 9-17-year-olds) who play video games, recruited from two clinics at a Canadian academic children's hospital. Structural equation modelling and analysis of covariance were employed for inference. The results of the study are as follows: (i) self-reported video-game play duration in the 4-h window before bedtime is related to greater abdominal adiposity (waist-to-height ratio), and this association may be mediated through reduced sleep quality (measured with the Pittsburgh Sleep Quality Index); and (ii) self-reported average video-game session duration is associated with greater abdominal adiposity, and this association may be mediated through higher self-reported sweet drinks consumption while playing video games and reduced sleep quality. Video-game play duration in the 4-h window before bedtime, typical video-game session duration, sweet drinks consumption while playing video games and poor sleep quality have adverse associations with abdominal adiposity. Paediatricians and researchers should further explore how these factors can be altered through behavioural or pharmacological interventions as a means to reduce paediatric obesity. © 2017 World Obesity Federation.

  1. A nonstandard finite difference scheme for a basic model of cellular immune response to viral infection

    Science.gov (United States)

    Korpusik, Adam

    2017-02-01

    We present a nonstandard finite difference scheme for a basic model of cellular immune response to viral infection. The main advantage of this approach is that it preserves the essential qualitative features of the original continuous model (non-negativity and boundedness of the solution, equilibria and their stability conditions), while being easy to implement. All of the qualitative features are preserved independently of the chosen step-size. Numerical simulations of our approach and comparison with other conventional simulation methods are presented.
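The non-negativity-preserving construction described above can be sketched for a standard three-component in-host model (uninfected cells, infected cells, cytotoxic T lymphocytes). The equations, parameter names, and values below are illustrative assumptions in the spirit of the abstract, not necessarily the published scheme:

```python
# Non-negativity-preserving nonstandard finite difference (NSFD) sketch for a
# basic model of cellular immune response (Nowak-Bangham form; parameters are
# illustrative placeholders):
#   x' = lam - d*x - beta*x*y    (uninfected target cells)
#   y' = beta*x*y - a*y - p*y*z  (infected cells)
#   z' = c*y*z - b*z             (CTL response)
# Loss terms are discretized implicitly, so each update is a ratio of
# non-negative quantities and stays non-negative for ANY step size h.

def nsfd_step(x, y, z, h, lam=10.0, d=0.1, beta=0.01, a=0.5, p=1.0, c=0.1, b=0.2):
    x_new = (x + h * lam) / (1.0 + h * (d + beta * y))
    y_new = (y + h * beta * x_new * y) / (1.0 + h * (a + p * z))
    z_new = (z + h * c * y_new * z) / (1.0 + h * b)
    return x_new, y_new, z_new

def simulate(x0, y0, z0, h, steps):
    state = (x0, y0, z0)
    traj = [state]
    for _ in range(steps):
        state = nsfd_step(*state, h)
        traj.append(state)
    return traj
```

Because every update is a ratio of non-negative quantities, non-negativity holds regardless of the chosen step size, which is exactly the kind of qualitative property such schemes are built to preserve.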

  2. A new numerical scheme for bounding acceleration in the LWR model

    OpenAIRE

    LECLERCQ, L; ELSEVIER

    2005-01-01

    This paper deals with the numerical resolution of bounded acceleration extensions of the LWR model. Two different manners for bounding accelerations in the LWR model will be presented: introducing a moving boundary condition in front of an accelerating flow or defining a field of constraints on the maximum allowed speed in the (x,t) plane. Both extensions lead to the same solutions if the declining branch of the fundamental diagram is linear. The existing numerical scheme for the latter exte...

  3. Additive operator-difference schemes splitting schemes

    CERN Document Server

    Vabishchevich, Petr N

    2013-01-01

    Applied mathematical modeling is concerned with solving unsteady problems. This book shows how to construct additive difference schemes to solve approximately unsteady multi-dimensional problems for PDEs. Two classes of schemes are highlighted: methods of splitting with respect to spatial variables (alternating direction methods) and schemes of splitting into physical processes. Also regionally additive schemes (domain decomposition methods) and unconditionally stable additive schemes of multi-component splitting are considered for evolutionary equations of first and second order as well as for sy

  4. Training Self-Regulated Learning Skills with Video Modeling Examples: Do Task-Selection Skills Transfer?

    Science.gov (United States)

    Raaijmakers, Steven F.; Baars, Martine; Schaap, Lydia; Paas, Fred; van Merriënboer, Jeroen; van Gog, Tamara

    2018-01-01

    Self-assessment and task-selection skills are crucial in self-regulated learning situations in which students can choose their own tasks. Prior research suggested that training with video modeling examples, in which another person (the model) demonstrates and explains the cyclical process of problem-solving task performance, self-assessment, and…

  5. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.

    Science.gov (United States)

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T

    2016-12-01

    With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.

  6. Does a video displaying a stair climbing model increase stair use in a worksite setting?

    Science.gov (United States)

    Van Calster, L; Van Hoecke, A-S; Octaef, A; Boen, F

    2017-08-01

    This study evaluated the effects of improving the visibility of the stairwell and of displaying a video with a stair climbing model on climbing and descending stair use in a worksite setting. Intervention study. Three consecutive one-week intervention phases were implemented: (1) the visibility of the stairs was improved by the attachment of pictograms that indicated the stairwell; (2) a video showing a stair climbing model was sent to the employees by email; and (3) the same video was displayed on a television screen at the point-of-choice (POC) between the stairs and the elevator. The interventions took place in two buildings. The implementation of the interventions varied between these buildings and the sequence was reversed. Improving the visibility of the stairs increased both stair climbing (+6%) and descending stair use (+7%) compared with baseline. Sending the video by email yielded no additional effect on stair use. By contrast, displaying the video at the POC increased stair climbing in both buildings by 12.5% on average. One week after the intervention, the positive effects on stair climbing remained in one of the buildings, but not in the other. These findings suggest that improving the visibility of the stairwell and displaying a stair climbing model on a screen at the POC can result in a short-term increase in both climbing and descending stair use. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  7. Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme

    Science.gov (United States)

    Veljović, K.; Rajković, B.; Mesinger, F.

    2009-04-01

    Several points are made concerning the somewhat controversial issue of regional climate modeling: should a regional climate model (RCM) be expected to maintain the large scale skill of the driver global model that is supplying its lateral boundary condition (LBC)? Given that this is normally desired, is it able to do so without help via the fairly popular large scale nudging? Specifically, without such nudging, will the RCM kinetic energy necessarily decrease with time compared to that of the driver model or analysis data as suggested by a study using the Regional Atmospheric Modeling System (RAMS)? Finally, can the lateral boundary condition scheme make a difference: is the almost universally used but somewhat costly relaxation scheme necessary for a desirable RCM performance? Experiments are made to explore these questions running the Eta model in two versions differing in the lateral boundary scheme used. One of these schemes is the traditional relaxation scheme, and the other the Eta model scheme in which information is used at the outermost boundary only, and not all variables are prescribed at the outflow boundary. Forecast lateral boundary conditions are used, and results are verified against the analyses. Thus, skill of the two RCM forecasts can be and is compared not only against each other but also against that of the driver global forecast. A novel verification method is used in the manner of customary precipitation verification in that forecast spatial wind speed distribution is verified against analyses by calculating bias adjusted equitable threat scores and bias scores for wind speeds greater than chosen wind speed thresholds. In this way, focusing on a high wind speed value in the upper troposphere, verification of large scale features we suggest can be done in a manner that may be more physically meaningful than verifications via spectral decomposition that are a standard RCM verification method. The results we have at this point are somewhat
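The verification approach described here scores forecast wind-speed exceedances the way precipitation is commonly scored, via a 2×2 contingency table. Below is a minimal sketch of the standard frequency bias score and equitable threat score; note the study uses a bias-adjusted variant of the ETS, for which the plain ETS shown here is the usual starting point:

```python
def contingency(forecasts, observations, threshold):
    # 2x2 contingency table for exceedance of a wind-speed threshold.
    hits = false_alarms = misses = correct_negatives = 0
    for f, o in zip(forecasts, observations):
        fe, oe = f >= threshold, o >= threshold
        if fe and oe:
            hits += 1
        elif fe:
            false_alarms += 1
        elif oe:
            misses += 1
        else:
            correct_negatives += 1
    return hits, false_alarms, misses, correct_negatives

def bias_score(hits, false_alarms, misses):
    # Frequency bias: forecast exceedance count over observed exceedance count.
    return (hits + false_alarms) / (hits + misses)

def equitable_threat_score(hits, false_alarms, misses, total):
    # ETS = (hits - hits_random) / (hits + misses + false_alarms - hits_random),
    # where hits_random is the expected number of chance hits.
    hits_random = (hits + false_alarms) * (hits + misses) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)
```

In the setting of the abstract, `threshold` would be a high upper-tropospheric wind speed, and the scores would be computed over the forecast and analysis grids.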

  8. A study of the spreading scheme for viral marketing based on a complex network model

    Science.gov (United States)

    Yang, Jianmei; Yao, Canzhong; Ma, Weicheng; Chen, Guanrong

    2010-02-01

    Buzzword-based viral marketing, also known as digital word-of-mouth marketing, is a marketing mode attached to some carriers on the Internet, which can rapidly copy marketing information at a low cost. Viral marketing actually uses a pre-existing social network where, however, the pre-existing network is believed to be so large and so random that its theoretical analysis is intractable and unmanageable. There are very few reports in the literature on how to design a spreading scheme for viral marketing on real social networks according to the traditional marketing theory or the relatively new network marketing theory. Complex network theory provides a new model for the study of large-scale complex systems, using the latest developments of graph theory and computing techniques. From this perspective, the present paper extends complex network theory and modeling into the research of general viral marketing and develops a specific spreading scheme for viral marketing and an approach to designing the scheme based on a real complex network on the QQ instant messaging system. This approach is shown to be rather universal and can be further extended to the design of various spreading schemes for viral marketing based on different instant messaging systems.
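As a toy illustration of spreading-scheme analysis on a complex network, the sketch below grows a small preferential-attachment (scale-free-style) graph and runs an independent-cascade spread from a seed user. The graph size, pass-along probability, and function names are illustrative assumptions; the paper's scheme is built on a real QQ contact network:

```python
import random

def preferential_graph(n, m=2, seed=1):
    # Barabasi-Albert-style growth: each new node links to m existing
    # nodes chosen with probability proportional to their degree.
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    targets = list(range(m))
    repeated = []                      # node list with multiplicity ~ degree
    for v in range(m, n):
        for t in set(targets):
            adj[v].add(t)
            adj[t].add(v)
        repeated.extend(set(targets))
        repeated.extend([v] * len(set(targets)))
        targets = [rng.choice(repeated) for _ in range(m)]
    return adj

def independent_cascade(adj, seeds, p, rng):
    # Each newly activated user gets one chance to pass the message
    # to each neighbour with probability p.
    active, frontier = set(seeds), list(seeds)
    while frontier:
        new = []
        for u in frontier:
            for v in adj[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    new.append(v)
        frontier = new
    return active
```

Designing a spreading scheme then amounts to choosing the seed set and carrier so that the expected reach of the cascade is large, e.g. seeding high-degree hubs rather than random users.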

  9. Observation Likelihood Model Design and Failure Recovery Scheme Toward Reliable Localization of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Chang-bae Moon

    2010-12-01

    Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in a human co-existing real environment. Reliability of localization is highly dependent upon the developer's experience because uncertainty is caused by a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found that there are several significant experimental issues. In this paper, we provide useful solutions for the following questions, which are frequently faced in practical applications: (1) How to design an observation likelihood model? (2) How to detect localization failure? (3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented, focusing on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme to identify the localizer status is useful in practical environments. Moreover, the semi-global localization is a computationally efficient recovery scheme from localization failure. The results of experiments and analysis clearly present the usefulness of the proposed solutions.


  11. Current use of impact models for agri-environment schemes and potential for improvements of policy design and assessment.

    Science.gov (United States)

    Primdahl, Jørgen; Vesterager, Jens Peter; Finn, John A; Vlahos, George; Kristensen, Lone; Vejre, Henrik

    2010-06-01

    Agri-Environment Schemes (AES) to maintain or promote environmentally-friendly farming practices were implemented on about 25% of all agricultural land in the EU by 2002. This article analyses and discusses the actual and potential use of impact models in supporting the design, implementation and evaluation of AES. Impact models identify and establish the causal relationships between policy objectives and policy outcomes. We review and discuss the role of impact models at different stages in the AES policy process, and present results from a survey of impact models underlying 60 agri-environmental schemes in seven EU member states. We distinguished among three categories of impact models (quantitative, qualitative or common sense), depending on the degree of evidence in the formal scheme description, additional documents, or key person interviews. The categories of impact models used mainly depended on whether scheme objectives were related to natural resources, biodiversity or landscape. A higher proportion of schemes dealing with natural resources (primarily water) were based on quantitative impact models, compared to those concerned with biodiversity or landscape. Schemes explicitly targeted either on particular parts of individual farms or specific areas tended to be based more on quantitative impact models compared to whole-farm schemes and broad, horizontal schemes. We conclude that increased and better use of impact models has significant potential to improve efficiency and effectiveness of AES. (c) 2009 Elsevier Ltd. All rights reserved.

  12. A gas dynamics scheme for a two moments model of radiative transfer

    International Nuclear Information System (INIS)

    Buet, Ch.; Despres, B.

    2007-01-01

    We address the discretization of Levermore's two-moment and entropy model of the radiative transfer equation. We present a new approach for the discretization of this model: first we rewrite the moment equations as a compressible gas dynamics equation by introducing an additional quantity that plays the role of a density. After that we discretize using a Lagrange-projection scheme. The Lagrange-projection scheme permits us to incorporate the source terms in the fluxes of an acoustic solver in the Lagrange step, using the well-known piecewise steady approximation, and thus to capture correctly the diffusion regime. Moreover, we show that the discretization is entropic and preserves the flux-limited property of the moment model. Numerical examples illustrate the feasibility of our approach. (authors)

  13. The effects of video self-modeling on the decoding skills of children at risk for reading disabilities

    OpenAIRE

    Ayala, SM; O'Connor, R

    2013-01-01

    Ten first grade students who had responded poorly to a Tier 2 reading intervention in a response to intervention (RTI) model received an intervention of video self-modeling to improve decoding skills and sight word recognition. Students were video recorded blending and segmenting decodable words and reading sight words. Videos were edited and viewed a minimum of four times per week. Data were collected twice per week using curriculum-based measures. A single subject multiple baseline across p...

  14. The impact of thin models in music videos on adolescent girls' body dissatisfaction.

    Science.gov (United States)

    Bell, Beth T; Lawton, Rebecca; Dittmar, Helga

    2007-06-01

    Music videos are a particularly influential, new form of mass media for adolescents, which include the depiction of scantily clad female models whose bodies epitomise the ultra-thin sociocultural ideal for young women. The present study is the first exposure experiment that examines the impact of thin models in music videos on the body dissatisfaction of 16-19-year-old adolescent girls (n=87). First, participants completed measures of positive and negative affect, body image, and self-esteem. Under the guise of a memory experiment, they then either watched three music videos, listened to three songs (from the videos), or learned a list of words. Affect and body image were assessed afterwards. In contrast to the music listening and word-learning conditions, girls who watched the music videos reported significantly elevated scores on an adaptation of the Body Image States Scale after exposure, indicating increased body dissatisfaction. Self-esteem was not found to be a significant moderator of this relationship. Implications and future research are discussed.

  15. Impairment-Factor-Based Audiovisual Quality Model for IPTV: Influence of Video Resolution, Degradation Type, and Content Type

    Directory of Open Access Journals (Sweden)

    Garcia MN

    2011-01-01

    This paper presents an audiovisual quality model for IPTV services. The model estimates the audiovisual quality of standard and high definition video as perceived by the user. The model is developed for applications such as network planning and packet-layer quality monitoring. It mainly covers audio and video compression artifacts and impairments due to packet loss. The quality tests conducted for model development demonstrate a mutual influence of the perceived audio and video quality, and the predominance of the video quality for the overall audiovisual quality. The balance between audio quality and video quality, however, depends on the content, the video format, and the audio degradation type. The proposed model is based on impairment factors which quantify the quality-impact of the different degradations. The impairment factors are computed from parameters extracted from the bitstream or packet headers. For high definition video, the model predictions show a correlation of 95% with unknown subjective ratings. For comparison, we have developed a more classical audiovisual quality model which is based on the audio and video qualities and their interaction. Both quality- and impairment-factor-based models are further refined by taking the content type into account. Finally, the different model variants are compared with modeling approaches described in the literature.

  16. On the sub-model errors of a generalized one-way coupling scheme for linking models at different scales

    Science.gov (United States)

    Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong

    2017-11-01

    Multi-scale modeling of the localized groundwater flow problems in a large-scale aquifer has been extensively investigated under the context of cost-benefit controversy. An alternative is to couple the parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from the deficiency in the coupling methods, as well as from the inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at parent scale is delivered downward onto the child boundary nodes by means of the spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by the adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising to handle the multi-scale groundwater flow problems with complex stresses and heterogeneity.
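The downward transfer of parent-model heads onto child-model boundary nodes via temporal interpolation can be sketched as plain linear interpolation between parent time levels. The function names below are illustrative, not from the paper's code:

```python
def interp_head_in_time(t, t0, t1, h0, h1):
    # Linear temporal interpolation of parent-model heads (h0 at time t0,
    # h1 at time t1) onto a child-model boundary node at intermediate time t.
    w = (t - t0) / (t1 - t0)
    return (1 - w) * h0 + w * h1

def child_boundary_series(parent_times, parent_heads, child_times):
    # For each (finer) child time step, bracket it between consecutive parent
    # time levels and interpolate the head linearly. Assumes both time lists
    # are ascending and child_times lie within the parent time span.
    out = []
    k = 0
    for t in child_times:
        while k + 2 < len(parent_times) and parent_times[k + 1] <= t:
            k += 1
        out.append(interp_head_in_time(t, parent_times[k], parent_times[k + 1],
                                       parent_heads[k], parent_heads[k + 1]))
    return out
```

The temporal truncation error discussed in the abstract comes precisely from this step: the coarser the parent time levels relative to the head dynamics, the worse the linear bracketing approximates the true boundary forcing.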

  17. Numerical schemes for solving and optimizing multiscale models with age of hepatitis C virus dynamics.

    Science.gov (United States)

    Reinharz, Vladimir; Dahari, Harel; Barash, Danny

    2018-03-15

    Age-structured PDE models have been developed to study viral infection and treatment. However, they are notoriously difficult to solve. Here, we investigate the numerical solutions of an age-based multiscale model of hepatitis C virus (HCV) dynamics during antiviral therapy and compare them with an analytical approximation, namely its long-term approximation. First, starting from a simple yet flexible numerical solution that also considers an integral approximated over previous iterations, we show that the long-term approximation is an underestimate of the PDE model solution as expected since some infection events are being ignored. We then argue for the importance of having a numerical solution that takes into account previous iterations for the associated integral, making problematic the use of canned solvers. Second, we demonstrate that the governing differential equations are stiff and the stability of the numerical scheme should be considered. Third, we show that considerable gain in efficiency can be achieved by using adaptive stepsize methods over fixed stepsize methods for simulating realistic scenarios when solving multiscale models numerically. Finally, we compare between several numerical schemes for the solution of the equations and demonstrate the use of a numerical optimization scheme for the parameter estimation performed directly from the equations. Copyright © 2018 Elsevier Inc. All rights reserved.
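The stiffness point can be illustrated on the scalar test problem y' = -λy: an explicit (forward) Euler step amplifies the solution whenever hλ > 2, while the implicit (backward) Euler step is unconditionally stable. A minimal sketch, unrelated to the paper's specific HCV equations:

```python
def explicit_euler(lam, h, steps, y0=1.0):
    # y_{n+1} = y_n + h * (-lam * y_n) = (1 - h*lam) * y_n
    # Diverges when |1 - h*lam| > 1, i.e. h*lam > 2.
    y = y0
    for _ in range(steps):
        y = (1.0 - h * lam) * y
    return y

def implicit_euler(lam, h, steps, y0=1.0):
    # Solves y_{n+1} = y_n + h * (-lam * y_{n+1})  =>  y_{n+1} = y_n / (1 + h*lam)
    # Decays monotonically for any h > 0 (unconditionally stable).
    y = y0
    for _ in range(steps):
        y = y / (1.0 + h * lam)
    return y
```

For a stiff system, an adaptive explicit method is forced into tiny steps by stability rather than accuracy, which is why the authors' comparison of fixed versus adaptive step sizes and of solver schemes matters in practice.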

  18. Model Building by Coset Space Dimensional Reduction Scheme Using Ten-Dimensional Coset Spaces

    Science.gov (United States)

    Jittoh, T.; Koike, M.; Nomura, T.; Sato, J.; Shimomura, T.

    2008-12-01

    We investigate the gauge-Higgs unification models within the scheme of the coset space dimensional reduction, beginning with a gauge theory in a fourteen-dimensional spacetime where the extra-dimensional space has the structure of a ten-dimensional compact coset space. We found seventeen phenomenologically acceptable models through an exhaustive search for the candidates of the coset spaces, the gauge group in fourteen dimensions, and the fermion representation. Of the seventeen, ten models led to SO(10) (× U(1)) GUT-like models after dimensional reduction, three models led to SU(5) × U(1) GUT-like models, and four to SU(3) × SU(2) × U(1) × U(1) Standard-Model-like models. The combinations of the coset space, the gauge group in the fourteen-dimensional spacetime, and the representation of the fermion contents of such models are listed.

  19. Formalization of hydrocarbon conversion scheme of catalytic cracking for mathematical model development

    Science.gov (United States)

    Nazarova, G.; Ivashkina, E.; Ivanchina, E.; Kiseleva, S.; Stebeneva, V.

    2015-11-01

    The issue of improving the energy and resource efficiency of advanced petroleum processing can be solved by the development of an adequate mathematical model based on the physical and chemical regularities of process reactions, with a high predictive potential in advanced petroleum refining. In this work, a formalized hydrocarbon conversion scheme of catalytic cracking was developed using thermodynamic parameters of reactions defined by Density Functional Theory. The list of reactions was compiled according to the results of the feedstock structural-group composition definition, done by the n-d-m method and the Hazelvuda method, the qualitative composition of the feedstock defined by gas chromatography-mass spectrometry, and the individual composition of the catalytic cracking gasoline fraction. The formalized hydrocarbon conversion scheme of catalytic cracking will become the basis for the development of the catalytic cracking kinetic model.

  20. A New Repeating Color Watermarking Scheme Based on Human Visual Model

    Directory of Open Access Journals (Sweden)

    Chang Chin-Chen

    2004-01-01

    This paper proposes a human-visual-model-based scheme that effectively protects the intellectual copyright of digital images. In the proposed method, the theory of the visual secret sharing scheme is used to create a master watermark share and a secret watermark share. The secret watermark share is kept by the owner. The master watermark share is embedded into the host image to generate a watermarked image based on the human visual model. The proposed method conforms to all necessary conditions of an image watermarking technique. After the watermarked image is put under various attacks such as lossy compression, rotating, sharpening, blurring, and cropping, the experimental results show that the extracted digital watermark from the attacked watermarked images can still be robustly detected using the proposed method.
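The two-share construction can be sketched, in simplified form, as (2,2) XOR-based secret sharing of a binary watermark: the master share is pure noise, the secret share is the watermark XORed with it, and neither share alone reveals the mark. (The paper's scheme uses visual-cryptography share patterns; this XOR variant only illustrates the splitting idea.)

```python
import random

def make_shares(watermark_bits, seed=42):
    # (2,2) XOR-based secret sharing of a binary watermark:
    # the master share is random noise; the secret share is the
    # watermark XORed with it. Either share alone is random.
    rng = random.Random(seed)
    master = [rng.randint(0, 1) for _ in watermark_bits]
    secret = [w ^ m for w, m in zip(watermark_bits, master)]
    return master, secret

def reveal(master, secret):
    # "Stacking" the two shares (XOR) recovers the watermark exactly.
    return [m ^ s for m, s in zip(master, secret)]
```

In the watermarking setting, the master share is what gets embedded in the host image, while the secret share stays with the owner for later extraction and verification.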

  1. Model-based fault diagnosis techniques design schemes, algorithms, and tools

    CERN Document Server

    Ding, Steven

    2008-01-01

    The objective of this book is to introduce basic model-based FDI schemes, advanced analysis and design algorithms, and the needed mathematical and control theory tools at a level for graduate students and researchers as well as for engineers. This is a textbook with extensive examples and references. Most methods are given in the form of an algorithm that enables a direct implementation in a programme. Comparisons among different methods are included when possible.

  2. Circuit QED scheme for realization of the Lipkin-Meshkov-Glick model

    OpenAIRE

    Larson, Jonas

    2010-01-01

    We propose a scheme in which the Lipkin-Meshkov-Glick model is realized within a circuit QED system. An array of N superconducting qubits interacts with a driven cavity mode. In the dispersive regime, the cavity mode is adiabatically eliminated generating an effective model for the qubits alone. The characteristic long-range order of the Lipkin-Meshkov-Glick model is here mediated by the cavity field. For a closed qubit system, the inherent second order phase transition of the qubits is refle...

  3. Pedagogical models for video communication in massive open on-line courses (MOOCs: a success story

    Directory of Open Access Journals (Sweden)

    María Amata Garito

    2015-05-01

    The initiatives on MOOCs promoted in the United States by prestigious universities, such as Stanford, Harvard and MIT, and by private bodies such as Udacity, have aroused great interest worldwide; however, the teaching and learning models proposed with MOOCs do not appear to rely on solid theoretical bases and, therefore, on valuable psycho-pedagogical models. The aim of this paper is to analyze some pedagogical aspects related to video communication models in order to highlight the strong and weak points of the educational framework of these initiatives. The teaching models adopted by the International Telematic University UNINETTUNO for its video lessons, the distance assessment systems, and the teacher/tutor and student distance interaction models have reached such a quality level that it allows us to generalize this model, trigger teaching and learning processes of high quality, and lower the dropout rates of the students enrolled in MOOCs.

  4. Impulsivity, self-regulation, and pathological video gaming among youth: testing a mediation model.

    Science.gov (United States)

    Liau, Albert K; Neo, Eng Chuan; Gentile, Douglas A; Choo, Hyekyung; Sim, Timothy; Li, Dongdong; Khoo, Angeline

    2015-03-01

    Given the potential negative mental health consequences of pathological video gaming, understanding its etiology may lead to useful treatment developments. The purpose of the study was to examine the influence of impulsive and regulatory processes on pathological video gaming. Study 1 involved 2154 students from 6 primary and 4 secondary schools in Singapore. Study 2 involved 191 students from 2 secondary schools. The results of study 1 and study 2 supported the hypothesis that self-regulation is a mediator between impulsivity and pathological video gaming. Specifically, higher levels of impulsivity were related to lower levels of self-regulation, which in turn were related to higher levels of pathological video gaming. The use of impulsivity and self-regulation in predicting pathological video gaming supports the dual-system model of incorporating both impulsive and reflective systems in the prediction of self-control outcomes. The study highlights the development of self-regulatory resources as a possible avenue for future prevention and treatment research. © 2011 APJPH.
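The mediation structure tested here (impulsivity → self-regulation → pathological gaming) is commonly quantified as the product of the path coefficients a (X→M) and b (M→Y, controlling for X). A minimal sketch on synthetic data with assumed effect sizes, not the study's data:

```python
import random
from statistics import fmean

def cov(u, w):
    mu, mw = fmean(u), fmean(w)
    return fmean([(a - mu) * (b - mw) for a, b in zip(u, w)])

def mediation_ab(x, m, y):
    # a: slope of M ~ X;  b: slope of M in Y ~ X + M
    # (closed-form two-predictor OLS via covariances).
    a = cov(x, m) / cov(x, x)
    den = cov(x, x) * cov(m, m) - cov(x, m) ** 2
    b = (cov(m, y) * cov(x, x) - cov(x, y) * cov(x, m)) / den
    return a, b, a * b          # a*b = indirect (mediated) effect

# Synthetic data with assumed true paths: a = 0.5, b = 0.7, direct effect 0.1.
rng = random.Random(3)
x = [rng.gauss(0, 1) for _ in range(4000)]
m = [0.5 * xi + rng.gauss(0, 0.1) for xi in x]
y = [0.7 * mi + 0.1 * xi + rng.gauss(0, 0.1) for xi, mi in zip(x, m)]
a, b, indirect = mediation_ab(x, m, y)
```

Structural equation modelling as used in the study estimates the same paths simultaneously (with fit indices and standard errors); the product-of-coefficients sketch above only conveys the core idea.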

  5. Implementation of a gust front head collapse scheme in the WRF numerical model

    Science.gov (United States)

    Lompar, Miloš; Ćurić, Mladjen; Romanic, Djordje

    2018-05-01

    Gust fronts are thunderstorm-related phenomena usually associated with severe winds which are of great importance in theoretical meteorology, weather forecasting, cloud dynamics and precipitation, and wind engineering. An important feature of gust fronts demonstrated through both theoretical and observational studies is the periodic collapse and rebuild of the gust front head. This cyclic behavior of gust fronts results in periodic forcing of vertical velocity ahead of the parent thunderstorm, which consequently influences the storm dynamics and microphysics. This paper introduces the first gust front pulsation parameterization scheme in the WRF-ARW model (Weather Research and Forecasting-Advanced Research WRF). The influence of this new scheme on model performance is tested through investigation of the characteristics of an idealized supercell cumulonimbus cloud, as well as by studying a real case of thunderstorms above the United Arab Emirates. In the ideal case, WRF with the gust front scheme produced more precipitation and showed a different time evolution of the mixing ratios of cloud water and rain, whereas the mixing ratios of ice and graupel are almost unchanged when compared to the default WRF run without the parameterization of gust front pulsation. The included parameterization did not disturb the general characteristics of the thunderstorm cloud, such as the location of updrafts and downdrafts, and the overall shape of the cloud. New cloud cells in front of the parent thunderstorm are also evident in both ideal and real cases due to the included forcing of vertical velocity caused by the periodic collapse of the gust front head. Despite some differences between the two WRF simulations and satellite observations, the inclusion of the gust front parameterization scheme produced more cumuliform clouds and seems to match real observations better. Both WRF simulations gave poor results when it comes to matching the maximum composite radar reflectivity from radar

  6. The Effects of Video Modeling on Skill Acquisition in Children with Autism Spectrum Disorder

    Science.gov (United States)

    Kaffer, Christine L.

    2010-01-01

    The current study examined the effectiveness of a video modeling procedure on the acquisition of a basic math skill in students with Autism Spectrum Disorder (ASD) using a multiple probe across students design. Participants were four kindergarten/first grade students in a self-contained classroom in an urban public school. All met the criteria for ASD…

  7. A procurement decision model for a video rental store — A case study

    African Journals Online (AJOL)

    ... still utilise his/her experience to acquire new movie titles. The procurement decision model, however, does assist the decision making process by presenting a point of departure from which procurement decisions may be made. Keywords: Inventory management, video rental industry, procurement strategy. ORiON Vol.

  8. Traffic characterization and modeling of wavelet-based VBR encoded video

    Energy Technology Data Exchange (ETDEWEB)

    Yu Kuo; Jabbari, B. [George Mason Univ., Fairfax, VA (United States); Zafar, S. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1997-07-01

    Wavelet-based video codecs provide a hierarchical structure for the encoded data, which can cater to a wide variety of applications such as multimedia systems. The characteristics of such an encoder and its output, however, have not been well examined. In this paper, the authors investigate the output characteristics of a wavelet-based video codec and develop a composite model to capture the traffic behavior of its output video data. Wavelet decomposition transforms the input video into a hierarchical structure with a number of subimages at different resolutions and scales. The top-level wavelet in this structure contains most of the signal energy. They first describe the characteristics of traffic generated by each subimage and the effect of dropping various subimages at the encoder on the signal-to-noise ratio at the receiver. They then develop an N-state Markov model to describe the traffic behavior of the top wavelet. The behavior of the remaining wavelets is then obtained through estimation, based on the correlations between these subimages at the same level of resolution and those wavelets located at an immediate higher level. In this paper, a three-state Markov model is developed. The resulting traffic behavior, described by various statistical properties such as moments and correlations, is then utilized to validate their model.
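
    An N-state Markov traffic source of the kind described above can be sketched as follows. The three states, the transition matrix, and the per-state mean rates below are illustrative assumptions, not the values fitted in the record:

```python
import numpy as np

# Hypothetical 3-state Markov chain for the per-frame bit rate of the
# top-level subimage: states = low/medium/high activity.
P = np.array([[0.90, 0.08, 0.02],   # transition probabilities from state 0
              [0.10, 0.80, 0.10],   # from state 1
              [0.05, 0.15, 0.80]])  # from state 2
rates = np.array([2e4, 5e4, 1e5])   # assumed mean bits per frame per state

rng = np.random.default_rng(0)
state, trace = 0, []
for _ in range(1000):               # generate 1000 frames of synthetic traffic
    trace.append(rates[state])
    state = rng.choice(3, p=P[state])

trace = np.array(trace)
print(trace.mean())                 # long-run mean rate of the synthetic source
```

    In the record's composite model, a trace like this would drive the top wavelet, with the remaining subbands estimated from their correlations with it.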

  9. Video Modeling and Children with Autism Spectrum Disorder: A Survey of Caregiver Perspectives

    Science.gov (United States)

    Cardon, Teresa A.; Guimond, Amy; Smith-Treadwell, Amanda M.

    2015-01-01

    Video modeling (VM) has shown promise as an effective intervention for individuals with autism spectrum disorder (ASD); however, little is known about what may promote or prevent caregivers' use of this intervention. While VM is an effective tool to support skill development among a wide range of children in research and clinical settings, VM is…

  10. Video Modeling to Train Staff to Implement Discrete-Trial Instruction

    Science.gov (United States)

    Catania, Cynthia N.; Almeida, Daniel; Liu-Constant, Brian; Reed, Florence D. DiGennaro

    2009-01-01

    Three new direct-service staff participated in a program that used a video model to train target skills needed to conduct a discrete-trial session. Percentage accuracy in completing a discrete-trial teaching session was evaluated using a multiple baseline design across participants. During baseline, performances ranged from a mean of 12% to 63%…

  11. Modeling the Color Image and Video Quality on Liquid Crystal Displays with Backlight Dimming

    DEFF Research Database (Denmark)

    Korhonen, Jari; Mantel, Claire; Burini, Nino

    2013-01-01

    Objective image and video quality metrics focus mostly on the digital representation of the signal. However, the display characteristics are also essential for the overall Quality of Experience (QoE). In this paper, we use a model of a backlight dimming system for Liquid Crystal Display (LCD)…

  12. The Effects of Video Self-Modeling on Children with Autism Spectrum Disorder

    Science.gov (United States)

    Schmidt, Casey; Bonds-Raacke, Jennifer

    2013-01-01

    Video self-modeling (VSM) is a type of intervention that has been developed to assist students in viewing themselves successfully in a wide variety of domains. The present study was designed to analyze the effects of VSM on children with autism spectrum disorder in an academic setting, with specific focus on improving on-task behavior and…

  13. Randomized Controlled Trial of Video Self-Modeling Following Speech Restructuring Treatment for Stuttering

    Science.gov (United States)

    Cream, Angela; O'Brian, Sue; Jones, Mark; Block, Susan; Harrison, Elisabeth; Lincoln, Michelle; Hewat, Sally; Packman, Ann; Menzies, Ross; Onslow, Mark

    2010-01-01

    Purpose: In this study, the authors investigated the efficacy of video self-modeling (VSM) following speech restructuring treatment to improve the maintenance of treatment effects. Method: The design was an open-plan, parallel-group, randomized controlled trial. Participants were 89 adults and adolescents who undertook intensive speech…

  14. Thermal Error Modeling of a Machine Tool Using Data Mining Scheme

    Science.gov (United States)

    Wang, Kun-Chieh; Tseng, Pai-Chang

    In this paper the knowledge discovery technique is used to build an effective and transparent mathematic thermal error model for machine tools. Our proposed thermal error modeling methodology (called KRL) integrates the schemes of K-means theory (KM), rough-set theory (RS), and linear regression model (LR). First, to explore the machine tool's thermal behavior, an integrated system is designed to simultaneously measure the temperature ascents at selected characteristic points and the thermal deformations at the spindle nose under suitable real machining conditions. Second, the obtained data are classified by the KM method, further reduced by the RS scheme, and a linear thermal error model is established by the LR technique. To evaluate the performance of our proposed model, an adaptive neural fuzzy inference system (ANFIS) thermal error model is introduced for comparison. Finally, a verification experiment is carried out and results reveal that the proposed KRL model is effective in predicting thermal behavior in machine tools. Our proposed KRL model is transparent, easily understood by users, and can be easily programmed or modified for different machining conditions.
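
    The KM → reduction → LR pipeline can be sketched on synthetic data. Everything below (the sensor count, the drift model, and using "one representative sensor per cluster" as a crude stand-in for the rough-set reduction) is an assumption for illustration, not the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for the measurements: 8 temperature sensors over
# 100 machining cycles; the thermal error depends mainly on two of them.
T = rng.normal(size=(100, 8)).cumsum(axis=0)           # slowly drifting temps
err = 0.8 * T[:, 0] - 0.5 * T[:, 5] + rng.normal(scale=0.1, size=100)

# KM step: group the sensors by similarity of their time series (k = 3).
X, k = T.T, 3
centers = X[rng.choice(len(X), k, replace=False)]
for _ in range(20):
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])

# Crude stand-in for the RS reduction: keep one sensor per group,
# the one closest to its cluster centre.
reps = [int(np.argmin(((X - centers[j]) ** 2).sum(-1)
                      + 1e12 * (labels != j))) for j in range(k)]

# LR step: least-squares thermal error model on the reduced sensor set.
A = np.column_stack([T[:, reps], np.ones(len(T))])
coef, *_ = np.linalg.lstsq(A, err, rcond=None)
pred = A @ coef
print(np.corrcoef(pred, err)[0, 1])                    # fit quality
```

    The appeal of such a model, as the record notes, is transparency: the fitted coefficients directly expose which temperature points drive the spindle drift.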

  15. A Coupled Hidden Markov Random Field Model for Simultaneous Face Clustering and Tracking in Videos

    KAUST Repository

    Wu, Baoyuan

    2016-10-25

    Face clustering and face tracking are two areas of active research in automatic facial video processing. They have, however, long been studied separately, despite the inherent link between them. In this paper, we propose to perform simultaneous face clustering and face tracking from real-world videos. The motivation for the proposed research is that face clustering and face tracking can provide useful information and constraints to each other, and thus can bootstrap and improve each other's performance. To this end, we introduce a Coupled Hidden Markov Random Field (CHMRF) to simultaneously model face clustering, face tracking, and their interactions. We provide an effective algorithm based on constrained clustering and optimal tracking for the joint optimization of cluster labels and face tracking. We demonstrate significant improvements over state-of-the-art results in face clustering and tracking on several videos.

  16. Assessing the Tangent Linear Behaviour of Common Tracer Transport Schemes and Their Use in a Linearised Atmospheric General Circulation Model

    Science.gov (United States)

    Holdaway, Daniel; Kent, James

    2015-01-01

    The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
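
    The linearity property at issue here is easy to test numerically: a scheme S is linear iff S(u+v) = S(u) + S(v). The sketch below uses a first-order upwind step (linear) and a minmod-limited step (nonlinear) as illustrative examples; these are not the GEOS-5 PPM schemes themselves:

```python
import numpy as np

def upwind(u, c=0.5):
    # first-order upwind step on a periodic grid (linear in u)
    return u - c * (u - np.roll(u, 1))

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(abs(a), abs(b)), 0.0)

def limited(u, c=0.5):
    # schematic second-order step with a minmod-limited slope (nonlinear in u)
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    f = u + 0.5 * (1 - c) * s          # limited face value
    return u - c * (f - np.roll(f, 1))

rng = np.random.default_rng(2)
u, v = rng.normal(size=64), rng.normal(size=64)

# superposition holds for the unlimited scheme, fails once a limiter is on
err_upwind = np.max(np.abs(upwind(u + v) - (upwind(u) + upwind(v))))
err_limited = np.max(np.abs(limited(u + v) - (limited(u) + limited(v))))
print(err_upwind, err_limited)
```

    The nonzero defect of the limited scheme is exactly what makes limiters unsuitable inside a tangent linear or adjoint model, as the record concludes.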

  17. Analyzing numerics of bulk microphysics schemes in community models: warm rain processes

    Directory of Open Access Journals (Sweden)

    I. Sednev

    2012-08-01

    Implementation of bulk cloud microphysics (BLK) parameterizations in atmospheric models of different scales has gained momentum in the last two decades. Utilization of these parameterizations in cloud-resolving models, when timesteps used for the host model integration are a few seconds or less, is justified from the point of view of cloud physics. However, mechanistic extrapolation of the applicability of BLK schemes to the regional or global scales and the utilization of timesteps of hundreds up to thousands of seconds affect both physics and numerics.

    We focus on the mathematical aspects of BLK schemes, such as stability and positive-definiteness. We provide a strict mathematical definition for the problem of warm rain formation. We also derive a general analytical condition (SM-criterion) that remains valid regardless of parameterizations for warm rain processes in an explicit Eulerian time integration framework used to advance the finite-difference equations, which govern warm rain formation processes in microphysics packages in the Community Atmosphere Model and the Weather Research and Forecasting model. The SM-criterion allows for the existence of a unique positive-definite stable mass-conserving numerical solution, imposes an additional constraint on the timestep permitted due to the microphysics (like the Courant-Friedrichs-Lewy condition for the advection equation), and prohibits use of any additional assumptions not included in the strict mathematical definition of the problem under consideration.

    By analyzing the numerics of warm rain processes in source codes of BLK schemes implemented in community models we provide general guidelines regarding the appropriate choice of time steps in these models.
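
    The flavor of such a timestep constraint can be shown on a toy sink equation dq/dt = -k·q for cloud water: forward Euler gives q_{n+1} = q_n·(1 - k·Δt), which stays non-negative only if Δt ≤ 1/k. The rate constant below is an illustrative assumption, not a criterion from the paper:

```python
# Toy warm-rain sink dq/dt = -k*q; explicit Euler is positive-definite
# only when dt <= 1/k, analogous in spirit to the SM-criterion.
k = 1e-2             # assumed conversion rate (1/s)
q0 = 1e-3            # initial cloud-water mixing ratio (kg/kg)

def integrate(dt, steps):
    q = q0
    for _ in range(steps):
        q = q * (1.0 - k * dt)
    return q

q_ok = integrate(dt=50.0, steps=40)    # 50 s  < 1/k = 100 s: decays smoothly
q_bad = integrate(dt=300.0, steps=40)  # 300 s > 1/k: sign flips and blows up
print(q_ok, q_bad)
```

    With the stable timestep the mixing ratio decays monotonically to zero; with the unstable one the update factor is negative, so the "mixing ratio" oscillates in sign and grows without bound.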

  18. The Effects of Video Self-Modeling on the Decoding Skills of Children At Risk for Reading Disabilities

    OpenAIRE

    Ayala, Sandra M

    2010-01-01

    Ten first grade students, participating in a Tier II response to intervention (RTI) reading program received an intervention of video self modeling to improve decoding skills and sight word recognition. The students were video recorded blending and segmenting decodable words, and reading sight words taken directly from their curriculum instruction. Individual videos were recorded and edited to show students successfully and accurately decoding words and practicing sight word recognition. Each...

  19. Conceptual Model for the Design of a Serious Video Game Promoting Self-Management among Youth with Type 1 Diabetes

    OpenAIRE

    Thompson, Debbe; Baranowski, Tom; Buday, Richard

    2010-01-01

    Video games are a popular form of entertainment. Serious video games for health attempt to use entertainment to promote health behavior change. When designed within a framework informed by behavioral science and supported by commercial game-design principles, serious video games for health have the potential to be an effective method for promoting self-management behaviors among youth with diabetes. This article presents a conceptual model of how this may be achieved. It concludes by identify...

  20. A Secure and Robust Compressed Domain Video Steganography for Intra- and Inter-Frames Using Embedding-Based Byte Differencing (EBBD) Scheme.

    Directory of Open Access Journals (Sweden)

    Tarik Idbeaa

    This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most current video steganographic techniques, which take into account only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, the EBBD basically deals with two security concepts: data encryption and data concealing. Hence, during the embedding process, secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD with a better trade-off in terms of imperceptibility and payload, as compared with previous techniques, while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values.

  1. A Secure and Robust Compressed Domain Video Steganography for Intra- and Inter-Frames Using Embedding-Based Byte Differencing (EBBD) Scheme.

    Science.gov (United States)

    Idbeaa, Tarik; Abdul Samad, Salina; Husain, Hafizah

    2016-01-01

    This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most current video steganographic techniques, which take into account only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, the EBBD basically deals with two security concepts: data encryption and data concealing. Hence, during the embedding process, secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD with a better trade-off in terms of imperceptibility and payload, as compared with previous techniques, while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values.
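
    The key-seeded selection of candidate coefficients inside an 8 × 8 block can be illustrated with a much simpler embedding than EBBD's byte differencing: the sketch below just hides bits in coefficient LSBs, using a PRNG seeded with the shared key to pick positions. It is an illustration of the selection idea only, not the EBBD algorithm:

```python
import numpy as np

def embed(block, bits, key):
    # pick candidate AC positions (skip the DC term at index 0) with a
    # key-seeded PRNG, then hide one bit in each coefficient's LSB
    rng = np.random.default_rng(key)
    pos = rng.choice(np.arange(1, 64), size=len(bits), replace=False)
    flat = block.flatten()
    flat[pos] = (flat[pos] & ~1) | np.array(bits)
    return flat.reshape(8, 8)

def extract(block, n_bits, key):
    # the same key regenerates the same positions
    rng = np.random.default_rng(key)
    pos = rng.choice(np.arange(1, 64), size=n_bits, replace=False)
    return [int(b) for b in block.flatten()[pos] & 1]

rng = np.random.default_rng(0)
qtc = rng.integers(-20, 20, size=(8, 8))   # stand-in quantized coefficients
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(qtc, secret, key=42)
print(extract(stego, len(secret), key=42))  # recovered secret bits
```

    Without the correct key the extractor reads LSBs at the wrong positions, which is the (modest) security property the pseudo-random selection provides.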

  2. Intention to Purchase Products under Volume Discount Scheme: A Conceptual Model and Research Propositions

    Directory of Open Access Journals (Sweden)

    Mohammad Iranmanesh

    2014-12-01

    Many standard brands sell products under the volume discount scheme (VDS), as more and more consumers are fond of purchasing products under this scheme. Despite volume discounts being commonly practiced, there is a dearth of research, both conceptual and empirical, focusing on purchase characteristics factors and consumer internal evaluation concerning the purchase of products under VDS. To attempt to fill this void, this article develops a conceptual model on VDS with the intention of delineating the influence of the purchase characteristics factors on the consumer intention to purchase products under VDS and provides an explanation of their effects through consumer internal evaluation. Finally, the authors discuss the managerial implications of their research and offer guidelines for future empirical research.

  3. Relaxation approximations to second-order traffic flow models by high-resolution schemes

    International Nuclear Information System (INIS)

    Nikolos, I.K.; Delis, A.I.; Papageorgiou, M.

    2015-01-01

    A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed to a semi-linear diagonalizable problem with linear characteristic variables and stiff source terms, with the attractive feature that neither Riemann solvers nor characteristic decompositions are needed. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is given to a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers.

  4. Relaxation approximations to second-order traffic flow models by high-resolution schemes

    Energy Technology Data Exchange (ETDEWEB)

    Nikolos, I.K.; Delis, A.I.; Papageorgiou, M. [School of Production Engineering and Management, Technical University of Crete, University Campus, Chania 73100, Crete (Greece)

    2015-03-10

    A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed to a semi-linear diagonalizable problem with linear characteristic variables and stiff source terms, with the attractive feature that neither Riemann solvers nor characteristic decompositions are needed. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is given to a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers.
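
    The relaxation idea can be sketched on the scalar Burgers equation u_t + (u²/2)_x = 0 rather than a traffic model, and with plain first-order upwinding in place of the record's fifth-order WENO reconstruction. The Jin-Xin system u_t + v_x = 0, v_t + a·u_x = (f(u) - v)/ε has frozen characteristic speeds ±√a, so no Riemann solver is needed:

```python
import numpy as np

# Jin-Xin relaxation sketch for Burgers' equation (illustrative parameters).
N, a, eps = 200, 2.0, 1e-6
dx = 1.0 / N
c = np.sqrt(a)                              # linear characteristic speed
dt = 0.4 * dx / c                           # CFL on the frozen speeds
x = np.linspace(0.0, 1.0, N, endpoint=False)
u = np.where((x > 0.2) & (x < 0.5), 1.0, 0.0)   # square pulse
f = lambda u: 0.5 * u * u
v = f(u)                                    # start at local equilibrium

for _ in range(200):
    wp, wm = v + c * u, v - c * u           # characteristic variables
    # upwind transport: w+ moves right, w- moves left (periodic grid)
    wp = wp - c * dt / dx * (wp - np.roll(wp, 1))
    wm = wm + c * dt / dx * (np.roll(wm, -1) - wm)
    u, v = (wp - wm) / (2 * c), (wp + wm) / 2
    v = (v + dt / eps * f(u)) / (1 + dt / eps)  # implicit relaxation step

print(u.min(), u.max())
```

    The subcharacteristic condition |f'(u)| ≤ √a holds here (|u| ≤ 1 < √2), and the stiff source is handled with a trivial implicit update, which is exactly the convenience the record highlights: only the flux, the source, and a speed estimate are required.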

  5. A New Family of Interpolatory Non-Stationary Subdivision Schemes for Curve Design in Geometric Modeling

    Science.gov (United States)

    Conti, Costanza; Romani, Lucia

    2010-09-01

    Univariate subdivision schemes are efficient iterative methods to generate smooth limit curves starting from a sequence of arbitrary points. The aim of this paper is to present and investigate a new family of 6-point interpolatory non-stationary subdivision schemes capable of reproducing important curves of great interest in geometric modeling and engineering applications when starting from uniformly spaced initial samples. This new family can reproduce conic sections since it is obtained by a parameter-dependent affine combination of the cubic exponential B-spline symbols generating functions in the space V_{4,γ} = {1, x, e^(tx), e^(-tx)} with t ∈ {0, s, is | s > 0}. Moreover, the free parameter can be chosen to reproduce also other interesting analytic curves by imposing the algebraic conditions for the reproduction of an additional pair of exponential polynomials, giving rise to different extensions of the space V_{4,γ}.
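
    The mechanics of interpolatory subdivision are easiest to see with the classical stationary 4-point Dubuc-Deslauriers scheme (not the paper's 6-point non-stationary family): old points are kept, and each new midpoint is the affine combination (-p[i-1] + 9·p[i] + 9·p[i+1] - p[i+2])/16:

```python
import numpy as np

def subdivide(points, levels=4):
    # classical 4-point Dubuc-Deslauriers step on a closed (periodic) polygon
    p = np.asarray(points, float)
    for _ in range(levels):
        mid = (-np.roll(p, 1, 0) + 9 * p + 9 * np.roll(p, -1, 0)
               - np.roll(p, -2, 0)) / 16.0
        q = np.empty((2 * len(p), p.shape[1]))
        q[0::2], q[1::2] = p, mid          # keep old points, insert midpoints
        p = q
    return p

square = [[0, 0], [1, 0], [1, 1], [0, 1]]
curve = subdivide(square, levels=5)
print(curve.shape)                          # (128, 2): 4 points doubled 5 times
```

    Because the scheme is interpolatory, every original vertex survives all refinement levels; the non-stationary schemes of the record change the midpoint weights per level so that, e.g., a square's control points can converge to a circle.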

  6. Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models

    NARCIS (Netherlands)

    van Elburg, R.A.J.; van Ooyen, A.

    2009-01-01

    An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double-exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on…
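
    The core of any such event-based scheme is that, between synaptic events, both the membrane potential and the current have closed-form solutions, so the state can be jumped exactly from event to event with no fixed timestep. The sketch below handles only a single exponentially decaying current and omits threshold/spike handling; time constants and weights are illustrative:

```python
import numpy as np

tau_m, tau_s = 20.0, 5.0             # membrane / synaptic time constants (ms)

def advance(V, I, dt):
    # exact solution of dV/dt = -V/tau_m + I(t),  I(t) = I*exp(-t/tau_s)
    A = I / (1.0 / tau_m - 1.0 / tau_s)
    V_new = (V - A) * np.exp(-dt / tau_m) + A * np.exp(-dt / tau_s)
    return V_new, I * np.exp(-dt / tau_s)

V, I, t = 0.0, 0.0, 0.0
for t_event, weight in [(5.0, 0.8), (12.0, 0.5), (30.0, 0.7)]:
    V, I = advance(V, I, t_event - t)   # jump exactly to the next event
    I += weight                         # synaptic event: step in the current
    t = t_event

print(V)                                # membrane potential after last event
```

    The Carnevale-Hines scheme extends this idea to the double-exponential inhibitory case, where finding the threshold-crossing time is the hard part.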

  7. Boosting flood warning schemes with fast emulator of detailed hydrodynamic models

    Science.gov (United States)

    Bellos, V.; Carbajal, J. P.; Leitao, J. P.

    2017-12-01

    Floods are among the most destructive catastrophic events, and their frequency has increased over the last decades. To reduce flood impacts and risks, flood warning schemes are installed in flood-prone areas. Frequently, these schemes are based on numerical models which quickly provide predictions of water levels and other relevant observables. However, the high complexity of flood wave propagation in the real world and the need for accurate predictions in urban environments or in floodplains hinder the use of detailed simulators. This sets the difficulty: we need fast predictions that meet the accuracy requirements. Most physics-based detailed simulators, although accurate, will not fulfill the speed demand, even if high-performance computing techniques are used (the required simulation time is on the order of minutes or hours). As a consequence, most flood warning schemes are based on coarse ad-hoc approximations that cannot take advantage of a detailed hydrodynamic simulation. In this work, we present a methodology for developing a flood warning scheme using a Gaussian-process-based emulator of a detailed hydrodynamic model. The methodology consists of two main stages: 1) an offline stage to build the emulator; 2) an online stage using the emulator to predict and generate warnings. The offline stage consists of the following steps: a) definition of the critical sites of the area under study, and specification of the observables to predict at those sites, e.g. water depth, flow velocity, etc.; b) generation of a detailed simulation dataset to train the emulator; c) calibration of the required parameters (if measurements are available). The online stage is carried out using the emulator to predict the relevant observables quickly, while the detailed simulator is used in parallel to verify key predictions of the emulator. The speed gain of the emulator also allows uncertainty in the predictions to be quantified using ensemble methods. The above methodology is applied in real…
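
    A minimal Gaussian-process emulator of the offline/online kind described above can be sketched in a few lines. The "detailed model" here is a cheap analytic placeholder, and the kernel, length scale, and training points are assumptions for illustration:

```python
import numpy as np

def detailed_model(rain):
    # placeholder for an expensive hydrodynamic run: peak depth vs rainfall
    return np.log1p(rain) + 0.1 * rain

def rbf(a, b, ell=20.0):
    # squared-exponential covariance between two 1-D input sets
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

# offline stage: a handful of detailed simulations at chosen forcings
x_train = np.array([5.0, 20.0, 40.0, 60.0, 80.0])   # rainfall intensities
y_train = detailed_model(x_train)                    # "simulated" peak depths

K = rbf(x_train, x_train) + 1e-6 * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)

# online stage: near-instant prediction at a new forcing value
x_new = np.array([30.0])
mean = rbf(x_new, x_train) @ alpha                   # GP posterior mean
print(mean[0], detailed_model(x_new)[0])
```

    In the real scheme the training outputs come from the detailed simulator, and the GP's predictive variance (omitted here) flags inputs where a verification run of the full model is worthwhile.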

  8. Cognitive Modeling of Video Game Player User Experience

    Science.gov (United States)

    Bohil, Corey J.; Biocca, Frank A.

    2010-01-01

    This paper argues for the use of cognitive modeling to gain a detailed and dynamic look into user experience during game play. Applying cognitive models to game play data can help researchers understand a player's attentional focus, memory status, learning state, and decision strategies (among other things) as these cognitive processes occurred throughout game play. This is a stark contrast to the common approach of trying to assess the long-term impact of games on cognitive functioning after game play has ended. We describe what cognitive models are, what they can be used for, and how game researchers could benefit by adopting these methods. We also provide details of a single model - based on decision field theory - that has been successfully applied to data sets from memory, perception, and decision making experiments, and has recently found application in real world scenarios. We examine possibilities for applying this model to game-play data.

  9. Video Modeling of SBIRT for Alcohol Use Disorders Increases Student Empathy in Standardized Patient Encounters.

    Science.gov (United States)

    Crisafio, Anthony; Anderson, Victoria; Frank, Julia

    2017-05-08

    The purpose of this study was to assess the usefulness of adding video models of brief alcohol assessment and counseling to a standardized patient (SP) curriculum that covers and tests acquisition of this skill. The authors conducted a single-center, retrospective cohort study of third- and fourth-year medical students between 2013 and 2015. All students completed a standardized patient (SP) encounter illustrating the diagnosis of alcohol use disorder, followed by an SP exam on the same topic. Beginning in August 2014, the authors supplemented the existing formative SP exercise on problem drinking with one of two 5-min videos demonstrating screening, brief intervention, and referral for treatment (SBIRT). P values and Z tests were performed to evaluate differences between students who did and did not see the video in knowledge and skills related to alcohol use disorders. One hundred ninety-four students were included in this analysis. Compared to controls, subjects did not differ in their ability to uncover and accurately characterize an alcohol problem during a standardized encounter (mean exam score 41.29 vs 40.93, subject vs control, p = 0.539). However, the SPs' ratings of students' expressions of empathy were significantly higher for the group who saw the video (81.63 vs 69.79%, p < 0.05). The findings did not confirm the original hypothesis that the videos would improve students' recognition and knowledge of alcohol-related conditions. However, feedback from the SPs produced the serendipitous finding that the communication skills demonstrated in the videos had a sustained effect in enhancing students' professional behavior.

  10. SMAFS, Steady-state analysis Model for Advanced Fuel cycle Schemes

    International Nuclear Information System (INIS)

    LEE, Kwang-Seok

    2006-01-01

    1 - Description of program or function: The model was developed as a part of the study 'Advanced Fuel Cycles and Waste Management', which was performed during 2003-2005 by an ad-hoc expert group under the Nuclear Development Committee of the OECD/NEA. The model was designed for the efficient conduct of nuclear fuel cycle scheme cost analyses. It is simple, transparent and offers users the capability to track down the cost analysis results. All the fuel cycle schemes considered in the model are represented in a graphic format and all values related to a fuel cycle step are shown in the graphic interface, i.e., there are no hidden values embedded in the calculations. All data on the fuel cycle schemes considered in the study, including mass flows, waste generation, cost data, and other data such as activities, decay heat and neutron sources of spent fuel and high-level waste over time, are included in the model and can be displayed. The user can easily modify the values of mass flows and/or cost parameters and see the corresponding changes in the results. The model calculates: front-end fuel cycle mass flows, such as requirements for enrichment and conversion services and natural uranium; the mass of waste, based on the waste generation parameters and the mass flow; and all costs. It performs Monte Carlo simulations by varying the values of all unit costs within their respective ranges (from lower to upper bounds). 2 - Methods: In the Monte Carlo simulation, it is assumed that all unit costs follow a triangular probability distribution function, i.e., the probability that the unit cost has a given value increases linearly from its lower bound to the nominal value and then decreases linearly to its upper bound. 3 - Restrictions on the complexity of the problem: The limit on the Monte Carlo iterations is that of an Excel worksheet, i.e. 65,536 rows

  11. Conceptual model for the design of a serious video game promoting self-management among youth with type 1 diabetes.

    Science.gov (United States)

    Thompson, Debbe; Baranowski, Tom; Buday, Richard

    2010-05-01

    Video games are a popular form of entertainment. Serious video games for health attempt to use entertainment to promote health behavior change. When designed within a framework informed by behavioral science and supported by commercial game-design principles, serious video games for health have the potential to be an effective method for promoting self-management behaviors among youth with diabetes. This article presents a conceptual model of how this may be achieved. It concludes by identifying research needed to refine our knowledge regarding how to develop effective serious video games for health. (c) 2010 Diabetes Technology Society.

  12. APC-PC Combined Scheme in Gilbert Two State Model: Proposal and Study

    Science.gov (United States)

    Bulo, Yaka; Saring, Yang; Bhunia, Chandan Tilak

    2017-04-01

    In an automatic repeat request (ARQ) scheme, a packet is retransmitted if it gets corrupted due to transmission errors caused by the channel. However, an erroneous packet may contain both erroneous bits and correct bits, and hence it may still contain useful information. The receiver may be able to combine this information from multiple erroneous copies to recover the correct packet. Packet combining (PC) is a simple and elegant scheme of error correction in transmitted packets, in which two received copies are XORed to obtain the locations of erroneous bits. Thereafter, the packet is corrected by inverting the bits located as erroneous. Aggressive packet combining (APC) is a logical extension of PC, primarily designed for wireless communication with the objective of correcting errors with low latency. PC offers higher throughput than APC, but PC does not correct double-bit errors if they occur in the same bit location of both erroneous copies of the packet. A hybrid technique is proposed to utilize the advantages of both APC and PC while attempting to remove the limitations of both. In the proposed technique, the application of APC-PC to the Gilbert two-state model has been studied. The simulation results show that the proposed technique offers better throughput than conventional APC and a lower packet error rate than the PC scheme.
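
    The PC idea can be sketched directly: XOR two received copies to find the positions where they disagree, then try inverting candidate positions until an error check passes. The sketch uses CRC-32 as the check and assumes the CRC itself arrives intact, purely for illustration:

```python
import itertools
import zlib
import numpy as np

rng = np.random.default_rng(3)
packet = rng.integers(0, 2, size=64).astype(np.uint8)
crc = zlib.crc32(np.packbits(packet).tobytes())   # sent with the packet

copy1, copy2 = packet.copy(), packet.copy()
copy1[[10, 40]] ^= 1                 # two bit errors in the first copy
copy2[[25]] ^= 1                     # one bit error in the second copy

# XOR marks every bit that is wrong in exactly one of the two copies
candidates = np.flatnonzero(copy1 ^ copy2)
recovered = None
for r in range(len(candidates) + 1):
    for pos in itertools.combinations(candidates, r):
        trial = copy1.copy()
        trial[list(pos)] ^= 1        # invert a subset of candidate bits
        if zlib.crc32(np.packbits(trial).tobytes()) == crc:
            recovered = trial
            break
    if recovered is not None:
        break

print(np.array_equal(recovered, packet))
```

    Note the limitation the record mentions: a bit corrupted at the same position in both copies never appears in the XOR, so plain PC cannot locate it; that is the gap APC and the proposed hybrid address.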

  13. Hybrid advection scheme for 3-dimensional atmospheric models. Testing and application for a study of NOx transport

    Energy Technology Data Exchange (ETDEWEB)

    Zubov, V.A.; Rozanov, E.V. [Main Geophysical Observatory, St.Petersburg (Russian Federation); Schlesinger, M.E.; Andronova, N.G. [Illinois Univ., Urbana-Champaign, IL (United States). Dept. of Atmospheric Sciences

    1997-12-31

    The problems of ozone depletion, climate change and atmospheric pollution strongly depend on the processes of production, destruction and transport of chemical species. A hybrid transport scheme was developed, consisting of a semi-Lagrangian scheme for horizontal advection and the Prather scheme for vertical transport, which has been used in the Atmospheric Chemical Transport model to calculate the distributions of different chemical species. The performance of the new hybrid scheme has been evaluated in comparison with other transport schemes on the basis of specially designed tests. The seasonal cycle of the distribution of N2O simulated by the model, as well as the dispersion of NOx exhausted from subsonic aircraft, are in good agreement with published data. (author) 8 refs.
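
    The semi-Lagrangian half of such a hybrid can be sketched in 1-D: trace each grid point back along the wind to its departure point and interpolate the tracer there, which remains stable even for Courant numbers above 1. Linear interpolation stands in for the higher-order interpolants used in practice:

```python
import numpy as np

# 1-D periodic semi-Lagrangian advection with constant wind (illustrative).
N = 128
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
u_wind = 1.0
dt = 2.5 * dx                        # Courant number 2.5: fine for this scheme
q = np.exp(-10 * (x - np.pi) ** 2)   # tracer blob

for _ in range(100):
    x_dep = (x - u_wind * dt) % (2 * np.pi)        # departure points
    q = np.interp(x_dep, x, q, period=2 * np.pi)   # interpolate tracer there

print(q.max())                       # peak is damped by interpolation error
```

    Linear interpolation keeps the tracer non-negative but is diffusive (the peak decays), which is why schemes like Prather's moment-conserving method are preferred for the vertical transport in the record's hybrid.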

  14. A land surface scheme for atmospheric and hydrologic models: SEWAB (Surface Energy and Water Balance)

    Energy Technology Data Exchange (ETDEWEB)

    Mengelkamp, H.T.; Warrach, K.; Raschke, E. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Atmosphaerenphysik

    1997-12-31

    A soil-vegetation-atmosphere-transfer scheme is presented here which solves the coupled system of the Surface Energy and Water Balance (SEWAB) equations considering partly vegetated surfaces. It is based on the one-layer concept for vegetation. In the soil the diffusion equations for heat and moisture are solved on a multi-layer grid. SEWAB has been developed to serve as a land-surface scheme for atmospheric circulation models. Being forced with atmospheric data from either simulations or measurements, it calculates surface and subsurface runoff that can serve as input to hydrologic models. The model has been validated with field data from the FIFE experiment and has participated in the PILPS project for intercomparison of land-surface parameterization schemes. From these experiments we feel that SEWAB partitions the radiation and precipitation into sensible and latent heat fluxes, as well as into runoff and soil moisture storage, reasonably well. (orig.) [Deutsch] A land-surface scheme is presented that describes the transport of heat and water between the soil, the vegetation and the atmosphere, taking partly vegetated ground into account. In the soil, the diffusion equations for heat and moisture are solved on a multi-layer grid. The SEWAB (Surface Energy and Water Balance) scheme describes land-surface processes in atmospheric models and computes surface runoff and base flow, which can be used as input data for hydrologic models. The model was calibrated with data from the FIFE experiment and has taken part in intercomparison experiments for land-surface schemes within the PILPS project. These showed that SEWAB's partitioning of incoming radiation and precipitation into sensible and latent heat fluxes, as well as into runoff and soil moisture storage, agrees quite well with the observed data. (orig.)
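The multi-layer soil diffusion component can be caricatured with a one-dimensional explicit heat-diffusion step. This is a minimal sketch under assumed simplifications (uniform grid, constant diffusivity, prescribed surface temperature, zero-flux bottom boundary); SEWAB's actual discretization and soil properties are not specified in the record.

```python
import numpy as np

def soil_heat_step(T, alpha, dz, dt, T_surface):
    """One explicit (FTCS) step of the 1-D soil heat diffusion equation
    dT/dt = alpha * d2T/dz2, with a prescribed surface temperature at
    the top node and a zero-flux condition at the bottom.
    Stability requires alpha*dt/dz**2 <= 0.5."""
    Tn = T.copy()
    Tn[0] = T_surface
    Tn[1:-1] = T[1:-1] + alpha * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    Tn[-1] = Tn[-2]                   # zero-flux bottom boundary
    return Tn
```

With the surface temperature held fixed, repeated steps relax the whole column toward that value, which is a quick way to check the boundary handling.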

  15. Impact of an improved shortwave radiation scheme in the MAECHAM5 General Circulation Model

    Directory of Open Access Journals (Sweden)

    J. J. Morcrette

    2007-05-01

    Full Text Available In order to improve the representation of ozone absorption in the stratosphere of the MAECHAM5 general circulation model, the spectral resolution of the shortwave radiation parameterization used in the model has been increased from 4 to 6 bands. Two 20-year simulations with the general circulation model have been performed, one with the standard and the other with the newly introduced parameterization, to evaluate the temperature and dynamical changes arising from the two different representations of the shortwave radiative transfer. In the simulation with the increased spectral resolution in the radiation parameterization, a significant warming of almost the entire model domain is reported. At the summer stratopause the temperature increase is about 6 K and alleviates the cold bias present in the model when the standard radiation scheme is used. These general circulation model results are consistent both with previous validation of the radiation scheme and with the offline clear-sky comparison performed in the current work with a discrete-ordinate 4-stream scattering line-by-line radiative transfer model. The offline validation shows a substantial reduction of the daily averaged shortwave heating rate bias (a 1–2 K/day cooling) that occurs for the standard radiation parameterization in the upper stratosphere, present under a range of atmospheric conditions. Therefore, the 6-band shortwave radiation parameterization is considered to be better suited for the representation of the ozone absorption in the stratosphere than the 4-band parameterization. Concerning the dynamical response in the general circulation model, it is found that the reported warming at the summer stratopause induces stronger zonal mean zonal winds in the middle atmosphere. These stronger zonal mean zonal winds thereafter appear to produce a dynamical feedback that results in a dynamical warming (cooling) of the polar winter (summer) mesosphere, caused by an

  16. Using Video Modeling to Teach Reciprocal Pretend Play to Children with Autism

    Science.gov (United States)

    MacDonald, Rebecca; Sacramone, Shelly; Mansfield, Renee; Wiltz, Kristine; Ahearn, William H

    2009-01-01

    The purpose of the present study was to use video modeling to teach children with autism to engage in reciprocal pretend play with typically developing peers. Scripted play scenarios involving various verbalizations and play actions with adults as models were videotaped. Two children with autism were each paired with a typically developing child, and a multiple-probe design across three play sets was used to evaluate the effects of the video modeling procedure. Results indicated that both children with autism and the typically developing peers acquired the sequences of scripted verbalizations and play actions quickly and maintained this performance during follow-up probes. In addition, probes indicated an increase in the mean number of unscripted verbalizations as well as reciprocal verbal interactions and cooperative play. These findings are discussed as they relate to the development of reciprocal pretend-play repertoires in young children with autism. PMID:19721729

  17. Procedures and Compliance of a Video Modeling Applied Behavior Analysis Intervention for Brazilian Parents of Children with Autism Spectrum Disorders

    Science.gov (United States)

    Bagaiolo, Leila F.; Mari, Jair de J.; Bordini, Daniela; Ribeiro, Tatiane C.; Martone, Maria Carolina C.; Caetano, Sheila C.; Brunoni, Decio; Brentani, Helena; Paula, Cristiane S.

    2017-01-01

    Video modeling using applied behavior analysis techniques is one of the most promising and cost-effective ways for parents of children with autism spectrum disorder to improve their children's social skills. The main objectives were: (1) to elaborate/describe videos to improve eye contact and joint attention, and to decrease disruptive behaviors of autism spectrum…

  18. Towards social autonomous vehicles: Efficient collision avoidance scheme using Richardson's arms race model.

    Science.gov (United States)

    Riaz, Faisal; Niazi, Muaz A

    2017-01-01

    This paper presents the concept of a social autonomous agent to conceptualize Autonomous Vehicles (AVs) that interact with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions (mentalizing) and copying the actions of each other (mirroring). The Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of the mentalizing and mirroring modules of the proposed social agent, a tailored mathematical model based on Richardson's arms race model has also been presented. The performance of the proposed social agent has been validated at two levels: first, it has been simulated using NetLogo, a standard agent-based modeling tool, and second, at a practical level, using a prototype AV. The simulation results have confirmed that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than a random-walk-based collision avoidance strategy in congested flock-like topologies. The practical results have confirmed that the proposed scheme can avoid rear-end and lateral collisions with an efficiency of 99.876%, as compared with the existing state-of-the-art IEEE 802.11n-based mirroring-neuron collision avoidance scheme.
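The classical Richardson equations underlying the tailored model can be written as two coupled ODEs and integrated directly. This is an illustrative sketch with arbitrary coefficients; the paper's tailored version for the mentalizing and mirroring modules is not reproduced here.

```python
def richardson_step(x, y, a, b, m, n, g, h, dt):
    """One Euler step of Richardson's arms race model:
    dx/dt = a*y - m*x + g   (reaction, fatigue, grievance terms)
    dy/dt = b*x - n*y + h"""
    dx = a * y - m * x + g
    dy = b * x - n * y + h
    return x + dt * dx, y + dt * dy

def simulate(steps=20000, dt=0.01, a=0.5, b=0.5, m=1.0, n=1.0, g=1.0, h=1.0):
    """Integrate from rest; stable whenever m*n > a*b."""
    x = y = 0.0
    for _ in range(steps):
        x, y = richardson_step(x, y, a, b, m, n, g, h, dt)
    return x, y
```

With the symmetric coefficients above, the trajectories settle at the equilibrium x* = y* = 2, obtained by setting both derivatives to zero.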

  19. Towards social autonomous vehicles: Efficient collision avoidance scheme using Richardson’s arms race model

    Science.gov (United States)

    Niazi, Muaz A.

    2017-01-01

    This paper presents the concept of a social autonomous agent to conceptualize such Autonomous Vehicles (AVs), which interact with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions, i.e. mentalizing, and copying the actions of each other, i.e. mirroring. The Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of the mentalizing and mirroring modules of the proposed social agent, a tailored mathematical model of Richardson's arms race model has also been presented. The performance of the proposed social agent has been validated at two levels: first, it has been simulated using NetLogo, a standard agent-based modeling tool, and second, at a practical level, using a prototype AV. The simulation results have confirmed that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than a random-walk-based collision avoidance strategy in congested flock-like topologies. The practical results have confirmed that the proposed scheme can avoid rear-end and lateral collisions with an efficiency of 99.876%, as compared with the existing state-of-the-art IEEE 802.11n-based mirroring-neuron collision avoidance scheme. PMID:29040294

  20. Towards social autonomous vehicles: Efficient collision avoidance scheme using Richardson's arms race model.

    Directory of Open Access Journals (Sweden)

    Faisal Riaz

    Full Text Available This paper presents the concept of a social autonomous agent to conceptualize such Autonomous Vehicles (AVs), which interacts with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions, i.e. mentalizing and copying the actions of each other, i.e. mirroring. Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of mentalizing and mirroring modules of proposed social agent, a tailored mathematical model of the Richardson's arms race model has also been presented. The performance of the proposed social agent has been validated at two levels-firstly it has been simulated using NetLogo, a standard agent-based modeling tool and also, at a practical level using a prototype AV. The simulation results have confirmed that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than Random walk based collision avoidance strategy in congested flock-like topologies. Whereas practical results have confirmed that the proposed scheme can avoid rear end and lateral collisions with the efficiency of 99.876% as compared with the IEEE 802.11n-based existing state of the art mirroring neuron-based collision avoidance scheme.

  1. A design of mathematical modelling for the mudharabah scheme in shariah insurance

    Science.gov (United States)

    Cahyandari, R.; Mayaningsih, D.; Sukono

    2017-01-01

    The Indonesian Shariah Insurance Association (AASI) believes that 2014 was the year of Indonesian shariah insurance, since its growth was above that of conventional insurance. In December 2013, 43% growth was recorded for shariah insurance, while conventional insurance grew by only 20%. This means that shariah insurance has tremendous potential to keep growing in the future. In addition, growth can be predicted from the number of conventional insurance companies that open shariah divisions, along with the development of Islamic banking, which automatically demands the role of shariah insurance to protect assets and banking transactions. The development of shariah insurance should be accompanied by the development of premium fund management mechanisms, in order to create innovation in shariah insurance products that benefit society. The development of premium fund management models shows positive progress through the emergence of Mudharabah, Wakala, Hybrid (Mudharabah-Wakala), and Wakala-Waqf. However, the term 'model' in this paper refers to an operational model in the form of a scheme of a management mechanism. Therefore, this paper describes a mathematical model for a premium fund management scheme, especially for the Mudharabah concept. Mathematical modeling is required for an analysis process that can be used to predict risks that a company could face in the future, so that the company can take precautionary policies to minimize those risks.
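As a purely hypothetical illustration of the kind of scheme being modeled, a mudharabah surplus split at a pre-agreed ratio reduces to simple arithmetic. The ratio and amounts below are invented for the example, not taken from the paper.

```python
def mudharabah_split(investment_profit, participant_ratio):
    """Split investment profit between the participants (capital
    providers) and the operator (mudharib) at a pre-agreed ratio.
    All numbers used with this function are hypothetical."""
    if not 0.0 <= participant_ratio <= 1.0:
        raise ValueError("participant_ratio must be in [0, 1]")
    participant_share = investment_profit * participant_ratio
    operator_share = investment_profit - participant_share
    return participant_share, operator_share
```

By construction the two shares always sum back to the total profit, which is the basic accounting identity any fuller model must preserve.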

  2. Multimodal Semantic Analysis and Annotation for Basketball Video

    Directory of Open Access Journals (Sweden)

    Liu Song

    2006-01-01

    Full Text Available This paper presents a new multiple-modality method for extracting semantic information from basketball video. The visual, motion, and audio information are extracted from the video to first generate low-level video segmentation and classification. Domain knowledge is further exploited for detecting interesting events in the basketball video. For video, both visual and motion prediction information are utilized in the shot and scene boundary detection algorithm, followed by scene classification. For audio, audio keysounds are sets of specific audio sounds related to semantic events, and a classification method based on a hidden Markov model (HMM) is used for audio keysound identification. Subsequently, by analyzing the multimodal information, the positions of potential semantic events, such as "foul" and "shot at the basket," are located with additional domain knowledge. Finally, a video annotation is generated according to MPEG-7 multimedia description schemes (MDSs). Experimental results demonstrate the effectiveness of the proposed method.

  3. Numerical Modeling of Deep Mantle Convection: Advection and Diffusion Schemes for Marker Methods

    Science.gov (United States)

    Mulyukova, Elvira; Dabrowski, Marcin; Steinberger, Bernhard

    2013-04-01

    Thermal and chemical evolution of Earth's deep mantle can be studied by modeling vigorous convection in a chemically heterogeneous fluid. Numerical modeling of such a system poses several computational challenges. Dominance of heat advection over diffusive heat transport, together with a negligible amount of chemical diffusion, results in sharp gradients of the thermal and chemical fields. The exponential dependence of the viscosity of mantle materials on temperature also leads to high gradients of the velocity field. The accuracy of many numerical advection schemes degrades quickly with increasing gradient of the solution, while the computational effort, in terms of scheme complexity and required resolution, grows. Additional numerical challenges arise from the large range of length scales characteristic of a thermochemical convection system with highly variable viscosity. For example, the thickness of the stem of a rising thermal plume may be a few percent of the mantle thickness. An even thinner filament of an anomalous material entrained by that plume may constitute less than a tenth of a percent of the mantle thickness. We have developed a two-dimensional FEM code to model thermochemical convection in a hollow cylinder domain, with a depth- and temperature-dependent viscosity representative of the mantle (Steinberger and Calderwood, 2006). We use the marker-in-cell method for advection of the chemical and thermal fields. The main advantage of performing advection using markers is the absence of numerical diffusion during the advection step, as opposed to the more diffusive field methods. However, in the common implementation of marker methods, the solution of the momentum and energy equations takes place on a computational grid, and nodes do not generally coincide with the positions of the markers. Transferring velocity, temperature, and chemistry information between nodes and markers introduces errors inherent to inter- and extrapolation.
In the numerical scheme

  4. Smoking, ADHD, and Problematic Video Game Use: A Structural Modeling Approach.

    Science.gov (United States)

    Lee, Hyo Jin; Tran, Denise D; Morrell, Holly E R

    2018-04-13

    Problematic video game use (PVGU), or addiction-like use of video games, is associated with physical and mental health problems and with problems in social and occupational functioning. Possible correlates of PVGU include frequency of play, cigarette smoking, and attention deficit hyperactivity disorder (ADHD). The aim of the current study was to explore simultaneously the relationships among these variables, as well as to test whether two separate measures of PVGU measure the same construct, using a structural modeling approach. Secondary data analysis was conducted on 2,801 video game users (mean age = 22.43 years, SD = 4.7; 93 percent male) who completed an online survey. The full model fit the data well: χ²(2) = 2.017, p > 0.05; root mean square error of approximation (RMSEA) = 0.002 (90% CI [0.000-0.038]); comparative fit index (CFI) = 1.000; standardized root mean square residual (SRMR) = 0.004; and all standardized residuals were within acceptable limits. The model explained 41.8 percent of the variance in PVGU. Tracking these variables may be useful for PVGU prevention and assessment. Young's Internet Addiction Scale, adapted for video game use, and the Problem Videogame Playing Scale both loaded strongly onto a PVGU factor, suggesting that they measure the same construct, that studies using either measure may be compared to each other, and that both measures may be used as a screener of PVGU.
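As a sanity check, the reported RMSEA follows from the other reported statistics via the standard formula. This is a quick reproduction, assuming the conventional RMSEA definition and the reported sample size N = 2,801.

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# plug in the fit statistics reported in the abstract
value = rmsea(chi2=2.017, df=2, n=2801)  # ~0.0017, i.e. 0.002 to 3 d.p.
```

The result rounds to the 0.002 quoted in the record, confirming the fit indices are internally consistent.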

  5. The critical role of the routing scheme in simulating peak river discharge in global hydrological models

    Science.gov (United States)

    Zhao, F.; Veldkamp, T.; Frieler, K.; Schewe, J.; Ostberg, S.; Willner, S. N.; Schauberger, B.; Gosling, S.; Mueller Schmied, H.; Portmann, F. T.; Leng, G.; Huang, M.; Liu, X.; Tang, Q.; Hanasaki, N.; Biemans, H.; Gerten, D.; Satoh, Y.; Pokhrel, Y. N.; Stacke, T.; Ciais, P.; Chang, J.; Ducharne, A.; Guimberteau, M.; Wada, Y.; Kim, H.; Yamazaki, D.

    2017-12-01

    Global hydrological models (GHMs) have been applied to assess global flood hazards, but their capacity to capture the timing and amplitude of peak river discharge—which is crucial in flood simulations—has traditionally not been the focus of examination. Here we evaluate to what degree the choice of river routing scheme affects simulations of peak discharge and may help to provide better agreement with observations. To this end we use runoff and discharge simulations of nine GHMs forced by observational climate data (1971-2010) within the ISIMIP2a project. The runoff simulations were used as input for the global river routing model CaMa-Flood. The simulated daily discharge was compared to the discharge generated by each GHM using its native river routing scheme. For each GHM both versions of simulated discharge were compared to monthly and daily discharge observations from 1701 GRDC stations as a benchmark. CaMa-Flood routing shows a general reduction of peak river discharge and a delay of about two to three weeks in its occurrence, likely induced by the buffering capacity of floodplain reservoirs. For a majority of river basins, discharge produced by CaMa-Flood resulted in a better agreement with observations. In particular, maximum daily discharge was adjusted, with a multi-model averaged reduction in bias over about 2/3 of the analysed basin area. The increase in agreement was obtained in both managed and near-natural basins. Overall, this study demonstrates the importance of routing scheme choice in peak discharge simulation, where CaMa-Flood routing accounts for floodplain storage and backwater effects that are not represented in most GHMs. Our study provides important hints that an explicit parameterisation of these processes may be essential in future impact studies.
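The attenuation and delay that routing introduces can be demonstrated with a single linear reservoir. This is a deliberately minimal sketch; CaMa-Flood additionally resolves floodplain storage and backwater effects, which this toy model does not.

```python
import numpy as np

def route_linear_reservoir(inflow, k, dt=1.0):
    """Route an inflow series through a linear reservoir (Q = S/k):
    dS/dt = I - Q, integrated with explicit Euler (stable for dt < k)."""
    storage, out = 0.0, []
    for i in inflow:
        q = storage / k
        storage += dt * (i - q)
        out.append(q)
    return np.array(out)

runoff = np.zeros(60)
runoff[5:10] = 10.0                      # a 5-step runoff pulse
discharge = route_linear_reservoir(runoff, k=8.0)
```

The routed peak is both lower and later than the runoff peak, the same qualitative effect the study reports for floodplain storage.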

  6. Development and evaluation of a building energy model integrated in the TEB scheme

    Directory of Open Access Journals (Sweden)

    B. Bueno

    2012-03-01

    Full Text Available The use of air-conditioning systems is expected to increase as a consequence of global-scale and urban-scale climate warming. In order to represent future scenarios of urban climate and building energy consumption, the Town Energy Balance (TEB) scheme must be improved. This paper presents a new building energy model (BEM) that has been integrated in the TEB scheme. BEM-TEB makes it possible to represent the energy effects of buildings and building systems on the urban climate and to estimate the building energy consumption at city scale (~10 km) with a resolution of a neighbourhood (~100 m). The physical and geometric definition of buildings in BEM has been intentionally kept as simple as possible, while maintaining the required features of a comprehensive building energy model. The model considers a single thermal zone, where the thermal inertia of building materials associated with multiple levels is represented by a generic thermal mass. The model accounts for heat gains due to transmitted solar radiation, heat conduction through the enclosure, infiltration, ventilation, and internal heat gains. BEM allows for previously unavailable sophistication in the modelling of air-conditioning systems. It accounts for the dependence of the system capacity and efficiency on indoor and outdoor air temperatures and solves the dehumidification of the air passing through the system. Furthermore, BEM includes specific models for passive systems, such as window shadowing devices and natural ventilation. BEM has satisfactorily passed different evaluation processes, including testing its modelling assumptions, verifying that the chosen equations are solved correctly, and validating the model with field data.

  7. Development and evaluation of a building energy model integrated in the TEB scheme

    Science.gov (United States)

    Bueno, B.; Pigeon, G.; Norford, L. K.; Zibouche, K.; Marchadier, C.

    2012-03-01

    The use of air-conditioning systems is expected to increase as a consequence of global-scale and urban-scale climate warming. In order to represent future scenarios of urban climate and building energy consumption, the Town Energy Balance (TEB) scheme must be improved. This paper presents a new building energy model (BEM) that has been integrated in the TEB scheme. BEM-TEB makes it possible to represent the energy effects of buildings and building systems on the urban climate and to estimate the building energy consumption at city scale (~10 km) with a resolution of a neighbourhood (~100 m). The physical and geometric definition of buildings in BEM has been intentionally kept as simple as possible, while maintaining the required features of a comprehensive building energy model. The model considers a single thermal zone, where the thermal inertia of building materials associated with multiple levels is represented by a generic thermal mass. The model accounts for heat gains due to transmitted solar radiation, heat conduction through the enclosure, infiltration, ventilation, and internal heat gains. BEM allows for previously unavailable sophistication in the modelling of air-conditioning systems. It accounts for the dependence of the system capacity and efficiency on indoor and outdoor air temperatures and solves the dehumidification of the air passing through the system. Furthermore, BEM includes specific models for passive systems, such as window shadowing devices and natural ventilation. BEM has satisfactorily passed different evaluation processes, including testing its modelling assumptions, verifying that the chosen equations are solved correctly, and validating the model with field data.

  8. Particular solutions of a problem resulting from Huang's model by averaging according to Fatou's scheme

    Science.gov (United States)

    Shirmin, G. I.

    1980-08-01

    In the present paper, an averaging on the basis of Fatou's (1931) scheme is obtained within the framework of a version of the doubly restricted problem of four bodies. A proof is obtained for the existence of particular solutions that are analogous to the Eulerian and Lagrangian solutions. The solutions are applied to an analysis of first-order secular disturbances in the positions of libration points, caused by the influence of a body whose attraction is neglected in the classical model of the restricted three-body problem. These disturbances are shown to lead to continuous displacements of the libration points.

  9. A Certificateless Ring Signature Scheme with High Efficiency in the Random Oracle Model

    Directory of Open Access Journals (Sweden)

    Yingying Zhang

    2017-01-01

    Full Text Available Ring signature is a kind of digital signature that can protect the identity of the signer. Certificateless public key cryptography not only overcomes the key escrow problem but also retains the advantages of identity-based cryptography. Certificateless ring signature integrates ring signature with certificateless public key cryptography. In this paper, we propose an efficient certificateless ring signature scheme; it requires only three bilinear pairing operations in the verification algorithm. The scheme is proved to be unforgeable in the random oracle model.

  10. Development of orthogonal 2-dimensional numerical code TFC2D for fluid flow with various turbulence models and numerical schemes

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ju Yeop; In, Wang Kee; Chun, Tae Hyun; Oh, Dong Seok [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-02-01

    An orthogonal 2-dimensional numerical code has been developed. The present code contains 9 widely used turbulence models: a standard k-{epsilon} model and 8 low-Reynolds-number models. It also includes 6 numerical schemes: 5 low-order schemes and 1 high-order scheme, QUICK. To verify the present numerical code, pipe flow, channel flow and expansion pipe flow are solved with various combinations of turbulence models and numerical schemes, and the calculated outputs are compared to experimental data. Furthermore, the discretization error that originates from the use of the standard k-{epsilon} turbulence model with a wall function is greatly reduced by introducing a new grid system in place of the conventional one in the present code. 23 refs., 58 figs., 6 tabs. (Author)

  11. Comprehending isospin breaking effects of X (3872 ) in a Friedrichs-model-like scheme

    Science.gov (United States)

    Zhou, Zhi-Yong; Xiao, Zhiguang

    2018-02-01

    Recently, we have shown that the X(3872) state can be naturally generated as a bound state by incorporating the hadron interactions into the Godfrey-Isgur quark model using a Friedrichs-like model combined with the quark pair creation model, in which the wave function for the X(3872) as a combination of the bare cc̄ state and the continuum states can also be obtained. Under this scheme, we now investigate the isospin-breaking effect of X(3872) in its decays to J/ψπ+π- and J/ψπ+π-π0. By coupling its dominant continuum parts to J/ψρ and J/ψω through the quark rearrangement process, one could obtain the reasonable ratio of B(X(3872)→J/ψπ+π-π0)/B(X(3872)→J/ψπ+π-) ≃ 0.58–0.92. It is also shown that the D̄D* invariant mass distributions in the B→D̄D*K decays could be understood qualitatively at the same time. This scheme may provide more insight into the enigmatic nature of the X(3872) state.

  12. The Interplay Between Transpiration and Runoff Formulations in Land Surface Schemes Used with Atmospheric Models

    Science.gov (United States)

    Koster, Rindal D.; Milly, P. C. D.

    1997-01-01

    The Project for Intercomparison of Land-surface Parameterization Schemes (PILPS) has shown that different land surface models (LSMs) driven by the same meteorological forcing can produce markedly different surface energy and water budgets, even when certain critical aspects of the LSMs (vegetation cover, albedo, turbulent drag coefficient, and snow cover) are carefully controlled. To help explain these differences, the authors devised a monthly water balance model that successfully reproduces the annual and seasonal water balances of the different PILPS schemes. Analysis of this model leads to the identification of two quantities that characterize an LSM's formulation of soil water balance dynamics: (1) the efficiency of the soil's evaporation sink integrated over the active soil moisture range, and (2) the fraction of this range over which runoff is generated. Regardless of the LSM's complexity, the combination of these two derived parameters with rates of interception loss, potential evaporation, and precipitation provides a reasonable estimate for the LSM's simulated annual water balance. The two derived parameters shed light on how evaporation and runoff formulations interact in an LSM, and the analysis as a whole underscores the need for compatibility in these formulations.
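The role of the two derived parameters can be caricatured with a monthly bucket model in which an evaporation-efficiency parameter and a runoff-generating fraction of the moisture range control the partitioning. This is an illustrative sketch only: the functional forms and numbers are assumptions, not the authors' calibrated monthly water balance model.

```python
def monthly_water_balance(w, precip, pot_evap, w_max, evap_eff, runoff_frac):
    """One month of a toy bucket water balance.
    evap_eff    -- efficiency of the soil evaporation sink (0..1)
    runoff_frac -- fraction of the moisture range generating runoff (0..1)
    Returns (new storage, evaporation, runoff)."""
    evap = pot_evap * evap_eff * (w / w_max)      # moisture-limited sink
    threshold = (1.0 - runoff_frac) * w_max       # runoff-generating zone
    excess = max(w + precip - evap - threshold, 0.0)
    runoff = 0.5 * excess                         # drain half the excess
    w_new = w + precip - evap - runoff
    if w_new > w_max:                             # bucket overflow spills
        runoff += w_new - w_max
        w_new = w_max
    return w_new, evap, runoff
```

By construction the budget closes each month: the storage change equals precipitation minus evaporation minus runoff, which is the property the PILPS analysis exploits.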

  13. Teaching Children with Autism to Play a Video Game Using Activity Schedules and Game-Embedded Simultaneous Video Modeling

    Science.gov (United States)

    Blum-Dimaya, Alyssa; Reeve, Sharon A.; Reeve, Kenneth F.; Hoch, Hannah

    2010-01-01

    Children with autism have severe and pervasive impairments in social interactions and communication that impact most areas of daily living and often limit independent engagement in leisure activities. We taught four children with autism to engage in an age-appropriate leisure skill, playing the video game Guitar Hero II[TM], through the use of (a)…

  14. A self-organized internal models architecture for coding sensory-motor schemes

    Directory of Open Access Journals (Sweden)

    Esaú eEscobar Juárez

    2016-04-01

    Full Text Available Cognitive robotics research draws inspiration from theories and models of cognition, as conceived by neuroscience or cognitive psychology, to investigate biologically plausible computational models in artificial agents. In this field, the theoretical framework of Grounded Cognition provides epistemological and methodological grounds for the computational modeling of cognition. It has been stressed in the literature that simulation, prediction, and multi-modal integration are key aspects of cognition and that computational architectures capable of putting them into play in a biologically plausible way are a necessity. Research in this direction has brought extensive empirical evidence suggesting that Internal Models are suitable mechanisms for sensory-motor integration. However, current Internal Models architectures show several drawbacks, mainly due to the lack of a unified substrate allowing for a true sensory-motor integration space, enabling flexible and scalable ways to model cognition under the constraints of the embodiment hypothesis. We propose the Self-Organized Internal Models Architecture (SOIMA), a computational cognitive architecture coded by means of a network of self-organized maps, implementing coupled internal models that allow modeling multi-modal sensory-motor schemes. Our approach integrally addresses the issues of current implementations of Internal Models. We discuss the design and features of the architecture, and provide empirical results on a humanoid robot that demonstrate the benefits and potentialities of the SOIMA concept for studying cognition in artificial agents.

  15. Video self-modeling as a post-treatment fluency recovery strategy for adults.

    Science.gov (United States)

    Harasym, Jessica; Langevin, Marilyn; Kully, Deborah

    2015-06-01

    This multiple-baseline across-subjects study investigated the effectiveness of video self-modeling (VSM) in reducing stuttering and bringing about improvements in associated self-report measures. Participants' viewing practices and perceptions of the utility of VSM also were explored. Three adult males who had previously completed speech restructuring treatment viewed VSM recordings twice per week for 6 weeks. Weekly speech data, treatment viewing logs, and pre- and post-treatment self-report measures were obtained. An exit interview also was conducted. Two participants showed a decreasing trend in stuttering frequency. All participants appeared to engage in fewer avoidance behaviors and had lower expectations of stuttering. All participants perceived that, in different ways, the VSM treatment had benefited them, and all had unique viewing practices. Given the increasing availability and ease of use of portable audio-visual technology, VSM appears to offer an economical and clinically useful tool for clients who are motivated to use the technology to recover fluency. Readers will be able to describe: (a) the tenets of video self-modeling; (b) the main components of video self-modeling as a fluency recovery treatment as used in this study; and (c) speech and self-report outcomes. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Comparison of Spatial Interpolation Schemes for Rainfall Data and Application in Hydrological Modeling

    Directory of Open Access Journals (Sweden)

    Tao Chen

    2017-05-01

    Full Text Available The spatial distribution of precipitation is an important aspect of water-related research. The use of different interpolation schemes in the same catchment may cause large differences and deviations from the actual spatial distribution of rainfall. Our study analyzes different methods of spatial rainfall interpolation at annual, daily, and hourly time scales to provide a comprehensive evaluation. An improved regression-based scheme is proposed using principal component regression with residual correction (PCRR) and is compared with the inverse distance weighting (IDW) and multiple linear regression (MLR) interpolation methods. In this study, the meso-scale catchment of the Fuhe River in southeastern China was selected as a typical region. Furthermore, the hydrological model HEC-HMS was used to calculate streamflow and to evaluate the impact of rainfall interpolation methods on the results of the hydrological model. Results show that the PCRR method performed better than the other methods tested in the study and can effectively eliminate the interpolation anomalies caused by terrain differences between observation points and surrounding areas. Simulated streamflow showed different characteristics based on the mean, maximum, minimum, and peak flows. The results simulated by PCRR exhibited the lowest streamflow error and highest correlation with measured values at the daily time scale. The application of the PCRR method is found to be promising because it considers multicollinearity among variables.
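
One of the baseline methods named above, inverse distance weighting, is simple enough to sketch directly. The gauge coordinates and rainfall values below are hypothetical, purely for illustration; this is a minimal IDW sketch, not the study's implementation.

```python
import numpy as np

def idw(stations, values, targets, power=2.0):
    """Inverse distance weighting: estimate rainfall at target points
    from gauge observations; closer gauges receive larger weights."""
    est = np.empty(len(targets))
    for i, t in enumerate(targets):
        d = np.linalg.norm(stations - t, axis=1)
        if np.any(d == 0):                 # target coincides with a gauge
            est[i] = values[np.argmin(d)]
            continue
        w = 1.0 / d**power                 # inverse-distance weights
        est[i] = np.dot(w, values) / w.sum()
    return est

# Hypothetical gauges (x, y in km) and one day of rainfall (mm)
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
rain = np.array([12.0, 8.0, 20.0])
grid = np.array([[5.0, 5.0], [0.0, 0.0]])
print(idw(stations, rain, grid))
```

Because IDW is a convex combination of the gauge values, every interpolated value stays within the observed range, which is one reason regression-based schemes such as PCRR can outperform it where rainfall varies with terrain.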

  17. Development of a Multi-Model Ensemble Scheme for the Tropical Cyclone Forecast

    Science.gov (United States)

    Jun, S.; Lee, W. J.; Kang, K.; Shin, D. H.

    2015-12-01

    A Multi-Model Ensemble (MME) prediction scheme using a selection-and-weighting method was developed and evaluated for tropical cyclone forecasting. The analyzed tropical cyclone track and intensity data set provided by the Korea Meteorological Administration and 11 numerical model outputs - GDAPS, GEPS, GFS (data resolution: 50 and 100 km), GFES, HWRF, IFS (data resolution: 50 and 100 km), IFS EPS, JGSM, and TEPS - during 2011-2014 were used for this study. The procedure suggested in this study is divided into two stages: a selecting process and a weighting process. First, in the selecting stage, several numerical models were chosen based on their past performance. Next, in the weighting stage, weights (regression coefficients) for each model's forecasts were calculated by applying linear and nonlinear regression techniques to past model forecast data. Finally, tropical cyclone forecasts were determined using the selected and weighted multi-model values at each forecast time. Preliminary results showed that the selected MME improved the 72 h track forecast by more than 5% compared with the non-selected MME.
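
The weighting stage described above amounts to regressing past verifying values on the member forecasts. The sketch below uses synthetic "forecasts" of a scalar quantity and ordinary least squares; the data and the three-member setup are invented for illustration, not the operational scheme.

```python
import numpy as np

# Hypothetical verification history: y = truth, columns of X = three
# models' forecasts of the same scalar, with different error levels.
rng = np.random.default_rng(0)
truth = rng.normal(20.0, 3.0, size=200)
X = np.column_stack([truth + rng.normal(0, s, 200) for s in (0.5, 1.0, 2.0)])

# Weighting stage: regression coefficients from past forecast data.
w, *_ = np.linalg.lstsq(X, truth, rcond=None)
mme = X @ w                                 # weighted multi-model forecast

rmse = lambda a: float(np.sqrt(np.mean((a - truth) ** 2)))
print(rmse(X[:, 0]), rmse(mme))             # best single model vs ensemble
```

In-sample, the least-squares combination can do no worse than any single member, since each member is itself a special case of the linear combination; the selection stage guards against overfitting to poor members out-of-sample.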

  18. Current use of impact models for agri-environment schemes and potential for improvements of policy design and assessment

    DEFF Research Database (Denmark)

    Primdahl, Jorgen; Vesterager, Jens Peter; Finn, John A.

    2010-01-01

    Agri-Environment Schemes (AES) to maintain or promote environmentally-friendly farming practices were implemented on about 25% of all agricultural land in the EU by 2002. This article analyses and discusses the actual and potential use of impact models in supporting the design, implementation… and evaluation of AES. Impact models identify and establish the causal relationships between policy objectives and policy outcomes. We review and discuss the role of impact models at different stages in the AES policy process, and present results from a survey of impact models underlying 60 agri-environmental… schemes in seven EU member states. We distinguished among three categories of impact models (quantitative, qualitative or common sense), depending on the degree of evidence in the formal scheme description, additional documents, or key person interviews. The categories of impact models used mainly…

  19. A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices

    KAUST Repository

    Dutta, Aritra

    2017-07-02

    Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their higher computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible in processing high definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to the state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.
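
The low-rank-plus-sparse decomposition underlying background estimation can be illustrated with a plain truncated SVD: stack vectorized frames as columns, take a rank-1 approximation as the (static) background, and read the residual as foreground. This is an unweighted toy stand-in on synthetic data, not the paper's weighted algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
background = rng.uniform(0, 1, size=100)       # one "image" of 100 pixels
D = np.tile(background, (20, 1)).T             # 20 identical frames as columns
D[:5, 10] += 5.0                               # a bright object in frame 10

U, s, Vt = np.linalg.svd(D, full_matrices=False)
L = s[0] * np.outer(U[:, 0], Vt[0])            # rank-1 background estimate
S = D - L                                      # residual = foreground
print(np.abs(S).max())
```

The residual is largest exactly where the object was inserted; real algorithms such as the batch-incremental model above add weighting and incremental updates so that high-definition video does not require a full SVD per batch.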

  20. An adaptive hybrid EnKF-OI scheme for efficient state-parameter estimation of reactive contaminant transport models

    KAUST Repository

    El Gharamti, Mohamad

    2014-09-01

    Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.
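
The ensemble half of such a hybrid scheme is a standard stochastic EnKF analysis step. The sketch below updates a tiny synthetic ensemble with one observation; dimensions, covariances and the observed quantity are invented for illustration (in the paper the state would hold concentrations and sorption coefficients).

```python
import numpy as np

rng = np.random.default_rng(2)
n_ens, n_state = 50, 3
ens = rng.normal(0.0, 1.0, size=(n_state, n_ens))   # forecast ensemble
H = np.array([[1.0, 0.0, 0.0]])                     # observe first variable
R = np.array([[0.1]])                               # observation error cov.
y = np.array([1.5])                                 # the observation

X = ens - ens.mean(axis=1, keepdims=True)           # anomalies
P = X @ X.T / (n_ens - 1)                           # ensemble covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
perturbed = y[:, None] + rng.normal(0, np.sqrt(R[0, 0]), size=(1, n_ens))
analysis = ens + K @ (perturbed - H @ ens)          # updated ensemble
print(ens.mean(axis=1)[0], analysis.mean(axis=1)[0])
```

The analysis mean moves toward the observation in proportion to the gain; the hybrid EnKF-OI formulation blends this sampled covariance with a static background to cope with small ensembles.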

  1. Sensitivity analysis of WRF model PBL schemes in simulating boundary-layer variables in southern Italy: An experimental campaign

    DEFF Research Database (Denmark)

    Avolio, E.; Federico, S.; Miglietta, M.

    2017-01-01

    The sensitivity of boundary-layer variables to five (two non-local and three local) planetary boundary-layer (PBL) parameterization schemes, available in the Weather Research and Forecasting (WRF) mesoscale meteorological model, is evaluated at an experimental site in the Calabria region (southern Italy)… the surface, where the model uncertainties are, usually, smaller than at the surface. A general anticlockwise rotation of the simulated flow with height is found at all levels. The mixing height is overestimated by all schemes, and a possible role of the simulated sensible heat fluxes in this mismatch… is investigated. On a single-case basis, significantly better results are obtained when the atmospheric conditions near the measurement site are dominated by synoptic forcing rather than by local circulations. From this study, it follows that the two first-order non-local schemes, ACM2 and YSU, are the schemes…

  2. Simulating dam-break over an erodible embankment using SWE-Exner model and semi-implicit staggered scheme

    Science.gov (United States)

    Ambara, M. D.; Gunawan, P. H.

    2018-03-01

    The impact of a dam-break wave on an erodible embankment with a steep slope has been studied recently using both experimental and numerical approaches. In this paper, the semi-implicit staggered scheme for approximating the shallow water-Exner model is elaborated to describe erodible sediment on a steep slope. This scheme is known to be robust for approximating the shallow water-Exner model. The results show good agreement with the experimental data. Comparisons of numerical results with experimental data for slopes Φ = 59.04 and Φ = 41.42, using Grass-formula coefficients Ag = 2 × 10⁻⁵ and Ag = 10⁻⁵ respectively, are found to be closest to the experiment. This paper can be seen as additional validation of the semi-implicit staggered scheme presented in the paper of Gunawan et al. (2015).
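
The sediment half of the coupled system is the Exner equation, (1 - p) ∂z/∂t + ∂q_s/∂x = 0, closed with the Grass bedload formula q_s = Ag·u·|u|^(m-1). The sketch below is a plain explicit upwind update on an invented velocity profile, shown only to make the closure concrete; it is not the paper's semi-implicit staggered scheme.

```python
import numpy as np

Ag, m, porosity = 1e-5, 3, 0.4     # Grass coefficient, exponent, bed porosity
dx, dt = 0.1, 0.01
u = np.linspace(1.0, 2.0, 50)      # hypothetical accelerating flow (m/s)
z = np.ones(50)                    # initial flat bed elevation (m)

qs = Ag * u * np.abs(u) ** (m - 1)          # Grass bedload flux
dqdx = np.diff(qs) / dx                     # upwind difference (flow rightward)
z[1:] -= dt * dqdx / (1.0 - porosity)       # explicit Exner update
print(z.min(), z.max())
```

Since the flow accelerates downstream, the bedload flux diverges and the bed erodes everywhere the update is applied, which is the qualitative behavior expected of the Grass closure.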

  3. Multi-model finite element scheme for static and free vibration analyses of composite laminated beams

    Directory of Open Access Journals (Sweden)

    U.N. Band

    Full Text Available Abstract A transition element is developed for the local-global analysis of laminated composite beams. It bridges one part of the domain modelled with a higher order theory and the other with a 2D mixed layerwise theory (LWT) used at the critical zone of the domain. The use of the developed transition element makes the analysis of interlaminar stresses possible with significant accuracy. The mixed 2D model incorporates the transverse normal and shear stresses as nodal degrees of freedom (DOF), which inherently ensures continuity of these stresses. Non-critical zones are modelled with a higher order equivalent single layer (ESL) theory, leading to a global mesh with multiple models applied simultaneously. Use of the higher order ESL in non-critical zones reduces the total number of elements required to map the domain. A substantial reduction in DOF as compared to a complete 2D mixed model is obvious. This computationally economical multiple-modelling scheme using the transition element is applied to static and free vibration analyses of laminated composite beams. Results obtained are in good agreement with benchmarks available in the literature.

  4. Modeling mixed retention and early arrivals in multidimensional heterogeneous media using an explicit Lagrangian scheme

    Science.gov (United States)

    Zhang, Yong; Meerschaert, Mark M.; Baeumer, Boris; LaBolle, Eric M.

    2015-08-01

    This study develops an explicit two-step Lagrangian scheme based on the renewal-reward process to capture transient anomalous diffusion with mixed retention and early arrivals in multidimensional media. The resulting 3-D anomalous transport simulator provides a flexible platform for modeling transport. The first step explicitly models retention due to mass exchange between one mobile zone and any number of parallel immobile zones. The mobile component of the renewal process can be calculated as either an exponential random variable or a preassigned time step, and the subsequent random immobile time follows a Hyper-exponential distribution for finite immobile zones or a tempered stable distribution for infinite immobile zones with an exponentially tempered power-law memory function. The second step describes well-documented early arrivals which can follow streamlines due to mechanical dispersion using the method of subordination to regional flow. Applicability and implementation of the Lagrangian solver are further checked against transport observed in various media. Results show that, although the time-nonlocal model parameters are predictable for transport with retention in alluvial settings, the standard time-nonlocal model cannot capture early arrivals. Retention and early arrivals observed in porous and fractured media can be efficiently modeled by our Lagrangian solver, allowing anomalous transport to be incorporated into 2-D/3-D models with irregular flow fields. Extensions of the particle-tracking approach are also discussed for transport with parameters conditioned on local aquifer properties, as required by transient flow and nonstationary media.
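
The first step of the renewal-reward idea can be sketched as a particle walk that alternates an exponential mobile period (advection with the flow) and a hyper-exponential immobile period (trapping in one of several immobile zones). All parameters below are illustrative, not calibrated to any aquifer, and the sketch omits dispersion and the subordination step.

```python
import numpy as np

rng = np.random.default_rng(3)
n, t_end, v = 2000, 10.0, 1.0            # particles, horizon (s), velocity (m/s)
rates = np.array([0.5, 5.0])             # exit rates of two immobile zones
probs = np.array([0.3, 0.7])             # zone selection probabilities

x = np.zeros(n)
for i in range(n):
    t = 0.0
    while t < t_end:
        tm = rng.exponential(1.0)                 # mobile duration
        x[i] += v * min(tm, t_end - t)            # advect while mobile
        t += tm
        if t >= t_end:
            break
        zone = rng.choice(2, p=probs)             # pick an immobile zone
        t += rng.exponential(1.0 / rates[zone])   # trapped: clock runs, no motion
print(x.mean())   # retarded relative to pure advection v * t_end
```

The mean plume position lags the pure-advection distance v·t_end by the fraction of time spent immobile, which is the retention signature the full simulator generalizes to tempered power-law memory functions.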

  5. Transition point prediction in a multicomponent lattice Boltzmann model: Forcing scheme dependencies

    Science.gov (United States)

    Küllmer, Knut; Krämer, Andreas; Joppich, Wolfgang; Reith, Dirk; Foysi, Holger

    2018-02-01

    Pseudopotential-based lattice Boltzmann models are widely used for numerical simulations of multiphase flows. In the special case of multicomponent systems, the overall dynamics are characterized by the conservation equations for mass and momentum as well as an additional advection diffusion equation for each component. In the present study, we investigate how the latter is affected by the forcing scheme, i.e., by the way the underlying interparticle forces are incorporated into the lattice Boltzmann equation. By comparing two model formulations for pure multicomponent systems, namely the standard model [X. Shan and G. D. Doolen, J. Stat. Phys. 81, 379 (1995), 10.1007/BF02179985] and the explicit forcing model [M. L. Porter et al., Phys. Rev. E 86, 036701 (2012), 10.1103/PhysRevE.86.036701], we reveal that the diffusion characteristics drastically change. We derive a generalized, potential function-dependent expression for the transition point from the miscible to the immiscible regime and demonstrate that it is shifted between the models. The theoretical predictions for both the transition point and the mutual diffusion coefficient are validated in simulations of static droplets and decaying sinusoidal concentration waves, respectively. To show the universality of our analysis, two common and one new potential function are investigated. As the shift in the diffusion characteristics directly affects the interfacial properties, we additionally show that phenomena related to the interfacial tension such as the modeling of contact angles are influenced as well.

  6. Third Order Reconstruction of the KP Scheme for Model of River Tinnelva

    Directory of Open Access Journals (Sweden)

    Susantha Dissanayake

    2017-01-01

    Full Text Available The Saint-Venant equation/Shallow Water Equation is used to simulate river flow, flow of liquid in an open channel, tsunamis, etc. The Kurganov-Petrova (KP) scheme, which was developed based on the local speed of discontinuity propagation, can be used to solve hyperbolic-type partial differential equations (PDEs), and hence the Saint-Venant equation. The KP scheme is semi-discrete: PDEs are discretized in the spatial domain, resulting in a set of Ordinary Differential Equations (ODEs). In this study, the common 2nd order KP scheme is extended into a 3rd order scheme following the Weighted Essentially Non-Oscillatory (WENO) and Central WENO (CWENO) reconstruction steps. Both the 2nd order and 3rd order schemes have been used in simulation in order to check the suitability of the KP schemes for solving hyperbolic-type PDEs. The simulation results indicated that the 3rd order KP scheme shows somewhat better stability compared to the 2nd order scheme. Computational time for the 3rd order KP scheme with variable step-length ODE solvers in MATLAB is less than that of the 2nd order KP scheme. In addition, it was confirmed that the order of the time integrators should essentially be lower than the order of the spatial discretization. However, for computation of abrupt step changes, the 2nd order KP scheme gives a more accurate solution.
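
The reconstruction step that distinguishes the 2nd order variant is a limited piecewise-linear rebuild of interface values from cell averages. Below is a minmod-limited reconstruction on invented cell averages, a generic sketch of that building block (the 3rd order variant replaces it with WENO/CWENO weights), not the paper's MATLAB code.

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: smallest slope when signs agree, zero otherwise."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

dx = 0.1
u = np.array([0.0, 0.2, 1.0, 1.6, 2.0])          # hypothetical cell averages
slopes = np.zeros_like(u)
slopes[1:-1] = minmod((u[1:-1] - u[:-2]) / dx,   # backward difference
                      (u[2:] - u[1:-1]) / dx)    # forward difference

u_left = u - 0.5 * dx * slopes                   # values at left interfaces
u_right = u + 0.5 * dx * slopes                  # values at right interfaces
print(slopes)
```

The limiter keeps the reconstructed interface values monotone across the cells, which is what prevents spurious oscillations near abrupt step changes.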

  7. The effects of different types of video modelling on undergraduate students’ motivation and learning in an academic writing course

    Directory of Open Access Journals (Sweden)

    Mariet Raedts

    2017-02-01

    Full Text Available This study extends previous research on observational learning in writing. It was our objective to enhance students’ motivation and learning in an academic writing course on research synthesis writing. Participants were 162 first-year college students who had no experience with the writing task. Based on Bandura’s Social Cognitive Theory we developed two videos. In the first video a manager (prestige model) elaborated on how synthesizing information is important in professional life. In the second video a peer model demonstrated a five-step writing strategy for writing up a research synthesis. We compared two versions of this video. In the explicit-strategy-instruction video we added visual cues to channel learners’ attention to critical features of the demonstrated task, using an acronym in which each letter represented a step of the model’s strategy. In the implicit-strategy-instruction video these cues were absent. The effects of the videos were tested using a 2×2 factorial between-subjects design with video of the prestige model (yes/no) and type of instructional video (implicit versus explicit strategy instruction) as factors. Four post-test measures were obtained: task value, self-efficacy beliefs, task knowledge and writing performances. Path analyses revealed that the prestige model did not affect students’ task value. Peer-mediated explicit strategy instruction had no effect on self-efficacy, but a strong effect on task knowledge. Task knowledge – in turn – was found to be predictive of writing performance.

  8. Multi-Model Estimation Based Moving Object Detection for Aerial Video

    Directory of Open Access Journals (Sweden)

    Yanning Zhang

    2015-04-01

    Full Text Available With the wide development of UAV (Unmanned Aerial Vehicle) technology, moving target detection for aerial video has become a popular research topic in the computer vision field. Most of the existing methods are based on the registration-detection framework and can only deal with simple background scenes. They tend to go wrong in complex multi-background scenarios, such as viaducts, buildings and trees. In this paper, we break through the single-background constraint and perceive the complex scene accurately by automatic estimation of multiple background models. First, we segment the scene into several color blocks and estimate the dense optical flow. Then, we calculate an affine transformation model for each large-area block and merge the consistent models. Finally, we calculate the degree of subordination to the multi-background models pixel by pixel for all small-area blocks. Moving objects are segmented by means of an energy optimization method solved via Graph Cuts. The extensive experimental results on public aerial videos show that, owing to multi-background model estimation and the analysis of each pixel’s subordinate relationship to the multiple models by energy minimization, our method can effectively remove buildings, trees and other false alarms and detect moving objects correctly.
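
The per-block "affine transformation model" step is, at its core, a least-squares fit of a 2-D affine map to point correspondences (here, synthetic optical-flow endpoints). The ground-truth motion, noise level and point set below are all invented for illustration; this is a sketch of the fitting step, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(5)
A_true = np.array([[1.01, 0.02, 3.0],            # hypothetical affine motion:
                   [-0.02, 0.99, -1.5]])         # slight rotation + translation
pts = rng.uniform(0, 100, size=(40, 2))          # feature points in frame t
homog = np.hstack([pts, np.ones((40, 1))])       # homogeneous coordinates
moved = homog @ A_true.T + rng.normal(0, 0.05, size=(40, 2))  # frame t+1

# Least-squares estimate of the 2x3 affine matrix from the matches.
A_est, *_ = np.linalg.lstsq(homog, moved, rcond=None)
print(np.abs(A_est.T - A_true).max())            # small recovery error
```

Blocks whose fitted models agree get merged into one background model, which is how the method handles scenes with several independently moving background layers.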

  9. Modeling the mechanics of HMX detonation using a Taylor–Galerkin scheme

    Directory of Open Access Journals (Sweden)

    Adam V. Duran

    2016-05-01

    Full Text Available Design of energetic materials is an exciting area in mechanics and materials science. Energetic composite materials are used as propellants, explosives, and fuel cell components. Energy release in these materials is accompanied by extreme events: shock waves travel at typical speeds of several thousand meters per second and the peak pressures can reach hundreds of gigapascals. In this paper, we develop a reactive dynamics code for modeling detonation wave features in one such material. The key contribution of this paper is an integrated algorithm to incorporate equations of state, Arrhenius kinetics, and mixing rules for particle detonation in a Taylor–Galerkin finite element simulation. We show that the scheme captures the distinct features of detonation waves, and the detonation velocity compares well with experiments reported in the literature.
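
The Arrhenius kinetics named above follow the rate law k(T) = A·exp(-Ea / (R·T)). The constants below are illustrative placeholders, not HMX parameters; the sketch only shows the strong temperature sensitivity that makes the source term stiff.

```python
import numpy as np

R = 8.314                      # universal gas constant, J/(mol K)
A, Ea = 1e12, 2.2e5            # illustrative pre-factor (1/s) and activation energy (J/mol)

def rate(T):
    """Arrhenius reaction rate at temperature T (K)."""
    return A * np.exp(-Ea / (R * T))

print(rate(1500.0) / rate(1000.0))   # orders-of-magnitude speed-up with T
```

This exponential sensitivity is why detonation fronts are thin and why the reactive source term dominates the time-step restriction in such simulations.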

  10. Incorporation of UK Met Office's radiation scheme into CPTEC's global model

    Science.gov (United States)

    Chagas, Júlio C. S.; Barbosa, Henrique M. J.

    2009-03-01

    The current parameterization of radiation in the CPTEC (Center for Weather Forecast and Climate Studies, Cachoeira Paulista, SP, Brazil) operational AGCM has its origins in the work of Harshvardhan et al. (1987) and uses the formulation of Ramaswamy and Freidenreich (1992) for short-wave absorption by water vapor. The UK Met Office's radiation code (Edwards and Slingo, 1996) was incorporated into CPTEC's global model, initially for short-wave only, and some impacts of that were shown by Chagas and Barbosa (2006). The current paper presents some impacts of the complete incorporation (both short-wave and long-wave) of the UK Met Office's scheme. Selected results from off-line comparisons with line-by-line benchmark calculations are shown. Impacts on the AGCM's climate are assessed by comparing output of climate runs of the current and modified AGCM with products from the GEWEX/SRB (Surface Radiation Budget) project.

  11. Investigation of thermalization in giant-spin models by different Lindblad schemes

    Energy Technology Data Exchange (ETDEWEB)

    Beckmann, Christian; Schnack, Jürgen, E-mail: jschnack@uni-bielefeld.de

    2017-09-01

    Highlights: • The non-equilibrium magnetization is investigated with quantum master equations that rest on Lindblad schemes. • It is studied how different couplings to the bath modify the magnetization. • Various field protocols are employed; relaxation times are deduced. • Result: the time evolution depends strongly on the details of the transition operator used in the Lindblad term. - Abstract: The theoretical understanding of time-dependence in magnetic quantum systems is of great importance in particular for cases where a unitary time evolution is accompanied by relaxation processes. A key example is given by the dynamics of single-molecule magnets where quantum tunneling of the magnetization competes with thermal relaxation over the anisotropy barrier. In this article we investigate how good a Lindblad approach describes the relaxation in giant spin models and how the result depends on the employed operator that transmits the action of the thermal bath.

  12. Hyperbolic reformulation of a 1D viscoelastic blood flow model and ADER finite volume schemes

    International Nuclear Information System (INIS)

    Montecinos, Gino I.; Müller, Lucas O.; Toro, Eleuterio F.

    2014-01-01

    The applicability of ADER finite volume methods to solve hyperbolic balance laws with stiff source terms in the context of well-balanced and non-conservative schemes is extended to solve a one-dimensional blood flow model for viscoelastic vessels, reformulated as a hyperbolic system via a relaxation time. A criterion for selecting relaxation times is found and an empirical convergence rate assessment is carried out to support this result. The proposed methodology is validated by applying it to a network of viscoelastic vessels for which experimental and numerical results are available. The agreement between the results obtained in the present paper and those available in the literature is satisfactory. Key features of the present formulation and numerical methodologies, such as accuracy, efficiency and robustness, are fully discussed in the paper.

  13. An implicit turbulence model for low-Mach Roe scheme using truncated Navier-Stokes equations

    Science.gov (United States)

    Li, Chung-Gang; Tsubokura, Makoto

    2017-09-01

    The original Roe scheme is well known to be unsuitable in simulations of turbulence because the dissipation that develops is unsatisfactory. Simulations of turbulent channel flow for Reτ = 180 show that, with the 'low-Mach-fix for Roe' (LMRoe) proposed by Rieper [J. Comput. Phys. 230 (2011) 5263-5287], the Roe dissipation term potentially equates the simulation to an implicit large eddy simulation (ILES) at low Mach number. Thus inspired, a new implicit turbulence model for low Mach numbers is proposed that controls the Roe dissipation term appropriately. Referred to as the automatic dissipation adjustment (ADA) model, the method of solution follows procedures developed previously for the truncated Navier-Stokes (TNS) equations and, without tuning of parameters, uses the energy ratio as a criterion to automatically adjust the upwind dissipation. Turbulent channel flow at two different Reynolds numbers and the Taylor-Green vortex were simulated to validate the ADA model. In simulations of turbulent channel flow for Reτ = 180 at a Mach number of 0.05 using the ADA model, the mean velocity and turbulence intensities are in excellent agreement with DNS results. With Reτ = 950 at a Mach number of 0.1, the result is also consistent with DNS results, indicating that the ADA model is also reliable at higher Reynolds numbers. In simulations of the Taylor-Green vortex at Re = 3000, the kinetic energy is consistent with the power law of decaying turbulence with a -1.2 exponent for both LMRoe with and without the ADA model. However, with the ADA model, the dissipation rate can be significantly improved near the dissipation peak region and the peak duration can also be more accurately captured. With a firm basis in TNS theory, applicability at higher Reynolds numbers, and ease of implementation as no extra terms are needed, the ADA model is a promising tool for turbulence modeling.

  14. Hybrid Modeling of Intra-DCT Coefficients for Real-Time Video Encoding

    Directory of Open Access Journals (Sweden)

    Li Jin

    2008-01-01

    Full Text Available Abstract The two-dimensional discrete cosine transform (2-D DCT) and its subsequent quantization are widely used in standard video encoders. However, since most DCT coefficients become zeros after quantization, a number of redundant computations are performed. This paper proposes a hybrid statistical model used to predict the zero-quantized DCT (ZQDCT) coefficients for the intra transform and to achieve better real-time performance. First, each pixel block at the input of the DCT is decomposed into a series of mean values and a residual block. Subsequently, a statistical model based on the Gaussian distribution is used to predict the ZQDCT coefficients of the residual block. Then, a sufficient condition under which each quantized coefficient becomes zero is derived from the mean values. Finally, a hybrid model to speed up the DCT and quantization calculations is proposed. Experimental results show that the proposed model can reduce more redundant computations and achieve better real-time performance than the reference in the literature at the cost of negligible video quality degradation. Experiments also show that the proposed model significantly reduces multiplications for DCT and quantization. This is particularly suitable for processors in portable devices, where multiplications consume more power than additions. Computational reduction implies longer battery lifetime and energy economy.
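
The redundancy the model exploits is easy to demonstrate: for a low-detail 8×8 block, the 2-D DCT followed by uniform quantization zeroes out almost every coefficient. The block content and quantization step below are invented for illustration; this is a pure-NumPy sketch of DCT-plus-quantization, not the paper's prediction model.

```python
import numpy as np

# Orthonormal 8x8 DCT-II matrix built from the cosine definition.
N = 8
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

# A flat pixel block with a mild vertical gradient (little high-frequency detail).
block = np.full((N, N), 100.0)
block += np.outer(np.linspace(0, 8, N), np.ones(N))

coeffs = C @ block @ C.T                 # 2-D DCT
q = 16.0                                 # uniform quantization step
quantized = np.round(coeffs / q)
print(int(np.count_nonzero(quantized == 0)))   # most of the 64 coefficients
```

Predicting which coefficients will quantize to zero lets an encoder skip their DCT computation entirely, which is exactly the saving the hybrid model targets.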

  15. DEVELOPMENT OF AN ANNOTATION SCHEME FOR STANDARDIZED DESCRIPTION OF MATHEMATICAL MODELS IN THE FIELD OF PLANT PROTECTION.

    Science.gov (United States)

    Günther, T; Büttner, C; Käsbohrer, A; Filter, M

    2015-01-01

    Mathematical models of the properties and behavior of harmful organisms in the food chain are an increasingly relevant approach in the agriculture and food industry. As a consequence, there are many efforts to develop biological models in science, economics and risk assessment nowadays. However, there is a lack of internationally harmonized standards on model annotation and model formats, which would be necessary to set up efficient tools supporting broad model application and information exchange. There are some established standards in the field of systems biology, but there is currently no corresponding provision in the area of plant protection. This work therefore aimed at the development of an annotation scheme using domain-specific metadata. The proposed scheme has been validated in a prototype implementation of a web-database model repository. This prototypic community resource currently contains models on the aflatoxin-secreting fungus Aspergillus flavus in maize, as these models have high relevance to food safety and economic impact. Specifically, models describing biological processes of the fungus (growth, aflatoxin secretion), as well as dose-response and carry-over models, were included. Furthermore, phenological models for maize were integrated as well. The developed annotation scheme is based on the well-established data exchange format SBML, which is broadly applied in the field of systems biology. The identified example models were annotated according to the developed scheme and entered into a web table (Google Sheets), which was transferred to a web-based demonstrator available at https://sites.google.com/site/test782726372685/. The implementation of the software demonstrator made clear that the proposed annotation scheme can be applied to models on plant pathogens and that broad adoption within the domain could promote communication and application of mathematical models.

  16. Modelling tools for managing Induced RiverBank Filtration MAR schemes

    Science.gov (United States)

    De Filippis, Giovanna; Barbagli, Alessio; Marchina, Chiara; Borsi, Iacopo; Mazzanti, Giorgio; Nardi, Marco; Vienken, Thomas; Bonari, Enrico; Rossetto, Rudy

    2017-04-01

    Induced RiverBank Filtration (IRBF) is a widely used technique in Managed Aquifer Recharge (MAR) schemes, when aquifers are hydraulically connected with surface water bodies, with proven positive effects on the quality and quantity of groundwater. IRBF allows abstraction of a large volume of water while avoiding a large decrease in groundwater heads. Moreover, thanks to the filtration process through the soil, the concentration of chemical species in surface water can be reduced, thus becoming an excellent resource for the production of drinking water. Within the FP7 MARSOL project (demonstrating Managed Aquifer Recharge as a SOLution to water scarcity and drought; http://www.marsol.eu/), the Sant'Alessio IRBF (Lucca, Italy) was used to demonstrate the feasibility and the technical and economic benefits of managing IRBF schemes (Rossetto et al., 2015a). The Sant'Alessio IRBF along the Serchio river allows abstraction of an overall amount of about 0.5 m3/s, providing drinking water for 300,000 people of coastal Tuscany (mainly the towns of Lucca, Pisa and Livorno). The supplied water is made available by enhancing river bank infiltration into a high-yield (10⁻² m²/s transmissivity) sandy-gravelly aquifer by raising the river head and using ten vertical wells along the river embankment. A Decision Support System, consisting of connected measurements from an advanced monitoring network and modelling tools, was set up to manage the IRBF. The modelling system is based on spatially distributed and physically based coupled ground-/surface-water flow and solute transport models integrated in the FREEWAT platform (developed within the H2020 FREEWAT project - FREE and Open Source Software Tools for WATer Resource Management; Rossetto et al., 2015b), an open source and public domain GIS-integrated modelling environment for the simulation of the hydrological cycle. The platform aims at improving water resource management by simplifying the application of EU water-related Directives and at

  17. Monitoring road traffic congestion using a macroscopic traffic model and a statistical monitoring scheme

    KAUST Repository

    Zeroual, Abdelhafid

    2017-08-19

    Monitoring vehicle traffic flow plays a central role in enhancing traffic management, transportation safety and cost savings. In this paper, we propose an innovative approach to the detection of traffic congestion. Specifically, we combine the flexibility and simplicity of a piecewise switched linear (PWSL) macroscopic traffic model with the greater capacity of the exponentially-weighted moving average (EWMA) monitoring chart. Macroscopic models, which have few, easily calibrated parameters, are employed to describe free traffic flow at the macroscopic level. Then, we apply the EWMA monitoring chart to the uncorrelated residuals obtained from the constructed PWSL model to detect congested situations. In this strategy, wavelet-based multiscale filtering of the data is applied before the EWMA scheme to further improve the robustness of the method to measurement noise and to reduce false alarms due to modeling errors. The performance of the PWSL-EWMA approach is successfully tested on traffic data from a three-lane portion of the Interstate 210 (I-210) highway in western California and a four-lane portion of the State Route 60 (SR60) highway in eastern California, provided by the Caltrans Performance Measurement System (PeMS). Results show the ability of the PWSL-EWMA approach to monitor vehicle traffic, confirming the promise of this statistical tool for the supervision of traffic flow congestion.
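    The detection step described above, an EWMA control chart applied to model residuals, can be sketched as follows. This is a generic textbook EWMA chart, not the paper's calibration; the smoothing weight, limit width and residual series are illustrative assumptions.

```python
import math
from statistics import stdev

def ewma_chart(residuals, lam=0.2, L=3.0):
    """EWMA control chart over (ideally zero-mean, uncorrelated) residuals.

    Flags sample t when the EWMA statistic leaves the +/- L-sigma control
    limits; in the congestion application, an alarm marks a congested
    interval. lam (smoothing weight) and L (limit width) are common
    textbook defaults.
    """
    sigma = stdev(residuals)  # in practice, estimate from in-control data
    z, alarms, ewma = [], [], 0.0
    for t, x in enumerate(residuals):
        ewma = lam * x + (1.0 - lam) * ewma
        z.append(ewma)
        # exact (time-varying) standard deviation of the EWMA statistic
        limit = L * sigma * math.sqrt(lam / (2.0 - lam)
                                      * (1.0 - (1.0 - lam) ** (2 * (t + 1))))
        if abs(ewma) > limit:
            alarms.append(t)
    return z, alarms
```

    Applied to a residual series that shifts upward partway through (mimicking the onset of congestion), the chart raises its first alarm a few samples after the shift.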

  18. Incorporation of the Mass Concentration and the New Snow Albedo Schemes into the Global Forecasting Model, GEOS-5 and the Impact of the New Schemes over Himalayan Glaciers

    Science.gov (United States)

    Yasunari, Teppei

    2012-01-01

    Recently, the issue of glacier retreat has attracted attention, and many factors are likely relevant. Absorbing aerosols such as dust and black carbon (BC) are considered to be one of those factors. After they are deposited onto the snow surface, they reduce the snow albedo (the so-called snow darkening effect) and probably contribute to further melting of glaciers. The Goddard Earth Observing System version 5 (GEOS-5) has been developed at NASA/GSFC. However, the original snowpack model used in the land surface model of GEOS-5 did not consider the snow darkening effect. Here we developed a new snow albedo scheme that accounts for the snow darkening effect. In addition, another scheme for calculating the mass concentrations of absorbing aerosols in the snowpack was developed, in which the direct aerosol depositions from the chemical transport model in GEOS-5 were used. The scheme has been validated with observed data obtained in the backyard of the Institute of Low Temperature Science, Hokkaido University, by Dr. Teruo Aoki (Meteorological Research Institute) et al., including me. The observed data were obtained when I was a Ph.D. candidate. The original GEOS-5, run during 2007-2009 over the Himalayas and Tibetan Plateau region, showed larger reductions of snow than the new GEOS-5 because the original one used lower albedo settings. For snow cover fraction, the new GEOS-5 simulated a more realistic snow-covered area compared to the MODIS snow cover fraction. Statistically significant reductions in snow albedo, snow cover fraction, and snow water equivalent were seen when the snow darkening effect was considered, compared to the results without it. In the real world, debris cover, internal refreezing processes, surface flow of glaciers, etc. affect glacier mass balance, and the simulated results do not immediately translate to whole-glacier retreat. However, our results indicate that some surface melting over non-debris-covered parts of the glacier would be

  19. Stochastic Actuarial Modelling of a Defined-Benefit Social Security Pension Scheme: An Analytical Approach

    OpenAIRE

    Iyer, Subramaniam

    2017-01-01

    Among the systems in place in different countries for the protection of the population against the long-term contingencies of old-age (or retirement), disability and death (or survivorship), defined-benefit social security pension schemes, i.e. social insurance pension schemes, by far predominate, despite the recent trend towards defined-contribution arrangements in social security reforms. Actuarial valuations of these schemes, unlike other branches of insurance, continue to be carried out a...

  20. Communication Improvement for the LU NAS Parallel Benchmark: A Model for Efficient Parallel Relaxation Schemes

    Science.gov (United States)

    Yarrow, Maurice; VanderWijngaart, Rob; Kutler, Paul (Technical Monitor)

    1997-01-01

    The first release of the MPI version of the LU NAS Parallel Benchmark (NPB2.0) performed poorly compared to its companion NPB2.0 codes. The later LU releases (NPB2.1 & 2.2) run up to two and a half times faster, thanks to a revised point access scheme and a related communications scheme. The new scheme sends substantially fewer messages, is cache-friendly, and has a better load balance. We detail the observations and modifications that resulted in this efficiency improvement, and show that the poor behavior of the original code resulted from deriving a message passing scheme from an algorithm originally devised for a vector architecture.

  1. Distortion-Based Link Adaptation for Wireless Video Transmission

    Directory of Open Access Journals (Sweden)

    Andrew Nix

    2008-06-01

    Wireless local area networks (WLANs) such as IEEE 802.11a/g utilise numerous transmission modes, each providing a different throughput and reliability level. Most link adaptation algorithms proposed in the literature (i) maximise the error-free data throughput, (ii) do not take into account the content of the data stream, and (iii) rely strongly on the use of ARQ. Low-latency applications, such as real-time video transmission, do not permit large numbers of retransmissions. In this paper, a novel link adaptation scheme is presented that improves the quality of service (QoS) for video transmission. Rather than maximising the error-free throughput, our scheme minimises the video distortion of the received sequence. With the use of simple and local rate-distortion measures and end-to-end distortion models at the video encoder, the proposed scheme estimates the received video distortion at the current transmission rate, as well as at the adjacent lower and higher rates. This allows the system to select the link speed which offers the lowest distortion and to adapt to the channel conditions. Simulation results are presented using the MPEG-4/AVC H.264 video compression standard over IEEE 802.11g. The results show that the proposed system closely follows the optimum theoretic solution.
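    The rate-selection logic described above, comparing estimated distortion at the current, next-lower and next-higher link speeds, can be sketched as follows. `estimate_distortion` is a hypothetical stand-in for the paper's encoder-side end-to-end distortion model, not its actual API.

```python
def select_link_speed(rates, estimate_distortion, current_idx):
    """Pick, among the current and adjacent PHY rates, the one with the
    lowest estimated end-to-end video distortion.

    rates: ordered list of available link speeds
    estimate_distortion: callable mapping a rate to its estimated
        received-video distortion (hypothetical model interface)
    current_idx: index of the currently used rate
    """
    # Only the current and the two adjacent rates are evaluated,
    # mirroring the scheme's local search over link speeds.
    candidates = {current_idx}
    if current_idx > 0:
        candidates.add(current_idx - 1)
    if current_idx < len(rates) - 1:
        candidates.add(current_idx + 1)
    return min(candidates, key=lambda i: estimate_distortion(rates[i]))
```

    With a toy distortion model that is minimised at 24 Mbit/s, repeated calls walk the link speed toward that rate one step at a time, which is exactly the adaptive behaviour the scheme relies on.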

  2. Optimally Accurate Second-Order Time-Domain Finite-Difference Scheme for Acoustic, Electromagnetic, and Elastic Wave Modeling

    Directory of Open Access Journals (Sweden)

    C. Bommaraju

    2005-01-01

    Numerical methods are extremely useful in solving real-life problems with complex materials and geometries. However, numerical methods in the time domain suffer from artificial numerical dispersion. Standard numerical techniques which are second-order in space and time, like the conventional Finite-Difference 3-point (FD3) method, the Finite-Difference Time-Domain (FDTD) method, and the Finite Integration Technique (FIT), provide estimates of the error of the discretized numerical operators rather than the error of the numerical solutions computed using these operators. Here, optimally accurate time-domain FD operators which are second-order in time as well as in space are derived. Optimal accuracy means the greatest attainable accuracy for a particular type of scheme, e.g., second-order FD, for some particular grid spacing. The modified operators lead to an implicit scheme. Using the first-order Born approximation, this implicit scheme is transformed into a two-step explicit scheme, namely a predictor-corrector scheme. The stability condition (maximum time step for a given spatial grid interval) for the various modified schemes is roughly equal to that for the corresponding conventional scheme. The modified FD scheme (FDM) reduces numerical dispersion by almost a factor of 40 in the 1-D case, compared to the FD3, FDTD, and FIT. The CPU time for the FDM scheme is twice that required by the FD3 method. The simulated synthetic data for a 2-D P-SV (elastodynamics) problem computed using the modified scheme are 30 times more accurate than synthetics computed using a conventional scheme, at a cost of only 3.5 times as much CPU time. The FDM is of particular interest in the modeling of large-scale (spatial dimension greater than or equal to one thousand wavelengths, or observation time interval very long compared to the reference time step) wave propagation and scattering problems, for instance, in ultrasonic antenna and synthetic scattering data modeling for Non
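    For reference, the conventional second-order explicit update that the FD3/FDTD family applies to the 1-D wave equation, the baseline the abstract compares against, can be sketched as follows. The grid size, pulse shape and Courant number are illustrative choices, not taken from the paper.

```python
import math

def fd3_wave_1d(n=200, steps=300, c=1.0, dx=1.0, courant=0.9):
    """Conventional second-order ("FD3"-style) update for the 1-D wave
    equation u_tt = c^2 u_xx with fixed (u = 0) boundaries.

    The time step is chosen from the CFL stability condition
    c*dt/dx <= 1; the initial condition is a Gaussian pulse at rest.
    """
    dt = courant * dx / c
    r2 = (c * dt / dx) ** 2
    u_prev = [math.exp(-0.05 * (i - n // 2) ** 2) for i in range(n)]
    u = u_prev[:]  # zero initial velocity: u(t=0) == u(t=-dt)
    for _ in range(steps):
        u_next = [0.0] * n
        for i in range(1, n - 1):
            # second-order central differences in both time and space
            u_next[i] = (2.0 * u[i] - u_prev[i]
                         + r2 * (u[i + 1] - 2.0 * u[i] + u[i - 1]))
        u_prev, u = u, u_next
    return u
```

    Run within the CFL limit, the pulse splits, propagates and reflects with bounded amplitude; it is exactly this kind of scheme whose numerical dispersion the modified operators reduce.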

  3. Performance evaluation of land surface models and cumulus convection schemes in the simulation of Indian summer monsoon using a regional climate model

    Science.gov (United States)

    Maity, S.; Satyanarayana, A. N. V.; Mandal, M.; Nayak, S.

    2017-11-01

    In this study, an attempt has been made to investigate the sensitivity of land surface models (LSMs) and cumulus convection schemes (CCSs) using a regional climate model, RegCM version 4.1, in simulating the Indian Summer Monsoon (ISM). Numerical experiments were conducted on the seasonal scale (May-September) for three consecutive years (2007, 2008, 2009) with two LSMs (the Biosphere Atmosphere Transfer Scheme (BATS) and the Community Land Model (CLM 3.5)) and five CCSs (MIT, KUO, GRELL, GRELL over land and MIT over ocean (GL_MO), and GRELL over ocean and MIT over land (GO_ML)). Important synoptic features are validated using various reanalysis datasets and satellite-derived products from TRMM and CRU data. Seasonally averaged surface temperature is reasonably well simulated by the model using both LSMs along with the MIT, GO_ML and GL_MO schemes. Model simulations reveal a slight warm bias with these schemes, whereas a significant cold bias is seen with the KUO and GRELL schemes during all three years. It is noticed that the simulated Somali Jet (SJ) is weak in all simulations except those with the MIT scheme (with both BATS and CLM), in which the strength of the SJ is reasonably well captured. Although the model is able to simulate the Tropical Easterly Jet (TEJ) and Sub-Tropical Westerly Jet (STWJ) with all the CCSs in terms of their location and strength, the performance of the MIT scheme seems to be better than the rest of the CCSs. Seasonal rainfall is not well simulated by the model. Significant underestimation of Indian Summer Monsoon Rainfall (ISMR) is observed over Central and North West India. The spatial distribution of seasonal ISMR is comparatively better simulated by the model with the MIT scheme, followed by the GO_ML scheme, in combination with CLM, although it overestimates rainfall over heavy precipitation zones. Overall statistical analysis shows that RegCM4 has better skill in simulating the ISM with the MIT scheme using CLM.

  4. Development and evaluation of a physics-based windblown dust emission scheme implemented in the CMAQ modeling system

    Science.gov (United States)

    A new windblown dust emission treatment was incorporated in the Community Multiscale Air Quality (CMAQ) modeling system. This new model treatment has been built upon previously developed physics-based parameterization schemes from the literature. A distinct and novel feature of t...

  5. Progress on Implementing Additional Physics Schemes into MPAS-A v5.1 for Next Generation Air Quality Modeling

    Science.gov (United States)

    The U.S. Environmental Protection Agency (USEPA) has a team of scientists developing a next generation air quality modeling system employing the Model for Prediction Across Scales – Atmosphere (MPAS-A) as its meteorological foundation. Several preferred physics schemes and ...

  6. Multiplicative noise removal through fractional order tv-based model and fast numerical schemes for its approximation

    Science.gov (United States)

    Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad

    2017-07-01

    This paper introduces a fractional-order total variation (FOTV) based model with three different weights in the fractional-order derivative definition for multiplicative noise removal. The fractional-order Euler-Lagrange equation, a highly non-linear partial differential equation (PDE), is obtained by minimization of the energy functional for image restoration. Two numerical schemes, namely an iterative scheme based on the dual theory and a majorization-minimization algorithm (MMA), are used. To improve the restoration results, we opt for an adaptive parameter selection procedure for the proposed model by applying the trial-and-error method. We report numerical simulations which show the validity and state-of-the-art performance of the fractional-order model in visual improvement as well as an increase in the peak signal-to-noise ratio compared to corresponding methods. Numerical experiments also demonstrate that the MMA-based methodology is slightly better than the iterative scheme.

  7. Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools

    CERN Document Server

    Ding, Steven X

    2013-01-01

    Guaranteeing a high system performance over a wide operating range is an important issue surrounding the design of automatic control systems with successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains: new material on fault isolation and identification, and fault detection in feedback control loops; extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; addition of the continuously-stirred tank heater as a representative process-industrial benchmark; and enhanced discussion of residual evaluation in stochastic processes. Model-based Fault Diagno...

  8. Incompressible Turbulent Flow Simulation Using the κ-ɛ Model and Upwind Schemes

    Directory of Open Access Journals (Sweden)

    V. G. Ferreira

    2007-01-01

    In the computation of turbulent flows via turbulence modeling, the treatment of the convective terms is a key issue. In the present work, we present a numerical technique for simulating two-dimensional incompressible turbulent flows. In particular, the performance of the high-Reynolds-number κ-ɛ model and a new high-order upwind scheme (the adaptive QUICKEST of Kaibara et al. (2005)) is assessed for 2D confined and free-surface incompressible turbulent flows. The model equations are solved with the fractional-step projection method in primitive variables. Solutions are obtained by using an adaptation of the front-tracking GENSMAC methodology (Tomé and McKee (1994)) for calculating fluid flows at high Reynolds numbers. The calculations are performed using the 2D version of the Freeflow simulation system (Castello et al. (2000)). A specific way of implementing wall functions is also tested and assessed. The numerical procedure is tested by solving three fluid flow problems, namely, turbulent flow over a backward-facing step, a turbulent boundary layer over a flat plate under zero pressure gradient, and a turbulent free jet impinging onto a flat surface. The numerical method is then applied to solve the flow of a horizontal jet penetrating a quiescent fluid from an entry port beneath the free surface.

  9. An iterative representer-based scheme for data inversion in reservoir modeling

    International Nuclear Information System (INIS)

    Iglesias, Marco A; Dawson, Clint

    2009-01-01

    In this paper, we develop a mathematical framework for data inversion in reservoir models. A general formulation is presented for the identification of uncertain parameters in an abstract reservoir model described by a set of nonlinear equations. Given a finite number of measurements of the state and prior knowledge of the uncertain parameters, an iterative representer-based scheme (IRBS) is proposed to find improved parameters. In this approach, the representer method is used to solve a linear data assimilation problem at each iteration of the algorithm. We apply the theory of iterative regularization to establish conditions under which the IRBS will converge to a stable approximation of a solution to the parameter identification problem. These theoretical results are applied to the identification of the second-order coefficient of a forward model described by a parabolic boundary value problem. Numerical results are presented to show the capabilities of the IRBS for the reconstruction of hydraulic conductivity from the steady state of groundwater flow, as well as of the absolute permeability in single-phase Darcy flow through porous media.

  10. Hawkes process as a model of social interactions: a view on video dynamics

    International Nuclear Information System (INIS)

    Mitchell, Lawrence; Cates, Michael E

    2010-01-01

    We study by computer simulation the 'Hawkes process' that was proposed in a recent paper by Crane and Sornette (2008 Proc. Natl Acad. Sci. USA 105 15649) as a plausible model for the dynamics of YouTube video viewing numbers. We test the claims made there that robust identification is possible for classes of dynamic response following activity bursts. Our simulated time series for the Hawkes process indeed fall into the different categories predicted by Crane and Sornette. However, the Hawkes process gives a much narrower spread of decay exponents than the YouTube data, suggesting limits to the universality of the Hawkes-based analysis.
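    A minimal Hawkes-process simulation, for readers who want to reproduce this kind of experiment, can be written with Ogata's thinning method. Note that the exponential memory kernel below is a common simplifying assumption; the Crane-Sornette analysis the abstract discusses involves power-law memory.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
    """Simulate a univariate Hawkes process by Ogata's thinning method,
    with intensity lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)).

    mu: background rate; alpha, beta: kernel parameters (stationary when
    alpha/beta < 1); t_max: simulation horizon.
    """
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        # With an exponential kernel, the intensity only decays between
        # events, so its value just after t upper-bounds it until the
        # next event -- a valid thinning bound.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)
        if t >= t_max:
            return events
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:
            events.append(t)  # accepted: the process self-excites
```

    With a branching ratio alpha/beta = 0.5, the long-run event rate is mu/(1 - alpha/beta), so runs over a long horizon produce roughly twice the background count, clustered into the activity bursts the abstract refers to.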

  11. Modeling the Subjective Quality of Highly Contrasted Videos Displayed on LCD With Local Backlight Dimming

    DEFF Research Database (Denmark)

    Mantel, Claire; Bech, Søren; Korhonen, Jari

    2015-01-01

    Local backlight dimming is a technology aiming at both saving energy and improving visual quality on television sets. As the rendition of the image is specified locally, the numerical signal corresponding to the displayed image needs to be computed through a model of the display. This simulated signal can then be used as input to objective quality metrics. The focus of this paper is on determining which characteristics of locally backlit displays influence quality assessment. A subjective experiment assessing the quality of highly contrasted videos displayed with various local backlight

  12. Intercomparison of shortwave radiative transfer schemes in global aerosol modeling: results from the AeroCom Radiative Transfer Experiment

    Directory of Open Access Journals (Sweden)

    C. A. Randles

    2013-03-01

    In this study we examine the performance of 31 global model radiative transfer schemes in cloud-free conditions: with prescribed gaseous absorbers and no aerosols (Rayleigh atmosphere), with prescribed scattering-only aerosols, and with more absorbing aerosols. Results are compared to benchmark results from high-resolution, multi-angular line-by-line radiation models. For purely scattering aerosols, model bias relative to the line-by-line models in the top-of-the-atmosphere aerosol radiative forcing ranges from roughly −10 to 20%, with over- and underestimates of radiative cooling at lower and higher solar zenith angles, respectively. Inter-model diversity (relative standard deviation) increases from ~10 to 15% as the solar zenith angle decreases. Inter-model diversity in atmospheric and surface forcing decreases with increased aerosol absorption, indicating that the treatment of multiple scattering is more variable than aerosol absorption in the models considered. Aerosol radiative forcing results from multi-stream models are generally in better agreement with the line-by-line results than those from the simpler two-stream schemes. Considering radiative fluxes, model performance is generally the same as or slightly better than results from previous radiation scheme intercomparisons. However, the inter-model diversity in aerosol radiative forcing remains large, primarily as a result of the treatment of multiple scattering. Results indicate that global models that estimate aerosol radiative forcing with two-stream radiation schemes may be subject to persistent biases introduced by these schemes, particularly for regional aerosol forcing.

  13. Online Video-Based Training in the Use of Hydrologic Models: A Case Example Using SWAT

    Science.gov (United States)

    Frankenberger, J.

    2009-12-01

    , and posing questions to the web-based group. Although excellent model documentation is available (http://www.brc.tamus.edu/swat/), the extent to which users study it carefully enough that their understanding goes beyond the superficial level is unknown. Online video-on-demand technology provides a way for users to learn consistent content while saving travel resources, and allows training to fit into people's schedules. Online videos were created to teach the basics of setting up the model, acquiring input data, parameterizing, calibrating and evaluating model results, and analyzing outputs, as well as model science, uncertainty, and appropriate use. Feedback was sought from the SWAT modeling community to determine the typical backgrounds of model users and how the video training fits into the available means of learning SWAT. An evaluation is being undertaken to assess results. The processes used in developing and evaluating the online SWAT training could be applied to other computer models whose use among broader groups of people is increasing or has the potential to grow.

  14. Optimization of the scheme for natural ecology planning of urban rivers based on ANP (analytic network process) model.

    Science.gov (United States)

    Zhang, Yichuan; Wang, Jiangping

    2015-07-01

    Rivers serve as a highly valued component of ecosystems and urban infrastructure. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for successful construction of urban rivers. Therefore, studies on the optimization of schemes for natural ecology planning of rivers are crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were included as the objects for optimization. Fourteen factors that influence the natural ecology planning of urban rivers were selected from five aspects so as to establish the ANP model. The data processing was done using the Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes can be made by the ANP method for natural ecology planning of urban rivers. This method can be used to provide references for the sustainable development and construction of urban rivers. The ANP method is also suitable for the optimization of schemes for urban green space planning and design.

  15. Depth no-synthesis-error model for view synthesis in 3-D video.

    Science.gov (United States)

    Zhao, Yin; Zhu, Ce; Chen, Zhenzhong; Yu, Lu

    2011-08-01

    Currently, 3-D video targets the application of disparity-adjustable stereoscopic video, where view synthesis based on depth-image-based rendering (DIBR) is employed to generate virtual views. Distortions in depth information may introduce geometry changes or occlusion variations in the synthesized views. In practice, depth information is stored in 8-bit grayscale format, whereas the disparity range for a visually comfortable stereo pair is usually much less than 256 levels. Thus, several depth levels may correspond to the same integer (or sub-pixel) disparity value in DIBR-based view synthesis, such that some depth distortions do not result in geometry changes in the synthesized view. From this observation, we develop a depth no-synthesis-error (D-NOSE) model to examine the allowable depth distortions for rendering a virtual view without introducing any geometry changes. We further show that depth distortions within the proposed D-NOSE profile also do not compromise the occlusion order in view synthesis. Therefore, a virtual view can be synthesized losslessly if depth distortions stay within the D-NOSE-specified thresholds. Our simulations validate the proposed D-NOSE model in lossless view synthesis and demonstrate the gain of using the model in depth coding.
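    The key observation, that several 8-bit depth levels collapse onto the same quantized disparity, can be illustrated with a toy depth-to-disparity mapping. The camera constants and quantization step below are invented for illustration and are not the paper's parameters.

```python
def depth_to_disparity(v, f_b=50.0, z_near=1.0, z_far=100.0, precision=1):
    """Map an 8-bit depth level v (0..255) to a quantized disparity.

    Uses the usual inverse-depth convention for depth maps (v = 255 is
    nearest); f_b stands for focal length x baseline. All constants are
    illustrative. precision = 1 means integer-pel disparity.
    """
    inv_z = v / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    return round(f_b * inv_z * precision)

def nose_interval(v, **kw):
    """All depth levels mapping to the same quantized disparity as v.

    Depth distortions that keep the level inside this interval change
    nothing in the rendered view -- the idea behind the D-NOSE model.
    """
    d = depth_to_disparity(v, **kw)
    same = [u for u in range(256) if depth_to_disparity(u, **kw) == d]
    return min(same), max(same)
```

    With these toy constants, roughly five consecutive depth levels share each integer disparity, so a depth codec may distort a level anywhere within its interval without any synthesis error.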

  16. Content-Aware Video Adaptation under Low-Bitrate Constraint

    Directory of Open Access Journals (Sweden)

    Hsiao Ming-Ho

    2007-01-01

    With the development of wireless networks and the improvement of mobile device capabilities, video streaming is more and more widespread in such environments. Under conditions of limited resources and inherent constraints, appropriate video adaptation has become one of the most important and challenging issues in wireless multimedia applications. In this paper, we propose a novel content-aware video adaptation scheme in order to effectively utilize resources and improve visual perceptual quality. First, the attention model is derived by analyzing the characteristics of brightness, location, motion vector, and energy features in the compressed domain to reduce computation complexity. Then, through the integration of the attention model, the capability of the client device, and a correlational statistic model, attractive regions of video scenes are derived. The information-object-weighted (IOB-weighted) rate-distortion model is used for adjusting the bit allocation. Finally, the video adaptation scheme dynamically adjusts the video bitstream at the frame level and object level. Experimental results validate that the proposed scheme achieves better visual quality effectively and efficiently.

  17. Optimal Physics Parameterization Scheme Combination of the Weather Research and Forecasting Model for Seasonal Precipitation Simulation over Ghana

    Directory of Open Access Journals (Sweden)

    Richard Yao Kuma Agyeman

    2017-01-01

    Seasonal predictions of precipitation, among others, are important to help mitigate the effects of drought and floods on agriculture, hydropower generation, disasters, and many more. This work seeks to obtain a suitable combination of physics schemes of the Weather Research and Forecasting (WRF) model for seasonal precipitation simulation over Ghana. Using the ERA-Interim reanalysis as forcing data, simulation experiments spanning eight months (from April to November) were performed for two different years: a dry year (2001) and a wet year (2008). A double-nested approach was used, with the outer domain at 50 km resolution covering West Africa and the inner domain covering Ghana at 10 km resolution. The results suggest that the WRF model generally overestimated the observed precipitation by a mean value between 3% and 64% for both years. Most of the scheme combinations overestimated (underestimated) precipitation over the coastal (northern) zones of Ghana for both years but estimated precipitation reasonably well over the forest and transitional zones. On the whole, the combination of the WRF Single-Moment 6-Class Microphysics Scheme, the Grell-Devenyi Ensemble Cumulus Scheme, and the Asymmetric Convective Model Planetary Boundary Layer Scheme simulated the best temporal pattern and temporal variability with the least relative bias for both years and is therefore recommended for Ghana.

  18. Watermarking textures in video games

    Science.gov (United States)

    Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin

    2014-02-01

    Digital watermarking is a promising solution to video game piracy. In this paper, based on an analysis of the special challenges and requirements of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements of video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be straightforwardly applied in the watermark container technique for real-time embedding. Furthermore, the embedding approach achieves a high watermark payload to handle collusion-secure fingerprinting codes of extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in terms of transparency, robustness, security and performance. In particular, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games are assessed subjectively during game play.

  19. A Bilingual Child Learns Social Communication Skills through Video Modeling--A Single Case Study in a Norwegian School Setting

    Science.gov (United States)

    Özerk, Meral; Özerk, Kamil

    2015-01-01

    "Video modeling" is one of the recognized methods used in the training and teaching of children with Autism Spectrum Disorders (ASD). The model's theoretical base stems from Albert Bandura's (1977; 1986) social learning theory in which he asserts that children can learn many skills and behaviors observationally through modeling. One can…

  20. Novel Congestion-Free Alternate Routing Path Scheme using Stackelberg Game Theory Model in Wireless Networks

    Directory of Open Access Journals (Sweden)

    P. Chitra

    2017-04-01

    Recently, wireless network technologies have been designed for most applications. Congestion in a wireless network degrades performance and reduces throughput. A congestion-free network is quite essential at the transport layer to prevent performance degradation in a wireless network. Game theory is a branch of applied mathematics and applied sciences used in wireless networking, political science, biology, computer science, philosophy and economics. A great challenge for wireless networks is congestion caused by various factors. Effective congestion-free alternate path routing is essential to increase network performance. The Stackelberg game theory model is currently employed as an effective tool to design and formulate congestion issues in wireless networks. This work uses a Stackelberg game to design an alternate path model to avoid congestion. In this game, leaders and followers are selected to choose an alternate routing path. The correlated equilibrium is used in the Stackelberg game for making better decisions between non-cooperation and cooperation. Congestion is continuously monitored to increase the throughput in the network. Simulation results show that the proposed scheme can extensively improve network performance by reducing congestion with the help of the Stackelberg game and thereby enhance throughput.

  1. A new single-moment microphysics scheme for cloud-resolving models using observed dependence of ice concentration on temperature.

    Science.gov (United States)

    Khairoutdinov, M.

    2015-12-01

    The representation of microphysics, especially ice microphysics, remains one of the major uncertainties in cloud-resolving models (CRMs). Most cloud schemes use the so-called bulk microphysics approach, in which a few moments of the particle size distributions are used as the prognostic variables. The System for Atmospheric Modeling (SAM) is a CRM that employs two such schemes: a single-moment scheme, which uses only mass for each of the water phases, and a two-moment scheme, which adds the particle concentration for each hydrometeor category. Of the two, the single-moment scheme is much more computationally efficient, as it uses only two prognostic microphysics variables compared to the ten used by the two-moment scheme. The efficiency comes from a rather considerable oversimplification of the microphysical processes. For instance, only the sum of the liquid and icy cloud water is predicted, with the temperature used to diagnose the mixing ratios of the different hydrometeors. The main motivation for using such simplified microphysics has been computational efficiency, especially in the applications of SAM as the super-parameterization in global climate models. Recently, we have extended the single-moment microphysics by adding only one additional prognostic variable, which has nevertheless allowed us to separate cloud ice from liquid water. We made use of some of the recent observations of ice microphysics collected in various parts of the world to parameterize several aspects of ice microphysics that have not been explicitly represented before in our single-moment scheme. For example, we use the observed broad dependence of ice concentration on temperature to diagnose the ice concentration in addition to the prognostic mass. Also, there is no artificial separation between pristine ice and snow, as often used by bulk models. Instead, we prescribe the ice size spectrum as a gamma distribution, with the distribution shape parameter controlled by the
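    The diagnose-ice-concentration-from-temperature idea can be illustrated with the classic Fletcher (1962) exponential fit. This is not the SAM scheme's actual observation-based fit, only the standard textbook example of such a temperature dependence.

```python
import math

def ice_number_concentration(temp_c, n0=1e-2, b=0.6):
    """Diagnose ice crystal number concentration (per m^3) from temperature
    using a Fletcher-type exponential fit, N = n0 * exp(-b * T_c) for
    T_c < 0 degrees C.

    n0 and b are the classic Fletcher (1962) constants, used here only to
    illustrate the diagnostic approach; the scheme in the abstract uses
    its own fit to recent observations.
    """
    if temp_c >= 0.0:
        return 0.0  # no ice diagnosed above freezing
    return n0 * math.exp(-b * temp_c)
```

    The concentration grows steeply as temperature drops (roughly a factor of e^6 per 10 degrees C with these constants), which is the "broad dependence of ice concentration on temperature" the scheme exploits to avoid a second prognostic moment.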

  2. Revisiting Intel Xeon Phi optimization of Thompson cloud microphysics scheme in Weather Research and Forecasting (WRF) model

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen

    2015-10-01

    The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation, as there are no interactions among horizontal grid points. Compared to earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. Thus, we have optimized the speed of this important part of WRF. Intel Many Integrated Core (MIC) architecture ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel MIC hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on the Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.
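The parallelization property noted above (no interactions among horizontal grid points) means each vertical column can be updated independently, which is what makes the scheme amenable to many-core hardware. A minimal sketch of that pattern, with a placeholder per-column update rather than the actual Thompson code:

```python
from concurrent.futures import ThreadPoolExecutor

def microphysics_column(column):
    """Placeholder per-column microphysics step: damp each mixing
    ratio toward zero to stand in for condensation/fallout
    tendencies. Illustrative only."""
    return [0.9 * q for q in column]

def step_grid(grid):
    """Each horizontal grid point owns an independent vertical column,
    so the update distributes across workers without any
    inter-column communication."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(microphysics_column, grid))

grid = [[1.0, 2.0], [3.0, 4.0]]  # two columns, two levels each
updated = step_grid(grid)
```

On MIC hardware the same embarrassingly parallel structure is exploited with OpenMP threads and SIMD vectorization over the inner (vertical) loops rather than a thread pool.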

  3. A regional calibration scheme for a distributed hydrologic model based on a copula dissimilarity measure

    Science.gov (United States)

    Kumar, R.; Samaniego, L. E.

    2011-12-01

    Spatially distributed hydrologic models at the mesoscale are based on the conceptualization and generalization of hydrological processes. Therefore, such models require parameter adjustment for their successful application at a given scale. Automatic computer-based algorithms are commonly used for calibration. While such algorithms can provide faster and more efficient results than the traditional manual calibration method, they are also prone to overtraining a parameter set for a given catchment. As a result, the transferability of model parameters from a calibration site to an un-calibrated site is limited. In this study, we propose a regional multi-basin calibration scheme to prevent the overtraining of model parameters in a specific catchment. The idea is to split the available catchments into two disjoint groups in such a way that catchments belonging to the first group are used for calibration (i.e., for minimization or maximization of objective functions), while catchments belonging to the other group are used for cross-validation of the model performance for each generated parameter set. The calibration process should be stopped if the model shows a significant decrease in its performance at the cross-validation catchments while increasing performance at the calibration sites. Hydrologically diverse catchments were selected as members of both the calibration and cross-validation groups to obtain a regionally robust parameter set. A dissimilarity measure based on runoff and antecedent precipitation copulas was used for the selection of the disjoint sets. The proposed methodology was used to calibrate transfer function parameters of a distributed mesoscale hydrologic model (mHM), whose parameter fields are linked to catchment characteristics through a set of transfer functions using a multiscale parameter regionalisation method. This study was carried out in 106 south German catchments ranging in size from 4 km2 to 12 700 km2. Initial test results
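The stopping rule described above can be sketched as a hold-out early-stopping loop. This is a simplified illustration; the function names and the scalar tolerance are assumptions, not the mHM calibration interface:

```python
def calibrate_with_holdout(candidates, score_calib, score_xval, tol=0.05):
    """Walk candidate parameter sets in order of decreasing calibration
    score and stop once the cross-validation score has dropped by more
    than `tol` below its best value -- i.e., calibration performance
    keeps improving while hold-out performance degrades, the
    overtraining signal described in the abstract."""
    best_xval = float("-inf")
    accepted = None
    for params in sorted(candidates, key=score_calib, reverse=True):
        s = score_xval(params)
        if s > best_xval:
            best_xval, accepted = s, params
        elif best_xval - s > tol:
            break  # significant hold-out degradation: stop calibrating
    return accepted, best_xval
```

In practice the two score functions would run the hydrologic model over the calibration and cross-validation catchment groups and return, e.g., a mean Nash-Sutcliffe efficiency.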

  4. Robust spatiotemporal matching of electronic slides to presentation videos.

    Science.gov (United States)

    Fan, Quanfu; Barnard, Kobus; Amir, Arnon; Efrat, Alon

    2011-08-01

    We describe a robust and efficient method for automatically matching and time-aligning electronic slides to videos of corresponding presentations. Matching electronic slides to videos provides new methods for indexing, searching, and browsing videos in distance-learning applications. However, robust automatic matching is challenging due to varied frame composition, slide distortion, camera movement, low-quality video capture, and arbitrary slide sequences. Our fully automatic approach combines image-based matching of slides to video frames with a temporal model for slide changes and camera events. To address these challenges, we begin by extracting scale-invariant feature transform (SIFT) keypoints from both slides and video frames, and matching them subject to a consistent projective transformation (homography) by using random sample consensus (RANSAC). We use the initial set of matches to construct a background model and a binary classifier for separating video frames showing slides from those without. We then introduce a new matching scheme for exploiting less distinctive SIFT keypoints that enables us to tackle more difficult images. Finally, we improve upon the matching based on visual information by using estimated matching probabilities as part of a hidden Markov model (HMM) that integrates temporal information and detected camera operations. Detailed quantitative experiments characterize each part of our approach and demonstrate an average accuracy of over 95% in 13 presentation videos.
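The temporal HMM step can be illustrated with a tiny Viterbi decoder: states are slide indices, per-frame emissions are the image-matching probabilities (as a SIFT/RANSAC matcher would produce), and transitions favor staying on the current slide. This is a simplified sketch of the idea, not the authors' exact formulation:

```python
import math

def viterbi_slide_alignment(match_probs, stay_prob=0.8):
    """Time-align frames to slides. match_probs[t][s] is the matching
    probability of frame t against slide s; the self-transition prior
    smooths over isolated bad matches."""
    n = len(match_probs[0])
    switch = (1.0 - stay_prob) / max(n - 1, 1)
    logp = [math.log(p + 1e-12) for p in match_probs[0]]
    back = []
    for obs in match_probs[1:]:
        ptrs, new = [], []
        for s in range(n):
            best_r = max(range(n), key=lambda r:
                         logp[r] + math.log(stay_prob if r == s else switch))
            best_v = logp[best_r] + math.log(stay_prob if best_r == s else switch)
            new.append(best_v + math.log(obs[s] + 1e-12))
            ptrs.append(best_r)
        logp = new
        back.append(ptrs)
    path = [max(range(n), key=lambda s: logp[s])]
    for ptrs in reversed(back):
        path.append(ptrs[path[-1]])
    return path[::-1]
```

A single noisy frame whose best image match is the wrong slide is corrected by the temporal prior, which is exactly the benefit the paper reports from integrating temporal information.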

  5. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    Science.gov (United States)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sports type to arrive at a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between the object-level and the scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes that are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e., players), which can track up to four players simultaneously. The complete system contributes to summarization by various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it for a variety of court-net sports videos containing badminton, tennis, and volleyball, and we show that the feature detection performance is above 92% and event detection about 90%.

  6. Modeling the subjective quality of highly contrasted videos displayed on LCD with local backlight dimming.

    Science.gov (United States)

    Mantel, Claire; Bech, Søren; Korhonen, Jari; Forchhammer, Søren; Pedersen, Jesper Melgaard

    2015-02-01

    Local backlight dimming is a technology aiming at both saving energy and improving visual quality on television sets. As the rendition of the image is specified locally, the numerical signal corresponding to the displayed image needs to be computed through a model of the display. This simulated signal can then be used as input to objective quality metrics. The focus of this paper is on determining which characteristics of locally backlit displays influence quality assessment. A subjective experiment assessing the quality of highly contrasted videos displayed with various local backlight-dimming algorithms is set up. Subjective results are then compared with both objective measures and objective quality metrics using different display models. The first analysis indicates that the most significant objective features are temporal variations, power consumption (probably representing leakage), and a contrast measure. The second analysis shows that modeling of leakage is necessary for objective quality assessment of sequences displayed with local backlight dimming.
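The display model that the paper feeds into objective metrics can be illustrated with a minimal pixel-level rendering equation, where leakage lets some backlight through even at black. This is an assumption-laden sketch, not the paper's calibrated display model:

```python
def displayed_luminance(backlight, transmittance, leakage=0.002):
    """Simulate the luminance seen from one pixel of a locally backlit
    LCD: the local backlight level times the liquid-crystal
    transmittance, with `leakage` modeling light escaping through a
    cell driven to black. Coefficients are illustrative."""
    effective = leakage + (1.0 - leakage) * transmittance
    return backlight * effective

# A black pixel over a full backlight still leaks a little light;
# dimming the backlight locally darkens it further, which is why
# leakage modeling matters for highly contrasted content.
leaky_black = displayed_luminance(1.0, 0.0)
dimmed_black = displayed_luminance(0.2, 0.0)
```

Feeding the simulated (rather than the numerical input) signal to a quality metric is what lets the metric see leakage artifacts the way a viewer does.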

  7. MAC-Layer Active Dropping for Real-Time Video Streaming in 4G Access Networks

    KAUST Repository

    She, James

    2010-12-01

    This paper introduces a MAC-layer active dropping scheme to achieve effective resource utilization that can satisfy the application-layer delay requirements of real-time video streaming in time-division-multiple-access based 4G broadband wireless access networks. When a video frame is not likely to be reconstructed within the application-layer delay bound at a receiver for the minimum decoding requirement, the MAC-layer protocol data units of such a video frame will be proactively dropped before transmission. An analytical model is developed to evaluate how confidently a video frame can be delivered within its application-layer delay bound by jointly considering the effects of the time-varying wireless channel, the minimum decoding requirement of each video frame, data retransmission, and the playback buffer. Extensive simulations with video traces are conducted to demonstrate the effectiveness of the proposed scheme. Compared to conventional cross-layer schemes using prioritized transmission/retransmission, the proposed scheme is practically implementable for more effective resource utilization, avoiding delay propagation, and achieving better video quality under certain conditions.
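The drop decision can be sketched as a deadline test: estimate when the frame's last MAC PDU would finish transmission (queue ahead of it plus its own bits, inflated by an average retransmission factor) and drop the whole frame proactively if that exceeds the delay bound. Parameter names and the retransmission factor are illustrative, not from the paper's analytical model:

```python
def should_drop(frame_bits, queued_bits, link_rate_bps, delay_bound_s,
                expected_tx_per_pdu=1.25):
    """Active-dropping decision sketch: True means the frame cannot be
    delivered in time, so its PDUs should be dropped before
    transmission instead of wasting airtime on late data."""
    est_delay = expected_tx_per_pdu * (queued_bits + frame_bits) / link_rate_bps
    return est_delay > delay_bound_s

# A frame behind a long queue on a slow link is dropped early, which
# also prevents the delay from propagating to subsequent frames.
```

The paper's analytical model refines this by accounting for the time-varying channel and the playback buffer when estimating delivery confidence.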

  8. Evaluation of European air quality modelled by CAMx including the volatility basis set scheme

    Directory of Open Access Journals (Sweden)

    G. Ciarelli

    2016-08-01

    Full Text Available Four periods of EMEP (European Monitoring and Evaluation Programme intensive measurement campaigns (June 2006, January 2007, September–October 2008 and February–March 2009 were modelled using the regional air quality model CAMx with VBS (volatility basis set approach for the first time in Europe within the framework of the EURODELTA-III model intercomparison exercise. More detailed analysis and sensitivity tests were performed for the periods of February–March 2009 and June 2006 to investigate the uncertainties in emissions as well as to improve the modelling of organic aerosol (OA. Model performance for selected gas phase species and PM2.5 was evaluated using the European air quality database AirBase. Sulfur dioxide (SO2 and ozone (O3 were found to be overestimated for all four periods, with O3 having the largest mean bias during the June 2006 and January–February 2007 periods (8.9 ppb and 12.3 ppb mean biases respectively. In contrast, nitrogen dioxide (NO2 and carbon monoxide (CO were found to be underestimated for all four periods. CAMx reproduced both total concentrations and monthly variations of PM2.5 for all four periods with average biases ranging from −2.1 to 1.0 µg m−3. Comparisons with AMS (aerosol mass spectrometer measurements at different sites in Europe during February–March 2009 showed that in general the model overpredicts the inorganic aerosol fraction and underpredicts the organic one, such that the good agreement for PM2.5 is partly due to compensation of errors. The effect of the choice of VBS scheme on OA was investigated as well. Two sensitivity tests with volatility distributions based on previous chamber and ambient measurements data were performed. For February–March 2009 the chamber case reduced the total OA concentrations by about 42 % on average. In contrast, a test based on ambient measurement data increased OA concentrations by about 42 % for the same period bringing

  9. Normal tissue complication probabilities: dependence on choice of biological model and dose-volume histogram reduction scheme

    International Nuclear Information System (INIS)

    Moiseenko, Vitali; Battista, Jerry; Van Dyk, Jake

    2000-01-01

    Purpose: To evaluate the impact of dose-volume histogram (DVH) reduction schemes and models of normal tissue complication probability (NTCP) on the ranking of radiation treatment plans. Methods and Materials: Data for liver complications in humans and for spinal cord in rats were used to derive input parameters of four different NTCP models. DVH reduction was performed using two schemes: 'effective volume' and 'preferred Lyman'. DVHs for competing treatment plans were derived from a sample DVH by varying dose uniformity in a high dose region so that the obtained cumulative DVHs intersected. Treatment plans were ranked according to the calculated NTCP values. Results: Whenever the preferred Lyman scheme was used to reduce the DVH, competing plans were indistinguishable as long as the mean dose was constant. The effective volume DVH reduction scheme did allow us to distinguish between these competing treatment plans. However, plan ranking depended on the radiobiological model used and its input parameters. Conclusions: Dose escalation will be a significant part of radiation treatment planning using new technologies, such as 3-D conformal radiotherapy and tomotherapy. Such dose escalation will depend on how the dose distributions in organs at risk are interpreted in terms of expected complication probabilities. The present study indicates considerable variability in predicted NTCP values because of the methods used for DVH reduction and radiobiological models and their input parameters. Animal studies and collection of standardized clinical data are needed to ascertain the effects of non-uniform dose distributions and to test the validity of the models currently in use.
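The two ingredients compared above, a DVH reduction scheme and an NTCP model, can be sketched with the textbook Lyman formulation and the Kutcher-Burman effective-volume reduction. The parameter values below are illustrative defaults, not those fitted to the liver or rat spinal cord data used in the study:

```python
import math

def effective_volume(dvh, n):
    """Kutcher-Burman 'effective volume' reduction: collapse a
    differential DVH, given as [(dose_Gy, fractional_volume), ...],
    onto the maximum dose. n is the volume-effect parameter."""
    d_max = max(d for d, _ in dvh)
    v_eff = sum(v * (d / d_max) ** (1.0 / n) for d, v in dvh)
    return v_eff, d_max

def lyman_ntcp(dose, v_eff, td50=45.0, m=0.15, n=0.7):
    """Lyman NTCP for uniform irradiation of a partial volume v_eff:
    a cumulative-normal sigmoid in dose around the volume-adjusted
    tolerance dose TD50(v) = TD50 * v^(-n)."""
    td50_v = td50 * v_eff ** (-n)
    t = (dose - td50_v) / (m * td50_v)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

Because the reduction step decides how a non-uniform dose distribution is summarized before the sigmoid is applied, two schemes can rank the same pair of intersecting DVHs differently, which is the variability the study quantifies.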

  10. The role of residence time in diagnostic models of global carbon storage capacity: model decomposition based on a traceable scheme.

    Science.gov (United States)

    Yizhao, Chen; Jianyang, Xia; Zhengguo, Sun; Jianlong, Li; Yiqi, Luo; Chengcheng, Gang; Zhaoqi, Wang

    2015-11-06

    As a key factor that determines carbon storage capacity, residence time (τE) is not well constrained in terrestrial biosphere models. This factor is recognized as an important source of model uncertainty. In this study, to understand how τE influences terrestrial carbon storage prediction in diagnostic models, we introduced a model decomposition scheme in the Boreal Ecosystem Productivity Simulator (BEPS) and then compared it with a prognostic model. The result showed that τE ranged from 32.7 to 158.2 years. The baseline residence time (τ'E) was stable for each biome, ranging from 12 to 53.7 years for forest biomes and 4.2 to 5.3 years for non-forest biomes. The spatiotemporal variations in τE were mainly determined by the environmental scalar (ξ). By comparing models, we found that the BEPS uses a more detailed pool construction but rougher parameterization for carbon allocation and decomposition. With respect to ξ comparison, the global difference in the temperature scalar (ξt) averaged 0.045, whereas the moisture scalar (ξw) had a much larger variation, with an average of 0.312. We propose that further evaluations and improvements in τ'E and ξw predictions are essential to reduce the uncertainties in predicting carbon storage by the BEPS and similar diagnostic models.
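The traceable decomposition described above can be written as a one-line identity: carbon storage capacity equals carbon input (NPP) times residence time, where residence time is the baseline residence time divided by the environmental scalar (the product of the temperature and moisture scalars). A minimal sketch with illustrative values:

```python
def carbon_storage_capacity(npp, baseline_residence_yr, xi_t, xi_w):
    """Traceable-scheme decomposition: X = NPP * tau_E with
    tau_E = tau'_E / (xi_t * xi_w). Symbols follow the abstract;
    the numbers fed in are illustrative, not BEPS output."""
    xi = xi_t * xi_w          # environmental scalar
    tau_e = baseline_residence_yr / xi
    return npp * tau_e, tau_e

# Lower moisture scalar -> longer residence time -> larger capacity,
# which is why uncertainty in xi_w propagates strongly into storage.
storage, tau_e = carbon_storage_capacity(500.0, 20.0, 0.5, 0.8)
```

This factorization is what makes the scheme "traceable": a storage difference between models can be attributed to NPP, to the biome-stable baseline residence time, or to the environmental scalars.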

  11. The role of residence time in diagnostic models of global carbon storage capacity: model decomposition based on a traceable scheme

    Science.gov (United States)

    Yizhao, Chen; Jianyang, Xia; Zhengguo, Sun; Jianlong, Li; Yiqi, Luo; Chengcheng, Gang; Zhaoqi, Wang

    2015-01-01

    As a key factor that determines carbon storage capacity, residence time (τE) is not well constrained in terrestrial biosphere models. This factor is recognized as an important source of model uncertainty. In this study, to understand how τE influences terrestrial carbon storage prediction in diagnostic models, we introduced a model decomposition scheme in the Boreal Ecosystem Productivity Simulator (BEPS) and then compared it with a prognostic model. The result showed that τE ranged from 32.7 to 158.2 years. The baseline residence time (τ′E) was stable for each biome, ranging from 12 to 53.7 years for forest biomes and 4.2 to 5.3 years for non-forest biomes. The spatiotemporal variations in τE were mainly determined by the environmental scalar (ξ). By comparing models, we found that the BEPS uses a more detailed pool construction but rougher parameterization for carbon allocation and decomposition. With respect to ξ comparison, the global difference in the temperature scalar (ξt) averaged 0.045, whereas the moisture scalar (ξw) had a much larger variation, with an average of 0.312. We propose that further evaluations and improvements in τ′E and ξw predictions are essential to reduce the uncertainties in predicting carbon storage by the BEPS and similar diagnostic models. PMID:26541245

  12. A model of R-D performance evaluation for Rate-Distortion-Complexity evaluation of H.264 video coding

    DEFF Research Database (Denmark)

    Wu, Mo; Forchhammer, Søren

    2007-01-01

    This paper considers a method for evaluation of the Rate-Distortion-Complexity (R-D-C) performance of video coding. A statistical model of the transformed coefficients is used to estimate the Rate-Distortion (R-D) performance. A model framework for the rate, distortion, and slope of the R-D curve for inter- and intra-frames is presented. Assumptions are given for analyzing an R-D model for fast R-D-C evaluation. The theoretical expressions are combined with H.264 video coding and confirmed by experimental results. The complexity framework is applied to the integer motion estimation.
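The R-D curve and its slope can be illustrated with the classic high-rate model for a memoryless Gaussian-like source, D(R) = sigma^2 * 2^(-2R). This is a textbook stand-in for the paper's statistical coefficient model, not its exact formulation:

```python
import math

def distortion(rate_bits, sigma2=1.0):
    """High-rate R-D model: distortion halves by a factor of four for
    every extra bit of rate."""
    return sigma2 * 2.0 ** (-2.0 * rate_bits)

def rd_slope(rate_bits, sigma2=1.0):
    """Analytic slope dD/dR = -2 ln(2) * D(R), the quantity a rate
    controller uses to trade bits between coding units."""
    return -2.0 * math.log(2.0) * distortion(rate_bits, sigma2)
```

Having a closed-form slope is what enables fast R-D-C evaluation: the encoder can estimate the benefit of extra rate or extra motion-search complexity without exhaustive trial encoding.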

  13. Towards Next Generation Ocean Models: Novel Discontinuous Galerkin Schemes for 2D Unsteady Biogeochemical Models

    Science.gov (United States)

    2009-09-01

    The report reviews existing ocean model codes, including FVCOM and ICOM (Imperial College Ocean Model), a finite element (CG and DG) model for coastal regions featured with complex irregular geometry and steep bottom topography; the ICOM code is written in C. Finel, a three-dimensional non-hydrostatic finite element model, was developed by the same group.

  14. Assessment of the WRF-ARW model during fog conditions in a coastal arid region using different PBL schemes

    Science.gov (United States)

    Temimi, Marouane; Chaouch, Naira; Weston, Michael; Ghedira, Hosni

    2017-04-01

    This study covers five fog events reported in 2014 at Abu Dhabi International Airport in the United Arab Emirates (UAE). We assess the performance of the WRF-ARW model during fog conditions, intercompare seven different PBL schemes, and assess their impact on the performance of the simulations. Seven PBL schemes, namely Yonsei University (YSU), Mellor-Yamada-Janjic (MYJ), Mellor-Yamada Nakanishi and Niino (MYNN) level 2.5, Quasi-Normal Scale Elimination (QNSE-EDMF), Asymmetric Convective Model (ACM2), Grenier-Bretherton-McCaa (GBM), and MYNN level 3, were tested. Radiosonde data from the Abu Dhabi International Airport and surface measurements of relative humidity (RH), dew point temperature, wind speed, and temperature profiles were used to assess the performance of the model. All PBL schemes showed comparable skill, with relatively higher performance for the QNSE scheme. The average RH Root Mean Square Error (RMSE) and BIAS for all PBLs were 15.75 % and -9.07 %, respectively, whereas the RMSE and BIAS obtained with QNSE were 14.65 % and -6.3 %, respectively. Comparable skill was obtained for the rest of the variables. Local PBL schemes showed better performance than non-local schemes. Discrepancies between simulated and observed values were higher at the surface level than at higher altitudes. The sensitivity to lead time showed that the best simulation performance was obtained when the lead time varied between 12 and 18 hours. In addition, the results of the simulations show that better performance is obtained when the starting condition is dry.
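The two verification scores quoted above are straightforward to compute; a minimal sketch:

```python
import math

def rmse_and_bias(simulated, observed):
    """Root mean square error and mean bias (simulated minus observed),
    the scores used to compare the PBL schemes against radiosonde and
    surface measurements."""
    errors = [s - o for s, o in zip(simulated, observed)]
    bias = sum(errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return rmse, bias

# A negative bias, as reported for relative humidity, means the model
# runs drier than the observations on average.
```

Reporting both matters: RMSE mixes random and systematic error, while the bias isolates the systematic dry offset that all the PBL schemes share here.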

  15. Regional model simulation of summer rainfall over the Philippines: Effect of choice of driving fields and ocean flux schemes

    Science.gov (United States)

    Francisco, R. V.; Argete, J.; Giorgi, F.; Pal, J.; Bi, X.; Gutowski, W. J.

    2006-09-01

    The latest version of the Abdus Salam International Centre for Theoretical Physics (ICTP) regional model RegCM is used to investigate summer monsoon precipitation over the Philippine archipelago and surrounding ocean waters, a region where regional climate models have not been applied before. The sensitivity of simulated precipitation to driving lateral boundary conditions (NCEP and ERA40 reanalyses) and ocean surface flux scheme (BATS and Zeng) is assessed for 5 monsoon seasons. The ability of the RegCM to simulate the spatial patterns and magnitude of monsoon precipitation is demonstrated, both in response to the prominent large scale circulations over the region and to the local forcing by the physiographical features of the Philippine islands. This provides encouraging indications concerning the development of a regional climate modeling system for the Philippine region. On the other hand, the model shows a substantial sensitivity to the analysis fields used for lateral boundary conditions as well as the ocean surface flux schemes. The use of ERA40 lateral boundary fields consistently yields greater precipitation amounts compared to the use of NCEP fields. Similarly, the BATS scheme consistently produces more precipitation compared to the Zeng scheme. As a result, different combinations of lateral boundary fields and surface ocean flux schemes provide a good simulation of precipitation amounts and spatial structure over the region. The response of simulated precipitation to using different forcing analysis fields is of the same order of magnitude as the response to using different surface flux parameterizations in the model. As a result it is difficult to unambiguously establish which of the model configurations is best performing.

  16. Application of blocking diagnosis methods to general circulation models. Part I: a novel detection scheme

    Energy Technology Data Exchange (ETDEWEB)

    Barriopedro, D. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Ed. C-8, Lisbon (Portugal); Universidad de Extremadura, Departamento de Fisica, Facultad de Ciencias, Badajoz (Spain); Garcia-Herrera, R. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain); Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Ed. C-8, Lisbon (Portugal)

    2010-12-15

    This paper aims to provide a new blocking definition with applicability to observations and model simulations. An updated review of previous blocking detection indices is provided and some of their implications and caveats are discussed. A novel blocking index is proposed by reconciling two traditional approaches based on anomaly and absolute flows. Blocks are considered from a complementary perspective as a signature in the anomalous height field capable of reversing the meridional jet-based height gradient in the total flow. The method succeeds in identifying 2-D persistent anomalies associated with a weather regime in the total flow with blockage of the westerlies. The new index accounts for the duration, intensity, extension, propagation, and spatial structure of a blocking event. In spite of its increased complexity, the detection efficiency of the method is improved without hampering the computational time. Furthermore, some misleading identification problems and artificial assumptions resulting from previous single blocking indices are avoided with the new approach. The characteristics of blocking for 40 years of reanalysis (1950-1989) over the Northern Hemisphere are described from the perspective of the new definition and compared to those resulting from two standard blocking indices and different critical thresholds. As compared to single approaches, the novel index shows better agreement with reported proxies of blocking activity, namely climatological regions of simultaneous wave amplification and maximum band-pass filtered height standard deviation. An additional asset of the method is its adaptability to different data sets. As critical thresholds are specific to the data set employed, the method is useful for observations and model simulations of different resolutions, temporal lengths, and time-variant basic states, optimizing its value as a tool for model validation.
    Special attention has been paid to the design of an objective scheme easily applicable

  17. An economical model for mastering the art of intubation with different video laryngoscopes

    Directory of Open Access Journals (Sweden)

    Jitin N Trivedi

    2014-01-01

    Full Text Available A video laryngoscope (VL) provides excellent laryngeal exposure in patients when anaesthesiologists encounter difficulty with direct laryngoscopy. Videolaryngoscopy, like flexible fibreoptic laryngoscopy, demands a certain level of training by practitioners to become dexterous at successful intubation with a given instrument. Due to their cost, VLs are not easily available for training purposes to all students, paramedics, and emergency medical services providers in developing countries. We tried to develop a cost-effective instrument that works analogously to various available VLs. An inexpensive and easily available instrument was used to create an Airtraq Model for VL-guided intubation training on a manikin. Using this technique, successful intubation of the manikin could be achieved. The Airtraq Model mimics the Airtraq Avant® and may be used for VL-guided intubation training for students as well as paramedics, and may decrease the time and shorten the learning curve for the Airtraq® as well as various other VLs.

  18. Video-Quality Estimation Based on Reduced-Reference Model Employing Activity-Difference

    Science.gov (United States)

    Yamada, Toru; Miyamoto, Yoshihiro; Senda, Yuzo; Serizawa, Masahiro

    This paper presents a reduced-reference video-quality estimation method suitable for individual end-user quality monitoring of IPTV services. With the proposed method, the activity values for individual given-size pixel blocks of an original video are transmitted to end-user terminals. At the end-user terminals, the video quality of a received video is estimated on the basis of the activity difference between the original video and the received video. Psychovisual weightings and video-quality score adjustments for fatal degradations are applied to improve estimation accuracy. In addition, low-bit-rate transmission is achieved by using temporal sub-sampling and by transmitting only the lower six bits of each activity value. The proposed method achieves accurate video-quality estimation using only low-bit-rate original video information (15 kbps for SDTV). The correlation coefficient between actual subjective video quality and estimated quality is 0.901 with 15 kbps of side information. The proposed method does not need computationally demanding spatial and gain-and-offset registrations. Therefore, it is suitable for real-time video-quality monitoring in IPTV services.
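The core activity-difference computation can be sketched as follows. "Activity" is taken here to be the per-block variance of luma samples; the block size and the use of plain variance are assumptions for illustration, not the paper's exact feature definition:

```python
def block_activity(pixels, block_size=2):
    """Per-block activity (variance of pixel values) for a 2-D list of
    luma samples -- the reduced-reference feature that would be
    transmitted to the end-user terminal."""
    h, w = len(pixels), len(pixels[0])
    acts = []
    for by in range(0, h, block_size):
        for bx in range(0, w, block_size):
            block = [pixels[y][x]
                     for y in range(by, min(by + block_size, h))
                     for x in range(bx, min(bx + block_size, w))]
            mean = sum(block) / len(block)
            acts.append(sum((p - mean) ** 2 for p in block) / len(block))
    return acts

def activity_difference(orig_acts, recv_acts):
    """Mean absolute difference between original and received activity
    values -- the degradation measure computed before psychovisual
    weighting and score mapping."""
    return sum(abs(a - b) for a, b in zip(orig_acts, recv_acts)) / len(orig_acts)
```

Because only one activity value per block (and only its lower bits, temporally sub-sampled) is transmitted, the side-information rate stays far below the video rate, which is the point of the reduced-reference design.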

  19. Rajiv Aarogyasri Community Health Insurance Scheme in Andhra Pradesh, India: a comprehensive analytic view of private public partnership model.

    Science.gov (United States)

    Reddy, Sunita; Mary, Immaculate

    2013-01-01

    The Rajiv Aarogyasri Community Health Insurance (RACHI) scheme in Andhra Pradesh (AP) has been a very popular social insurance scheme with a private-public partnership model to deal with the problem of catastrophic medical expenditures at the tertiary level of care for poor households. A brief analysis of the RACHI scheme based on officially available data and media reports has been undertaken from a public health perspective to understand the nature and financing of the partnership and the lessons it provides. The analysis of the annual budget spent on surgeries in private hospitals compared to tertiary public hospitals shows that the current scheme is not sustainable and poses a huge burden on the state exchequer. The private hospital associations in AP further act as pressure groups to increase the budget or threaten to withdraw services. Thus, profits are privatized and losses are socialized.

  20. A fully discrete energy stable scheme for a phase-field moving contact line model with variable densities and viscosities

    KAUST Repository

    Zhu, Guangpu

    2018-01-26

    In this paper, a fully discrete scheme which considers temporal and spatial discretizations is presented for the coupled Cahn-Hilliard equation in conserved form with the dynamic contact line condition and the Navier-Stokes equation with the generalized Navier boundary condition. Variable densities and viscosities are incorporated in this model. A rigorous proof of energy stability is provided for the fully discrete scheme based on a semi-implicit temporal discretization and a finite difference method on the staggered grids for the spatial discretization. A splitting method based on the pressure stabilization is implemented to solve the Navier-Stokes equation, while the stabilization approach is also used for the Cahn-Hilliard equation. Numerical results in both 2-D and 3-D demonstrate the accuracy, efficiency and decaying property of discrete energy of the proposed scheme.
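The energy-stability property proved in the paper can be stated schematically in generic phase-field notation; this is an illustrative summary, not the paper's exact discrete operators:

```latex
% Total free energy: Cahn-Hilliard mixing energy, kinetic energy with
% variable density, and a wall energy from the contact line condition.
E(\phi,\mathbf{u}) \;=\; \int_\Omega \left( \frac{\lambda}{2}\,|\nabla \phi|^2
    \;+\; \lambda F(\phi) \;+\; \frac{\rho(\phi)}{2}\,|\mathbf{u}|^2 \right) \mathrm{d}x
    \;+\; \int_{\partial\Omega} \gamma_{fs}(\phi)\,\mathrm{d}s
% Discrete energy stability: the fully discrete scheme guarantees that
% the discrete energy is non-increasing in time,
E^{n+1} \;\le\; E^{n}, \qquad n = 0, 1, 2, \ldots
```

The numerical experiments in the paper verify precisely this decaying property of the discrete energy alongside accuracy and efficiency.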

  1. Procedures and compliance of a video modeling applied behavior analysis intervention for Brazilian parents of children with autism spectrum disorders.

    Science.gov (United States)

    Bagaiolo, Leila F; Mari, Jair de J; Bordini, Daniela; Ribeiro, Tatiane C; Martone, Maria Carolina C; Caetano, Sheila C; Brunoni, Decio; Brentani, Helena; Paula, Cristiane S

    2017-07-01

    Video modeling using applied behavior analysis techniques is one of the most promising and cost-effective ways for parents of children with autism spectrum disorder to improve their children's social skills. The main objectives were: (1) to elaborate/describe videos to improve eye contact and joint attention, and to decrease disruptive behaviors of children with autism spectrum disorder, (2) to describe a low-cost parental training intervention, and (3) to assess participants' compliance. This is a descriptive study of a clinical trial for children with autism spectrum disorder. The parental training intervention was delivered over 22 weeks based on video modeling. Parents with at least 8 years of schooling who had a child with autism spectrum disorder aged between 3 and 6 years with an IQ lower than 70 were invited to participate. A total of 67 parents fulfilled the study criteria and were randomized into two groups: 34 as the intervention and 33 as controls. In all, 14 videos were recorded covering management of disruptive behaviors, prompting hierarchy, preference assessment, and acquisition of better eye contact and joint attention. Compliance varied as follows: good 32.4%, reasonable 38.2%, low 5.9%, and 23.5% with no compliance. Video modeling parental training seems a promising, feasible, and low-cost way to deliver care for children with autism spectrum disorder, particularly for populations with scarce treatment resources.

  2. Techno-Economic Models for Optimised Utilisation of Jatropha curcas Linnaeus under an Out-Grower Farming Scheme in Ghana

    Directory of Open Access Journals (Sweden)

    Isaac Osei

    2016-11-01

    Full Text Available Techno-economic models for optimised utilisation of jatropha oil under an out-grower farming scheme were developed based on different considerations for oil and by-product utilisation. Model 1: Out-grower scheme where oil is exported and press cake utilised for compost. Model 2: Out-grower scheme with six scenarios considered for the utilisation of oil and by-products. Linear programming models were developed based on outcomes of the models to optimise the use of the oil through profit maximisation. The findings revealed that Model 1 was financially viable from the processors’ perspective but not for the farmer at a seed price of $0.07/kg. All scenarios considered under Model 2 were financially viable from the processors’ perspective but not for the farmer at a seed price of $0.07/kg; however, at a seed price of $0.085/kg, financial viability was achieved for both parties. Optimising the utilisation of the oil resulted in an annual maximum profit of $123,300.
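
    The study's linear programs are not reproduced in this record, but the profit-maximisation idea behind them can be sketched. The allocation below is a hypothetical single-constraint illustration (all margins and demand caps are invented, not the study's data): with one shared oil-supply constraint and per-use demand caps, greedy allocation by descending margin reaches the same optimum a general LP solver would.

```python
def allocate_oil(supply_l, uses):
    """Allocate a fixed oil supply (litres) across competing uses.

    uses: list of (name, profit_per_litre, demand_cap_litres) tuples.
    Greedy by descending margin is optimal for this single-resource
    structure, mirroring what an LP solver would return.
    """
    plan, profit = {}, 0.0
    for name, margin, cap in sorted(uses, key=lambda u: -u[1]):
        qty = min(cap, supply_l)   # fill the most profitable use first
        plan[name] = qty
        profit += qty * margin
        supply_l -= qty
    return plan, profit

# invented figures for illustration only
uses = [("export", 0.30, 150_000),
        ("biodiesel", 0.45, 80_000),
        ("soap", 0.20, 60_000)]
plan, profit = allocate_oil(200_000, uses)
print(plan)            # biodiesel capped at 80 000 L; export takes the remaining 120 000 L
print(round(profit))   # → 72000
```

    Note that greedy-by-margin is only optimal for this single-resource structure; the study's multi-scenario models would need a full LP formulation.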

  3. Real-Time Human Detection for Aerial Captured Video Sequences via Deep Models

    Directory of Open Access Journals (Sweden)

    Nouar AlDahoul

    2018-01-01

    Full Text Available Human detection in videos plays an important role in various real-life applications. Most traditional approaches depend on utilizing handcrafted features, which are problem-dependent and optimal only for specific tasks. Moreover, they are highly susceptible to dynamical events such as illumination changes, camera jitter, and variations in object sizes. Feature learning approaches, on the other hand, are cheaper and easier because highly abstract and discriminative features can be produced automatically without the need for expert knowledge. In this paper, we utilize automatic feature learning methods which combine optical flow and three different deep models (i.e., supervised convolutional neural network (S-CNN), pretrained CNN feature extractor, and hierarchical extreme learning machine (H-ELM)) for human detection in videos captured using a nonstatic camera on an aerial platform with varying altitudes. The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset. The comparison between these models in terms of training, testing accuracy, and learning speed is analyzed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrated that the proposed methods are successful for the human detection task. Pretrained CNN produces an average accuracy of 98.09%. S-CNN produces an average accuracy of 95.6% with softmax and 91.7% with Support Vector Machines (SVM). H-ELM has an average accuracy of 95.9%. Using a normal Central Processing Unit (CPU), H-ELM’s training takes 445 seconds. Learning in S-CNN takes 770 seconds with a high-performance Graphical Processing Unit (GPU).

  4. Examples of Video to Communicate Scientific Findings to Non-Scientists: Bayesian Ecological Modeling

    Science.gov (United States)

    Moorman, M.; Harned, D. A.; Cuffney, T.; Qian, S.

    2011-12-01

    The U.S. Geological Survey (USGS) National Water-Quality Assessment Program (NAWQA) provides information about (1) water-quality conditions and how those conditions vary locally, regionally, and nationally, (2) water-quality trends, and (3) factors that affect those conditions. As part of the NAWQA Program, the Effects of Urbanization on Stream Ecosystems (EUSE) study examined the vulnerability and resilience of streams to urbanization. Completion of the EUSE study has resulted in over 20 scientific publications. Video podcasts are being used in addition to these publications to communicate the relevance of these scientific findings to more general audiences such as resource managers, educational groups, public officials, and the general public. An example of one of the podcasts is a film about the results of modeling the effects of urbanization on stream ecology. The film describes some of the results of the EUSE ecological modeling effort and the advantages of the Bayesian and multi-level statistical modeling approaches, while relating the science to fly fishing. The complex scientific discussion combined with the lighter, more popular activity of fly fishing leads to an entertaining forum while educating viewers about a complex topic. This approach is intended to represent the scientists as interesting people with diverse interests. Video can be an effective scientific communication tool for presenting scientific findings to a broad audience. The film is available for access from the EUSE website (http://water.usgs.gov/nawqa/urban/html/podcasts.html). Additional films are planned to be released in 2012 on other USGS project results and programs.

  5. Effects of changes in Italian bioenergy promotion schemes for agricultural biogas projects: Insights from a regional optimization model

    International Nuclear Information System (INIS)

    Chinese, D.; Patrizio, P.; Nardin, G.

    2014-01-01

    Italy has witnessed an extraordinary growth in biogas generation from livestock effluents and agricultural activities in the last few years as well as a severe isomorphic process, leading to a market dominance of 999 kW power plants owned by “entrepreneurial farms”. Under the pressure of the economic crisis in the country, the Italian government has restructured renewable energy support schemes, introducing a new program in 2013. In this paper, the effects of the previous and current support schemes on the optimal plant size, feedstock mix and profitability were investigated by introducing a spatially explicit biogas supply chain optimization model, which accounts for different incentive structures. By applying the model to a regional case study, the homogenization observed to date is recognized as a result of former incentive structures. Considerable reductions in local economic potentials for agricultural biogas power plants without external heat use are estimated. New plants are likely to be manure-based and, due to the lower energy density of such feedstock, wider supply chains are expected although optimal plant size will be smaller. The new support scheme will therefore most likely eliminate past distortions but also slow down investments in agricultural biogas plants. - Highlights: • We review the evolution of agricultural biogas support schemes in Italy over the last 20 years. • A biogas supply chain optimization model which accounts for feed-in tariffs is introduced. • The model is applied to a regional case study under the two most recent support schemes. • Incentives in force until 2013 caused homogenization towards maize-based 999 kWel plants. • Wider, manure-based supply chains feeding smaller plants are expected with future incentives

  6. Application of the MacCormack scheme to overland flow routing for high-spatial resolution distributed hydrological model

    Science.gov (United States)

    Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia

    2018-03-01

    Although process-based distributed hydrological models (PDHMs) have evolved rapidly over the last few decades, their widespread application is still challenged by computational expense. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial-resolution PDHM, i.e., the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. The analytical verification indicates that both the semi- and full versions of the MacCormack scheme exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme. The full version outperforms the semi-version in terms of simulation accuracy when the same time step is adopted. The semi-MacCormack scheme was implemented into DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model were assessed by performing two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve the computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack can be reduced to several dozen minutes for a simulation period of three months (in contrast with one and a half days for the original DHSVM model) without noticeable sacrifice of accuracy. The MacCormack scheme proves to be applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PDHMs for watershed routing to either significantly improve their computational efficiency or to make kinematic wave routing computationally feasible for high-resolution modeling.
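
    The MacCormack scheme named above is a two-step predictor-corrector method: a forward spatial difference in the predictor, averaged with a backward difference applied to the predicted values in the corrector. As a minimal sketch (linear advection on a periodic grid, not the DHSVM kinematic-wave implementation), at a Courant number of 1 the scheme propagates a profile exactly:

```python
import math

def maccormack_step(u, c):
    """One MacCormack step for u_t + a*u_x = 0 on a periodic grid; c = a*dt/dx."""
    n = len(u)
    # predictor: forward spatial difference
    up = [u[i] - c * (u[(i + 1) % n] - u[i]) for i in range(n)]
    # corrector: backward spatial difference on the predicted values
    return [0.5 * (u[i] + up[i] - c * (up[i] - up[i - 1])) for i in range(n)]

n = 64
u0 = [math.exp(-0.5 * ((i - 20) / 4.0) ** 2) for i in range(n)]
u = u0
for _ in range(10):
    u = maccormack_step(u, 1.0)  # Courant number 1

# at c = 1 the scheme is exact: after 10 steps the profile has shifted 10 cells
err = max(abs(u[(i + 10) % n] - u0[i]) for i in range(n))
print(err < 1e-9)  # → True
```

    For the nonlinear kinematic wave used in DHSVM, the same predictor-corrector structure applies with the flux evaluated from the current depth at each sub-step.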

  7. Modelling horizontal steam generator with ATHLET. Verification of different nodalization schemes and implementation of verified constitutive equations

    Energy Technology Data Exchange (ETDEWEB)

    Beliaev, J.; Trunov, N.; Tschekin, I. [OKB Gidropress (Russian Federation); Luther, W. [GRS Garching (Germany); Spolitak, S. [RNC-KI (Russian Federation)

    1995-12-31

    Currently, the ATHLET code is widely applied to the modelling of several WWER-type power plants with horizontal steam generators. A main drawback of all these applications is the insufficient verification of the models for the steam generator. This paper presents the nodalization schemes for the secondary side of the steam generator, the results of stationary calculations, and preliminary comparisons to experimental data. The consideration of circulation in the water inventory of the secondary side proves to be necessary. (orig.). 3 refs.

  8. A general coarse and fine mesh solution scheme for fluid flow modeling in VHTRs

    International Nuclear Information System (INIS)

    Clifford, I; Ivanov, K; Avramova, M.

    2011-01-01

    Coarse mesh Computational Fluid Dynamics (CFD) methods offer several advantages over traditional coarse mesh methods for the safety analysis of helium-cooled graphite-moderated Very High Temperature Reactors (VHTRs). This relatively new approach opens up the possibility for system-wide calculations to be carried out using a consistent set of field equations throughout the calculation, and subsequently the possibility for hybrid coarse/fine mesh or hierarchical multi-scale CFD simulations. To date, a consistent methodology for hierarchical multi-scale CFD has not been developed. This paper describes work carried out in the initial development of a multi-scale CFD solver intended to be used for the safety analysis of VHTRs. The VHTR is considered on any scale to consist of a homogenized two-phase mixture of fluid and stationary solid material of varying void fraction. A consistent set of conservation equations was selected such that they reduce to the single-phase conservation equations for the case where void fraction is unity. The discretization of the conservation equations uses a new pressure interpolation scheme capable of capturing the discontinuity in pressure across relatively large changes in void fraction. Based on this, a test solver was developed which supports fully unstructured meshes for three-dimensional time-dependent compressible flow problems, including buoyancy effects. For typical VHTR flow phenomena the new solver shows promise as an effective candidate for predicting the flow behavior on multiple scales, as it is capable of modeling both fine mesh single phase flows as well as coarse mesh flows in homogenized regions containing both fluid and solid materials. (author)
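
    The homogenized conservation equations described above can be sketched as follows (the notation is ours; the abstract does not give the equations). With void fraction $\gamma$, fluid density $\rho$ and velocity $\mathbf{u}$, the mass and momentum balances of the fluid phase read

```latex
\begin{aligned}
&\frac{\partial (\gamma\rho)}{\partial t} + \nabla\cdot(\gamma\rho\mathbf{u}) = 0,\\
&\frac{\partial (\gamma\rho\mathbf{u})}{\partial t} + \nabla\cdot(\gamma\rho\mathbf{u}\otimes\mathbf{u})
  = -\gamma\nabla p + \nabla\cdot(\gamma\boldsymbol{\tau}) + \gamma\rho\mathbf{g} - \mathbf{F}_{s},
\end{aligned}
```

    where $\mathbf{F}_{s}$ lumps the fluid-solid interaction (drag and, in the energy balance, heat exchange). For $\gamma = 1$, $\mathbf{F}_{s}$ vanishes and the system reduces to the single-phase compressible Navier-Stokes equations, consistent with the reduction property stated in the abstract.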

  9. Video Game Acceptance: A Meta-Analysis of the Extended Technology Acceptance Model.

    Science.gov (United States)

    Wang, Xiaohui; Goh, Dion Hoe-Lian

    2017-11-01

    The current study systematically reviews and summarizes the existing literature of game acceptance, identifies the core determinants, and evaluates the strength of the relationships in the extended technology acceptance model. Moreover, this study segments video games into two categories, hedonic and utilitarian, and examines player acceptance of these two types separately. Through a meta-analysis of 50 articles, we find that perceived ease of use (PEOU), perceived usefulness (PU), and perceived enjoyment (PE) are significantly associated with attitude and behavioral intention. PE is the dominant predictor of hedonic game acceptance, while PEOU and PU are the main determinants of utilitarian game acceptance. Furthermore, we find that respondent type and game platform are significant moderators. Findings of this study provide critical insights into the phenomenon of game acceptance and suggest directions for future research.

  10. Diagnosis and Modeling of the Explosive Development of Winter Storms: Sensitivity to PBL Schemes

    Science.gov (United States)

    Liberato, Margarida L. R.; Pradhan, Prabodha K.

    2014-05-01

    The correct representation of extreme windstorms in regional models is of great importance for impact studies of climate change. The Iberian Peninsula has recently witnessed major damage from intense winter extratropical cyclones like Klaus (January 2009), Xynthia (February 2010) and Gong (January 2013), which formed over the mid-Atlantic and experienced explosive intensification while travelling eastwards at lower latitudes than usual [Liberato et al. 2011; 2013]. In this paper the explosive development of these storms is simulated by the advanced mesoscale Weather Research and Forecasting Model (WRF v 3.4.1), initialized with NCEP Final Analysis (FNL) data as initial and lateral boundary conditions (boundary conditions updated at 3-hour intervals). The simulation experiments are conducted with two domains, a coarser (25 km) domain and a nested (8.333 km) domain, covering the entire North Atlantic and Iberian Peninsula region. The characteristics of these storms (e.g. wind speed, precipitation) are studied with the WRF model and compared with multiple observations. In this context, simulations with different Planetary Boundary Layer (PBL) schemes are performed. This approach aims at understanding which mechanisms favor the explosive intensification of these storms at lower than usual latitudes, thus improving the knowledge of atmospheric dynamics (including small-scale processes) controlling the life cycle of midlatitude extreme storms and contributing to the improvement in predictability and in our ability to forecast storms' impacts over the Iberian Peninsula. Acknowledgments: This work was partially supported by FEDER (Fundo Europeu de Desenvolvimento Regional) funds through the COMPETE (Programa Operacional Factores de Competitividade) and by national funds through FCT (Fundação para a Ciência e a Tecnologia, Portugal) under project STORMEx FCOMP-01-0124-FEDER- 019524 (PTDC/AAC-CLI/121339/2010). References: Liberato M.L.R., J.G. Pinto, I.F. Trigo, R.M. Trigo (2011) Klaus - an

  11. Hydrodynamic modelling of the shock ignition scheme for inertial confinement fusion

    International Nuclear Information System (INIS)

    Vallet, Alexandra

    2014-01-01

    . That significant pressure enhancement is explained by contribution of hot-electrons generated by non-linear laser/plasma interaction in the corona. The proposed analytical models allow to optimize the shock ignition scheme, including the influence of the implosion parameters. Analytical, numerical and experimental results are mutually consistent. (author) [fr

  12. Attention to the Model's Face When Learning from Video Modeling Examples in Adolescents with and without Autism Spectrum Disorder

    Science.gov (United States)

    van Wermeskerken, Margot; Grimmius, Bianca; van Gog, Tamara

    2018-01-01

    We investigated the effects of seeing the instructor's (i.e., the model's) face in video modeling examples on students' attention and their learning outcomes. Research with university students suggested that the model's face attracts students' attention away from what the model is doing, but this did not hamper learning. We aimed to investigate…

  13. Sensitivity analysis of WRF model PBL schemes in simulating boundary-layer variables in southern Italy: An experimental campaign

    Science.gov (United States)

    Avolio, E.; Federico, S.; Miglietta, M. M.; Lo Feudo, T.; Calidonna, C. R.; Sempreviva, A. M.

    2017-08-01

    The sensitivity of boundary layer variables to five (two non-local and three local) planetary boundary-layer (PBL) parameterization schemes, available in the Weather Research and Forecasting (WRF) mesoscale meteorological model, is evaluated in an experimental site in Calabria region (southern Italy), in an area characterized by a complex orography near the sea. Results of 1 km × 1 km grid spacing simulations are compared with the data collected during a measurement campaign in summer 2009, considering hourly model outputs. Measurements from several instruments are taken into account for the performance evaluation: near surface variables (2 m temperature and relative humidity, downward shortwave radiation, 10 m wind speed and direction) from a surface station and a meteorological mast; vertical wind profiles from Lidar and Sodar; also, the aerosol backscattering from a ceilometer to estimate the PBL height. Results covering the whole measurement campaign show a cold and moist bias near the surface, mostly during daytime, for all schemes, as well as an overestimation of the downward shortwave radiation and wind speed. Wind speed and direction are also verified at vertical levels above the surface, where the model uncertainties are, usually, smaller than at the surface. A general anticlockwise rotation of the simulated flow with height is found at all levels. The mixing height is overestimated by all schemes and a possible role of the simulated sensible heat fluxes for this mismatching is investigated. On a single-case basis, significantly better results are obtained when the atmospheric conditions near the measurement site are dominated by synoptic forcing rather than by local circulations. From this study, it follows that the two first order non-local schemes, ACM2 and YSU, are the schemes with the best performance in representing parameters near the surface and in the boundary layer during the analyzed campaign.

  14. A Bilingual Child Learns Social Communication Skills through Video Modeling-A Single Case Study in a Norwegian School Setting

    Directory of Open Access Journals (Sweden)

    Meral Özerk

    2015-09-01

    Full Text Available Video modeling is one of the recognized methods used in the training and teaching of children with Autism Spectrum Disorders (ASD). The model’s theoretical base stems from Albert Bandura's (1977; 1986) social learning theory, in which he asserts that children can learn many skills and behaviors observationally through modeling. One can assume that by observing others, a child with ASD can construct an idea of how new behaviors are performed, and on later occasions this mentally and visually constructed information will serve as a guide for his/her way of behaving. There are two types of methods for model learning: (1) In Vivo Modeling and (2) Video Modeling. These can be used (a) to teach children with ASD skills that are not yet in their behavioral repertoire and/or (b) to improve the children's emerging behaviors or skills. In the case of linguistic minority children at any stage of their bilingual development, it has been presumed that some of their behaviors can be interpreted as attitude- or culture-related actions. This approach, however, can sometimes delay referral, diagnosis, and intervention. In our project, we used Video Modeling and achieved positive results with regard to teaching social communication skills and target behavior to an eleven-year-old bilingual boy with ASD. Our study also reveals that through Video Modeling, children with ASD can learn desirable behavioral skills as by-products. Video Modeling can also contribute positively to the social inclusion of bilingual children with ASD in school settings. In other words, bilingual children with ASD can transfer the social communication skills and targeted behaviors they learn through the second language at school to a first-language milieu.

  16. Modelling the mortality of members of group schemes in South Africa

    African Journals Online (AJOL)

    In this paper, the methodology underlying the graduation of the mortality of members of group schemes in South Africa underwritten by life insurance companies under group life-insurance arrangements is described and the results are presented. A multivariate parametric curve was fitted to the data for the working ages 25 ...

  17. Improvement in the Modeled Representation of North American Monsoon Precipitation Using a Modified Kain–Fritsch Convective Parameterization Scheme

    Directory of Open Access Journals (Sweden)

    Thang M. Luong

    2018-01-01

    Full Text Available A commonly noted problem in the simulation of warm season convection in the North American monsoon region has been the inability of atmospheric models at the meso-β scales (10 s to 100 s of kilometers) to simulate organized convection, principally mesoscale convective systems. With the use of convective parameterization, high precipitation biases in model simulations are typically observed over the peaks of mountain ranges. To address this issue, the Kain–Fritsch (KF) cumulus parameterization scheme has been modified with new diagnostic equations to compute the updraft velocity, the convective available potential energy closure assumption, and the convective trigger function. The scheme has been adapted for use in the Weather Research and Forecasting (WRF) model. A numerical weather prediction-type simulation is conducted for the North American Monsoon Experiment Intensive Observing Period 2 and a regional climate simulation is performed, by dynamically downscaling. In both of these applications, there are notable improvements in the WRF model-simulated precipitation due to the better representation of organized, propagating convection. The use of the modified KF scheme for atmospheric model simulations may provide a more computationally economical alternative to improve the representation of organized convection, as compared to convective-permitting simulations at the kilometer scale or a super-parameterization approach.
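
    The modified closure described above is built around convective available potential energy (CAPE). The abstract does not give the new diagnostic equations, but the standard CAPE integral they build on can be sketched (pure Python, with a hypothetical two-level sounding rather than real data):

```python
def cape(z_m, t_parcel_k, t_env_k, g=9.81):
    """CAPE (J/kg): trapezoidal integral of positive parcel buoyancy
    b = g*(Tp - Te)/Te over height, for profiles at matching heights."""
    total = 0.0
    for i in range(len(z_m) - 1):
        b0 = max(g * (t_parcel_k[i] - t_env_k[i]) / t_env_k[i], 0.0)
        b1 = max(g * (t_parcel_k[i + 1] - t_env_k[i + 1]) / t_env_k[i + 1], 0.0)
        total += 0.5 * (b0 + b1) * (z_m[i + 1] - z_m[i])
    return total

# a parcel 2 K warmer than a 300 K environment through a 10 km layer
print(round(cape([0.0, 10_000.0], [302.0, 302.0], [300.0, 300.0])))  # → 654
```

    In a CAPE-closure cumulus scheme, the parameterized convective mass flux is adjusted so as to consume a fraction of this integral over a specified timescale.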

  18. A new Dynamic Dust-emission Rate (DDR) scheme based on satellite remote sensing data for air quality models

    Science.gov (United States)

    Tang, Yu Jia; Li, Ling Jun; Zhou, Yi Ming; Zhang, Da Wei; Yin, Wen Jun; Zhang, Meng; Xie, Bao Guo; Cheng, Nianliang

    2017-04-01

    Dust produced by wind erosion is a major source of atmospheric dust pollution, which has impacts on air quality, weather and climate. It is difficult to calculate dust concentration in the atmosphere with certainty unless the dust-emission rate can be estimated with accuracy. Hence, due to the unreliable estimation of the dust-emission flux from the ground surface, the dust forecast accuracy of air quality models is low. The main reason is that the parameter describing the dust-emission rate in regional air quality models is constant and cannot reflect actual changes in surface dust emission. A new scheme which uses vegetation information from satellite remote sensing data and meteorological conditions provided by a meteorological forecast model is developed to estimate the actual dust-emission rate from the ground surface. The results show that the new scheme can improve dust simulation and forecast performance significantly and reduce the root mean square error by 25%-68%. The DDR scheme can be coupled with any current air quality model (e.g. WRF-Chem, CMAQ, CAMx) to produce more accurate dust forecasts.
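
    The abstract does not state the DDR scheme's equations. A common saltation-style functional form can illustrate the two ingredients it names, wind dependence and satellite-derived vegetation information; every constant and the exact form below are invented for illustration, not the authors' DDR formula:

```python
def dust_emission_rate(u_star, veg_cover, u_star_t=0.25, c=1.0e-5):
    """Illustrative dynamic dust-emission rate: zero below the threshold
    friction velocity u_star_t (m/s), roughly cubic in u_star above it,
    and suppressed linearly by vegetation cover in [0, 1] (e.g. derived
    from a satellite NDVI product)."""
    if u_star <= u_star_t:
        return 0.0
    return c * max(1.0 - veg_cover, 0.0) * u_star ** 3 * (1.0 - (u_star_t / u_star) ** 2)

# bare soil emits most, vegetated surfaces emit less, calm conditions emit nothing
print(dust_emission_rate(0.6, 0.0) > dust_emission_rate(0.6, 0.5) > dust_emission_rate(0.2, 0.0))  # → True
```

    The dynamic element of such a scheme is that `veg_cover` is updated from current satellite retrievals rather than held at a fixed climatological value.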

  19. Improvement in the Modeled Representation of North American Monsoon Precipitation Using a Modified Kain–Fritsch Convective Parameterization Scheme

    KAUST Repository

    Luong, Thang

    2018-01-22

    A commonly noted problem in the simulation of warm season convection in the North American monsoon region has been the inability of atmospheric models at the meso-β scales (10 s to 100 s of kilometers) to simulate organized convection, principally mesoscale convective systems. With the use of convective parameterization, high precipitation biases in model simulations are typically observed over the peaks of mountain ranges. To address this issue, the Kain–Fritsch (KF) cumulus parameterization scheme has been modified with new diagnostic equations to compute the updraft velocity, the convective available potential energy closure assumption, and the convective trigger function. The scheme has been adapted for use in the Weather Research and Forecasting (WRF). A numerical weather prediction-type simulation is conducted for the North American Monsoon Experiment Intensive Observing Period 2 and a regional climate simulation is performed, by dynamically downscaling. In both of these applications, there are notable improvements in the WRF model-simulated precipitation due to the better representation of organized, propagating convection. The use of the modified KF scheme for atmospheric model simulations may provide a more computationally economical alternative to improve the representation of organized convection, as compared to convective-permitting simulations at the kilometer scale or a super-parameterization approach.

  20. The Use of Video Self-Modeling to Increase On-Task Behavior in Children with High-Functioning Autism

    Science.gov (United States)

    Schatz, Rochelle B.; Peterson, Rachel K.; Bellini, Scott

    2016-01-01

    In the present study, the researchers implemented a video self-modeling intervention for increasing on-task classroom behavior for three elementary school students diagnosed with an autism spectrum disorder. The researchers observed the students' on-task engagement three times a week during their respective math classes. A multiple baseline design…

  1. The Combined Use of Video Modeling and Social Stories in Teaching Social Skills for Individuals with Intellectual Disability

    Science.gov (United States)

    Gül, Seray Olçay

    2016-01-01

    There are many studies in the literature in which individuals with intellectual disabilities exhibit social skills deficits and which show the need for teaching these skills systematically. This study aims to investigate the effects of an intervention package consisting of computer-presented video modeling and Social Stories on individuals with…

  2. Effects of Mother-Delivered Social Stories and Video Modeling in Teaching Social Skills to Children with Autism Spectrum Disorders

    Science.gov (United States)

    Acar, Cimen; Tekin-Iftar, Elif; Yikmis, Ahmet

    2017-01-01

    An adapted alternating treatments design was used to compare mother-developed and delivered social stories and video modeling in teaching social skills to children with autism spectrum disorder (ASD). Mothers' opinions about the social validity of the study were also examined. Three mother-child dyads participated in the study. Results showed that…

  3. Brief Report: Remotely Delivered Video Modeling for Improving Oral Hygiene in Children with ASD: A Pilot Study

    Science.gov (United States)

    Popple, Ben; Wall, Carla; Flink, Lilli; Powell, Kelly; Discepolo, Keri; Keck, Douglas; Mademtzi, Marilena; Volkmar, Fred; Shic, Frederick

    2016-01-01

    Children with autism have heightened risk of developing oral health problems. Interventions targeting at-home oral hygiene habits may be the most effective means of improving oral hygiene outcomes in this population. This randomized control trial examined the effectiveness of a 3-week video-modeling brushing intervention delivered to patients over…

  4. New magnetic-field-based weighted-residual quasi-static finite element scheme for modeling bulk magnetostriction

    Science.gov (United States)

    Kannan, Kidambi S.; Dasgupta, Abhijit

    1998-04-01

    Deformation control of smart structures and damage detection in smart composites by magneto-mechanical tagging are just a few of the increasing number of applications of polydomain, polycrystalline magnetostrictive materials that are currently being researched. Robust computational models of bulk magnetostriction will be of great assistance to designers of smart structures for optimization of performance and development of control strategies. This paper discusses the limitations of existing tools, and reports on the work of the authors in developing a 3D nonlinear continuum finite element scheme for magnetostrictive structures, based on an appropriate Galerkin variational principle and incremental constitutive relations. The unique problems posed by the form of the equations governing magneto-mechanical interactions as well as their impact on the proper choice of variational and finite element discretization schemes are discussed. An adaptation of vectorial edge functions for interpolation of the magnetic field in hexahedral elements is outlined. The differences between the proposed finite element scheme and available formulations are also discussed in this paper. Computational results obtained from the newly proposed scheme will be presented in a future paper.

  5. New IES Scheme for Power Conditioning at Ultra-High Currents: from Concept to MHD Modeling and First Experiments

    Science.gov (United States)

    Chuvatin, Alexandre S.; Rudakov, Leonid I.; Kokshenev, Vladimir A.; Aranchuk, Leonid E.; Huet, Dominique; Gasilov, Vladimir A.; Krukovskii, Alexandre Yu.; Kurmaev, Nikolai E.; Fursov, Fiodor I.

    2002-12-01

    This work introduces an inductive energy storage (IES) scheme aimed at pulsed-power conditioning at multi-MJ energies. The key element of the scheme is an additional plasma volume, where a magnetically accelerated wire array is used for inductive current switching. This plasma acceleration volume is connected in parallel to a microsecond capacitor bank and to a useful load with a 100-ns current rise time. Simple estimates suggest that optimized scheme parameters are reachable even when operating at ultra-high currents. We describe first proof-of-principle experiments carried out on the GIT12 generator [1] at a wire-array current level of 2 MA. The confirmation of the concept consists in the generation of a 200 kV voltage directly at an inductive load. This load voltage is already sufficient to transfer the available magnetic energy into the kinetic energy of a liner at this current level. Two-dimensional modeling with the radiation MHD numerical tool Marple [2] confirms the development of the inductive voltage in the system. However, the average voltage increase is accompanied by short-duration voltage drops due to interception of the current by the low-density upstream plasma. In our view, this instability of the current distribution is the main physical limitation on the scheme's performance.

  6. Modeling and Analysis of Resonance in LCL-Type Grid-Connected Inverters under Different Control Schemes

    Directory of Open Access Journals (Sweden)

    Yanxue Yu

    2017-01-01

    Full Text Available As a basic building block in power systems, the three-phase voltage-source inverter (VSI) connects distributed energy sources to the grid. For the inductor-capacitor-inductor (LCL)-filtered three-phase VSI, four main control schemes exist, depending on the current sampling position and the reference frame used. Each control scheme presents different impedance characteristics over its corresponding frequency range. To analyze the resonance phenomena caused by variations in grid impedance, the sequence impedance models of LCL-type grid-connected three-phase inverters under the different control schemes are derived using the harmonic linearization method. The impedance-based stability analysis approach is then applied to compare relative stability, given the impedance differences at certain frequencies, and to choose the best control scheme and the better method for tuning the controller parameters of the LCL-type three-phase VSI. Simulations and experiments both validate the resonance analysis results.
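The resonance the abstract analyzes originates in the LCL filter network itself; its undamped resonance frequency follows from the two inductances and the filter capacitance. A minimal check with illustrative component values (not taken from the paper):

```python
import math

def lcl_resonance_hz(l1, l2, c):
    """Undamped LCL filter resonance: f_res = (1/2pi) * sqrt((L1+L2)/(L1*L2*C))."""
    return math.sqrt((l1 + l2) / (l1 * l2 * c)) / (2.0 * math.pi)

# Illustrative values: 2 mH inverter-side, 1 mH grid-side, 10 uF filter capacitor
f_res = lcl_resonance_hz(2e-3, 1e-3, 10e-6)
print(round(f_res))  # ≈ 1.9 kHz, between typical grid and switching frequencies
```

Grid-impedance variations effectively add inductance in series with the grid-side inductor, shifting this frequency downward, which is why the abstract's stability comparison is made against varying grid impedance.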

  7. Dashboard Videos

    Science.gov (United States)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  8. Student Teachers' Modeling of Acceleration Using a Video-Based Laboratory in Physics Education: A Multimodal Case Study

    Directory of Open Access Journals (Sweden)

    Louis Trudel

    2016-06-01

    Full Text Available This exploratory study intends to model kinematics learning of a pair of student teachers when exposed to prescribed teaching strategies in a video-based laboratory. Two student teachers were chosen from the Francophone B.Ed. program of the Faculty of Education of a Canadian university. The study method consisted of having the participants interact with a video-based laboratory to complete two activities for learning properties of acceleration in rectilinear motion. Time limits were placed on the learning activities during which the researcher collected detailed multimodal information from the student teachers' answers to questions, the graphs they produced from experimental data, and the videos taken during the learning sessions. As a result, we describe the learning approach each one followed, the evidence of conceptual change and the difficulties they face in tackling various aspects of the accelerated motion. We then specify advantages and limits of our research and propose recommendations for further study.

  9. Video Quality Prediction over Wireless 4G

    KAUST Repository

    Lau, Chun Pong

    2013-04-14

    In this paper, we study the problem of video quality prediction over wireless 4G networks. Video transmission data were collected from a real 4G SCM testbed to investigate the factors that affect video quality. After feature transformation and selection on the video and network parameters, video quality is predicted by solving a regression problem. Experimental results show that the dominant factor in video quality is channel attenuation, and that video quality can be well estimated by our models with small errors.
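The regression step can be sketched with ordinary least squares on a single predictor; the (attenuation, quality) pairs below are hypothetical, not data from the 4G testbed:

```python
def fit_line(xs, ys):
    """Ordinary least squares for one predictor:
    slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical samples: quality score drops as channel attenuation (dB) grows
data = [(10, 4.5), (20, 4.0), (30, 3.4), (40, 2.9), (50, 2.4)]
slope, intercept = fit_line([a for a, _ in data], [q for _, q in data])
print(round(slope, 3), round(intercept, 2))  # -0.053 5.03
```

In practice the paper's models use multiple selected features; a single-feature fit like this one only illustrates the dominant attenuation-quality relationship.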

  10. Finite Volume schemes on unstructured grids for non-local models: Application to the simulation of heat transport in plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Goudon, Thierry, E-mail: thierry.goudon@inria.fr [Team COFFEE, INRIA Sophia Antipolis Mediterranee (France); Labo. J.A. Dieudonne CNRS and Univ. Nice-Sophia Antipolis (UMR 7351), Parc Valrose, 06108 Nice cedex 02 (France); Parisot, Martin, E-mail: martin.parisot@gmail.com [Project-Team SIMPAF, INRIA Lille Nord Europe, Park Plazza, 40 avenue Halley, F-59650 Villeneuve d' Ascq cedex (France)

    2012-10-15

    In the so-called Spitzer-Haerm regime, equations of plasma physics reduce to a nonlinear parabolic equation for the electronic temperature. Coming back to the derivation of this limiting equation through hydrodynamic regime arguments, one is led to construct a hierarchy of models where the heat fluxes are defined through a non-local relation which can be reinterpreted as well by introducing coupled diffusion equations. We address the question of designing numerical methods to simulate these equations. The basic requirement for the scheme is to be asymptotically consistent with the Spitzer-Haerm regime. Furthermore, the constraints of physically realistic simulations make the use of unstructured meshes unavoidable. We develop a Finite Volume scheme, based on Vertex-Based discretization, which reaches these objectives. We discuss on numerical grounds the efficiency of the method, and the ability of the generalized models in capturing relevant phenomena missed by the asymptotic problem.

  11. Improving Hydrological Models by Applying Air Mass Boundary Identification in a Precipitation Phase Determination Scheme

    Science.gov (United States)

    Feiccabrino, James; Lundberg, Angela; Sandström, Nils

    2013-04-01

    Many hydrological models determine precipitation phase using surface weather station data. However, there are a declining number of augmented weather stations reporting manually observed precipitation phases, and a large number of automated observing systems (AOS) which do not report precipitation phase. Automated precipitation phase determination suffers from low accuracy in the precipitation phase transition zone (PPTZ), i.e. the temperature range -1 °C to 5 °C where rain, snow and mixed precipitation are all possible. Therefore, it is valuable to revisit surface-based precipitation phase determination schemes (PPDS) while manual verification is still widely available. Hydrological and meteorological approaches to PPDS are vastly different. Most hydrological models apply surface meteorological data in one of two main PPDS approaches. The first is a single rain/snow threshold temperature (TRS); the second uses a formula describing how the mixed precipitation phase changes between the threshold temperatures TS (below which all precipitation is considered snow) and TR (above which all precipitation is considered rain). Both approaches, however, ignore the effect of lower-tropospheric conditions on surface precipitation phase. An alternative could be to apply a meteorological approach in a hydrological model. Many meteorological approaches rely on weather balloon data to determine the initial precipitation phase, and on latent heat transfer for the melting or freezing of precipitation falling through the lower troposphere. These approaches can improve hydrological PPDS, but would require additional input data. Therefore, it would be beneficial to link expected lower-tropospheric conditions to AOS data already used by the model. In a single air mass, rising air can be assumed to cool at a steady rate due to the decrease in atmospheric pressure. When two air masses meet, warm air is forced to ascend over the denser cold air. This causes a thin sharp warming (frontal
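The two surface-based PPDS approaches contrasted above can be sketched as follows; the threshold values are illustrative defaults, not values from the study:

```python
def snow_fraction_single(t_air, t_rs=1.0):
    """Single rain/snow threshold TRS: all snow at or below, all rain above."""
    return 1.0 if t_air <= t_rs else 0.0

def snow_fraction_linear(t_air, t_s=-1.0, t_r=3.0):
    """Linear mixed-phase ramp between TS (all snow) and TR (all rain)."""
    if t_air <= t_s:
        return 1.0
    if t_air >= t_r:
        return 0.0
    return (t_r - t_air) / (t_r - t_s)

print(snow_fraction_linear(1.0))  # 0.5, midway through the transition zone
```

Both functions depend only on surface air temperature, which is exactly the limitation the abstract raises: neither sees the lower-tropospheric air mass structure above the station.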

  12. Reconstruction of an incident using video images and 3D models

    NARCIS (Netherlands)

    Iersel, M. van; Bijhold, J.; Edelman, G.

    2008-01-01

    The growing number of security cameras increases the chance that important video footage is available for the reconstruction of an incident. With the right methods for analyzing large amounts of video, it is possible to make use of this relevant information. In collaboration with the Netherlands

  13. Modeling 3D Unknown object by Range Finder and Video Camera ...

    African Journals Online (AJOL)

    Computer-generated and video images are superimposed. The man-machine interface functions deal mainly with on line building of graphic aids to improve perception, updating the geometric database of the robotic site, and video control of the robot. The superimposition of the real and virtual worlds is carried out through ...

  14. A General Framework for Edited Video and Raw Video Summarization.

    Science.gov (United States)

    Li, Xuelong; Zhao, Bin; Lu, Xiaoqiang

    2017-08-01

    In this paper, we build a general summarization framework for both edited video and raw video summarization. Overall, our work can be divided into three parts. 1) Four models are designed to capture the properties of video summaries, i.e., containing important people and objects (importance), being representative of the video content (representativeness), containing no similar key-shots (diversity), and preserving smoothness of the storyline (storyness). Specifically, these models are applicable to both edited videos and raw videos. 2) A comprehensive score function is built as the weighted combination of the aforementioned four models. Note that the weights of the four models in the score function, denoted as property-weights, are learned in a supervised manner, and are learned for edited videos and raw videos separately. 3) The training set is constructed with both edited videos and raw videos in order to make up for the lack of training data. In particular, each training video is equipped with a pair of mixing coefficients, which reduce the structural mess in the training set caused by the rough mixture. We test our framework on three data sets, including edited videos, short raw videos, and long raw videos. Experimental results have verified the effectiveness of the proposed framework.

  15. Improvement to microphysical schemes in WRF Model based on observed data, part I: size distribution function

    Science.gov (United States)

    Shan, Y.; Eric, W.; Gao, L.; Zhao, T.; Yin, Y.

    2015-12-01

    In this study, we evaluated the performance of size distribution functions (SDFs) with two and three moments in fitting the observed size distributions of rain droplets at three different heights. The goal is to improve the microphysics schemes in mesoscale models such as the Weather Research and Forecasting (WRF) model. Rain droplets were observed during eight periods of different rain types at three stations on the Yellow Mountain in East China. The SDFs considered were the M-P distribution, the Gamma SDF with a fixed shape parameter (FSP), Gamma SDFs with the shape parameter obtained by three diagnostic methods based on Milbrandt (2010; denoted DSPM10), Milbrandt (2005; denoted DSPM05) and Seifert (2008; denoted DSPS08), a method solving for the shape parameter (SSP), and the Lognormal SDF. Based on preliminary experiments, three ensemble methods for deciding the Gamma SDF were also developed and assessed. The magnitude of the average relative error from applying an FSP was 10^-2 for fitting the 0th-order moment of the observed droplet distribution, rising to 10^-1 for the 1st-4th order moments and 10^0 for the 5th-6th order moments. To different extents, the DSPM10, DSPM05, DSPS08, SSP and ensemble methods improved the fitting accuracy for the 0th-6th order moments, especially the method coupling SSP and DSPS08, which gave average relative errors of 6.46% for the 1st-4th order moments and 11.90% for the 5th-6th order moments. The relative error of fitting three moments using the Lognormal SDF was much larger than that of the Gamma SDF. The threshold value of the shape parameter ranged from 0 to 8, since values beyond this range could cause overflow in the calculation. When the average diameter of rain droplets was less than 2 mm, the possibility of an unavailable shape parameter value (USPV) increased with decreasing droplet size. Fitting accuracy was strongly sensitive to the moment group. When the ensemble method coupling SSP and DSPS08 was used, a better fit
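As a simpler illustration than the diagnostic shape-parameter methods compared above, a two-parameter gamma SDF can be fitted by the method of moments, recovering shape and scale from the sample mean and variance (the droplet diameters below are hypothetical, not the Yellow Mountain observations):

```python
def gamma_moments_fit(diameters):
    """Method-of-moments fit of a two-parameter gamma distribution:
    shape k = mean^2 / variance, scale theta = variance / mean."""
    n = len(diameters)
    mean = sum(diameters) / n
    var = sum((d - mean) ** 2 for d in diameters) / n
    return mean * mean / var, var / mean

# Hypothetical droplet diameters (mm)
k, theta = gamma_moments_fit([0.5, 0.8, 1.1, 1.4, 2.0, 2.6])
print(round(k, 2), round(theta, 2))  # 3.84 0.36
```

By construction the fitted parameters reproduce the first two sample moments exactly (k * theta equals the mean), which is why higher-order moments, as the study shows, are where fitting methods really diverge.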

  16. A Coupled Hidden Conditional Random Field Model for Simultaneous Face Clustering and Naming in Videos

    KAUST Repository

    Zhang, Yifan

    2016-08-18

    For face naming in TV series or movies, a typical approach is to use subtitle/script alignment to get the time stamps of the names and tag them to the faces. We study the problem of face naming in videos when subtitles are not available. To this end, we divide the problem into two tasks: face clustering, which groups the faces depicting a certain person into a cluster, and name assignment, which associates a name with each face. Each task is formulated as a structured prediction problem and modeled by a hidden conditional random field (HCRF). We argue that the two tasks are correlated problems whose outputs can provide prior knowledge of the target prediction for each other. The two HCRFs are coupled in a unified graphical model, called a coupled HCRF, in which the joint dependence of the cluster labels and the face-name association is naturally embedded in the correlation between the two HCRFs. We provide an effective algorithm to optimize the two HCRFs iteratively, and the performance of both tasks on a real-world data set can thereby be improved.

  17. Phase Transitions for Quantum XY-Model on the Cayley Tree of Order Three in Quantum Markov Chain Scheme

    International Nuclear Information System (INIS)

    Mukhamedov, Farrukh; Saburov, Mansoor

    2010-06-01

    In the present paper we study forward Quantum Markov Chains (QMC) defined on a Cayley tree. Using the tree structure of graphs, we give a construction of quantum Markov chains on a Cayley tree. By means of such constructions we prove the existence of a phase transition for the XY-model on a Cayley tree of order three in QMC scheme. By the phase transition we mean the existence of two distinct QMC for the given family of interaction operators {K }. (author)

  18. Stability and Convergence Analysis of Second-Order Schemes for a Diffuse Interface Model with Peng-Robinson Equation of State

    KAUST Repository

    Peng, Qiujin

    2017-09-18

    In this paper, we present two second-order numerical schemes to solve the fourth order parabolic equation derived from a diffuse interface model with Peng-Robinson Equation of state (EOS) for pure substance. The mass conservation, energy decay property, unique solvability and L-infinity convergence of these two schemes are proved. Numerical results demonstrate the good approximation of the fourth order equation and confirm reliability of these two schemes.

  19. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon, and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.

  20. Impact of the snow cover scheme on snow distribution and energy budget modeling over the Tibetan Plateau

    Science.gov (United States)

    Xie, Zhipeng; Hu, Zeyong; Xie, Zhenghui; Jia, Binghao; Sun, Genhou; Du, Yizhen; Song, Haiqing

    2018-02-01

    This paper presents the impact of two snow cover schemes (NY07 and SL12) in the Community Land Model version 4.5 (CLM4.5) on the snow distribution and surface energy budget over the Tibetan Plateau. The simulated snow cover fraction (SCF), snow depth, and snow cover days were evaluated against in situ snow depth observations and a satellite-based snow cover product and snow depth dataset. The results show that the SL12 scheme, which considers snow accumulation and snowmelt processes separately, has a higher overall accuracy (81.8%) than NY07 (75.8%). However, SL12 underestimates the SCF at a rate of 15.1%, while NY07 overestimates it at a rate of 15.2%. Both schemes capture the distribution of the maximum snow depth well but show large positive biases in the average value through all periods (3.37, 3.15, and 1.48 cm for NY07; 3.91, 3.52, and 1.17 cm for SL12) and overestimate snow cover days compared with the satellite-based product and in situ observations. Higher altitudes show larger root-mean-square errors (RMSEs) in the simulations of snow depth and snow cover days during the snow-free period. Moreover, the surface energy flux estimates from the SL12 scheme are generally superior to those from NY07 when evaluated against ground-based observations, in particular for net radiation and sensible heat flux. This study has great implications for further improvement of subgrid-scale snow variations over the Tibetan Plateau.
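The NY07 scheme diagnoses SCF from snow depth and snow density. A commonly cited form of the Niu and Yang (2007) parameterization is sketched below; the ground roughness length, fresh-snow density, and melting exponent are typical default values and should be treated as assumptions rather than the exact CLM4.5 configuration:

```python
import math

def scf_ny07(snow_depth_m, rho_snow, z0g=0.01, rho_new=100.0, m=1.0):
    """NY07-style snow cover fraction:
    SCF = tanh(h / (2.5 * z0g * (rho_snow / rho_new)**m)).
    z0g, rho_new (kg/m^3) and exponent m are assumed default values."""
    return math.tanh(snow_depth_m / (2.5 * z0g * (rho_snow / rho_new) ** m))

# Denser (aged) snow of the same depth covers less of the grid cell
print(round(scf_ny07(0.05, 100.0), 2), round(scf_ny07(0.05, 300.0), 2))  # 0.96 0.58
```

The density dependence is the distinctive feature: it lets the same snow depth map to different cover fractions through the season, which is the behavior the paper's SCF comparison probes.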

  1. Evaluating the performance of SURFEXv5 as a new land surface scheme for the ALADINcy36 and ALARO-0 models

    Science.gov (United States)

    Hamdi, R.; Degrauwe, D.; Duerinckx, A.; Cedilnik, J.; Costa, V.; Dalkilic, T.; Essaouini, K.; Jerczynki, M.; Kocaman, F.; Kullmann, L.; Mahfouf, J.-F.; Meier, F.; Sassi, M.; Schneider, S.; Váňa, F.; Termonia, P.

    2014-01-01

    The newly developed land surface scheme SURFEX (SURFace EXternalisée) is implemented in a limited-area numerical weather prediction model running operationally in a number of countries of the ALADIN and HIRLAM consortia. The primary question addressed is whether SURFEX can serve as the new land surface scheme, assessing its potential use in an operational configuration in place of the original ISBA (Interactions between Soil, Biosphere, and Atmosphere) scheme. The results show that the introduction of SURFEX either improves or has a neutral impact on the 2 m temperature, 2 m relative humidity, and 10 m wind. However, SURFEX has a tendency to produce higher maximum temperatures at high-elevation stations during winter daytime, which degrades the 2 m temperature scores. In addition, surface radiative and energy fluxes improve compared to observations from the Cabauw tower. The results also show that promising improvements, with a demonstrated positive impact on forecast performance, are achieved by introducing the town energy balance (TEB) scheme. The use of SURFEX has a neutral impact on the precipitation scores. However, the implementation of TEB within SURFEX for a high-resolution run tends to concentrate rainfall locally, and the total accumulated precipitation clearly decreases during the summer. One of the novel features of SURFEX is the availability of more advanced surface data assimilation using the extended Kalman filter. The results over Belgium show that the forecast scores are similar between the extended Kalman filter and the classical optimal interpolation scheme. Finally, concerning the vertical scores, the introduction of SURFEX either shows improvement or has a neutral impact in the free atmosphere.

  2. Impact of a monotonic advection scheme with low numerical diffusion on transport modeling of emissions from biomass burning

    Directory of Open Access Journals (Sweden)

    Saulo Frietas

    2012-01-01

    Full Text Available An advection scheme, which maintains the initial monotonic characteristics of a tracer field being transported and at the same time produces low numerical diffusion, is implemented in the Coupled Chemistry-Aerosol-Tracer Transport model to the Brazilian developments on the Regional Atmospheric Modeling System (CCATT-BRAMS). Several comparisons of transport modeling using the new and original (non-monotonic) CCATT-BRAMS formulations are performed. Idealized 2-D non-divergent or divergent and stationary or time-dependent wind fields are used to transport sharply localized tracer distributions, as well as to verify whether an existing correlation of the mass mixing ratios of two interrelated tracers is kept during the transport simulation. Further comparisons are performed using realistic 3-D wind fields. We then perform full simulations of real cases using data assimilation and complete atmospheric physics. In these simulations, we address the impacts of both advection schemes on the transport of biomass burning emissions and the formation of secondary species from non-linear chemical reactions of precursors. The results show that the new scheme produces much more realistic transport patterns, without generating spurious oscillations and under- and overshoots or spreading mass away from the local peaks. Increasing the numerical diffusion in the original scheme in order to remove the spurious oscillations and maintain the monotonicity of the transported field causes excessive smoothing in the tracer distribution, reducing the local gradients and maximum values and unrealistically spreading mass away from the local peaks. As a result, huge differences (hundreds of %) for relatively inert tracers (like carbon monoxide) are found in the smoke plume cores. In terms of the secondary chemical species formed by non-linear reactions (like ozone), we found differences of up to 50% in our simulations.

  3. The Dynamic Model Embed in Augmented Graph Cuts for Robust Hand Tracking and Segmentation in Videos

    Directory of Open Access Journals (Sweden)

    Jun Wan

    2014-01-01

    Full Text Available Segmenting human hand is important in computer vision applications, for example, sign language interpretation, human computer interaction, and gesture recognition. However, some serious bottlenecks still exist in hand localization systems such as fast hand motion capture, hand over face, and hand occlusions on which we focus in this paper. We present a novel method for hand tracking and segmentation based on augmented graph cuts and dynamic model. First, an effective dynamic model for state estimation is generated, which correctly predicts the location of hands probably having fast motion or shape deformations. Second, new energy terms are brought into the energy function to develop augmented graph cuts based on some cues, namely, spatial information, hand motion, and chamfer distance. The proposed method successfully achieves hand segmentation even though the hand passes over other skin-colored objects. Some challenging videos are provided in the case of hand over face, hand occlusions, dynamic background, and fast motion. Experimental results demonstrate that the proposed method is much more accurate than other graph cuts-based methods for hand tracking and segmentation.

  4. Visual fatigue modeling for stereoscopic video shot based on camera motion

    Science.gov (United States)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort in stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale, and the comfortable viewing zone are analyzed. According to the human visual system (HVS), viewers only need to converge their eyes on specific objects when the camera and background are static; relative motion should be considered for other camera conditions, which determine different factor coefficients and weights. In contrast to traditional visual fatigue prediction models, a novel visual fatigue prediction model is presented. The degree of visual fatigue is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be computed according to the proposed algorithm. Compared with conventional algorithms, which ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.

  5. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. This is by far the most informative analog and digital video reference available, includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  6. The Measurement and Modeling of a P2P Streaming Video Service

    Science.gov (United States)

    Gao, Peng; Liu, Tao; Chen, Yanming; Wu, Xingyao; El-Khatib, Yehia; Edwards, Christopher

    Most of the work on grid technology in the video area has generally been restricted to aspects of resource scheduling and replica management. The traffic of such a service has many characteristics in common with that of traditional video services. However, the architecture and user behavior in grid networks are quite different from those of the traditional Internet. Considering the potential of grid networks and video sharing services, measuring and analyzing P2P IPTV traffic are important and fundamental tasks in the field of grid networks.

  7. Teaching social-communication skills to preschoolers with autism: efficacy of video versus in vivo modeling in the classroom.

    Science.gov (United States)

    Wilson, Kaitlyn P

    2013-08-01

    Video modeling is a time- and cost-efficient intervention that has been proven effective for children with autism spectrum disorder (ASD); however, the comparative efficacy of this intervention has not been examined in the classroom setting. The present study examines the relative efficacy of video modeling as compared to the more widely-used strategy of in vivo modeling using an alternating treatments design with baseline and replication across four preschool-aged students with ASD. Results offer insight into the heterogeneous treatment response of students with ASD. Additional data reflecting visual attention and social validity were captured to further describe participants' learning preferences and processes, as well as educators' perceptions of the acceptability of each intervention's procedures in the classroom setting.

  8. Verification of a Higher-Order Finite Difference Scheme for the One-Dimensional Two-Fluid Model

    Directory of Open Access Journals (Sweden)

    William D. Fullmer

    2013-06-01

    Full Text Available The one-dimensional two-fluid model is widely acknowledged as the most detailed and accurate macroscopic formulation model of the thermo-fluid dynamics in nuclear reactor safety analysis. Currently the prevailing one-dimensional thermal hydraulics codes are only first-order accurate. The benefit of first-order schemes is numerical viscosity, which serves as a regularization mechanism for many otherwise ill-posed two-fluid models. However, excessive diffusion in regions of large gradients leads to poor resolution of phenomena related to void wave propagation. In this work, a higher-order shock capturing method is applied to the basic equations for incompressible and isothermal flow of the one-dimensional two-fluid model. The higher-order accuracy is gained by a strong stability preserving multi-step scheme for the time discretization and a minmod flux limiter scheme for the convection terms. Additionally, the use of a staggered grid allows for several second-order centered terms, when available. The continuity equations are first tested by manipulating the two-fluid model into a pair of linear wave equations and tested for smooth and discontinuous initial data. The two-fluid model is benchmarked with the water faucet problem. With the higher-order method, the ill-posed nature of the governing equations presents severe challenges due to a growing void fraction jump in the solution. Therefore the initial and boundary conditions of the problem are modified in order to eliminate a large counter-current flow pattern that develops. With the modified water faucet problem the numerical models behave well and allow a convergence study. Using the L1 norm of the liquid fraction, it is verified that the first and higher-order numerical schemes converge to the quasi-analytical solution at rates of O(1/2) and O(2/3), respectively. It is also shown that the growing void jump is a contact discontinuity, i.e. it is a linearly degenerate wave. The sub
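The minmod flux limiter named in the abstract reduces to a sign-aware minimum of neighboring slopes, falling back to zero (first order) at extrema, which is what suppresses spurious oscillations near sharp gradients:

```python
def minmod(a, b):
    """Minmod slope limiter: returns the smaller-magnitude argument when
    both have the same sign, and zero otherwise (first-order at extrema)."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

print(minmod(1.0, 2.0), minmod(-0.5, -2.0), minmod(1.0, -1.0))  # 1.0 -0.5 0.0
```

Plugged into a flux reconstruction, this keeps the scheme total-variation diminishing for scalar problems while retaining second-order accuracy in smooth regions.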

  9. Colour schemes

    DEFF Research Database (Denmark)

    van Leeuwen, Theo

    2013-01-01

    This chapter presents a framework for analysing colour schemes based on a parametric approach that includes not only hue, value and saturation, but also purity, transparency, luminosity, luminescence, lustre, modulation and differentiation.

  10. Self-control over combined video feedback and modeling facilitates motor learning.

    Science.gov (United States)

    Post, Phillip G; Aiken, Christopher A; Laughlin, David D; Fairbrother, Jeffrey T

    2016-06-01

    Allowing learners to control the video presentation of knowledge of performance (KP) or an expert model during practice has been shown to facilitate motor learning (Aiken, Fairbrother, & Post, 2012; Wulf, Raupach, & Pfeiffer, 2005). Split-screen replay features now allow for the simultaneous presentation of these modes of instructional support. It is uncertain, however, if such a combination incorporated into a self-control protocol would yield similar benefits seen in earlier self-control studies. Therefore, the purpose of the present study was to examine the effects of self-controlled split-screen replay on the learning of a golf chip shot. Participants completed 60 practice trials, three administrations of the Intrinsic Motivation Inventory, and a questionnaire on day one. Retention and transfer tests and a final motivation inventory were completed on day two. Results revealed significantly higher form and accuracy scores for the self-control group during transfer. The self-control group also had significantly higher scores on the perceived competence subscale, reported requesting feedback mostly after perceived poor trials, and recalled a greater number of critical task features compared to the yoked group. The findings for the performance measures were consistent with previous self-control research. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. A Programmable Video Platform and Its Application Mapping Framework Using the Target Application's SystemC Models

    Directory of Open Access Journals (Sweden)

    Kim Daewoong

    2011-01-01

    Full Text Available Abstract HD video applications can be represented as multiple tasks, each consisting of tightly coupled threads. Each task requires massive computation, and their communication can be categorized as asynchronous distributed small-data transfers and large streaming-data transfers. In this paper, we propose a high performance programmable video platform that consists of four processing element (PE) clusters. Each PE cluster runs a task in the video application with RISC cores, a hardware operating system kernel (HOSK), and task-specific accelerators. PE clusters are connected with two separate point-to-point networks: one for asynchronous distributed controls and the other for heavy streaming data transfers among the tasks. Furthermore, we developed an application mapping framework, with which parallel executable codes can be obtained from a manually developed SystemC model of the target application without knowing the detailed architecture of the video platform. To show the effectiveness of the platform and its mapping framework, we also present mapping results for an H.264/AVC 720p decoder/encoder and a VC-1 720p decoder at 30 fps, assuming that the platform operates at 200 MHz.

  12. 3D elastic wave modeling using modified high‐order time stepping schemes with improved stability conditions

    KAUST Repository

    Chu, Chunlei

    2009-01-01

    We present two Lax‐Wendroff type high‐order time stepping schemes and apply them to solving the 3D elastic wave equation. The proposed schemes have the same format as the Taylor series expansion based schemes, only with modified temporal extrapolation coefficients. We demonstrate by both theoretical analysis and numerical examples that the modified schemes significantly improve the stability conditions.
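The Taylor-series time stepping described above can be sketched on a simpler problem. The following is an illustrative sketch only: the 1D scalar wave equation with generic Taylor-expansion coefficients, not the paper's modified coefficients for the 3D elastic case.

```python
import numpy as np

def laplacian(u, dx):
    """Second-order central difference with periodic boundaries."""
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

def step(u_now, u_prev, c, dx, dt):
    """One leapfrog step plus the dt^4 Taylor correction term."""
    lap = laplacian(u_now, dx)
    lap2 = laplacian(lap, dx)  # discrete c^4 u_xxxx contribution
    return (2.0 * u_now - u_prev
            + dt**2 * c**2 * lap
            + dt**4 * c**4 / 12.0 * lap2)

def solve(nx=200, c=1.0, t_end=1.0, cfl=0.4):
    dx = 1.0 / nx
    dt = cfl * dx
    n_steps = int(round(t_end / dt))
    x = np.arange(nx) * dx
    u0 = np.sin(2.0 * np.pi * x)   # standing wave, u_t(x, 0) = 0
    # First step from a Taylor expansion about t = 0.
    u1 = u0 + 0.5 * dt**2 * c**2 * laplacian(u0, dx)
    u_prev, u_now = u0, u1
    for _ in range(n_steps - 1):
        u_prev, u_now = u_now, step(u_now, u_prev, c, dx, dt)
    return x, u0, u_now

x, u0, u = solve()
# After one full temporal period (t = 1 with c = 1) the standing wave
# returns to its initial profile up to a small discretization error.
print(np.max(np.abs(u - u0)))
```

The higher-order temporal term extends the stability limit and accuracy relative to plain leapfrog, which is the spirit of the Lax-Wendroff-type schemes in the abstract.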

  13. Physical control oriented model of large scale refrigerators to synthesize advanced control schemes. Design, validation, and first control results

    Science.gov (United States)

    Bonne, François; Alamir, Mazen; Bonnay, Patrick

    2014-01-01

    In this paper, a physical method to obtain control-oriented dynamical models of large scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace the classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are subjected to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes lead to better perturbation immunity and rejection, offering a safer utilization of cryoplants. The paper gives details on how basic components used in the field of large scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of controllable subsystems of the refrigerator (controllable subsystems are namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data, and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed, to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.

  14. Is it worth protecting groundwater from diffuse pollution with agri-environmental schemes? A hydro-economic modeling approach.

    Science.gov (United States)

    Hérivaux, Cécile; Orban, Philippe; Brouyère, Serge

    2013-10-15

    In Europe, 30% of groundwater bodies are considered to be at risk of not achieving the Water Framework Directive (WFD) 'good status' objective by 2015, and 45% are in doubt of doing so. Diffuse agricultural pollution is one of the main pressures affecting groundwater bodies. To tackle this problem, the WFD requires Member States to design and implement cost-effective programs of measures to achieve the 'good status' objective by 2027 at the latest. Hitherto, action plans have mainly consisted of promoting the adoption of Agri-Environmental Schemes (AES). This raises a number of questions concerning the effectiveness of such schemes for improving groundwater status, and the economic implications of their implementation. We propose a hydro-economic model that combines a hydrogeological model to simulate groundwater quality evolution with agronomic and economic components to assess the expected costs, effectiveness, and benefits of AES implementation. This hydro-economic model can be used to identify cost-effective AES combinations at groundwater-body scale and to show the benefits to be expected from the resulting improvement in groundwater quality. The model is applied here to a rural area encompassing the Hesbaye aquifer, a large chalk aquifer which supplies about 230,000 inhabitants in the city of Liege (Belgium) and is severely contaminated by agricultural nitrates. We show that the time frame within which improvements in the Hesbaye groundwater quality can be expected may be much longer than that required by the WFD. Current WFD programs based on AES may be inappropriate for achieving the 'good status' objective in the most productive agricultural areas, in particular because these schemes are insufficiently attractive. Achieving 'good status' by 2027 would demand a substantial change in the design of AES, involving costs that may not be offset by benefits in the case of chalk aquifers with long renewal times. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Adaptive protection scheme

    Directory of Open Access Journals (Sweden)

    R. Sitharthan

    2016-09-01

    Full Text Available This paper aims at modelling an electronically coupled distributed energy resource with an adaptive protection scheme. The electronically coupled distributed energy resource is a microgrid framework formed by coupling the renewable energy source electronically. Further, the proposed adaptive protection scheme provides suitable protection to the microgrid under various fault conditions irrespective of the operating mode of the microgrid, namely grid-connected mode and islanded mode. The outstanding aspect of the developed adaptive protection scheme is that it monitors the microgrid and instantly updates the relay fault current according to the variations that occur in the system. The proposed adaptive protection scheme also employs auto-reclosers, through which it recovers faster from faults and thereby increases the reliability of the microgrid. The effectiveness of the proposed adaptive protection is studied through time domain simulations carried out in the PSCAD/EMTDC software environment.
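The core idea of updating relay settings from monitored conditions can be illustrated with a deliberately simplified overcurrent relay. All names, currents and margins below are hypothetical, not taken from the paper: they only show why a fixed pickup tuned for grid-connected fault levels can go blind when the islanded fault current collapses, while an adaptive pickup rescaled from the monitored pre-fault load still trips.

```python
# Hypothetical illustration of adaptive overcurrent protection.

def relay_trips(measured_current, pickup_current):
    """Instantaneous overcurrent element: trip when current exceeds pickup."""
    return measured_current > pickup_current

def adaptive_pickup(prefault_load_current, margin=2.0):
    """Rescale the pickup from the latest monitored load current."""
    return margin * prefault_load_current

# Grid-connected: stiff source, high available fault current (amperes).
fixed_pickup = 400.0
load_grid, fault_grid = 100.0, 1500.0
# Islanded: inverter-limited source, much lower fault current.
load_isl, fault_isl = 80.0, 250.0

assert relay_trips(fault_grid, fixed_pickup)              # fixed relay: trips
assert not relay_trips(fault_isl, fixed_pickup)           # fixed relay: blind
assert relay_trips(fault_isl, adaptive_pickup(load_isl))  # adaptive: trips
```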

  16. A Componentwise Convex Splitting Scheme for Diffuse Interface Models with Van der Waals and Peng-Robinson Equations of State

    KAUST Repository

    Fan, Xiaolin

    2017-01-19

    This paper presents a componentwise convex splitting scheme for numerical simulation of multicomponent two-phase fluid mixtures in a closed system at constant temperature, which is modeled by a diffuse interface model equipped with the Van der Waals and the Peng-Robinson equations of state (EoS). The Van der Waals EoS has a rigorous foundation in physics, while the Peng-Robinson EoS is more accurate for hydrocarbon mixtures. First, the phase field theory of thermodynamics and variational calculus are applied to a functional minimization problem of the total Helmholtz free energy. Mass conservation constraints are enforced through Lagrange multipliers. A system of chemical equilibrium equations is obtained which is a set of second-order elliptic equations with extremely strong nonlinear source terms. The steady state equations are transformed into a transient system as a numerical strategy on which the scheme is based. The proposed numerical algorithm avoids the indefiniteness of the Hessian matrix arising from the second-order derivative of homogeneous contribution of total Helmholtz free energy; it is also very efficient. This scheme is unconditionally componentwise energy stable and naturally results in unconditional stability for the Van der Waals model. For the Peng-Robinson EoS, it is unconditionally stable through introducing a physics-preserving correction term, which is analogous to the attractive term in the Van der Waals EoS. An efficient numerical algorithm is provided to compute the coefficient in the correction term. Finally, some numerical examples are illustrated to verify the theoretical results and efficiency of the established algorithms. The numerical results match well with laboratory data.
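The convex-splitting idea behind the scheme can be sketched on a scalar gradient flow. The double-well energy below is a toy stand-in for the Van der Waals / Peng-Robinson Helmholtz energies, not the paper's model: the convex part is treated implicitly and the concave part explicitly, which is what makes the step unconditionally energy stable.

```python
import numpy as np

# Toy convex splitting for du/dt = -F'(u), F(u) = u^4/4 - u^2/2.
# Convex part u^4/4 implicit, concave part -u^2/2 explicit.

def F(u):
    return 0.25 * u**4 - 0.5 * u**2

def convex_split_step(u, dt, newton_iters=50):
    # Solve u_new + dt*u_new^3 = u + dt*u by Newton's method
    # (the left-hand side is monotone, so the root is unique).
    rhs = u + dt * u
    x = u
    for _ in range(newton_iters):
        f = x + dt * x**3 - rhs
        x -= f / (1.0 + 3.0 * dt * x**2)
    return x

u, dt = 2.0, 0.5               # deliberately large time step
energies = [F(u)]
for _ in range(30):
    u = convex_split_step(u, dt)
    energies.append(F(u))

# Energy is non-increasing at every step, regardless of dt.
assert all(e1 <= e0 + 1e-12 for e0, e1 in zip(energies, energies[1:]))
print(u)   # settles near a well of the double-well potential (u near 1)
```

The same splitting logic, applied componentwise to the Helmholtz free energy with a correction term for the Peng-Robinson EoS, is what gives the paper's scheme its unconditional stability.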

  17. Assessment of Planetary-Boundary-Layer Schemes in the Weather Research and Forecasting Model Within and Above an Urban Canopy Layer

    Science.gov (United States)

    Ferrero, Enrico; Alessandrini, Stefano; Vandenberghe, Francois

    2018-03-01

    We tested several planetary-boundary-layer (PBL) schemes available in the Weather Research and Forecasting (WRF) model against measured wind speed and direction, temperature and turbulent kinetic energy (TKE) at three levels (5, 9, 25 m). The Urban Turbulence Project dataset, gathered from the outskirts of Turin, Italy and used for the comparison, provides measurements made by sonic anemometers for more than 1 year. In contrast to other similar studies, which have mainly focused on short time periods, we considered 2 months of measurements (January and July) representing both seasonal and daily variability. To understand how the WRF-model PBL schemes perform in an urban environment, often characterized by low wind-speed conditions, we first compared six PBL schemes against observations taken by the highest anemometer, located in the inertial sub-layer. The availability of the TKE measurements allows us to directly evaluate the performance of the model; results of the model evaluation are presented in terms of quantile-quantile plots and statistical indices. Secondly, we considered the WRF-model PBL schemes that can be coupled to the urban-surface exchange parametrizations and compared the simulation results with measurements from the two lower anemometers located inside the canopy layer. We find that the PBL schemes accounting for TKE are more accurate and that the model representation of the roughness sub-layer improves when the urban model is coupled to each PBL scheme.
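Evaluation against observations of the kind described above typically reduces to a few standard statistics. The helpers below are generic illustrations (bias, RMSE, and matched quantiles for a quantile-quantile plot); the paper's exact index definitions are not reproduced, and the synthetic data are placeholders.

```python
import numpy as np

def bias(model, obs):
    """Mean model-minus-observation error."""
    return np.mean(model - obs)

def rmse(model, obs):
    """Root-mean-square error."""
    return np.sqrt(np.mean((model - obs) ** 2))

def qq_pairs(model, obs, quantiles=np.linspace(0.05, 0.95, 19)):
    """Matched quantiles; points off the 1:1 line reveal distributional error."""
    return np.quantile(model, quantiles), np.quantile(obs, quantiles)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 1.0, size=1000)      # synthetic TKE-like observed series
model = obs + 0.3                          # synthetic model with constant bias
print(bias(model, obs), rmse(model, obs))  # both reflect the 0.3 offset
```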

  18. A hybrid finite-volume and finite difference scheme for depth-integrated non-hydrostatic model

    Science.gov (United States)

    Yin, Jing; Sun, Jia-wen; Wang, Xing-gang; Yu, Yong-hai; Sun, Zhao-chen

    2017-06-01

    A depth-integrated, non-hydrostatic model with a hybrid finite difference and finite volume numerical algorithm is proposed in this paper. By utilizing a fractional step method, the governing equations are decomposed into hydrostatic and non-hydrostatic parts. The first part is solved by using the finite volume conservative discretization method, while the latter is handled by solving discretized Poisson-type equations with the finite difference method. The second-order accuracy, both in time and space, of the finite volume scheme is achieved by using an explicit predictor-corrector step and linear reconstruction of the variable state in cells. The fluxes across the cell faces are computed in a Godunov-based manner by using the MUSTA scheme. A slope and flux limiting technique is used to equip the algorithm with the total variation diminishing (TVD) property for shock-capturing purposes. Wave breaking is treated as a shock by locally switching off the non-hydrostatic pressure in the steep wave front. The model deals with the moving wet/dry front in a simple way. Numerical experiments are conducted to verify the proposed model.
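The slope-limited, predictor-corrector finite-volume idea can be sketched on linear advection rather than the paper's full shallow-water system. The MUSCL-Hancock step below with a minmod limiter is an illustrative stand-in that lets the TVD property be checked directly on a square wave.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: zero at extrema, smallest slope elsewhere."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tv(u):
    """Total variation on a periodic grid."""
    return np.sum(np.abs(np.diff(np.append(u, u[0]))))

def muscl_step(u, a, dx, dt):
    """One MUSCL-Hancock step for u_t + a u_x = 0 with a > 0, periodic BCs."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    u_half = u - 0.5 * (dt / dx) * a * slope     # predictor: half-step
    face = u_half + 0.5 * slope                  # right-face state of each cell
    flux = a * face                              # upwind flux (a > 0)
    return u - (dt / dx) * (flux - np.roll(flux, 1))

nx, a, cfl = 100, 1.0, 0.8
dx = 1.0 / nx
dt = cfl * dx / a
x = np.arange(nx) * dx
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)    # square wave
tv0 = tv(u)
for _ in range(200):
    u = muscl_step(u, a, dx, dt)
    assert tv(u) <= tv0 + 1e-12   # total variation never grows (TVD)
```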

  19. Evaluation of two decomposition schemes in Earth System Models against LIDET, C14 observations and global soil carbon maps

    Science.gov (United States)

    Ricciuto, D. M.; Yang, X.; Thornton, P. E.

    2015-12-01

    Soils contain the largest pool of carbon in terrestrial ecosystems. Soil carbon dynamics and the associated nutrient dynamics play significant roles in regulating the global carbon cycle and atmospheric CO2 concentrations. Our capability to predict future climate change depends to a large extent on a well-constrained representation of soil carbon dynamics in ESMs. Here we evaluate two decomposition schemes - converging trophic cascade (CTC) and Century - in CLM4.5/ACME V0 using data from the Long-term Intersite Decomposition Experiment Team (LIDET), radiocarbon (14C) observations, and the Harmonized World Soil Database (HWSD). For the evaluation against LIDET, we exercise the full CLM4.5/ACME V0 land model, including seasonal variability in nitrogen limitation and environmental scalars (temperature, moisture, O2), in order to represent the LIDET experiment in a realistic way. We show that the proper design of model experiments is crucial to model evaluation using data from field experiments such as LIDET. We also use 14C profile data at 10 sites to evaluate the performance of the CTC and Century decomposition schemes. We find that the 14C profiles at these sites are most sensitive to the depth-dependent decomposition parameters, consistent with previous studies.

  20. Numerical stability analysis of coupled neutronics and thermal-hydraulics schemes and new neutronic feedback-reactions model

    International Nuclear Information System (INIS)

    Guertin, Chantal

    1995-01-01

    This thesis is part of the validation process for using coupled 3D neutronics and thermal-hydraulics codes to study accidental situations with boiling. The first part is dedicated to a numerical stability analysis of coupled neutronics and thermal-hydraulics schemes. Both explicit and semi-implicit coupling schemes were applied to solve the set of equations describing the linearized neutronics and thermal-hydraulics of a point reactor. Point reactor modelling was preferred in order to obtain analytical expressions for the eigenvalues of the discretized systems. Stability criteria based on these eigenvalues were calculated, as well as the neutronic and thermal-hydraulic responses of the system following the insertion of a reactivity step. The results show no severe restriction of the time domain from a stability standpoint. Actual transient calculations using coupled neutronics and thermal-hydraulics codes, like COCCINELLE and THYC developed at Electricite de France, do not show stability problems. The second part introduces surface splines as a new neutronic feedback model. The cross influences of the feedback parameters are now taken into account. Moderator temperature and density were modeled. This method, simple and accurate, allows a homogeneous description of cross-sections over all operating reactor situations, including accidents with boiling. (author) [fr
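The eigenvalue-based stability test described above can be sketched for a generic linearized two-variable system dx/dt = Ax (standing in for the coupled neutronics/thermal-hydraulics point-reactor model; the matrix below is illustrative, not taken from the thesis). An explicit coupling step is stable when the spectral radius of its amplification matrix is at most 1.

```python
import numpy as np

def explicit_stable(A, dt):
    """Stability of the explicit step x_{n+1} = (I + dt*A) x_n."""
    G = np.eye(A.shape[0]) + dt * A              # amplification matrix
    return np.max(np.abs(np.linalg.eigvals(G))) <= 1.0

# Illustrative coupled system; eigenvalues of A are -0.5 and -1.5,
# so the explicit step is stable for dt <= 2/1.5.
A = np.array([[-1.0, 0.5],
              [0.5, -1.0]])
print(explicit_stable(A, 1.0))   # dt inside the stability limit
print(explicit_stable(A, 1.5))   # dt beyond the stability limit
```

A semi-implicit coupling replaces part of A by an implicit evaluation, changing G and typically relaxing the time-step restriction, which is the comparison the thesis carries out analytically.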

  1. A Model Reference Adaptive Control/PID Compound Scheme on Disturbance Rejection for an Aerial Inertially Stabilized Platform

    Directory of Open Access Journals (Sweden)

    Xiangyang Zhou

    2016-01-01

    Full Text Available This paper describes a method to suppress the effect of nonlinear and time-varying mass unbalance torque disturbance on the dynamic performance of an aerial inertially stabilized platform (ISP). To improve the tracking accuracy and robustness of the ISP, a compound control scheme based on both model reference adaptive control (MRAC) and PID control is proposed. The dynamic model is first developed, revealing that the unbalance torque disturbance is nonlinear and time-varying. Then, the MRAC/PID compound controller is designed, in which the PID parameters are adaptively adjusted based on the output errors between the reference model and the actual system. In this way, the position errors derived from the prominent unbalance torque disturbance are corrected in real time so that the tracking accuracy is improved. To verify the method, simulations and experiments were carried out. The results show that the compound scheme rejects the mass unbalance disturbance well, and the system obtains higher stabilization accuracy compared with the PID method alone.
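The model-reference adaptation at the heart of such a scheme can be sketched with the classic MIT-rule example: a plant with unknown gain, a reference model, and an adjustable feedforward gain adapted from the model-following error. The first-order plant, gains and signals below are illustrative, not the paper's ISP dynamics.

```python
import numpy as np

# Toy MRAC (MIT rule): plant y' = -y + k*u with unknown gain k,
# reference model ym' = -ym + r, adjustable control u = theta*r.
dt, t_end = 0.01, 100.0
n = int(t_end / dt)
k, gamma = 2.0, 0.5            # unknown plant gain; adaptation gain
y = ym = theta = 0.0
errors = []
for i in range(n):
    t = i * dt
    r = 1.0 if (t // 10) % 2 == 0 else -1.0   # square-wave reference
    u = theta * r                              # adjustable controller
    y += dt * (-y + k * u)                     # plant
    ym += dt * (-ym + r)                       # reference model
    e = y - ym                                 # model-following error
    theta += dt * (-gamma * e * ym)            # MIT-rule adaptation
    errors.append(abs(e))

# theta converges toward 1/k = 0.5 and the tracking error shrinks.
print(theta)
```

In the paper's compound scheme the same error signal drives the adjustment of PID gains rather than a single feedforward gain, but the adaptation principle is the same.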

  2. A robust and accurate approach to computing compressible multiphase flow: Stratified flow model and AUSM+-up scheme

    International Nuclear Information System (INIS)

    Chang, Chih-Hao; Liou, Meng-Sing

    2007-01-01

    In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems show the capability to capture enormous details and complicated wave patterns in flows having large disparities in the fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and underwater explosion.

  3. Using Nonlinear Stochastic Evolutionary Game Strategy to Model an Evolutionary Biological Network of Organ Carcinogenesis Under a Natural Selection Scheme.

    Science.gov (United States)

    Chen, Bor-Sen; Tsai, Kun-Wei; Li, Cheng-Wei

    2015-01-01

    Molecular biologists have long recognized carcinogenesis as an evolutionary process that involves natural selection. Cancer is driven by the somatic evolution of cell lineages. In this study, the evolution of somatic cancer cell lineages during carcinogenesis was modeled as an equilibrium point (i.e., the phenotype of an attractor) shifting process of a nonlinear stochastic evolutionary biological network. This process is subject to intrinsic random fluctuations because of somatic genetic and epigenetic variations, as well as extrinsic disturbances because of carcinogens and stressors. In order to maintain the normal function (i.e., phenotype) of an evolutionary biological network subjected to random intrinsic fluctuations and extrinsic disturbances, a network robustness scheme that incorporates natural selection needs to be developed. This can be accomplished by selecting certain genetic and epigenetic variations to modify the network structure to attenuate intrinsic fluctuations efficiently and to resist extrinsic disturbances in order to maintain the phenotype of the evolutionary biological network at an equilibrium point (attractor). However, during carcinogenesis, the remaining (or neutral) genetic and epigenetic variations accumulate, and the extrinsic disturbances become too large to maintain the normal phenotype at the desired equilibrium point for the nonlinear evolutionary biological network. Thus, the network is shifted to a cancer phenotype at a new equilibrium point that begins a new evolutionary process. In this study, the natural selection scheme of an evolutionary biological network of carcinogenesis was derived from a robust negative feedback scheme based on the nonlinear stochastic Nash game strategy. The evolvability and phenotypic robustness criteria of the evolutionary cancer network were also estimated by solving a Hamilton-Jacobi inequality-constrained optimization problem. The simulation revealed that the phenotypic shift of the lung cancer

  4. Creation of a Collaborative Disaster Preparedness Video for Daycare Providers: Use of the Delphi Model for the Creation of a Comprehensive Disaster Preparedness Video for Daycare Providers.

    Science.gov (United States)

    Mar, Pamela; Spears, Robert; Reeb, Jeffrey; Thompson, Sarah B; Myers, Paul; Burke, Rita V

    2018-02-22

    Eight million American children under the age of 5 attend daycare and more than another 50 million American children are in school or daycare settings. Emergency planning requirements for daycare licensing vary by state. Expert opinions were used to create a disaster preparedness video designed for daycare providers to cover a broad spectrum of scenarios. Various stakeholders (17) devised the outline for an educational pre-disaster video for child daycare providers using the Delphi technique. Fleiss κ values were obtained for consensus data. A 20-minute video was created, addressing the physical, psychological, and legal needs of children during and after a disaster. Viewers completed an anonymous survey to evaluate topic comprehension. A consensus was attempted on all topics, ranging from elements for inclusion to presentation format. The Fleiss κ value of 0.07 was obtained. Fifty-seven of the total 168 video viewers completed the 10-question survey, with comprehension scores ranging from 72% to 100%. Evaluation of caregivers that viewed our video supports understanding of video contents. Ultimately, the technique used to create and disseminate the resources may serve as a template for others providing pre-disaster planning education. (Disaster Med Public Health Preparedness. 2018;page 1 of 5).

  5. 360-degree video and X-ray modeling of the Galactic center's inner parsec

    Science.gov (United States)

    Russell, Christopher Michael Post; Wang, Daniel; Cuadra, Jorge

    2017-08-01

    360-degree videos, which render an image over the full 4π steradians, provide a unique and immersive way to visualize astrophysical simulations. Video sharing sites such as YouTube allow these videos to be shared with the masses; they can be viewed in their 360° nature on computer screens, with smartphones, or, best of all, in virtual-reality (VR) goggles. We present the first such 360° video of an astrophysical simulation: a hydrodynamics calculation of the Wolf-Rayet stars and their ejected winds in the inner parsec of the Galactic center. Viewed from the perspective of the super-massive black hole (SMBH), the most striking aspect of the video, which renders column density, is the inspiraling and stretching of clumps of WR-wind material as they make their way towards the SMBH. We briefly describe how to make 360° videos and how to publish them online in their desired 360° format. Additionally, we discuss computing the thermal X-ray emission from a suite of Galactic-center hydrodynamic simulations that have various SMBH feedback mechanisms, which are compared to Chandra X-ray Visionary Program observations of the region. Over a 2-5" ring centered on Sgr A*, the spectral shape is well matched, indicating that the WR winds are the dominant source of the thermal X-ray emission. Furthermore, the X-ray flux depends on the SMBH feedback due to the feedback's ability to clear out material from the central parsec. A moderate outburst is necessary to explain the current thermal X-ray flux, even though the outburst ended ˜100 yr ago.
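The geometric core of rendering such a video is mapping a 3D viewing direction onto a pixel of an equirectangular (full-sphere) image. The conventions below (axis orientation, image layout) are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def direction_to_equirect(d, width, height):
    """Map a 3D direction to (column, row) in an equirectangular image."""
    dx, dy, dz = d / np.linalg.norm(d)
    lon = np.arctan2(dx, dz)          # longitude in (-pi, pi], 0 = +z axis
    lat = np.arcsin(dy)               # latitude in [-pi/2, pi/2], +y = "up"
    col = (lon / (2.0 * np.pi) + 0.5) * width
    row = (0.5 - lat / np.pi) * height
    return col, row

W, H = 3840, 1920
print(direction_to_equirect(np.array([0.0, 0.0, 1.0]), W, H))  # image centre
print(direction_to_equirect(np.array([0.0, 1.0, 0.0]), W, H))  # top edge
```

Rendering one column-density value per pixel direction, with the camera placed at the SMBH, yields a frame that 360° players (and VR goggles) can reproject interactively.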

  6. The sensitivity to the microphysical schemes on the skill of forecasting the track and intensity of tropical cyclones using WRF-ARW model

    Science.gov (United States)

    Choudhury, Devanil; Das, Someshwar

    2017-06-01

    The Advanced Research WRF (ARW) model is used to simulate the Very Severe Cyclonic Storms (VSCS) Hudhud (7-13 October, 2014), Phailin (8-14 October, 2013) and Lehar (24-29 November, 2013) to investigate the sensitivity to microphysical schemes of the skill in forecasting the track and intensity of tropical cyclones for high-resolution (9 and 3 km) 120-hr model integrations. At cloud-resolving grid spacing, convection is resolved explicitly (CONTROL forecast); this study investigates the sensitivity to microphysics on the track and intensity with an explicitly resolved convection scheme. It shows that the Goddard one-moment bulk liquid-ice microphysical scheme provides the highest skill for the track, whereas for intensity both the Thompson and Goddard microphysical schemes perform better. The Thompson scheme shows the highest skill in intensity at 48, 96 and 120 hr, whereas at 24 and 72 hr the Goddard scheme provides the highest skill. It is known that a higher resolution domain produces better intensity and structure of cyclones, and it is desirable to resolve the convection with sufficiently high resolution and with the use of explicit cloud physics. This study suggests that the Goddard cumulus ensemble microphysical scheme is suitable for high-resolution ARW simulation of tropical cyclone track and intensity over the Bay of Bengal (BoB). Although the present study is based on only three cyclones, it could be useful for planning real-time predictions using the ARW modelling system.
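Track skill in studies like this is typically scored as the great-circle distance between the forecast and best-track cyclone centres at each verification time. A standard haversine implementation (illustrative, not the paper's verification code) is:

```python
import math

def track_error_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two (lat, lon) positions in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2.0 * radius_km * math.asin(math.sqrt(a))

# One degree of latitude is roughly 111 km.
print(track_error_km(17.0, 83.0, 18.0, 83.0))
```

Averaging these distances over all verification times and storms gives the per-scheme track error against which the microphysics options are ranked.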

  7. A gas dynamics scheme for a two moments model of radiative transfer; Un schema de type dynamique des gaz pour un modele a deux moments en transfert radiatif

    Energy Technology Data Exchange (ETDEWEB)

    Buet, Ch.; Despres, B

    2007-07-01

    We address the discretization of Levermore's two-moment entropy model of the radiative transfer equation. We present a new approach for the discretization of this model: first we rewrite the moment equations as compressible gas dynamics equations by introducing an additional quantity that plays the role of a density. After that, we discretize using a Lagrange-projection scheme. The Lagrange-projection scheme permits us to incorporate the source terms in the fluxes of an acoustic solver in the Lagrange step, using the well-known piecewise steady approximation, and thus to capture the diffusion regime correctly. Moreover, we show that the discretization is entropic and preserves the flux-limited property of the moment model. Numerical examples illustrate the feasibility of our approach. (authors)

  8. New business models for advertisers: The video games sector in Spain. Advergaming Vs Ingame Advertising

    Directory of Open Access Journals (Sweden)

    Ana Sebastián Morillas

    2016-07-01

    Full Text Available The article examines the advertising effectiveness of video games in Spain, which is of the utmost importance considering the results from the latest studies on effectiveness. Video games have become one of the most valuable platforms used by advertisers when looking for new ways to reinforce brand awareness. This study seeks to explain the reasons why brands are using advergaming and ingame advertising in order to have their advertising messages effectively reach the target audience. The topic proposed in this paper deploys a qualitative research methodology focused on a bibliographic review, in-depth interviews and the analysis of several case studies. Results obtained by this research may help companies to develop effective marketing and communication strategies.

  9. Altered defaecatory behaviour and faecal incontinence in a video-tracked animal model of pudendal neuropathy.

    Science.gov (United States)

    Devane, L A; Lucking, E; Evers, J; Buffini, M; Scott, S M; Knowles, C H; O'Connell, P R; Jones, J F X

    2017-05-01

    The aim was to develop a behavioural animal model of faecal continence and assess the effect of retro-uterine balloon inflation (RBI) injury. RBI in the rat causes pudendal neuropathy, a risk factor for obstetric-related faecal incontinence in humans. Video-tracking of healthy rats (n = 12) in a cage containing a latrine box was used to monitor their defaecatory behaviour index (DBI) over 2 weeks. The DBI (range 0-1) was devised by dividing the defaecation rate (pellets per hour) outside the latrine by that of the whole cage. A score of 0 indicates all pellets were deposited in the latrine. Subsequently, the effects of RBI (n = 19), sham surgery (n = 4) and colostomy (n = 2) were determined by monitoring the DBI for 2 weeks preoperatively and 3 weeks postoperatively. The DBI for healthy rats was 0.1 ± 0.03 with no significant change over 2 weeks (P = 0.71). In the RBI group, 13 of 19 rats (68%) showed no significant change in DBI postoperatively (0.08 ± 0.05 vs 0.11 ± 0.07) while in six rats the DBI increased from 0.16 ± 0.09 to 0.46 ± 0.23. The negative control, sham surgery, did not significantly affect the DBI (0.09 ± 0.06 vs 0.08 ± 0.04, P = 0.14). The positive control, colostomy, increased the DBI from 0.26 ± 0.03 to 0.86 ± 0.08. This is the first study showing a quantifiable change in defaecatory behaviour following injury in an animal model. This model of pudendal neuropathy affects continence in 32% of rats and provides a basis for research on interventions for incontinence. Colorectal Disease © 2017 The Association of Coloproctology of Great Britain and Ireland.
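The DBI defined above is a simple ratio: the defaecation rate outside the latrine divided by the whole-cage rate, so 0 means every pellet landed in the latrine. A direct implementation of that arithmetic:

```python
def dbi(pellets_outside, pellets_total, hours):
    """Defaecatory behaviour index: outside-latrine rate / whole-cage rate.

    Both rates share the same observation window, so `hours` cancels; it is
    kept to mirror the rate-based definition in the abstract.
    """
    if pellets_total == 0:
        raise ValueError("no pellets observed")
    return (pellets_outside / hours) / (pellets_total / hours)

# Example: 2 of 20 pellets over 24 h deposited outside the latrine -> DBI 0.1,
# matching the healthy-rat baseline reported above.
print(dbi(2, 20, 24.0))
```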

  10. Numerical Simulations of the 1 May 2012 Deep Convection Event over Cuba: Sensitivity to Cumulus and Microphysical Schemes in a High-Resolution Model

    Directory of Open Access Journals (Sweden)

    Yandy G. Mayor

    2015-01-01

    Full Text Available This paper evaluates the sensitivity to cumulus and microphysics schemes, as represented in numerical simulations of the Weather Research and Forecasting model, in characterizing a deep convection event over the Cuban island on 1 May 2012. To this end, 30 experiments combining five cumulus and six microphysics schemes, in addition to two experiments in which the cumulus parameterization was turned off, are tested in order to choose the combination that represents the event precipitation more accurately. ERA Interim is used as lateral boundary condition data for the downscaling procedure. Results show that convective schemes are more important than microphysics schemes for determining the precipitation areas within a high-resolution domain simulation. Also, while one cumulus scheme captures the overall spatial convective structure of the event more accurately than others, it fails to capture the precipitation intensity. This apparent discrepancy leads to sensitivity related to the verification method used to rank the scheme combinations. This sensitivity is also observed in a comparison between parameterized and explicit cumulus formation when the Kain-Fritsch scheme was used. A loss of added value is also found when the Grell-Freitas cumulus scheme was activated at 1 km grid spacing.

  11. Physics beyond the standard model in the non-perturbative unification scheme

    International Nuclear Information System (INIS)

    Kapetanakis, D.; Zoupanos, G.

    1990-01-01

    The non-perturbative unification scenario predicts reasonably well the low energy gauge couplings of the standard model. Agreement with the measured low energy couplings is obtained by assuming certain kinds of physics beyond the standard model. A number of possibilities for physics beyond the standard model are examined. The best candidates so far are the standard model with eight fermionic families and a similar number of Higgs doublets, and the supersymmetric standard model with five families. (author)

  12. The Effects of Mental Imagery with Video-Modeling on Self-Efficacy and Maximal Front Squat Ability

    Directory of Open Access Journals (Sweden)

    Daniel J. M. Buck

    2016-04-01

    Full Text Available. This study was designed to assess the effectiveness of mental imagery supplemented with video-modeling on self-efficacy and front squat strength (three-repetition maximum; 3RM). Subjects (13 male, 7 female) who had at least 6 months of front squat experience were assigned to either an experimental (n = 10) or a control (n = 10) group. Subjects' 3RM and self-efficacy for the 3RM were measured at baseline. Following this, subjects in the experimental group followed a structured imagery protocol, incorporating video recordings of both their own 3RM performance and a model lifter with excellent technique, twice a day for three days. Subjects in the control group spent the same amount of time viewing a placebo video. Following three days with no physical training, measurements of front squat 3RM and self-efficacy for the 3RM were repeated. Subjects in the experimental group increased in self-efficacy following the intervention, and showed greater 3RM improvement than those in the control group. Self-efficacy was found to significantly mediate the relationship between imagery and front squat 3RM. These findings point to the importance of mental skills training for the enhancement of self-efficacy and front squat performance.
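A mediation result like the one reported above is typically quantified with a product-of-coefficients analysis. The abstract does not state the exact method used, so the following is only an illustrative sketch on simulated data (X = imagery condition, M = self-efficacy, Y = 3RM change, with the effect of X on Y routed through M by construction):

```python
import numpy as np

# Product-of-coefficients mediation sketch.  All data are simulated;
# the true paths are a = 1.5 (X -> M) and b = 2.0 (M -> Y).
rng = np.random.default_rng(2)
n = 200
X = rng.integers(0, 2, n).astype(float)   # 0 = control, 1 = imagery
M = 1.5 * X + rng.normal(0, 1, n)         # imagery raises self-efficacy
Y = 2.0 * M + rng.normal(0, 1, n)         # self-efficacy raises 3RM change

def slope(x, y):
    # OLS slope of y on x (with intercept)
    x1 = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(x1, y, rcond=None)[0][1]

a = slope(X, M)                            # X -> M path
# b: effect of M on Y, controlling for X
XM = np.column_stack([np.ones(n), X, M])
b = np.linalg.lstsq(XM, Y, rcond=None)[0][2]
indirect = a * b                           # mediated (indirect) effect
```

With these simulated paths the indirect effect estimate lands near a*b = 3; in practice one would bootstrap its confidence interval.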

  13. Nonlinear modeling of ferroelectric-ferromagnetic composites based on condensed and finite element approaches (Presentation Video)

    Science.gov (United States)

    Ricoeur, Andreas; Lange, Stephan; Avakian, Artjom

    2015-04-01

    Magnetoelectric (ME) coupling is an inherent property of only a few crystals, which exhibit very low coupling coefficients at low temperatures. On the other hand, such coupling is desirable for many promising applications, e.g. efficient data storage devices or medical and geophysical sensors. Efficient coupling of magnetic and electric fields in materials can only be achieved in composite structures. Here, ferromagnetic (FM) and ferroelectric (FE) phases are combined, e.g. by including FM particles in a FE matrix or by embedding fibers of one phase into a matrix of the other. The ME coupling is then accomplished indirectly via strain fields, exploiting magnetostrictive and piezoelectric effects. This requires a poling of the composite, in which the structure is exposed to both large magnetic and large electric fields. The efficiency of ME coupling strongly depends on the poling process. Besides the alignment of local polarization and magnetization, poling is accompanied by cracking, which is also decisive for the coupling properties. Nonlinear ferroelectric and ferromagnetic constitutive equations have been developed and implemented within the framework of a multifield, two-scale FE approach. The models are microphysically motivated, accounting for domain and Bloch wall motions. A second, so-called condensed approach is presented, which does not require a spatial discretisation scheme yet still accounts for grain interactions and residual stresses. A micromechanically motivated continuum damage model is established to simulate degradation processes. The goal of the simulation tools is to predict the constitutive behaviors, ME coupling properties and lifetime of smart magnetoelectric devices.

  14. Exploration of depth modeling mode one lossless wedgelets storage strategies for 3D-high efficiency video coding

    Science.gov (United States)

    Sanchez, Gustavo; Marcon, César; Agostini, Luciano Volcan

    2018-01-01

    3D-high efficiency video coding has introduced tools to obtain higher efficiency in 3-D video coding, most of them related to depth map coding. Among these tools, depth modeling mode-1 (DMM-1) focuses on better encoding the edge regions of depth maps. The large memory required for storing all wedgelet patterns is one of the bottlenecks in the DMM-1 hardware design of both the encoder and the decoder. Three algorithms to reduce the DMM-1 memory requirements, and a hardware design targeting the most efficient among these algorithms, are presented. Experimental results demonstrate that the proposed solutions surpass related works, reducing up to 78.8% of the wedgelet memory without degrading the encoding efficiency. Synthesis results demonstrate that the proposed algorithm reduces almost 75% of the power dissipation when compared to the standard approach.

  15. High-order scheme for the source-sink term in a one-dimensional water temperature model.

    Directory of Open Access Journals (Sweden)

    Zheng Jing

    Full Text Available. The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation, which can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method were assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data.
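The two-step splitting described in this abstract can be sketched as follows. This is a simplified illustration only: a plain explicit source update stands in for the paper's high-order undetermined-coefficient scheme, the boundaries are taken as no-flux, and every parameter value is invented rather than taken from the study:

```python
import numpy as np

# Toy version of the split scheme for  dT/dt = alpha * d2T/dz2 + S(z):
# step 1 integrates the source-sink term, step 2 applies Crank-Nicolson
# to vertical diffusion.  Grid, time step and rates are illustrative.
n, dz, dt, alpha = 50, 0.2, 60.0, 1.4e-7
z = np.arange(n) * dz
S = 1e-5 * np.exp(-0.5 * z)          # solar heating decaying with depth (K/s)

def advance(T):
    T = T + dt * S                    # step 1: source-sink term
    r = alpha * dt / (2 * dz**2)      # step 2: Crank-Nicolson diffusion
    A = (np.diag(np.full(n, 1 + 2*r))
         + np.diag(np.full(n - 1, -r), 1) + np.diag(np.full(n - 1, -r), -1))
    B = (np.diag(np.full(n, 1 - 2*r))
         + np.diag(np.full(n - 1, r), 1) + np.diag(np.full(n - 1, r), -1))
    A[0, 1] = A[-1, -2] = -2 * r      # no-flux (mirror) boundaries
    B[0, 1] = B[-1, -2] = 2 * r
    return np.linalg.solve(A, B @ T)

T = np.full(n, 15.0)                  # uniform 15 degC initial profile
for _ in range(100):
    T = advance(T)
# the surface, where S is largest, warms faster than the bottom
```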

  16. A Hybrid Secure Scheme for Wireless Sensor Networks against Timing Attacks Using Continuous-Time Markov Chain and Queueing Model.

    Science.gov (United States)

    Meng, Tianhui; Li, Xiaofan; Zhang, Sha; Zhao, Yubin

    2016-09-28

    Wireless sensor networks (WSNs) have recently gained popularity for a wide spectrum of applications. Monitoring tasks can be performed in various environments. This may be beneficial in many scenarios, but it also presents new challenges in terms of security, due to increased data transmission over the wireless channel with potentially unknown threats. Among possible security issues are timing attacks, which are not prevented by traditional cryptographic security. Moreover, the limited energy and memory resources prohibit the use of complex security mechanisms in such systems. Therefore, balancing security against the associated energy consumption becomes a crucial challenge. This paper proposes a secure scheme for WSNs that respects the security-performance tradeoff. In order to proceed to a quantitative treatment of this problem, a hybrid continuous-time Markov chain (CTMC) and queueing model is put forward, and a tradeoff analysis of the security and performance attributes is carried out. By extending and transforming this model, the mean time to security attributes failure is evaluated. Through tradeoff analysis, we show that our scheme can enhance the security of WSNs, and that the optimal rekeying rate for the performance-security tradeoff can be obtained.
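The "mean time to security attributes failure" evaluated above is a mean time to absorption in a CTMC. A generic sketch of that computation follows; the three-state chain and its rates are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Mean time to absorption for a small CTMC: state 0 = healthy,
# state 1 = under attack, state 2 = failed (absorbing).
rates = {
    (0, 1): 0.5,   # attack arrival rate (illustrative)
    (1, 0): 1.2,   # recovery (e.g. rekeying) rate (illustrative)
    (1, 2): 0.3,   # rate at which an ongoing attack causes failure
}

n = 3
Q = np.zeros((n, n))                 # generator matrix
for (i, j), r in rates.items():
    Q[i, j] = r
np.fill_diagonal(Q, -Q.sum(axis=1))  # rows of a generator sum to zero

# Restrict to the transient states {0, 1}.  The expected times to
# absorption m satisfy the standard identity  Q_T m = -1.
QT = Q[:2, :2]
m = np.linalg.solve(QT, -np.ones(2))
mttf_healthy = m[0]                  # mean time to failure from "healthy"
```

Raising the recovery (rekeying) rate lengthens the mean time to failure but costs energy, which is exactly the tradeoff knob the paper's analysis optimizes.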

  17. Tradable schemes

    NARCIS (Netherlands)

    J.K. Hoogland (Jiri); C.D.D. Neumann

    2000-01-01

    In this article we present a new approach to the numerical valuation of derivative securities. The method is based on our previous work where we formulated the theory of pricing in terms of tradables. The basic idea is to fit a finite difference scheme to exact solutions of the pricing

  18. SU-F-T-497: Spatiotemporally Optimal, Personalized Prescription Scheme for Glioblastoma Patients Using the Proliferation and Invasion Glioma Model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, M; Rockhill, J; Phillips, M [University Washington, Seattle, WA (United States)

    2016-06-15

    Purpose: To investigate a spatiotemporally optimal radiotherapy prescription scheme and its potential benefit for glioblastoma (GBM) patients using the proliferation and invasion (PI) glioma model. Methods: The standard prescription for GBM was assumed to deliver 46Gy in 23 fractions to GTV1+2cm margin and an additional 14Gy in 7 fractions to GTV2+2cm margin. We simulated the tumor proliferation and invasion in 2D according to the PI glioma model with a moving velocity of 0.029 (slow-move), 0.079 (average-move), and 0.13 (fast-move) mm/day for GTV2 with a radius of 1 and 2cm. For each tumor, the margin around GTV1 and GTV2 was varied over 0–6 cm and 1–3 cm, respectively. Total dose to GTV1 was constrained such that the equivalent uniform dose (EUD) to normal brain equals the EUD with the standard prescription. A non-stationary dose policy, where the fractional dose varies, was investigated to estimate the temporal effect of the radiation dose. The efficacy of an optimal prescription scheme was evaluated by tumor cell-surviving fraction (SF), EUD, and the expected survival time. Results: The optimal prescription for the slow-move tumors was to use 3.0 (small)-3.5 (large) cm margins to GTV1, and a 1.5cm margin to GTV2. For the average- and fast-move tumors, it was optimal to use a 6.0cm margin for GTV1, suggesting that whole brain therapy is optimal, and then 1.5cm (average-move) and 1.5–3.0cm (fast-move, small-large) margins for GTV2. It was optimal to deliver the boost sequentially using a linearly decreasing fractional dose for all tumors. The optimal prescription reduced the tumor SF to 0.001–0.465% of that resulting from the standard prescription, and increased tumor EUD by 25.3–49.3% and the estimated survival time by 7.6–22.2 months. Conclusion: It is feasible to optimize a prescription scheme depending on the individual tumor characteristics. A personalized prescription scheme could potentially increase tumor EUD and the expected survival time significantly without increasing EUD to

  19. SU-F-T-497: Spatiotemporally Optimal, Personalized Prescription Scheme for Glioblastoma Patients Using the Proliferation and Invasion Glioma Model

    International Nuclear Information System (INIS)

    Kim, M; Rockhill, J; Phillips, M

    2016-01-01

    Purpose: To investigate a spatiotemporally optimal radiotherapy prescription scheme and its potential benefit for glioblastoma (GBM) patients using the proliferation and invasion (PI) glioma model. Methods: The standard prescription for GBM was assumed to deliver 46Gy in 23 fractions to GTV1+2cm margin and an additional 14Gy in 7 fractions to GTV2+2cm margin. We simulated the tumor proliferation and invasion in 2D according to the PI glioma model with a moving velocity of 0.029 (slow-move), 0.079 (average-move), and 0.13 (fast-move) mm/day for GTV2 with a radius of 1 and 2cm. For each tumor, the margin around GTV1 and GTV2 was varied over 0–6 cm and 1–3 cm, respectively. Total dose to GTV1 was constrained such that the equivalent uniform dose (EUD) to normal brain equals the EUD with the standard prescription. A non-stationary dose policy, where the fractional dose varies, was investigated to estimate the temporal effect of the radiation dose. The efficacy of an optimal prescription scheme was evaluated by tumor cell-surviving fraction (SF), EUD, and the expected survival time. Results: The optimal prescription for the slow-move tumors was to use 3.0 (small)-3.5 (large) cm margins to GTV1, and a 1.5cm margin to GTV2. For the average- and fast-move tumors, it was optimal to use a 6.0cm margin for GTV1, suggesting that whole brain therapy is optimal, and then 1.5cm (average-move) and 1.5–3.0cm (fast-move, small-large) margins for GTV2. It was optimal to deliver the boost sequentially using a linearly decreasing fractional dose for all tumors. The optimal prescription reduced the tumor SF to 0.001–0.465% of that resulting from the standard prescription, and increased tumor EUD by 25.3–49.3% and the estimated survival time by 7.6–22.2 months. Conclusion: It is feasible to optimize a prescription scheme depending on the individual tumor characteristics. A personalized prescription scheme could potentially increase tumor EUD and the expected survival time significantly without increasing EUD to
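The dose constraint in the records above is expressed through the equivalent uniform dose. The commonly used generalized EUD (gEUD) can be sketched as below; the voxel doses and the parameter a are purely illustrative, not values from the abstract:

```python
import numpy as np

# Generalized equivalent uniform dose:
#   EUD = ( (1/N) * sum_i d_i**a )**(1/a)
# where d_i are voxel doses and a is a tissue-specific parameter
# (a < 1, often negative, for tumors; a > 1 for serial normal tissues).

def eud(doses, a):
    doses = np.asarray(doses, dtype=float)
    return np.mean(doses ** a) ** (1.0 / a)

# For a uniform dose, the EUD equals that dose.
uniform = eud([60.0, 60.0, 60.0], a=-10)
# For a tumor (negative a), a cold spot drags the EUD toward the minimum.
cold_spot = eud([60.0, 60.0, 30.0], a=-10)
```

This behavior is why increasing tumor EUD (as the optimal prescriptions above do) is tied to eliminating underdosed regions.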

  20. Conceptual design and modeling of a six-dimensional bunch merging scheme for a muon collider

    Directory of Open Access Journals (Sweden)

    Yu Bao

    2016-03-01

    Full Text Available. A high luminosity muon collider requires single, intense muon bunches with small emittances: just one of each sign. An efficient front end and a cooling channel have been designed and simulated within the collaboration of the Muon Accelerator Program. The muons are first bunched and phase rotated into 21 bunches, and then cooled in six dimensions. When they are cool enough, they are merged into single bunches: one of each sign. The bunch merging scheme has been outlined with preliminary simulations in previous studies. In this paper we present a comprehensive design with its end-to-end simulation. The 21 bunches are first merged in longitudinal phase space into seven bunches. These are directed into seven “trombone” paths with different lengths, to bring them to the same time, and then merged transversely in a collecting “funnel” into the required single larger bunches. Detailed numerical simulations show that the 6D emittance of the resulting bunch reaches the parameters needed for high acceptance into the downstream cooling channel.

  1. Snow specific surface area simulation using the one-layer snow model in the Canadian LAnd Surface Scheme (CLASS)

    Directory of Open Access Journals (Sweden)

    A. Roy

    2013-06-01

    Full Text Available. Snow grain size is a key parameter for modeling microwave snow emission properties and the surface energy balance because of its influence on the snow albedo, thermal conductivity and diffusivity. A model of the specific surface area (SSA) of snow was implemented in the one-layer snow model in the Canadian LAnd Surface Scheme (CLASS) version 3.4. This offline multilayer model (CLASS-SSA) simulates the decrease of SSA based on snow age, snow temperature and the temperature gradient under dry snow conditions, while it considers the liquid water content of the snowpack for wet snow metamorphism. We compare the model with ground-based measurements from several sites (alpine, arctic and subarctic) with different types of snow. The model provides simulated SSA in good agreement with measurements, with an overall point-to-point root-mean-square error (RMSE) of 8.0 m² kg⁻¹ and an RMSE of 5.1 m² kg⁻¹ for the snowpack average SSA. The model, however, is limited under wet conditions due to the single-layer nature of the CLASS model, leading to a single liquid water content value for the whole snowpack. The SSA simulations are of great interest for satellite passive microwave brightness temperature assimilations, snow mass balance retrievals and surface energy balance calculations with associated climate feedbacks.

  2. MODEL OF PHYTOPLANKTON COMPETITION FOR LIMITING AND NONLIMITING NUTRIENTS: IMPLICATIONS FOR DEVELOPMENT OF ESTUARINE AND NEARSHORE MANAGEMENT SCHEMES

    Science.gov (United States)

    The global increase in noxious bloom occurrences has heightened the need for phytoplankton management schemes. Such schemes require the ability to predict phytoplankton succession. Equilibrium Resources Competition theory, which is popular for predicting succession in lake systems...

  3. Modeling the Structural Response of Reinforced Glass Beams using an SLA Scheme

    NARCIS (Netherlands)

    Louter, P.C.; Graaf, van de Anne; Rots, J.G.; Bos, Freek; Louter, Pieter Christiaan; Veer, Fred

    2010-01-01

    This paper investigates whether a novel computational sequentially linear analysis (SLA) technique, which is especially developed for modeling brittle material response, is applicable for modeling the structural response of metal reinforced glass beams. To do so, computational SLA results are

  4. You are who you play you are : Modeling Player Traits from Video Game Behavior

    NARCIS (Netherlands)

    Tekofsky, Shoshannah

    2017-01-01

    You are who you play you are - especially when it comes to your age and your motivations. People say age is only a number, but it's a number we can guess pretty accurately from how someone plays video games. We find that younger people are favored by speed, while older people are favored by wisdom.

  5. An Attention-Information-Based Spatial Adaptation Framework for Browsing Videos via Mobile Devices

    Directory of Open Access Journals (Sweden)

    Li Houqiang

    2007-01-01

    Full Text Available. With the growing popularity of personal digital assistant devices and smart phones, more and more consumers are becoming quite enthusiastic to appreciate videos via mobile devices. However, limited display size of the mobile devices has been imposing significant barriers for users to enjoy browsing high-resolution videos. In this paper, we present an attention-information-based spatial adaptation framework to address this problem. The whole framework includes two major parts: video content generation and video adaptation system. During video compression, the attention information in video sequences will be detected using an attention model and embedded into bitstreams with proposed supplement-enhanced information (SEI) structure. Furthermore, we also develop an innovative scheme to adaptively adjust quantization parameters in order to simultaneously improve the quality of overall encoding and the quality of transcoding the attention areas. When the high-resolution bitstream is transmitted to mobile users, a fast transcoding algorithm we developed earlier will be applied to generate a new bitstream for attention areas in frames. The new low-resolution bitstream containing mostly attention information, instead of the high-resolution one, will be sent to users for display on the mobile devices. Experimental results show that the proposed spatial adaptation scheme is able to improve both subjective and objective video qualities.

  6. An Attention-Information-Based Spatial Adaptation Framework for Browsing Videos via Mobile Devices

    Science.gov (United States)

    Li, Houqiang; Wang, Yi; Chen, Chang Wen

    2007-12-01

    With the growing popularity of personal digital assistant devices and smart phones, more and more consumers are becoming quite enthusiastic to appreciate videos via mobile devices. However, limited display size of the mobile devices has been imposing significant barriers for users to enjoy browsing high-resolution videos. In this paper, we present an attention-information-based spatial adaptation framework to address this problem. The whole framework includes two major parts: video content generation and video adaptation system. During video compression, the attention information in video sequences will be detected using an attention model and embedded into bitstreams with proposed supplement-enhanced information (SEI) structure. Furthermore, we also develop an innovative scheme to adaptively adjust quantization parameters in order to simultaneously improve the quality of overall encoding and the quality of transcoding the attention areas. When the high-resolution bitstream is transmitted to mobile users, a fast transcoding algorithm we developed earlier will be applied to generate a new bitstream for attention areas in frames. The new low-resolution bitstream containing mostly attention information, instead of the high-resolution one, will be sent to users for display on the mobile devices. Experimental results show that the proposed spatial adaptation scheme is able to improve both subjective and objective video qualities.

  7. Ensemble using different Planetary Boundary Layer schemes in WRF model for wind speed and direction prediction over Apulia region

    Science.gov (United States)

    Tateo, Andrea; Marcello Miglietta, Mario; Fedele, Francesca; Menegotto, Micaela; Monaco, Alfonso; Bellotti, Roberto

    2017-04-01

    The Weather Research and Forecasting mesoscale model (WRF) was used to simulate hourly 10 m wind speed and direction over the city of Taranto, Apulia region (south-eastern Italy). This area is characterized by a large industrial complex including the largest European steel plant and is subject to a Regional Air Quality Recovery Plan. This plan constrains industries in the area to reduce the mean daily emissions from diffuse and point sources by 10 % during specific meteorological conditions named wind days. According to the Recovery Plan, the Regional Environmental Agency ARPA-PUGLIA is responsible for forecasting these specific meteorological conditions 72 h in advance and possibly issuing the early warning. In particular, an accurate wind simulation is required. Unfortunately, numerical weather prediction models suffer from errors, especially for near-surface fields. These errors depend primarily on uncertainties in the initial and boundary conditions provided by global models and secondly on the model formulation, in particular the physical parametrizations used to represent processes such as turbulence, radiation exchange, cumulus and microphysics. In our work, we tried to compensate for the latter limitation by using different Planetary Boundary Layer (PBL) parameterization schemes. Five combinations of PBL and Surface Layer (SL) schemes were considered. Simulations are implemented in a real-time configuration since our intention is to analyze the same configuration implemented by ARPA-PUGLIA for operational runs; the validation is focused on a time range extending from 49 to 72 h with hourly resolution. The performance was assessed by comparing the WRF model output with ground data measured at a weather monitoring station in Taranto, near the steel plant. After the analysis of the simulations performed with different PBL schemes, both simple (e.g. average) and more complex post-processing methods (e.g. weighted average
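The simple versus weighted post-processing of ensemble members mentioned above can be sketched in a few lines. The member forecasts and their historical error statistics below are invented for illustration; a common choice, used here, is to weight each member by the inverse of its past mean-squared error:

```python
import numpy as np

# Combine wind-speed forecasts from several PBL/SL-scheme runs.
forecasts = np.array([4.2, 5.1, 3.8, 4.9, 4.5])   # five members (m/s)
past_mse  = np.array([1.0, 0.5, 2.0, 0.8, 0.6])   # members' historical MSE

# Simple ensemble mean: every member counts equally.
simple = forecasts.mean()

# Weighted mean: members with smaller past error get larger weights.
w = (1.0 / past_mse) / (1.0 / past_mse).sum()
weighted = np.sum(w * forecasts)
```

Here the weighted estimate is pulled toward the historically better members (the second and fifth), which is the intended advantage over the plain average.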

  8. Line Differential Protection Scheme Modelling for Underground 420 kV Cable Systems

    DEFF Research Database (Denmark)

    Sztykiel, Michal; Bak, Claus Leth; Dollerup, Sebastian

    2011-01-01

    models can be applied with various systems, allowing to obtain the most optimal configuration of the protective relaying. The present paper describes modelling methodology on the basis of Siemens SIPROTEC 4 7SD522/610. Relay model was verified experimentally with its real equivalent by both EMTP...

  9. Lattice Boltzmann scheme for mixture modeling: analysis of the continuum diffusion regimes recovering Maxwell-Stefan model and incompressible Navier-Stokes equations.

    Science.gov (United States)

    Asinari, Pietro

    2009-11-01

    A finite difference lattice Boltzmann scheme for homogeneous mixture modeling, which recovers the Maxwell-Stefan diffusion model in the continuum limit without the restriction of the mixture-averaged diffusion approximation, was recently proposed [P. Asinari, Phys. Rev. E 77, 056706 (2008)]. The theoretical basis is the Bhatnagar-Gross-Krook-type kinetic model for gas mixtures [P. Andries, K. Aoki, and B. Perthame, J. Stat. Phys. 106, 993 (2002)]. In the present paper, the recovered macroscopic equations in the continuum limit are systematically investigated by varying the ratio between the characteristic diffusion speed and the characteristic barycentric speed. It turns out that the diffusion speed must be at least one order of magnitude (in terms of Knudsen number) smaller than the barycentric speed in order to recover the Navier-Stokes equations for mixtures in the incompressible limit. Some further numerical tests are also reported. In particular, (1) the solvent and dilute test cases are considered, because they are limiting cases in which the Maxwell-Stefan model reduces automatically to the Fickian case. Moreover, (2) some tests based on the Stefan diffusion tube are reported, proving the full capabilities of the proposed scheme in solving Maxwell-Stefan diffusion problems. The proposed scheme agrees well with the expected theoretical results.

  10. A New Scheme for the Simulation of Microscale Flow and Dispersion in Urban Areas by Coupling Large-Eddy Simulation with Mesoscale Models

    Science.gov (United States)

    Li, Haifeng; Cui, Guixiang; Zhang, Zhaoshun

    2018-04-01

    A coupling scheme is proposed for the simulation of microscale flow and dispersion in which both the mesoscale field and small-scale turbulence are specified at the boundary of a microscale model. The small-scale turbulence is obtained individually in the inner and outer layers by the transformation of pre-computed databases, and then combined in a weighted sum. Validation of the results of a flow over a cluster of model buildings shows that the inner- and outer-layer transition height should be located in the roughness sublayer. Both the new scheme and the previous scheme are applied in the simulation of the flow over the central business district of Oklahoma City (a point source during intensive observation period 3 of the Joint Urban 2003 experimental campaign), with results showing that the wind speed is well predicted in the canopy layer. Compared with the previous scheme, the new scheme improves the prediction of the wind direction and turbulent kinetic energy (TKE) in the canopy layer. The flow field influences the scalar plume in two ways, i.e. the averaged flow field determines the advective flux and the TKE field determines the turbulent flux. Thus, the mean, root-mean-square and maximum of the concentration agree better with the observations with the new scheme. These results indicate that the new scheme is an effective means of simulating the complex flow and dispersion in urban canopies.

  11. Proposed Robot Scheme with 5 DoF and Dynamic Modelling Using Maple Software

    OpenAIRE

    Shala Ahmet; Bruçi Mirlind

    2017-01-01

    This paper presents the dynamical modelling of robots, which is commonly the first step in the modelling, analysis and control of robotic systems. It focuses on the Denavit-Hartenberg (DH) convention for kinematics and the Newton-Euler formulation for the dynamic modelling of a 5-DoF (degree-of-freedom) 3D robot. The dynamical model is derived using the software Maple. The derived dynamical model of the 5-DoF robot is converted for Matlab use for future analysis, control ...

  12. Proposed Robot Scheme with 5 DoF and Dynamic Modelling Using Maple Software

    Directory of Open Access Journals (Sweden)

    Shala Ahmet

    2017-11-01

    Full Text Available. This paper presents the dynamical modelling of robots, which is commonly the first step in the modelling, analysis and control of robotic systems. It focuses on the Denavit-Hartenberg (DH) convention for kinematics and the Newton-Euler formulation for the dynamic modelling of a 5-DoF (degree-of-freedom) 3D robot. The dynamical model is derived using the software Maple and converted for Matlab use for future analysis, control and simulations.

  13. A hybrid feature selection and health indicator construction scheme for delay-time-based degradation modelling of rolling element bearings

    Science.gov (United States)

    Zhang, Bin; Deng, Congying; Zhang, Yi

    2018-03-01

    Rolling element bearings are mechanical components used frequently in most rotating machinery, and they are also vulnerable links representing the main source of failures in such systems. Thus, health condition monitoring and fault diagnosis of rolling element bearings have long been studied to improve the operational reliability and maintenance efficiency of rotating machines. Over the past decade, prognosis, which enables forewarning of failure and estimation of residual life, has attracted increasing attention. To accurately and efficiently predict failure of the rolling element bearing, the degradation needs to be well represented and modelled. For this purpose, degradation of the rolling element bearing is analysed with the delay-time-based model in this paper. Also, a hybrid feature selection and health indicator construction scheme is proposed for extracting bearing-health-relevant information from condition monitoring sensor data. Effectiveness of the presented approach is validated through case studies on rolling element bearing run-to-failure experiments.

  14. Models of Marine Fish Biodiversity: Assessing Predictors from Three Habitat Classification Schemes.

    Science.gov (United States)

    Yates, Katherine L; Mellin, Camille; Caley, M Julian; Radford, Ben T; Meeuwig, Jessica J

    2016-01-01

    Prioritising biodiversity conservation requires knowledge of where biodiversity occurs. Such knowledge, however, is often lacking. New technologies for collecting biological and physical data coupled with advances in modelling techniques could help address these gaps and facilitate improved management outcomes. Here we examined the utility of environmental data, obtained using different methods, for developing models of both uni- and multivariate biodiversity metrics. We tested which biodiversity metrics could be predicted best and evaluated the performance of predictor variables generated from three types of habitat data: acoustic multibeam sonar imagery, predicted habitat classification, and direct observer habitat classification. We used boosted regression trees (BRT) to model metrics of fish species richness, abundance and biomass, and multivariate regression trees (MRT) to model biomass and abundance of fish functional groups. We compared model performance using different sets of predictors and estimated the relative influence of individual predictors. Models of total species richness and total abundance performed best; those developed for endemic species performed worst. Abundance models performed substantially better than corresponding biomass models. In general, BRT and MRTs developed using predicted habitat classifications performed less well than those using multibeam data. The most influential individual predictor was the abiotic categorical variable from direct observer habitat classification, and models that incorporated predictors from direct observer habitat classification consistently outperformed those that did not. Our results show that while remotely sensed data can offer considerable utility for predictive modelling, the addition of direct observer habitat classification data can substantially improve model performance. Thus it appears that there are aspects of marine habitats that are important for modelling metrics of fish biodiversity that are
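The boosted regression tree idea used above can be illustrated in miniature with gradient-boosted stumps fit to residuals. This is only a bare-bones sketch on synthetic one-dimensional "richness vs. depth" data; the actual study used multivariate predictors and full BRT/MRT machinery:

```python
import numpy as np

# Synthetic data: species richness declining with depth, plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 50, 200)                         # depth (m), invented
y = 30 * np.exp(-x / 20) + rng.normal(0, 1, 200)    # richness, invented

def fit_stump(x, r):
    # Best single-split regression stump for residuals r.
    best = None
    for t in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        left, right = r[x <= t].mean(), r[x > t].mean()
        sse = ((r - np.where(x <= t, left, right)) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left, right)
    return best[1:]

def boost(x, y, n_trees=100, lr=0.1):
    # Gradient boosting for squared loss: repeatedly fit residuals.
    f = np.full_like(y, y.mean())
    stumps = []
    for _ in range(n_trees):
        t, left, right = fit_stump(x, y - f)
        f += lr * np.where(x <= t, left, right)
        stumps.append((t, left, right))
    return y.mean(), stumps

base, stumps = boost(x, y)
pred = base + sum(0.1 * np.where(x <= t, l, r) for t, l, r in stumps)
```

After boosting, the residual error is far below the raw variance of the response, which is the basic mechanism behind the BRT models compared in the abstract.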

  15. Energy law preserving C0 finite element schemes for phase field models in two-phase flow computations

    International Nuclear Information System (INIS)

    Hua Jinsong; Lin Ping; Liu Chun; Wang Qi

    2011-01-01

    Highlights: → We study phase-field models for multi-phase flow computation. → We develop an energy-law preserving C0 FEM. → We show that the energy-law preserving method works better. → We overcome unphysical oscillation associated with the Cahn-Hilliard model. - Abstract: Building on an idea from earlier work, we develop the energy law preserving method and compute the diffusive interface (phase-field) models of Allen-Cahn and Cahn-Hilliard type, respectively, governing the motion of two-phase incompressible flows. We discretize these two models using a C0 finite element in space and a modified midpoint scheme in time. To increase the stability in the pressure variable we treat the divergence-free condition by a penalty formulation, under which the discrete energy law can still be derived for these diffusive interface models. Through an example we demonstrate that the energy law preserving method is beneficial for computing these multi-phase flow models. We also demonstrate that when applying the energy law preserving method to the model of Cahn-Hilliard type, unphysical interfacial oscillations may occur. We examine the source of such oscillations and a remedy is presented to eliminate the oscillations. A few two-phase incompressible flow examples are computed to show the good performance of our method.

  16. Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes

    Science.gov (United States)

    Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris

    2017-12-01

    Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
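
A minimal sketch of the GLUE procedure the record describes (Monte Carlo parameter sampling, a likelihood measure, a behavioral threshold, and normalized posterior weights), applied here to a hypothetical linear toy model rather than CSLM:

```python
import random

def glue(model, observations, param_sampler, n_samples=5000, threshold=0.5, seed=1):
    # Generalized Likelihood Uncertainty Estimation (GLUE), minimal sketch:
    # 1) Monte Carlo sample parameter sets, 2) score each with a likelihood
    #    measure (here Nash-Sutcliffe efficiency), 3) keep "behavioral" sets
    #    above a threshold, 4) normalize scores into posterior weights.
    rng = random.Random(seed)
    obs_mean = sum(observations) / len(observations)
    ss_obs = sum((o - obs_mean) ** 2 for o in observations)
    behavioral = []
    for _ in range(n_samples):
        theta = param_sampler(rng)
        sim = model(theta)
        ss_err = sum((s - o) ** 2 for s, o in zip(sim, observations))
        nse = 1.0 - ss_err / ss_obs
        if nse > threshold:
            behavioral.append((theta, nse))
    total = sum(w for _, w in behavioral)
    return [(theta, w / total) for theta, w in behavioral]

# Toy model y = k * x with a "true" k of 2.0, observed with noise.
xs = [float(i) for i in range(1, 11)]
rng = random.Random(42)
obs = [2.0 * x + rng.gauss(0.0, 0.3) for x in xs]
posterior = glue(lambda k: [k * x for x in xs], obs, lambda r: r.uniform(0.0, 5.0))
k_mean = sum(k * w for k, w in posterior)
```

The posterior-weighted parameter estimates and the spread of behavioral simulations are what the record uses to quantify flux uncertainty and identify the dominant parameter (Kd).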

  17. Development of a Model for a Cordon Pricing Scheme Considering Environmental Equity: A Case Study of Tehran

    Directory of Open Access Journals (Sweden)

    Shahriar Afandizadeh

    2016-02-01

    Full Text Available Congestion pricing has been recognized as an effective countermeasure for mitigating urban traffic congestion. Despite its positive effects, its implementation has faced problems. This paper investigates the issue of environmental equity in cordon pricing with a park-and-ride scheme. Although pollution decreases inside the cordon when cordon pricing is implemented, air pollutant emissions may increase on some links and in the network as a whole. An increase in emissions over the whole network therefore means more emissions outside the cordon. In effect, the policy may transfer air pollutant emissions from inside to outside the cordon, creating a type of environmental inequity. To reduce this inequity, a bi-level optimization model with an equity constraint is developed. The proposed solution algorithm, based on the second version of the strength Pareto evolutionary algorithm (SPEA2), is applied to the city network of Tehran. The results reveal that it is reasonable to consider environmental equity as an objective function in cordon pricing. In addition, a sustainable situation for the transportation system can be created by improving environmental inequity with a relatively low reduction in social welfare. Moreover, environmental inequity impacts arise in real networks and should be considered in cordon pricing schemes.

  18. Computable error estimates of a finite difference scheme for option pricing in exponential Lévy models

    KAUST Repository

    Kiessling, Jonas

    2014-05-06

    Option prices in exponential Lévy models solve certain partial integro-differential equations (PIDEs). This work focuses on developing novel, computable error approximations for a finite difference scheme that is suitable for solving such PIDEs. The scheme was introduced in (Cont and Voltchkova, SIAM J. Numer. Anal. 43(4):1596-1626, 2005). The main results of this work are new estimates of the dominating error terms, namely the time and space discretisation errors. In addition, the leading-order terms of the error estimates are determined in a form that is more amenable to computation. The payoff is only assumed to satisfy an exponential growth condition; it is not assumed to be Lipschitz continuous, as in previous works. If the underlying Lévy process has infinite jump activity, then the jumps smaller than some cut-off are approximated by diffusion. The resulting diffusion approximation error is also estimated, with its leading-order term in computable form, as well as the dependence of the time and space discretisation errors on this approximation. Consequently, it is possible to determine how to jointly choose the space and time grid sizes and the cut-off parameter. © 2014 Springer Science+Business Media Dordrecht.
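
As a hedged illustration of the finite difference idea, the sketch below prices a European call in the jump-free Black-Scholes special case with an explicit scheme and checks it against the closed form; the paper's actual scheme additionally treats the integral (jump) term of the PIDE and estimates the discretisation errors:

```python
import math

def bs_call_exact(S, K, r, sigma, T):
    # Black-Scholes closed form for a European call (the jump-free case).
    d1 = (math.log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def bs_call_fd(S0, K, r, sigma, T, s_max=300.0, ns=120, nt=800):
    # Explicit finite differences on an S-grid, marching backward in time
    # from the payoff; nt is chosen large enough for explicit stability
    # (dt * sigma^2 * S_max^2 / ds^2 < 1).
    ds = s_max / ns
    dt = T / nt
    V = [max(i * ds - K, 0.0) for i in range(ns + 1)]
    for n in range(nt):
        tau = (n + 1) * dt  # time to maturity after this step
        Vn = V[:]
        for i in range(1, ns):
            S = i * ds
            diff = 0.5 * sigma ** 2 * S ** 2 * (Vn[i + 1] - 2 * Vn[i] + Vn[i - 1]) / ds ** 2
            conv = r * S * (Vn[i + 1] - Vn[i - 1]) / (2 * ds)
            V[i] = Vn[i] + dt * (diff + conv - r * Vn[i])
        V[0] = 0.0                                  # worthless at S = 0
        V[ns] = s_max - K * math.exp(-r * tau)      # deep in-the-money asymptote
    i = int(S0 / ds)
    w = (S0 - i * ds) / ds
    return (1.0 - w) * V[i] + w * V[min(i + 1, ns)]
```

The gap between the grid solution and the closed form is exactly the kind of space/time discretisation error whose leading-order terms the record's estimates make computable.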

  19. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions.

    Science.gov (United States)

    Yao, Weiguang; Merchant, Thomas E; Farr, Jonathan B

    2016-10-03

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.
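
The single-Gaussian versus voxel-specific-Gaussian distinction can be sketched with the standard erf-based integral of a Gaussian lateral fluence profile over voxel boundaries. The per-voxel sigmas below are a hypothetical input, not the paper's re-initialization procedure on an effective surface:

```python
import math

def gaussian_cdf(x, sigma):
    # CDF of a zero-mean Gaussian with standard deviation sigma.
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def voxel_fluence_fractions(edges, sigmas):
    # Fraction of a laterally Gaussian beamlet fluence deposited in each
    # voxel [edges[j], edges[j+1]], using a per-voxel sigma
    # (len(sigmas) == len(edges) - 1). With one shared sigma this reduces
    # to the conventional single-Gaussian pencil-beam model; with
    # voxel-specific sigmas the fractions need not sum to one.
    fractions = []
    for j in range(len(edges) - 1):
        s = sigmas[j]
        fractions.append(gaussian_cdf(edges[j + 1], s) - gaussian_cdf(edges[j], s))
    return fractions
```

With a single sigma the fractions over a wide lateral range sum to one and are symmetric about the beamlet axis, which is the baseline the voxel-specific correction then perturbs near heterogeneities.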

  20. An efficient scheme for a phase field model for the moving contact line problem with variable density and viscosity

    KAUST Repository

    Gao, Min

    2014-09-01

    In this paper, we develop an efficient numerical method for the two-phase moving contact line problem with variable density, viscosity, and slip length. The physical model is based on a phase field approach, which consists of a coupled system of the Cahn-Hilliard and Navier-Stokes equations with the generalized Navier boundary condition [1,2,5]. To overcome the difficulties due to large density and viscosity ratios, the Navier-Stokes equations are solved by a splitting method based on a pressure Poisson equation [11], while the Cahn-Hilliard equation is solved by a convex splitting method. We show that the method is stable under certain conditions. The linearized schemes are easy to implement and introduce only a mild CFL time constraint. Numerical tests are carried out to verify the accuracy, stability and efficiency of the schemes. The method allows us to simulate interface problems with extremely small interface thickness. Three-dimensional simulations are included to validate the efficiency of the method. © 2014 Elsevier Inc.

  1. Dashboard Videos

    Science.gov (United States)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-11-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his Lab Out Loud blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing this website and video, I decided to create my own dashboard videos to show to my high school physics students. I have produced and synchronized 12 separate dashboard videos, each about 10 minutes in length, driving around the city of Lawrence, KS, and Douglas County, and posted them to a website. Each video reflects different types of driving: both positive and negative accelerations and constant speeds. As shown in Fig. 1, I was able to capture speed, distance, and miles per gallon from my dashboard instrumentation. By linking this with a stopwatch, each of these quantities can be graphed with respect to time. I anticipate and hope that teachers will find these useful in their own classrooms, i.e., having physics students watch the videos and create their own motion maps (distance-time, speed-time) for study.
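
The motion maps mentioned above amount to numerical integration and differentiation of the sampled speed; a minimal sketch, assuming speed read off the video in m/s at known stopwatch times:

```python
def distance_time(times_s, speeds_mps):
    # Cumulative distance from sampled speed via the trapezoid rule,
    # i.e., the distance-time motion map built from a dashboard video.
    dist = [0.0]
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        dist.append(dist[-1] + 0.5 * (speeds_mps[i] + speeds_mps[i - 1]) * dt)
    return dist

def acceleration(times_s, speeds_mps):
    # Central-difference estimate of acceleration at interior samples.
    acc = []
    for i in range(1, len(times_s) - 1):
        acc.append((speeds_mps[i + 1] - speeds_mps[i - 1]) /
                   (times_s[i + 1] - times_s[i - 1]))
    return acc
```

A constant 10 m/s for 10 s integrates to 100 m, and a linearly increasing speed recovers a constant acceleration, which is what a student's hand-drawn motion map should show.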

  2. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

    This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. First, it analyzes the current academic discussion on this subject and confronts the differing opinions of supporters and objectors of the idea that video games can be a full-fledged art form. The second aim of the paper is to analyze the properties inherent to video games in order to find the reason why the cultural elite considers video games as i...

  3. High-speed video analysis improves the accuracy of spinal cord compression measurement in a mouse contusion model.

    Science.gov (United States)

    Fournely, Marion; Petit, Yvan; Wagnac, Éric; Laurin, Jérôme; Callot, Virginie; Arnoux, Pierre-Jean

    2018-01-01

    Animal models of spinal cord injury aim to utilize controlled and reproducible conditions. However, a literature review reveals that mouse contusion studies using equivalent protocols may show large disparities in the observed impact force vs. cord compression relationship. The overall purpose of this study was to investigate possible sources of bias in these measurements. The specific objective was to improve spinal cord compression measurements using a video-based setup to detect the impactor-spinal cord time-to-contact. A force-controlled 30 kdyn unilateral contusion at the C4 vertebral level was performed in six mice with the Infinite Horizon (IH) impactor. High-speed video was used to determine the time-to-contact between the impactor tip and the spinal cord and to compute the related displacement of the tip into the tissue: the spinal cord compression and the compression ratio. Delayed time-to-contact detection with the IH device led to an underestimation of the cord compression. Compression values indicated by the IH were 64% lower than those based on video analysis (0.33 mm vs. 0.88 mm). Consequently, the mean compression ratio derived from the device was underestimated when compared to the value derived from video analysis (22% vs. 61%). Default time-to-contact detection from the IH led to significant errors in spinal cord compression assessment. Accordingly, this may explain some of the reported data discrepancies in the literature. The proposed setup could be implemented by users of contusion devices to improve the quantitative description of the primary injury inflicted on the spinal cord. Copyright © 2017 Elsevier B.V. All rights reserved.
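
The compression measurement itself is simple arithmetic once the contact time is known: compression is the peak tip excursion relative to the tip position at contact, so a late detected contact time shrinks the apparent value. A sketch with a synthetic displacement trace (illustrative numbers, not the study's data):

```python
def cord_compression(times_ms, tip_depth_mm, contact_time_ms):
    # Compression = deepest tip excursion measured from the tip position at
    # the moment of impactor-cord contact; a late detected contact time
    # therefore shrinks the apparent compression.
    depth_at_contact = tip_depth_mm[-1]
    for t, d in zip(times_ms, tip_depth_mm):
        if t >= contact_time_ms:
            depth_at_contact = d
            break
    return max(tip_depth_mm) - depth_at_contact

def compression_ratio(compression_mm, cord_diameter_mm):
    return compression_mm / cord_diameter_mm

# Synthetic tip-displacement trace (illustrative numbers only).
times = [float(t) for t in range(11)]       # ms
depth = [0.1 * t for t in range(11)]        # mm into the tissue
video_compression = cord_compression(times, depth, contact_time_ms=1.0)   # early, video-detected
device_compression = cord_compression(times, depth, contact_time_ms=5.0)  # delayed, device-detected
```

With this toy trace the delayed detection reports a much smaller compression than the video-based one, mirroring the direction of the bias reported in the record.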

  4. A New Scheme for Experimental-Based Modeling of a Traveling Wave Ultrasonic Motor

    DEFF Research Database (Denmark)

    Mojallali, Hamed; Amini, R.; Izadi-Zamanabadi, Roozbeh

    2005-01-01

    In this paper, a new method for equivalent circuit modeling of a traveling wave ultrasonic motor is presented. The free stator of the motor is modeled by an equivalent circuit containing complex circuit elements. A systematic approach for identifying the elements of the equivalent circuit...

  5. Line Differential Protection Scheme Modelling for Underground 420 kV Cable Systems

    DEFF Research Database (Denmark)

    Sztykiel, Michal; Bak, Claus Leth; Wiechowski, Wojciech

    2010-01-01

    can be applied with various systems, allowing one to obtain the optimal configuration of the protective relaying. The present paper describes the modelling methodology on the basis of the Siemens SIPROTEC 4 7SD522/610. The relay model was verified experimentally against its real equivalent by both EMTP

  6. A general scheme for training and optimization of the Grenander deformable template model

    DEFF Research Database (Denmark)

    Fisker, Rune; Schultz, Nette; Duta, N.

    2000-01-01

    parameters, a very fast general initialization algorithm and an adaptive likelihood model based on local means. The model parameters are trained by a combination of a 2D shape learning algorithm and a maximum likelihood based criteria. The fast initialization algorithm is based on a search approach using...

  7. Kernel Density Independence Sampling based Monte Carlo Scheme (KISMCS) for inverse hydrological modeling

    NARCIS (Netherlands)

    Shafiei, M.; Gharari, S.; Pande, S.; Bhulai, S.

    2014-01-01

    Posterior sampling methods are increasingly being used to describe parameter and model predictive uncertainty in hydrologic modelling. This paper proposes an alternative to random walk chains (such as DREAM-zs). We propose a sampler based on independence chains with an embedded feature of

  8. Improving SWAT model prediction using an upgraded denitrification scheme and constrained auto calibration

    Science.gov (United States)

    The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...

  9. Experimental Investigation of Aeroelastic Deformation of Slender Wings at Supersonic Speeds Using a Video Model Deformation Measurement Technique

    Science.gov (United States)

    Erickson, Gary E.

    2013-01-01

    A video-based photogrammetric model deformation system was established as a dedicated optical measurement technique at supersonic speeds in the NASA Langley Research Center Unitary Plan Wind Tunnel. This system was used to measure the wing twist due to aerodynamic loads of two supersonic commercial transport airplane models with identical outer mold lines but different aeroelastic properties. One model featured wings with deflectable leading- and trailing-edge flaps and internal channels to accommodate static pressure tube instrumentation. The wings of the second model were of single-piece construction without flaps or internal channels. The testing was performed at Mach numbers from 1.6 to 2.7, unit Reynolds numbers of 1.0 million to 5.0 million, and angles of attack from -4 degrees to +10 degrees. The video model deformation system quantified the wing aeroelastic response to changes in the Mach number, Reynolds number concurrent with dynamic pressure, and angle of attack and effectively captured the differences in the wing twist characteristics between the two test articles.

  10. Multiple player tracking in sports video: a dual-mode two-way bayesian inference approach with progressive observation modeling.

    Science.gov (United States)

    Xing, Junliang; Ai, Haizhou; Liu, Liwei; Lao, Shihong

    2011-06-01

    Multiple object tracking (MOT) is a very challenging task yet of fundamental importance for many practical applications. In this paper, we focus on the problem of tracking multiple players in sports video which is even more difficult due to the abrupt movements of players and their complex interactions. To handle the difficulties in this problem, we present a new MOT algorithm which contributes both in the observation modeling level and in the tracking strategy level. For the observation modeling, we develop a progressive observation modeling process that is able to provide strong tracking observations and greatly facilitate the tracking task. For the tracking strategy, we propose a dual-mode two-way Bayesian inference approach which dynamically switches between an offline general model and an online dedicated model to deal with single isolated object tracking and multiple occluded object tracking integrally by forward filtering and backward smoothing. Extensive experiments on different kinds of sports videos, including football, basketball, as well as hockey, demonstrate the effectiveness and efficiency of the proposed method.
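
The forward filtering/backward smoothing structure of the inference can be sketched in the linear-Gaussian special case with a scalar Kalman filter plus a Rauch-Tung-Striebel smoother; this is a generic illustration of two-way Bayesian inference, not the paper's dual-mode tracker:

```python
def kalman_rts(observations, a=1.0, q=0.01, r_var=0.25, x0=0.0, p0=1.0):
    # Forward pass: scalar Kalman filter for x_t = a * x_{t-1} + noise(q),
    # y_t = x_t + noise(r_var). Backward pass: Rauch-Tung-Striebel smoother.
    means, variances, preds = [], [], []
    x, p = x0, p0
    for y in observations:
        xp, pp = a * x, a * a * p + q                 # predict
        k = pp / (pp + r_var)                         # Kalman gain
        x, p = xp + k * (y - xp), (1.0 - k) * pp      # update
        means.append(x); variances.append(p); preds.append((xp, pp))
    sm, sv = means[-1], variances[-1]
    smoothed = [(sm, sv)]
    for t in range(len(observations) - 2, -1, -1):
        xp, pp = preds[t + 1]
        g = variances[t] * a / pp                     # smoother gain
        sm = means[t] + g * (sm - xp)
        sv = variances[t] + g * g * (sv - pp)
        smoothed.append((sm, sv))
    smoothed.reverse()
    return means, variances, smoothed
```

Backward smoothing can only reduce posterior uncertainty relative to forward filtering alone, which is the payoff of running the inference in both directions, as the record's tracker does.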

  11. A rainfall disaggregation scheme for sub-hourly time scales: Coupling a Bartlett-Lewis based model with adjusting procedures

    Science.gov (United States)

    Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris

    2018-01-01

    Many hydrological applications, such as flood studies, require long rainfall records at fine time scales, varying from daily down to a 1 min time step. In the real world, however, there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures that modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating into the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall at a wide range of time scales, while the introduction of the model with dependent intensity-duration results in better performance in terms of skewness, rainfall extremes and dry proportions.
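
The adjusting step that enforces consistency across scales can be sketched with the classical proportional adjusting procedure, which rescales synthetic fine-scale depths to match the observed coarse-scale total (the paper's actual procedures are more elaborate):

```python
def proportional_adjust(fine_depths, coarse_total):
    # Rescale synthetic fine-scale rainfall depths so they aggregate exactly
    # to the observed coarse-scale total: x_i' = x_i * Z / sum(x).
    # Dry intervals (zeros) stay dry.
    s = sum(fine_depths)
    if s == 0.0:
        # All-dry proposal: fall back to a uniform split of any wet total
        # (a simple hypothetical choice; real schemes would resample instead).
        n = len(fine_depths)
        return [coarse_total / n] * n if coarse_total > 0.0 else list(fine_depths)
    return [x * coarse_total / s for x in fine_depths]
```

For example, a proposed sequence summing to 6 mm is rescaled to a 12 mm observed total while preserving its temporal pattern, which is exactly the consistency property the disaggregation scheme needs between, say, 5-min depths and the daily gauge total.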

  12. Investigation of a Boiler's Furnace Aerodynamics with a Vortex Solid Fuel Combustion Scheme on Physical and Mathematical Models

    Directory of Open Access Journals (Sweden)

    Prokhorov V.B.,

    2018-04-01

    Full Text Available The important problem of developing low-cost technologies that can provide a deep reduction in the concentration of nitrogen oxides while maintaining fuel burn-up efficiency is considered. This paper presents the results of a study of the aerodynamics of the furnace of boiler TPP-210A on physical and mathematical models, for the case of retrofitting the boiler from liquid to solid slag removal with a two- to three-fold reduction in nitrogen oxide emissions and replacement of the vortex burners with direct-flow burners. The need for these studies is due to the fact that direct-flow burners are "collective action" burners, and efficient fuel combustion can be provided only by the interaction of fuel jets, secondary and tertiary air jets in the furnace volume. A new scheme of air-staged combustion in a system of vertical vortexes of opposite rotation, with direct-flow burners and nozzles and direct injection of Kuznetsky lean coal dust, was developed. In order to test the functional ability and efficiency of the proposed combustion scheme, studies on a physical model of the boiler furnace and a mathematical model of the experimental furnace bench were carried out for the case of an isothermal fluid flow. Comparison showed an acceptable degree of agreement between these results. In all studied regimes, pronounced vortices remain in both the vertical and horizontal planes, which indicates a high degree of mass exchange between jets and combustion products and the stability of the furnace aerodynamics to changes in regime factors.

  13. Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes

    Directory of Open Access Journals (Sweden)

    J.-L. Guerrero

    2017-12-01

    Full Text Available Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere – heat-exchange fluxes – is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue – different parameter-value combinations yielding equivalent results – the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.

  14. Finite difference schemes for a nonlinear Black-Scholes model with transaction cost and volatility risk

    DEFF Research Database (Denmark)

    Mashayekhi, Sima; Hugger, Jens

    2015-01-01

    Several nonlinear Black-Scholes models have been proposed to take transaction cost, large investor performance and illiquid markets into account. One of the most comprehensive models, introduced by Barles and Soner in [4], considers transaction cost in the hedging strategy and risk from an illiquid...... market. In this paper, we compare several finite difference methods for the solution of this model with respect to precision and order of convergence within a computationally feasible domain allowing at most 200 space steps and 10000 time steps. We conclude that standard explicit Euler comes out....

  15. Video compressed sensing using iterative self-similarity modeling and residual reconstruction

    Science.gov (United States)

    Kim, Yookyung; Oh, Han; Bilgin, Ali

    2013-04-01

    Compressed sensing (CS) has great potential for use in video data acquisition and storage because it makes it unnecessary to collect an enormous amount of data and to perform the computationally demanding compression process. We propose an effective CS algorithm for video that consists of two iterative stages. In the first stage, frames containing the dominant structure are estimated. These frames are obtained by thresholding the coefficients of similar blocks. In the second stage, refined residual frames are reconstructed from the original measurements and the measurements corresponding to the frames estimated in the first stage. These two stages are iterated until convergence. The proposed algorithm exhibits superior subjective image quality and significantly improves the peak-signal-to-noise ratio and the structural similarity index measure compared to other state-of-the-art CS algorithms.
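
The first-stage operation, estimating the dominant structure by thresholding coefficients and then forming a residual for the second stage, can be sketched as plain hard thresholding (a generic illustration, not the block-matching transform used in the paper):

```python
def hard_threshold(coeffs, keep):
    # Keep the `keep` largest-magnitude coefficients and zero the rest:
    # the basic operation behind the first stage, which retains a frame's
    # dominant structure and discards detail and noise.
    if keep <= 0:
        return [0.0] * len(coeffs)
    cutoff = sorted((abs(c) for c in coeffs), reverse=True)[keep - 1]
    return [c if abs(c) >= cutoff else 0.0 for c in coeffs]

def residual(signal, estimate):
    # Second-stage input: what remains after removing the dominant part.
    return [s - e for s, e in zip(signal, estimate)]
```

Iterating "estimate dominant structure, then reconstruct the residual from the measurements" until the estimate stops changing is the two-stage loop the record describes.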

  16. Coupled Atmospheric Chemistry Schemes for Modeling Regional and Global Atmospheric Chemistry

    Science.gov (United States)

    Saunders, E.; Stockwell, W. R.

    2016-12-01

    Atmospheric chemistry models require chemical reaction mechanisms to simulate the production of air pollution. GACM (Global Atmospheric Chemistry Mechanism) is intended for use in global-scale atmospheric chemistry models to provide chemical boundary conditions for regional-scale simulations by models such as CMAQ. GACM includes additional chemistry for marine environments while reducing its treatment of the chemistry needed for highly polluted urban regions. This keeps GACM's size small enough to allow it to be used efficiently in global models. GACM's chemistry of volatile organic compounds (VOC) is highly compatible with the VOC chemistry in RACM2, allowing a global model with GACM to provide VOC boundary conditions to a regional-scale model with RACM2 with reduced error. The GACM-RACM2 system of mechanisms should yield more accurate forecasts by regional air quality models such as CMAQ. Chemical box models coupled with the regional and global atmospheric chemistry mechanisms (RACM2 & GACM) will be used to simulate the tropospheric ozone, nitrogen oxides, and volatile organic compounds that are produced in regional and global domains. The simulations will focus on Los Angeles' South Coast Air Basin (SoCAB), where the Pacific Ocean meets a highly polluted urban area. These two mechanisms will be compared on the basis of simulated ozone concentrations over this marine-urban region. Simulations made with the more established RACM2 will be compared with simulations made with the newer GACM. In addition, WRF-Chem with RACM2 will be used to produce regional simulations of tropospheric ozone and NOx, which can be further analyzed for air quality impacts. Both the regional and global models in WRF-Chem will be used to predict how the concentrations of ozone and nitrogen oxides change over land and ocean. The air quality model simulation results will be applied to EPA's BenMAP-CE (Environmental Benefits Mapping & Analysis Program-Community Edition

  17. Model Reference Adaptive Scheme for Multi-drug Infusion for Blood Pressure Control

    OpenAIRE

    Enbiya, Saleh; Mahieddine, Fatima; Hossain, Alamgir

    2011-01-01

    Using multiple interacting drugs to control both the mean arterial pressure (MAP) and cardiac output (CO) of patients with different sensitivities to drugs is a challenging task, which this paper attempts to address. A multivariable model reference adaptive control (MRAC) algorithm is developed using a two-input, two-output patient model. The control objective is to maintain the hemodynamic variables MAP and CO at normal values by simultaneously administering two drugs; sodium nitroprusside ...

  18. Experimental evaluation of a polycrystal deformation modeling scheme using neutron diffraction measurements

    DEFF Research Database (Denmark)

    Clausen, Bjørn; Lorentzen, Torben

    1997-01-01

    The uniaxial behavior of aluminum polycrystals is simulated using a rate-independent incremental self-consistent elastic-plastic polycrystal deformation model, and the results are evaluated by neutron diffraction measurements. The elastic strains deduced from the model show good agreement...... with the experimental results for the 111 and 220 reflections, whereas the predicted elastic strain level for the 200 reflection is, in general, approximately 10 pct too low in the plastic regime....

  19. Beyond Scheme F

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; Fisher, H.; Pepin, J. [Los Alamos National Lab., NM (United States); Gillmann, R. [Federal Highway Administration, Washington, DC (United States)

    1996-07-01

    Traffic classification techniques were evaluated using data from a 1993 investigation of the traffic flow patterns on I-20 in Georgia. First we improved the data by sifting through the database, checking questionable events against the original video and removing and/or repairing them. We used this database to quantitatively critique the performance of a classification method known as Scheme F. As a context for improving the approach, we show in this paper that Scheme F can be represented as a McCulloch-Pitts neural network, or as an equivalent decomposition of the plane. We found that Scheme F, among other things, severely misrepresents the number of vehicles in Class 3 by labeling them as Class 2. After discussing the basic classification problem in terms of what is measured and what is the desired prediction goal, we set forth desirable characteristics of the classification scheme and describe a recurrent neural network system that partitions the high-dimensional space into bins for each axle separation. The collection of bin numbers, one for each of the axle separations, specifies a region in the axle space called a hyper-bin. All the vehicles counted that have the same set of bin numbers are in the same hyper-bin. The probability of the occurrence of a particular class in that hyper-bin is the relative frequency with which that class occurs in that set of bin numbers. This type of algorithm produces classification results that are much more balanced and uniform with respect to Classes 2 and 3 and Class 10. In particular, the cancellation of classification errors that occurs is for many applications the ideal classification scenario. The neural network results are presented in the form of a primary classification network and a reclassification network, for which the performance matrices are presented.
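
The hyper-bin classifier described above (bin each axle separation, form the tuple of bin indices, and predict the class with the highest relative frequency in that hyper-bin) can be sketched as follows; the bin width and training data here are hypothetical:

```python
from collections import defaultdict

def hyper_bin(separations, bin_width):
    # Map each axle separation to a bin index; the tuple of indices is the
    # vehicle's hyper-bin in the axle space.
    return tuple(int(s // bin_width) for s in separations)

class HyperBinClassifier:
    def __init__(self, bin_width):
        self.bin_width = bin_width
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, labeled_vehicles):
        # Accumulate per-hyper-bin class counts from (separations, class) pairs.
        for separations, vclass in labeled_vehicles:
            self.counts[hyper_bin(separations, self.bin_width)][vclass] += 1

    def classify(self, separations):
        # Predict the class with the highest relative frequency in the
        # vehicle's hyper-bin; unseen hyper-bins return None.
        cell = self.counts.get(hyper_bin(separations, self.bin_width))
        if not cell:
            return None
        return max(cell, key=cell.get)
```

Because the prediction is the empirical mode within each hyper-bin, misclassifications tend to balance out across classes, which is the cancellation-of-errors property the record highlights.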

  20. An Improved Approach for Detecting Car in Video using Neural Network Model

    OpenAIRE

    S. N. Sivanandam; T. Senthil Kumar

    2012-01-01

    The study presents a novel approach to car detection, feature extraction and classification in video. Though many methods have been proposed to deal with individual features of a vehicle, such as edges, license plates and corners, no system has been implemented that combines these features. The combination of four distinct features, namely color, shape, number plate and logo, gives the application a stronghold on various applications like surveillance recording to detect accident percentage (for every...

  1. A Splitting Scheme for Solving Reaction-Diffusion Equations Modeling Dislocation Dynamics in Materials Subjected to Cyclic Loading

    Science.gov (United States)

    Pontes, J.; Walgraef, D.; Christov, C. I.

    2010-11-01

    Strain localization and dislocation pattern formation are typical features of plastic deformation in metals and alloys. Glide and climb dislocation motion, along with the accompanying production/annihilation processes of dislocations, lead to instabilities of initially uniform dislocation distributions. These instabilities result in the development of various types of dislocation microstructures, such as dislocation cells, slip and kink bands, persistent slip bands, labyrinth structures, etc., depending on the externally applied loading and the intrinsic lattice constraints. The Walgraef-Aifantis (WA) model (Walgraef and Aifantis, J. Appl. Phys., 58, 668, 1985) is an example of a reaction-diffusion model of coupled nonlinear equations which describe the formation of forest (immobile) and gliding (mobile) dislocation densities in the presence of cyclic loading. This paper discusses two versions of the WA model and focuses on a finite difference, second-order-in-time Crank-Nicolson semi-implicit scheme, with internal iterations at each time step and a spatial splitting using the Stabilizing Correction scheme (Christov and Pontes, Mathematical and Computer Modelling, 35, 87, 2002), for solving the model evolution equations in two dimensions. The results of two simulations are presented. More complete results will appear in a forthcoming paper.
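
The splitting idea, advancing reaction and diffusion in separate sub-steps, can be sketched on the scalar 1-D Fisher-KPP equation with Lie splitting; this is a toy illustration only, not the paper's Crank-Nicolson/Stabilizing Correction scheme for the two-component WA system:

```python
import math

def split_step(u, dt, dx, diffusivity):
    # One Lie splitting step for u_t = D u_xx + u (1 - u):
    # (a) explicit diffusion sub-step with zero-flux boundaries,
    # (b) reaction sub-step using the exact logistic solution, so the
    #     reaction part contributes no additional time-stepping error.
    n = len(u)
    lam = diffusivity * dt / (dx * dx)  # needs lam <= 0.5 for stability
    v = list(u)
    for i in range(n):
        left = u[i - 1] if i > 0 else u[0]
        right = u[i + 1] if i < n - 1 else u[n - 1]
        v[i] = u[i] + lam * (left - 2.0 * u[i] + right)
    e = math.exp(dt)
    return [0.0 if w <= 0.0 else w * e / (1.0 + w * (e - 1.0)) for w in v]
```

Starting from a localized seed, repeated splitting steps produce an invading front while keeping the solution within the physical range [0, 1], the qualitative behavior one checks before moving to coupled systems like the WA model.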

  2. Sensitivity of tropical cyclones to resolution, convection scheme and ocean flux parameterization over Eastern Tropical Pacific and Tropical North Atlantic Oceans in the RegCM4 model

    Science.gov (United States)

    Fuentes-Franco, Ramón; Giorgi, Filippo; Coppola, Erika; Zimmermann, Klaus

    2017-07-01

    The sensitivity of simulated tropical cyclones (TCs) to resolution, convection scheme and ocean surface flux parameterization is investigated with a regional climate model (RegCM4) over the CORDEX Central America domain, including the Tropical North Atlantic (TNA) and Eastern Tropical Pacific (ETP) basins. Simulations for the TC seasons of the ten-year period 1989-1998, driven by ERA-Interim reanalysis fields, are completed using 50 and 25 km grid spacing, two convection schemes (Emanuel, Em; and Kain-Fritsch, KF) and two ocean surface flux representations: a Monin-Obukhov scheme available in the BATS land surface package (Dickinson et al. 1993) and the scheme of Zeng et al. (J Clim 11(10):2628-2644, 1998). The model performance is assessed against observed TC characteristics for the simulation period. In general, different sensitivities are found over the two basins investigated. The simulations using the KF scheme show higher TC density, longer TC duration (up to 15 days) and stronger peak winds (>50 m s^-1) than those using Em (<40 m s^-1). All simulations show a better spatial representation of simulated TC density and interannual variability over the TNA than over the ETP. The 25 km resolution simulations show greater TC density, duration and intensity compared to the 50 km resolution ones, especially over the ETP basin, and are generally more in line with observations. Simulated TCs show a strong sensitivity to ocean fluxes, especially over the TNA basin, with the Monin-Obukhov scheme leading to an overestimate of the TC number and the Zeng scheme being closer to observations. All simulations capture the density of cyclones during active TC seasons over the TNA; however, without data assimilation, the tracks of individual events do not closely match the corresponding observed ones. Overall, the best model performance is obtained when using the KF and Zeng schemes at 25 km grid spacing.

  3. Cancerous Tumour Model Analysis and Constructing schemes of Anti-angiogenesis Therapy at an Early Stage

    Directory of Open Access Journals (Sweden)

    O. Yu. Mukhomorova

    2015-01-01

    Full Text Available Anti-angiogenesis therapy is an alternative, successfully employed method for the treatment of cancerous tumours. However, this therapy is not widely used in medicine because of the cost of the drugs, which naturally motivates treatment regimens that use a minimal amount of drug. The aim of the paper is to investigate a model of the development of the illness and to elaborate appropriate treatment regimens for the case of early diagnosis of the disease. The given model reflects therapy at an intermediate stage of treatment; further treatment, aimed at destroying cancer cells, may be continued by other means not reflected in the model. Analysis of the main properties of the model was carried out by considering two types of auxiliary systems. In the first case, the system is considered without control, as a model of tumour development in the absence of medical treatment. The study of the equilibrium point and the determination of its type allowed us to describe the disease dynamics and to determine the tumour size resulting in death. In the second case, a model with a constant control was investigated. The study of its equilibrium point showed that continuous control is not sufficient to maintain a satisfactory patient condition, and that more complex treatment regimens must be elaborated. For this purpose, we used the method of terminal problems, which consists in searching for a program control that drives the system to a given final state. The initial and final states are selected on medical grounds. As a result, we found two treatment regimens: a one-stage regimen and a multi-stage one. The properties of each regimen are analyzed and compared, with the total amount of drug used as the criterion of comparison. The theoretical conclusions obtained in this work are supported by computer modeling in the MATLAB environment.

  4. Categorizing Video Game Audio

    DEFF Research Database (Denmark)

    Westerberg, Andreas Rytter; Schoenau-Fog, Henrik

    2015-01-01

    This paper dives into the subject of video game audio and how it can be categorized in order to deliver a message to a player in the most precise way. A new categorization, with a new take on the diegetic spaces, can be used as a tool of inspiration for sound and game designers to rethink how they can use audio in video games. The conclusion of this study is that the current models' view of the diegetic spaces, used to categorize video game audio, is not fit to categorize all sounds. This can, however, possibly be changed through a rethinking of how the player interprets audio.

  5. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest in 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century. The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  6. Perceptual quality estimation of H.264/AVC videos using reduced-reference and no-reference models

    Science.gov (United States)

    Shahid, Muhammad; Pandremmenou, Katerina; Kondi, Lisimachos P.; Rossholm, Andreas; Lövström, Benny

    2016-09-01

    Reduced-reference (RR) and no-reference (NR) models for video quality estimation, using features that account for the impact of coding artifacts, spatio-temporal complexity, and packet losses, are proposed. The purpose of this study is to analyze a number of potentially quality-relevant features in order to select the most suitable set of features for building the desired models. The proposed sets of features have not been used in the literature and some of the features are used for the first time in this study. The features are employed by the least absolute shrinkage and selection operator (LASSO), which selects only the most influential of them toward perceptual quality. For comparison, we apply feature selection in the complete feature sets and ridge regression on the reduced sets. The models are validated using a database of H.264/AVC encoded videos that were subjectively assessed for quality in an ITU-T compliant laboratory. We infer that just two features selected by RR LASSO and two bitstream-based features selected by NR LASSO are able to estimate perceptual quality with high accuracy, higher than that of ridge, which uses more features. The comparisons with competing works and two full-reference metrics also verify the superiority of our models.
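    The feature-selection step can be sketched with a small l1-penalized regression solved by iterative soft-thresholding (ISTA), standing in for LASSO. The feature matrix and quality targets below are synthetic, and the penalty weight is an arbitrary choice, not a value from the paper.

    ```python
    import numpy as np

    def lasso_ista(X, y, lam, n_iter=500):
        """LASSO via iterative soft-thresholding (ISTA).

        Minimizes 0.5/n * ||y - X w||^2 + lam * ||w||_1; uninfluential
        features get exactly zero coefficients, which is the selection
        effect exploited in the abstract.
        """
        n, d = X.shape
        w = np.zeros(d)
        step = n / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant
        for _ in range(n_iter):
            grad = X.T @ (X @ w - y) / n            # gradient of the smooth part
            w = w - step * grad
            w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft-threshold
        return w

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 8))               # 8 candidate quality features
    w_true = np.array([2.0, 0, 0, -1.5, 0, 0, 0, 0])
    y = X @ w_true + 0.05 * rng.standard_normal(200)  # synthetic quality scores
    w = lasso_ista(X, y, lam=0.1)
    selected = np.flatnonzero(np.abs(w) > 1e-6)     # indices of retained features
    ```

    Only the two genuinely predictive features survive the l1 penalty, mirroring the paper's finding that a handful of features suffice.
    
    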

  7. Assessing FPAR Source and Parameter Optimization Scheme in Application of a Diagnostic Carbon Flux Model

    Energy Technology Data Exchange (ETDEWEB)

    Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A

    2009-02-26

    The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional- to global-scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.

  8. Investigating How German Biology Teachers Use Three-Dimensional Physical Models in Classroom Instruction: a Video Study

    Science.gov (United States)

    Werner, Sonja; Förtsch, Christian; Boone, William; von Kotzebue, Lena; Neuhaus, Birgit J.

    2017-07-01

    Model use, as part of the National Education Standards, is important for instruction aimed at a general understanding of science. Model use can be characterized by three aspects: (1) the characteristics of the model, (2) the integration of the model into instruction, and (3) the use of models to foster scientific reasoning. However, no empirical results describe how the National Education Standards are implemented in science instruction with respect to the use of models. Therefore, the present study investigated the implementation of different aspects of model use in German biology instruction. Two grade-nine biology lessons on the topic of neurobiology were videotaped for each of 32 biology teachers (N = 64 videos). These lessons were analysed using an event-based coding manual according to the three aspects of model use described above. Rasch analysis of the coded categories was conducted and showed reliable measurement. In the first analysis, we identified 68 lessons where a total of 112 different models were used. The in-depth analysis showed that aspects of elaborate model use relating to several categories of scientific reasoning were rarely implemented in biology instruction. Critical reflection on the model used (N = 25 models; 22.3%) and the use of models to demonstrate scientific reasoning (N = 26 models; 23.2%) were seldom observed. Our findings suggest that pre-service biology teacher education and professional development initiatives in Germany have to focus on both aspects.

  9. A Simple Scheme for Modeling Irrigation Water Requirements at the Regional Scale Applied to an Alpine River Catchment

    Directory of Open Access Journals (Sweden)

    Pascalle C. Smith

    2012-11-01

    Full Text Available This paper presents a simple approach for estimating the spatial and temporal variability of the seasonal net irrigation water requirement (IWR) at the catchment scale, based on gridded land use, soil and daily weather data at 500 × 500 m resolution. In this approach, IWR is expressed as a bounded, linear function of the atmospheric water budget, whereby the latter is defined as the difference between seasonal precipitation and reference evapotranspiration. To account for the effects of soil and crop properties on the soil water balance, the coefficients of the linear relation are expressed as a function of the soil water holding capacity and the so-called crop coefficient. The 12 parameters defining the relation were estimated, with good coefficients of determination, from a systematic analysis of simulations performed at a daily time step with a FAO-type point-scale model for five climatically contrasting sites around the River Rhone and for combinations of six crop and ten soil types. The simple scheme was found to reproduce well the results obtained with the daily model at six additional verification sites. We applied the simple scheme to the assessment of irrigation requirements in the whole Swiss Rhone catchment. The results suggest seasonal requirements of 32 × 10^6 m^3 per year on average over 1981–2009, half of which at altitudes above 1500 m. They also disclose a positive trend in the intensity of extreme events over the study period, with an estimated total IWR of 55 × 10^6 m^3 in 2009, and indicate a 45% increase in the water demand of grasslands during the 2003 European heat wave in the driest area of the studied catchment. In view of its simplicity, the approach can be extended to other applications, including assessments of the impacts of climate and land-use change.
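    The bounded linear relation can be sketched as follows. The slope, offset and the normalization of the soil water holding capacity are hypothetical stand-ins for the 12 fitted parameters described in the abstract.

    ```python
    def net_irrigation_requirement(precip, et0, kc, whc):
        """Seasonal net irrigation water requirement (mm), expressed as a
        bounded, linear function of the atmospheric water budget P - Kc*ET0.

        The coefficients a and b below are hypothetical; in the paper they
        depend on the crop coefficient and the soil water holding capacity,
        fitted against a daily FAO-type point-scale model.
        """
        a, b = 0.55, 40.0                     # assumed linear coefficients
        budget = precip - et0 * kc            # seasonal atmospheric water budget (mm)
        iwr = -a * budget + b * (1.0 - whc / 150.0)
        return min(max(iwr, 0.0), et0 * kc)   # bounded by 0 and the crop water need

    # Dry season on a low-capacity soil: a large deficit gives a positive IWR.
    iwr_dry = net_irrigation_requirement(precip=180.0, et0=420.0, kc=1.0, whc=60.0)
    # Wet season: the budget is positive and the requirement clamps to zero.
    iwr_wet = net_irrigation_requirement(precip=500.0, et0=300.0, kc=1.0, whc=60.0)
    ```

    The clamping is what makes the relation "bounded": requirements can be neither negative nor larger than the crop's seasonal water need.
    
    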

  10. Hydrogeological modelling of the Atlantis aquifer for management support to the Atlantis water supply scheme

    CSIR Research Space (South Africa)

    Jovanovic, Nebo

    2017-01-01

    Full Text Available Hydrogeological modelling of the Atlantis aquifer for management support to the Atlantis water supply scheme and the delineation of groundwater protection zones. Keywords: Groundwater abstraction; managed aquifer recharge; MODFLOW; particle tracking; scenario modelling. http://dx.doi.org/10.4314/wsa.v43i1.15 (Water SA Vol. 43 No. 1, January 2017)

  11. Experimental Modeling of Monolithic Resistors for Silicon ICS with a Robust Optimizer-Driving Scheme

    Directory of Open Access Journals (Sweden)

    Philippe Leduc

    2002-06-01

    Full Text Available Today, an exhaustive library of models describing the electrical behavior of integrated passive components in the radio-frequency range is essential for the simulation and optimization of complex circuits. In this work, a preliminary study has been carried out on tantalum nitride (TaN) resistors integrated on silicon, leading to a single p-type lumped-element circuit. An efficient extraction technique is presented that provides a computer-driven optimizer with relevant initial model parameter values (the "guess-timate"). The results show that the lumped-element determination is unique in most cases, which leads to a precise simulation of self-resonant frequencies.

  12. Comparison of two different sea-salt aerosol schemes as implemented in air quality models applied to the Mediterranean Basin

    Directory of Open Access Journals (Sweden)

    P. Jiménez-Guerrero

    2011-05-01

    Full Text Available A number of attempts have been made to incorporate sea-salt aerosol (SSA) source functions into chemistry transport models, with varying results according to the complexity of the scheme considered. This contribution compares the inclusion of two different SSA algorithms in two chemistry transport models: CMAQ and CHIMERE. The main goal is to examine the differences in average SSA mass and composition and to study the seasonality of the SSA predictions when applied to the Mediterranean area at high resolution for a reference year. Dry and wet deposition schemes are also analyzed to better understand the differences observed between the models in the target area. The emission algorithm applied in CHIMERE uses a semi-empirical formulation which obtains the surface emission rate of SSA as a function of the particle size and the surface wind speed raised to the power 3.41. The emission parameterization included within CMAQ is somewhat more sophisticated, since fluxes of SSA are corrected for relative humidity. In order to evaluate their strengths and weaknesses, the participating algorithms as implemented in the chemistry transport models were evaluated against AOD measurements from Aeronet and available surface measurements in Southern Europe and the Mediterranean area, showing biases around −0.002 and −1.2 μg m^-3, respectively. The results indicate that both models represent accurately the patterns and dynamics of SSA and its non-uniform behavior in the Mediterranean basin, showing a strong seasonality. The levels of SSA vary strongly across the Western and the Eastern Mediterranean, with CHIMERE reproducing higher annual levels in the Aegean Sea (12 μg m^-3) and CMAQ in the Gulf of Lion (9 μg m^-3). The large difference found for the ratio of PM2.5 to total SSA in CMAQ and CHIMERE is also investigated. The dry and wet removal rates are very similar for both models despite the different schemes

  13. A comparison of video modeling, text-based instruction, and no instruction for creating multiple baseline graphs in Microsoft Excel.

    Science.gov (United States)

    Tyner, Bryan C; Fienup, Daniel M

    2015-09-01

    Graphing is socially significant for behavior analysts; however, graphing can be difficult to learn. Video modeling (VM) may be a useful instructional method but lacks evidence for effective teaching of computer skills. A between-groups design compared the effects of VM, text-based instruction, and no instruction on graphing performance. Participants who used VM constructed graphs significantly faster and with fewer errors than those who used text-based instruction or no instruction. Implications for instruction are discussed. © Society for the Experimental Analysis of Behavior.

  14. A new adaptive control scheme based on the interacting multiple model (IMM) estimation

    International Nuclear Information System (INIS)

    Afshari, Hamed H.; Al-Ani, Dhafar; Habibi, Saeid

    2016-01-01

    In this paper, an interacting multiple model (IMM) adaptive estimation approach is incorporated to design an optimal adaptive control law for stabilizing an unmanned vehicle. Because the forward velocity of the unmanned vehicle varies, its aerodynamic derivatives are constantly changing. In order to stabilize the unmanned vehicle and achieve the control objectives across in-flight conditions, one seeks an adaptive control strategy that can adjust itself to varying flight conditions. In this context, a bank of linear models is used to describe the vehicle dynamics in different operating modes, each representing a particular dynamic with a different forward velocity. These models are then used within an IMM filter containing a bank of Kalman filters (KF) operating in parallel. To regulate and stabilize the vehicle, a linear quadratic regulator (LQR) law is designed and implemented for each mode. The IMM structure determines the current mode based on the stored models and in-flight input-output measurements. The LQR design likewise provides a set of controllers, each corresponding to a particular flight mode and minimizing the tracking error. Finally, the ultimate control law is obtained as a weighted summation of all individual controllers, where the weights are the mode probabilities of each operating mode.
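    The final blended control law, u = Σ_i p_i (−K_i x), can be sketched directly. The state, gain matrices and mode probabilities below are hypothetical, standing in for the per-velocity LQR gains and the IMM filter's mode probabilities.

    ```python
    import numpy as np

    def imm_control(x, gains, mode_probs):
        """IMM-blended control: u = sum_i p_i * (-K_i @ x).

        Each K_i is the LQR gain designed for one operating mode (here, a
        particular forward velocity); p_i is that mode's IMM probability.
        """
        u = np.zeros(gains[0].shape[0])
        for K, p in zip(gains, mode_probs):
            u += p * (-K @ x)                       # per-mode LQR law, weighted
        return u

    x = np.array([1.0, -0.5])                       # current state estimate
    gains = [np.array([[2.0, 1.0]]),                # hypothetical LQR gain, mode 1
             np.array([[4.0, 0.5]])]                # hypothetical LQR gain, mode 2
    probs = [0.7, 0.3]                              # IMM mode probabilities (sum to 1)
    u = imm_control(x, gains, probs)
    ```

    As the IMM filter shifts probability mass between modes, the blended gain transitions smoothly between the individual LQR designs.
    
    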

  15. A new adaptive control scheme based on the interacting multiple model (IMM) estimation

    Energy Technology Data Exchange (ETDEWEB)

    Afshari, Hamed H.; Al-Ani, Dhafar; Habibi, Saeid [McMaster University, Hamilton (Canada)

    2016-06-15

    In this paper, an interacting multiple model (IMM) adaptive estimation approach is incorporated to design an optimal adaptive control law for stabilizing an unmanned vehicle. Because the forward velocity of the unmanned vehicle varies, its aerodynamic derivatives are constantly changing. In order to stabilize the unmanned vehicle and achieve the control objectives across in-flight conditions, one seeks an adaptive control strategy that can adjust itself to varying flight conditions. In this context, a bank of linear models is used to describe the vehicle dynamics in different operating modes, each representing a particular dynamic with a different forward velocity. These models are then used within an IMM filter containing a bank of Kalman filters (KF) operating in parallel. To regulate and stabilize the vehicle, a linear quadratic regulator (LQR) law is designed and implemented for each mode. The IMM structure determines the current mode based on the stored models and in-flight input-output measurements. The LQR design likewise provides a set of controllers, each corresponding to a particular flight mode and minimizing the tracking error. Finally, the ultimate control law is obtained as a weighted summation of all individual controllers, where the weights are the mode probabilities of each operating mode.

  16. On noise in data assimilation schemes for improved flood forecasting using distributed hydrological models

    NARCIS (Netherlands)

    Noh, S.J.; Rakovec, O.; Weerts, A.H.; Tachikawa, Y.

    2014-01-01

    We investigate the effects of noise specification on the quality of hydrological forecasts via an advanced data assimilation (DA) procedure using a distributed hydrological model driven by numerical weather predictions. The sequential DA procedure is based on (1) a multivariate rainfall ensemble

  17. Model-Based Predictive Control Scheme for Cost Optimization and Balancing Services for Supermarket Refrigeration Systems

    DEFF Research Database (Denmark)

    Weerts, Hermanus H. M.; Shafiei, Seyed Ehsan; Stoustrup, Jakob

    2014-01-01

    A new formulation of model predictive control for supermarket refrigeration systems is proposed to facilitate the regulatory power services as well as energy cost optimization of such systems in the smart grid. The nonlinear dynamics present in large-scale refrigeration plants challenge the predicti...

  18. Development of a 3D cell-centered Lagrangian scheme for the numerical modeling of the gas dynamics and hyper-elasticity systems

    International Nuclear Information System (INIS)

    Georges, Gabriel

    2016-01-01

    High Energy Density Physics (HEDP) flows are multi-material flows characterized by strong shock waves and large changes in the domain shape due to rarefaction waves. Numerical schemes based on the Lagrangian formalism are good candidates to model this kind of flow, since the computational grid follows the fluid motion. This provides accurate results around the shocks as well as a natural tracking of multi-material interfaces and free surfaces. In particular, cell-centered finite volume Lagrangian schemes such as GLACE (Godunov-type Lagrangian scheme Conservative for total Energy) and EUCCLHYD (Explicit Unstructured Cell-Centered Lagrangian Hydrodynamics) provide good results on both the modeling of gas dynamics and elastic-plastic equations. The work produced during this PhD thesis continues the work of Maire and Nkonga [JCP, 2009] for the hydrodynamic part and the work of Kluth and Despres [JCP, 2010] for the hyperelasticity part. More precisely, the aim of this thesis is to develop robust and accurate methods for the 3D extension of the EUCCLHYD scheme with a second-order extension based on MUSCL (Monotonic Upstream-centered Scheme for Conservation Laws) and GRP (Generalized Riemann Problem) procedures. Particular care is taken over the preservation of symmetries and the monotonicity of the solutions. The scheme's robustness and accuracy are assessed on numerous Lagrangian test cases for which the 3D extensions are very challenging. (author) [fr]

  19. Developing a computationally efficient dynamic multilevel hybrid optimization scheme using multifidelity model interactions.

    Energy Technology Data Exchange (ETDEWEB)

    Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr. (; .); Giunta, Anthony Andrew

    2006-01-01

    Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictates possible optimization algorithms. Often, a gradient-based approach is not possible, since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous, and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor, since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between models of differing fidelity to develop a dynamic, time-saving optimization algorithm. First, a direct search method is applied to the high-fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high-fidelity model to that of a computationally cheaper low-fidelity model using space-mapping techniques. Then, in the low-fidelity space, an optimum is obtained using gradient- or non-gradient-based optimization, and it is mapped back to the high-fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and

  20. Integrating a reservoir regulation scheme into a spatially distributed hydrological model

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Gang; Gao, Huilin; Naz, Bibi S.; Kao, Shih-Chieh; Voisin, Nathalie

    2016-12-01

    During the past several decades, numerous reservoirs have been built across the world for a variety of purposes such as flood control, irrigation, municipal/industrial water supplies, and hydropower generation. Consequently, natural streamflow timing and magnitude have been altered significantly by reservoir operations. In addition, the hydrological cycle can be modified by land use/land cover and climate changes. To understand the fine-scale feedback between hydrological processes and water management decisions, a distributed hydrological model embedded with a reservoir component is desirable. In this study, a multi-purpose reservoir module with predefined complex operational rules was integrated into the Distributed Hydrology Soil Vegetation Model (DHSVM). Conditional operating rules, which are designed to reduce flood risk and enhance water supply reliability, were adopted in this module. The performance of the integrated model was tested over the upper Brazos River Basin in Texas, where two U.S. Army Corps of Engineers reservoirs, Lake Whitney and Aquilla Lake, are located. The integrated DHSVM model was calibrated and validated using observed reservoir inflow, outflow, and storage data. The error statistics were summarized for both reservoirs on a daily, weekly, and monthly basis. Using the weekly reservoir storage for Lake Whitney as an example, the coefficient of determination (R2) and the Nash-Sutcliffe efficiency (NSE) are 0.85 and 0.75, respectively. These results suggest that this reservoir module has promise for use in sub-monthly hydrological simulations. Enabled with the new reservoir component, the DHSVM model provides a platform to support adaptive water resources management under the impacts of evolving anthropogenic activities and substantial environmental changes.
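    A conditional, multi-purpose operating rule of the kind embedded in such a reservoir module can be sketched as a simple piecewise policy. The flood-pool fraction, dead-storage handling and demand value below are invented for illustration; they are not the actual Lake Whitney or Aquilla Lake rule curves.

    ```python
    def reservoir_release(storage, inflow, capacity, dead_storage,
                          flood_pool=0.85, target_supply=50.0):
        """Daily release (m^3/s) from a conditional, multi-purpose rule.

        storage, capacity and dead_storage are in m^3; inflow in m^3/s.
        Thresholds and the supply target are hypothetical.
        """
        fill = storage / capacity
        if fill > flood_pool:
            # Flood-control zone: pass the inflow and evacuate the storage
            # above the flood pool over one day (86400 s).
            surplus = (fill - flood_pool) * capacity / 86400.0
            return inflow + surplus
        if storage <= dead_storage:
            return 0.0                              # below dead storage: no release
        # Conservation zone: meet the supply target if water is available.
        return min(target_supply, inflow + (storage - dead_storage) / 86400.0)

    release = reservoir_release(storage=9.0e8, inflow=120.0,
                                capacity=1.0e9, dead_storage=1.0e8)
    ```

    A real rule set would add seasonally varying pool levels and ramping constraints, but the same conditional structure applies.
    
    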

  1. Use of Geostationary Satellite Data to Force Land Surface Schemes within Atmospheric Mesoscale Models

    Science.gov (United States)

    Lapenta, William M.; Suggs, Ron; McNider, Richard T.; Jedlovec, Gary; Dembek, Scott R.; Goodman, H. Michael (Technical Monitor)

    2000-01-01

    A technique has been developed for assimilating GOES-derived skin temperature tendencies and insolation into the surface energy budget equation of a mesoscale model, so that the simulated rate of temperature change closely agrees with the satellite observations. A critical assumption of the technique is that the availability of moisture (either from the soil or vegetation) is the least-known term in the model's surface energy budget. Therefore, the simulated latent heat flux, which is a function of surface moisture availability, is adjusted based upon differences between the modeled and satellite-observed skin temperature tendencies. An advantage of this technique is that satellite temperature tendencies are assimilated in an energetically consistent manner that avoids the energy imbalances and surface stability problems which arise from direct assimilation of surface shelter temperatures. The fact that the rate of change of the satellite skin temperature is used, rather than the absolute temperature, means that sensor calibration is not as critical. The technique has been employed on a semi-operational basis at the GHCC within the PSU/NCAR MM5. Assimilation has been performed on a grid centered over the Southeastern US since November 1998. Results from the past year show that assimilation of the satellite data reduces both the bias and RMSE for simulations of surface air temperature and relative humidity. These findings are based on comparison of assimilation runs with a control using the simple 5-layer soil model available in MM5. A significant development in the past several months was the inclusion of the detailed Oregon State University land surface model (OSU/LSM) as an option within MM5. One of our working hypotheses has been that the assimilation technique, although simple, may provide better short-term forecasts than a detailed LSM that requires a significant number of initialized parameters. Preliminary results indicate that the assimilation outperforms the OSU
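    The core of the assimilation technique can be sketched as a nudge of the surface moisture availability: if the model warms faster than GOES observes, evaporation is presumably underestimated, so moisture availability (and hence latent heat flux) is raised. The gain and the clamping below are assumptions for illustration, not values from the scheme.

    ```python
    def adjust_moisture_availability(m, dT_model, dT_sat, gain=0.05):
        """Nudge surface moisture availability from skin temperature tendencies.

        dT_model and dT_sat are the modeled and GOES-observed skin temperature
        tendencies (K/h). When the model warms faster than observed, the
        latent heat flux is presumably underestimated, so m is raised. The
        gain is a hypothetical tuning constant; m is a fraction in [0, 1].
        """
        m_new = m + gain * (dT_model - dT_sat)
        return min(max(m_new, 0.0), 1.0)

    # Model warms 0.6 K/h faster than the satellite: raise m slightly.
    m_new = adjust_moisture_availability(m=0.3, dT_model=2.4, dT_sat=1.8)
    ```

    Adjusting m rather than the temperature itself is what keeps the update energetically consistent, as the abstract emphasizes.
    
    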

  2. A Reversible Data Hiding Scheme for 3D Polygonal Models Based on Histogram Shifting with High Embedding Capacity

    Science.gov (United States)

    Huang, Yao-Hsien; Tsai, Yuan-Yu

    2015-06-01

    Reversibility is the ability to recover the stego media back to the cover media without any error after correctly extracting the secret message. This study proposes a reversible data hiding scheme for 3D polygonal models based on histogram shifting. Specifically, the histogram construction is based on the geometric similarity between neighboring vertices. The distances from neighboring vertices of a 3D model to some fixed point in 3D space are usually similar, especially for a high-resolution 3D model. Therefore, the difference between these distances for neighboring vertices is small with high probability. This study uses a modified breadth-first search to traverse each vertex once in a sequential order and to determine the unique referencing neighbor for each vertex. The histogram is then constructed based on the normalized distance difference of neighboring vertices. This approach significantly increases embedding capacity. Experimental results show that the proposed algorithm can achieve higher embedding capacity than existing algorithms while still maintaining acceptable model distortion. This algorithm also provides greater robustness against similarity transformation attacks and vertex reordering attacks. The proposed technique is feasible for 3D reversible data hiding.
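    Histogram shifting itself can be sketched on a list of integer cover values (playing the role of quantized distance differences between neighboring vertices). This is the textbook scheme, not the paper's exact 3D-mesh construction; it assumes an empty histogram bin exists to the right of the peak bin.

    ```python
    from collections import Counter

    def hs_embed(values, bits):
        """Embed bits by histogram shifting on integer cover values.

        Returns the stego values plus the (peak, zero) side information
        needed for extraction.
        """
        hist = Counter(values)
        peak = max(hist, key=hist.get)
        zero = peak + 1
        while hist.get(zero, 0) > 0:                # nearest empty bin right of peak
            zero += 1
        out, it = [], iter(bits)
        for v in values:
            if peak < v < zero:
                out.append(v + 1)                   # shift to free the bin peak+1
            elif v == peak:
                b = next(it, None)                  # embed one bit at the peak
                out.append(v + 1 if b == 1 else v)
            else:
                out.append(v)
        return out, peak, zero

    def hs_extract(stego, peak, zero):
        """Recover both the message bits and the original cover (reversibility)."""
        bits, cover = [], []
        for v in stego:
            if v == peak:
                bits.append(0); cover.append(v)
            elif v == peak + 1:
                bits.append(1); cover.append(peak)
            elif peak + 1 < v <= zero:
                cover.append(v - 1)                 # undo the shift
            else:
                cover.append(v)
        return bits, cover

    values = [3, 2, 2, 4, 2, 5, 2]                  # peak bin is 2
    stego, peak, zero = hs_embed(values, [1, 0, 1, 1])
    bits, cover = hs_extract(stego, peak, zero)
    ```

    Capacity equals the count of the peak bin, which is why a sharply peaked histogram of distance differences (as in high-resolution meshes) raises embedding capacity.
    
    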

  3. Distortion Optimized Packet Scheduling and Prioritization of Multiple Video Streams over 802.11e Networks

    Directory of Open Access Journals (Sweden)

    Ilias Politis

    2007-01-01

    Full Text Available This paper presents a generic framework for minimizing the overall video distortion of multiple video streams transmitted over 802.11e wireless networks, including intelligent packet scheduling and channel-access differentiation mechanisms. A distortion prediction model designed to capture the multireferenced-frame coding characteristic of H.264/AVC encoded videos is used to predetermine the distortion importance of each video packet in all streams. Two intelligent scheduling algorithms are proposed: the "even-loss distribution" packet scheduling, where each video sender experiences the same loss, and the "greedy-loss distribution" packet scheduling, where selected packets are dropped across all streams, ensuring that the video stream most significant in terms of picture context and quality characteristics experiences minimum losses. The proposed model has been verified against actual distortion measurements and found to be more accurate than the "additive distortion" model, which omits the correlation among lost frames. The paper includes analytical and simulation results from the comparison of both schemes and from their comparison to the simplified additive model, for different video sequences and channel conditions.
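    The "greedy-loss distribution" policy can be sketched as a global sort of packets by predicted distortion impact, dropping the least important packets regardless of which stream they belong to. The per-packet distortion values below are hypothetical outputs of a distortion prediction model.

    ```python
    def greedy_loss_drop(streams, n_drop):
        """Greedy-loss distribution: drop the n_drop packets with the smallest
        predicted distortion impact across all streams, so the stream that
        matters most for picture quality keeps its important packets.
        """
        ranked = sorted((d, s, i) for s, packets in streams.items()
                        for i, d in enumerate(packets))
        drop = {(s, i) for _, s, i in ranked[:n_drop]}
        return {s: [d for i, d in enumerate(packets) if (s, i) not in drop]
                for s, packets in streams.items()}

    # Hypothetical per-packet distortion impacts for two streams.
    streams = {"A": [9.0, 1.0, 7.0], "B": [2.0, 8.0, 0.5]}
    kept = greedy_loss_drop(streams, n_drop=2)
    ```

    An "even-loss" policy would instead drop the same number of packets per stream; the greedy variant concentrates losses where the predicted distortion cost is lowest.
    
    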

  4. A Bayesian spatial assimilation scheme for snow coverage observations in a gridded snow model

    Directory of Open Access Journals (Sweden)

    S. Kolberg

    2006-01-01

    Full Text Available A method for assimilating remotely sensed snow covered area (SCA) into the snow subroutine of a grid-distributed precipitation-runoff model (PRM) is presented. The PRM is assumed to simulate the snow state in each grid cell by a snow depletion curve (SDC), which relates that cell's SCA to its snow cover mass balance. The assimilation is based on Bayes' theorem, which requires a joint prior distribution of the SDC variables in all the grid cells. In this paper we propose a spatial model for this prior distribution, and include similarities and dependencies among the grid cells. Used to represent the PRM-simulated snow cover state, our joint prior model regards two elevation gradients and a degree-day factor as global variables, rather than describing their effects separately for each cell. This transformation results in smooth normalised surfaces for the two related mass balance variables, supporting a strong inter-cell dependency in their joint prior model. The global features and spatial interdependency in the prior model cause each SCA observation to provide information for many grid cells. The spatial approach similarly facilitates the utilisation of observed discharge. Assimilation of SCA data using the proposed spatial model is evaluated in a 2400 km2 mountainous region in central Norway (61° N, 9° E), based on two Landsat 7 ETM+ images generalized to 1 km2 resolution. An image acquired on 11 May, a week before the peak flood, removes 78% of the variance in the remaining snow storage. Even an image from 4 May, less than a week after the melt onset, reduces this variance by 53%. These results are a large improvement on a cell-by-cell independent assimilation routine previously reported. Including observed discharge in the updating information improves the 4 May results, but has little effect on those of 11 May. Estimated elevation gradients are shown to be sensitive to informational deficits occurring at high altitude, where snowmelt has not started.

  5. Time-Varying Scheme for Noncentralized Model Predictive Control of Large-Scale Systems

    Directory of Open Access Journals (Sweden)

    Alfredo Núñez

    2015-01-01

    Full Text Available The noncentralized model predictive control (NC-MPC) framework in this paper refers to any distributed, hierarchical, or decentralized model predictive controller (or any combination of them) whose structure can change over time and whose control actions are not obtained from a centralized computation. Within this framework, we propose suitable online methods to decide which information is shared, and how that information is used, between the different local predictive controllers operating in a decentralized, distributed, and/or hierarchical way. Evaluating all possible structures of the NC-MPC controller leads to a combinatorial optimization problem, so we also propose heuristic reduction methods to keep the number of NC-MPC problems to be solved tractable. To show the benefits of the proposed framework, a case study of a set of coupled water tanks is presented.

  6. Validation and application of an urban turbulence parameterisation scheme for mesoscale atmospheric models

    OpenAIRE

    Roulet, Yves-Alain F.; Clappier, Alain

    2005-01-01

    Growing population, extensive use (and abuse) of natural resources, increasing pollutant emissions into the atmosphere: these are a few of the obstacles (and not the least) that must be faced nowadays to ensure the sustainability of our planet in general, and of air quality in particular. In the case of air pollution, the processes that govern the transport and chemical transformation of pollutants are highly complex and non-linear. The use of numerical models for simulating meteorologi...

  7. The Use of Model Matching Video Analysis and Computational Simulation to Study the Ankle Sprain Injury Mechanism

    Directory of Open Access Journals (Sweden)

    Daniel Tik-Pui Fong

    2012-10-01

    Full Text Available Lateral ankle sprains continue to be the most common injury sustained by athletes and create an annual healthcare burden of over $4 billion in the U.S. alone. Foot inversion is suspected in these cases, but the mechanism of injury remains unclear. While kinematics and kinetics data are crucial in understanding the injury mechanisms, ligament behaviour measures – such as ligament strains – are viewed as the potential causal factors of ankle sprains. This review article demonstrates a novel methodology that integrates model matching video analyses with computational simulations in order to investigate injury-producing events for a better understanding of such injury mechanisms. In particular, ankle joint kinematics from actual injury incidents were deduced by model matching video analyses and then input into a generic computational model based on rigid bone surfaces and deformable ligaments of the ankle so as to investigate the ligament strains that accompany these sprain injuries. These techniques may have the potential for guiding ankle sprain prevention strategies and targeted rehabilitation therapies.

  8. An Ultrafast Maximum Power Point Setting Scheme for Photovoltaic Arrays Using Model Parameter Identification

    Directory of Open Access Journals (Sweden)

    Zhaohui Cen

    2015-01-01

    Full Text Available Maximum power point tracking (MPPT) for photovoltaic (PV) arrays is essential to optimize conversion efficiency under variable and nonuniform irradiance conditions. Unfortunately, conventional MPPT algorithms such as perturb and observe (P&O), incremental conductance, and the current sweep method need to iterate the command current or voltage and frequently operate the power converter, with associated losses. Under partially overcast conditions, tracking the real MPP in a multipeak P-I or P-V curve becomes highly challenging, increasing search time and converter operation and causing unnecessary power loss during tracking. This paper addresses these drawbacks of MPPT-controlled converters. To separate the search algorithm from converter operation, a model parameter identification approach is presented that estimates the insolation condition of each PV panel and builds a real-time overall P-I curve of the PV array. A simple but effective global MPPT algorithm then tracks the MPP on the P-I curve obtained from the identified PV array model, ensuring that the converter operates at the MPP. The novel MPPT is ultrafast, conserving power during the tracking process. Finally, simulations in different scenarios validate the scheme's effectiveness and advantages.
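    The separation of the search from converter switching can be sketched as follows: once per-panel insolation estimates are available, an explicit panel model (a deliberately simplified stand-in for the identified model, with illustrative constants) builds the series string's overall P-I curve, and the global MPP is read off that curve numerically rather than by perturbing the converter:

```python
import numpy as np

def panel_voltage(i, g, voc=40.0, isc_ref=8.0, m=8):
    """Simplified explicit panel model (a stand-in for an identified
    single-diode model): irradiance g scales the short-circuit current,
    and a bypass diode clamps a shaded panel's voltage to 0."""
    isc = isc_ref * g
    v = voc * (1.0 - (np.clip(i, 0.0, isc) / isc) ** m)
    return np.where(i >= isc, 0.0, v)

def global_mpp(irradiances, n=2000):
    """Build the overall P-I curve of a series string from the per-panel
    models and return (I_mpp, P_mpp) at the global maximum."""
    i = np.linspace(0.0, 8.0, n)
    v_string = sum(panel_voltage(i, g) for g in irradiances)
    p = i * v_string
    k = int(p.argmax())
    return float(i[k]), float(p[k])
```

    With uniform irradiance the curve has a single peak; shading one panel (bypassed at high current) creates the multipeak shape, and the grid scan still returns the global maximum.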

  9. A stable and robust calibration scheme of the log-periodic power law model

    Science.gov (United States)

    Filimonov, V.; Sornette, D.

    2013-09-01

    We present a simple transformation of the log-periodic power law formula of the Johansen-Ledoit-Sornette (JLS) model of financial bubbles that reduces it to a function of only three nonlinear parameters. The transformation significantly decreases the complexity of the fitting procedure and improves its stability tremendously, because the modified cost function has good smoothness properties and, in general, a single minimum when the model is appropriate to the empirical data. We complement the approach with a subordination procedure that slaves two of the nonlinear parameters to the most crucial one, the critical time tc, defined in the JLS model as the end of the bubble and the most probable time for a crash to occur. This further decreases the complexity of the search and provides an intuitive representation of the calibration results. With the proposed methodology, metaheuristic searches are no longer necessary and one can rely solely on rigorous, controlled local search algorithms, leading to a dramatic increase in efficiency. Empirical tests on the Shanghai Composite index (SSE) from January 2007 to March 2008 illustrate our findings.
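    The reformulation can be sketched directly. For fixed nonlinear parameters (tc, m, ω), the JLS formula ln p(t) = A + B(tc−t)^m + C1(tc−t)^m cos(ω ln(tc−t)) + C2(tc−t)^m sin(ω ln(tc−t)) is linear in (A, B, C1, C2), so those four parameters can be slaved to a least-squares solve and only the nonlinear triplet needs to be searched (function names ours):

```python
import numpy as np

def lppl_linear_fit(t, log_p, tc, m, w):
    """For fixed nonlinear (tc, m, w), solve the linear parameters
    (A, B, C1, C2) of the reformulated JLS model by least squares.
    Returns the coefficients and the sum of squared residuals."""
    dt = tc - t                      # requires tc > max(t)
    f = dt ** m
    X = np.column_stack([np.ones_like(t), f,
                         f * np.cos(w * np.log(dt)),
                         f * np.sin(w * np.log(dt))])
    coef, *_ = np.linalg.lstsq(X, log_p, rcond=None)
    resid = log_p - X @ coef
    return coef, float(resid @ resid)

def lppl_cost_profile(t, log_p, tc_grid, m, w):
    """Profile the cost over candidate critical times tc (the subordination
    idea: the linear parameters are slaved, only tc is scanned)."""
    return [lppl_linear_fit(t, log_p, tc, m, w)[1] for tc in tc_grid]
```

    Profiling the residual over a grid of tc values is the subordination idea in miniature: the cost is smooth in tc, so controlled local search algorithms suffice.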

  10. A statistical scheme to forecast the daily lightning threat over southern Africa using the Unified Model

    Science.gov (United States)

    Gijben, Morné; Dyson, Liesl L.; Loots, Mattheus T.

    2017-09-01

    Cloud-to-ground lightning data from the Southern Africa Lightning Detection Network and numerical weather prediction model parameters from the Unified Model are used to develop a lightning threat index (LTI) for South Africa. The aim is to predict lightning for austral summer days (September to February) by means of a statistical approach. The austral summer months are divided into spring and summer seasons and analysed separately. Stepwise logistic regression techniques are used to select the most appropriate model parameters to predict lightning. These parameters are then utilized in a rare-event logistic regression analysis to produce equations for the LTI that predicts the probability of the occurrence of lightning. Results show that LTI forecasts have a high sensitivity and specificity for spring and summer. The LTI is less reliable during spring, since it over-forecasts the occurrence of lightning. However, during summer, the LTI forecast is reliable, only slightly over-forecasting lightning activity. The LTI produces sharp forecasts during spring and summer. These results show that the LTI will be useful early in the morning in areas where lightning can be expected during the day.
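    Stripped of the stepwise selection and the rare-event correction, the statistical core is a logistic regression mapping NWP-derived predictors to a probability of lightning occurrence. A sketch with synthetic data and gradient-descent fitting (all names and numbers illustrative):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Plain logistic regression by gradient descent; a simplified stand-in
    for the stepwise / rare-event procedure described in the abstract."""
    Xb = np.column_stack([np.ones(len(X)), X])   # intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)        # average log-loss gradient
    return w

def lightning_probability(w, x):
    """Probability-of-lightning index for one day's model predictors."""
    return float(1.0 / (1.0 + np.exp(-(w[0] + w[1:] @ x))))
```
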

  11. Probabilistic Decision Based Block Partitioning for Future Video Coding

    KAUST Repository

    Wang, Zhao

    2017-11-29

    In the latest Joint Video Exploration Team development, the quadtree plus binary tree (QTBT) block partitioning structure has been proposed for future video coding. Compared to the traditional quadtree structure of the High Efficiency Video Coding (HEVC) standard, QTBT provides more flexible patterns for splitting blocks, which results in a dramatically increased number of block-partition combinations and high computational complexity. In view of this, a confidence interval based early termination (CIET) scheme is proposed for QTBT to identify unnecessary partition modes in the sense of rate-distortion (RD) optimization. In particular, an RD model is established to predict the RD cost of each partition pattern without the full encoding process. The mode decision problem is then cast into a probabilistic framework that selects the final partition based on a confidence-interval decision strategy. Experimental results show that the proposed CIET algorithm speeds up QTBT block partitioning, reducing encoding time by 54.7% with only a 1.12% bit-rate increase. Moreover, the proposed scheme performs consistently well for high-resolution sequences, for which coding efficiency is crucial in real applications.
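    The early-termination idea reduces to an interval-overlap test: predict each partition mode's RD cost together with an uncertainty, then prune any mode whose optimistic (lower-bound) cost is still worse than the pessimistic (upper-bound) cost of the best prediction. A sketch with made-up costs and a hypothetical mode set:

```python
def select_partition(modes, z=1.64):
    """modes: dict name -> (predicted_rd_cost, predicted_std).
    Keep a mode only if its confidence interval reaches below the best
    prediction's upper bound; otherwise terminate it early (skip full
    encoding for it)."""
    best_name = min(modes, key=lambda m: modes[m][0])
    best_cost, best_std = modes[best_name]
    best_upper = best_cost + z * best_std
    survivors = [m for m, (c, s) in modes.items() if c - z * s <= best_upper]
    return best_name, survivors
```

    Only the surviving modes need the expensive full RD evaluation; clearly inferior splits never get encoded.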

  12. Advantages of a Laplace transform filtering integration scheme over semi-implicit methods in a global shallow water model

    Science.gov (United States)

    Clancy, Colm; Lynch, Peter

    2010-05-01

    A filtering numerical time-integration scheme is being developed. Using a modified inversion of the Laplace transform (LT), the scheme is designed to remove spurious noise while faithfully simulating low-frequency atmospheric modes. The method has been compared with traditional semi-implicit schemes in a shallow water framework and shows a number of advantages. In particular, we are investigating the behaviour of a semi-Lagrangian formulation of the LT scheme in the presence of orography. We will also discuss its effects on the energy spectra of atmospheric simulations.

  13. Improving High-resolution Weather Forecasts using the Weather Research and Forecasting (WRF) Model with Upgraded Kain-Fritsch Cumulus Scheme

    Science.gov (United States)

    High-resolution weather forecasting is affected by many factors, e.g., model initial conditions, subgrid-scale cumulus convection, and cloud microphysics schemes. Recent 12 km grid studies using the Weather Research and Forecasting (WRF) model have identified the importance of inco...

  14. Implementation of non-local boundary layer schemes in the Regional Atmospheric Modeling System and its impact on simulated mesoscale circulations

    NARCIS (Netherlands)

    Gómez, I.; Ronda, R.J.; Caselles, V.; Estrela, M.J.

    2016-01-01

    This paper proposes the implementation of different non-local Planetary Boundary Layer schemes within the Regional Atmospheric Modeling System (RAMS) model. The two selected PBL parameterizations are the Medium-Range Forecast (MRF) PBL and its updated version, known as the Yonsei University (YSU)

  15. Robust master-slave synchronization for general uncertain delayed dynamical model based on adaptive control scheme.

    Science.gov (United States)

    Wang, Tianbo; Zhou, Wuneng; Zhao, Shouwei; Yu, Weiqin

    2014-03-01

    In this paper, the robust exponential synchronization problem for a class of uncertain delayed master-slave dynamical systems is investigated using the adaptive control method. Unlike some existing master-slave models, the considered system includes bounded unmodeled dynamics. To compensate for the effect of the unmodeled dynamics and effectively achieve synchronization, a novel adaptive controller with simple update laws is proposed. Moreover, the results are given in terms of LMIs, which can easily be solved with the LMI Toolbox in Matlab. A numerical example illustrates the effectiveness of the method. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
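    The adaptive mechanism can be illustrated on a scalar toy system rather than the paper's delayed LMI setting: the slave carries a bounded unmodeled disturbance, the controller u = −k·e uses a gain that grows according to the update law k̇ = γe², and the synchronization error shrinks as the gain adapts. All dynamics and constants here are illustrative:

```python
import math

def simulate_sync(T=20.0, dt=0.001, gamma=5.0):
    """Master x' = -x + sin(x); slave y' = -y + sin(y) + d(t) + u, where
    d(t) is a bounded unmodeled disturbance. Adaptive controller:
    u = -k*e with gain update k' = gamma * e**2 (e = y - x).
    Returns the final synchronization error and the adapted gain."""
    x, y, k = 1.0, -2.0, 0.0
    t = 0.0
    while t < T:
        e = y - x
        u = -k * e                          # proportional sync control
        d = 0.3 * math.sin(5.0 * t)         # bounded unmodeled dynamics
        x += dt * (-x + math.sin(x))
        y += dt * (-y + math.sin(y) + d + u)
        k += dt * gamma * e * e             # adaptive gain update law
        t += dt
    return abs(y - x), k
```

    The gain only grows while the error is nonzero, so it settles once synchronization is (approximately) achieved, even though the disturbance never vanishes.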

  16. Tunneling dynamics in open ultracold bosonic systems. Numerically exact dynamics - Analytical models - Control schemes

    Energy Technology Data Exchange (ETDEWEB)

    Lode, Axel U.J.

    2013-06-03

    This thesis explores the quantum many-body tunneling dynamics of open ultracold bosonic systems with the recently developed multiconfigurational time-dependent Hartree for bosons (MCTDHB) method. The capability of MCTDHB to solve the full time-dependent many-body problem is assessed in a benchmark using the analytically solvable harmonic interaction Hamiltonian and a generalization of it with time-dependent one- and two-body potentials. In a comparison with numerically exact MCTDHB results, it is shown that, e.g., lattice methods fail qualitatively to describe the tunneling dynamics. A model assembling the many-body physics of the process from basic, simultaneously occurring single-particle processes is derived and verified against a numerically exact MCTDHB description. The generality of the model is demonstrated even for strong interactions and large particle numbers. The ejection of the bosons from the source occurs with characteristic velocities, defined by the chemical potentials of systems with different particle numbers, which are converted to kinetic energy. The tunneling process is accompanied by fragmentation: the ejected bosons lose their coherence with the source and among each other. It is shown that the various aspects of the tunneling dynamics can be controlled well via the interaction and the potential threshold.

  17. A Fovea Localization Scheme Using Vessel Origin-Based Parabolic Model

    Directory of Open Access Journals (Sweden)

    Chun-Yuan Yu

    2014-09-01

    Full Text Available Located at the center of the macula, the fovea plays an important role in computer-aided diagnosis. To locate the fovea, this paper proposes a vessel origin (VO)-based parabolic model, which takes the VO as the vertex of the parabola-like vasculature. Image processing steps are applied to accurately locate the fovea on retinal images. Firstly, the morphological gradient and the circular Hough transform are used to find the optic disc. The vessel structure is then segmented with a line detector. Based on the characteristics of the VO, four features are extracted and passed through a Bayesian classification procedure. Once the VO is identified, the VO-based parabolic model locates the fovea. To find the fittest parabola and the symmetry axis of the retinal vessel tree, a Shift-and-Rotation Hough (SR-Hough) transform, which combines the Hough transform with a shift and rotation of coordinates, is presented. Two public databases of retinal images, DRIVE and STARE, are used to evaluate the proposed method. The experimental results show that the average Euclidean distances between the located fovea and the fovea marked by experts are 9.8 pixels and 30.7 pixels on the two databases, respectively. These results surpass those of other methods and thus provide better macular detection for further disease discovery.
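    With the VO pinned as the vertex, fitting the "fittest parabola" reduces, in the axis-aligned case, to a one-parameter least-squares problem; the SR-Hough search over shifts and rotations of the coordinate frame is omitted in this sketch, and all names are ours:

```python
import numpy as np

def fit_focal_parameter(points, vo):
    """Least-squares focal parameter a of the axis-aligned parabola
    x - x0 = a * (y - y0)**2 with its vertex pinned at the vessel origin
    vo = (x0, y0). Assumes the symmetry axis is already horizontal; the
    SR-Hough step would supply the shift/rotation that makes this true."""
    p = np.asarray(points, float) - np.asarray(vo, float)
    y2 = p[:, 1] ** 2
    return float((y2 @ p[:, 0]) / (y2 @ y2))    # closed-form 1-D least squares
```
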

  18. Reduced 3d modeling on injection schemes for laser wakefield acceleration at plasma scale lengths

    Science.gov (United States)

    Helm, Anton; Vieira, Jorge; Silva, Luis; Fonseca, Ricardo

    2017-10-01

    Current modelling techniques for laser wakefield acceleration (LWFA) are based on particle-in-cell (PIC) codes, which are computationally demanding: the laser wavelength λ0, in the μm range, has to be resolved over acceleration lengths in the meter range. A promising approach is the ponderomotive guiding center (PGC) solver, which considers only the laser envelope for laser pulse propagation. Then only the plasma skin depth λp has to be resolved, leading to speedups of (λp/λ0)². This allows a wide range of parameter studies to be performed and the method to be used for λ0 … Work supported by Fundação para a Ciência e a Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014 and PD/BD/105882/2014.

  19. Performance Assessment of the VSC Using Two Model Predictive Control Schemes

    DEFF Research Database (Denmark)

    Al hasheem, Mohamed; Abdelhakim, Ahmed; Dragicevic, Tomislav

    2018-01-01

    Finite control set model predictive control (FCS-MPC) methods are gaining attention in different power-electronics applications due to their simplicity and fast dynamics. This paper presents an experimental assessment of the two-level three-phase voltage source converter (2L-VSC) using two FCS-MPC algorithms. To perform this comparative evaluation, the 2L-VSC efficiency and the total harmonic distortion of the voltage (THDv) were measured for both a linear and a non-linear load. The new algorithm gives better results than the conventional algorithm in terms of THDv and 2L-VSC efficiency. The results also demonstrate the performance of the system using carrier-based pulse width modulation (CB-PWM). These findings have been validated for both linear and non-linear loads through experiments on a 4 kW 2L-VSC prototype. It can be concluded that a comparable...

  20. Prototype-based Models for the Supervised Learning of Classification Schemes

    Science.gov (United States)

    Biehl, Michael; Hammer, Barbara; Villmann, Thomas

    2017-06-01

    An introduction is given to the use of prototype-based models in supervised machine learning. The main concept of the framework is to represent previously observed data in terms of so-called prototypes, which reflect typical properties of the data. Together with a suitable discriminative distance or dissimilarity measure, prototypes can be used for the classification of complex, possibly high-dimensional data. We illustrate the framework in terms of the popular Learning Vector Quantization (LVQ). Most frequently, standard Euclidean distance is employed as the distance measure. We discuss how LVQ can be equipped with more general dissimilarities. Moreover, we introduce relevance learning as a tool for the data-driven optimization of parameterized distances.
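    The LVQ update rule itself is compact enough to sketch: the winning prototype is attracted toward a sample of its own class and repelled from a sample of a different class. A minimal LVQ1 with squared Euclidean distance follows; the point of the abstract is precisely that this distance can later be replaced by learned, parameterized dissimilarities:

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.05, epochs=30, seed=0):
    """LVQ1: for each sample, move the nearest prototype toward the sample
    if their labels match, away from it otherwise."""
    rng = np.random.default_rng(seed)
    W = prototypes.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = ((W - X[i]) ** 2).sum(axis=1)     # squared Euclidean distances
            j = int(d.argmin())                   # winner takes all
            sign = 1.0 if proto_labels[j] == y[i] else -1.0
            W[j] += sign * lr * (X[i] - W[j])     # attract (+) or repel (-)
        lr *= 0.95                                # learning-rate decay
    return W

def classify(W, proto_labels, X):
    """Nearest-prototype classification."""
    d = ((W[None, :, :] - X[:, None, :]) ** 2).sum(axis=2)
    return proto_labels[d.argmin(axis=1)]
```
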