How update schemes influence crowd simulations
International Nuclear Information System (INIS)
Seitz, Michael J; Köster, Gerta
2014-01-01
Time discretization is a key modeling aspect of dynamic computer simulations. In current pedestrian motion models based on discrete events, e.g. cellular automata and the Optimal Steps Model, fixed-order sequential updates and shuffle updates are prevalent. We propose to use event-driven updates that process events in the order they occur, and thus better match natural movement. In addition, we present a parallel update with collision detection and resolution for situations where computational speed is crucial. Two simulation studies serve to demonstrate the practical impact of the choice of update scheme. Not only do density-speed relations differ, but there is a statistically significant effect on evacuation times. Fixed-order sequential and random shuffle updates with a short update period come close to event-driven updates. The parallel update scheme overestimates evacuation times. All schemes can be employed for arbitrary simulation models with discrete events, such as car traffic or animal behavior. (paper)
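The event-driven update described above can be sketched with a priority queue keyed by each agent's next event time, so that movement events are processed in the order they occur. The following is a hypothetical minimal sketch (one-dimensional motion, unit-cell steps, and the function name are illustrative assumptions, not the paper's model):

```python
import heapq

def event_driven_step(positions, speeds, horizon):
    """Advance agents by processing movement events in temporal order.

    Each agent moves one cell per event; the time to its next event is
    1/speed, so faster agents are naturally scheduled more often.
    Illustrative sketch only -- a real model adds target choice and
    collision handling.
    """
    events = [(1.0 / speeds[i], i) for i in range(len(positions))]
    heapq.heapify(events)
    while events:
        t, i = heapq.heappop(events)
        if t > horizon:
            break
        positions[i] += 1  # move one cell toward the target
        next_t = t + 1.0 / speeds[i]
        if next_t <= horizon:
            heapq.heappush(events, (next_t, i))
    return positions
```

An agent with speed 2 is scheduled twice as often as one with speed 1, which is exactly what distinguishes this scheme from fixed-order sequential updates with a global time step.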
Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin
Data assimilation techniques have been widely applied to improve the predictability of hydrologic modeling. Among these techniques, sequential Monte Carlo (SMC) filters, known as "particle filters", can handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both the state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated by implementing the storage function model on a middle-sized Japanese catchment. We also compare the performance of DUS combined with various SMC methods, such as SIR, ASIR and RPF.
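A minimal sketch of one dual state-parameter SIR update of the kind described above, for a hypothetical scalar toy model x_t = a·x_{t-1} + noise: the model, the noise levels, and the shrinkage constant are illustrative assumptions; the kernel-smoothing step follows the common shrinkage form (particles pulled toward the cloud mean, then jittered).

```python
import numpy as np

def dus_step(states, params, obs, rng, h=0.1):
    """One dual state-parameter SIR update.

    states : (N,) state particles; params : (N,) parameter particles
    (here a single decay constant of the toy model x_t = a * x_{t-1} + noise).
    Kernel smoothing (shrinkage toward the particle mean plus a small
    jitter) keeps the parameter cloud from degenerating. Sketch under
    assumed Gaussian noise, not the paper's storage function model.
    """
    a = (1.0 - h * h) ** 0.5                      # shrinkage factor
    params = (a * params + (1.0 - a) * params.mean()
              + h * params.std() * rng.standard_normal(len(params)))
    states = params * states + 0.1 * rng.standard_normal(len(states))
    w = np.exp(-0.5 * ((obs - states) / 0.2) ** 2)  # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(len(states), size=len(states), p=w)  # resample
    return states[idx], params[idx]
```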
Update schemes of multi-velocity floor field cellular automaton for pedestrian dynamics
Luo, Lin; Fu, Zhijian; Cheng, Han; Yang, Lizhong
2018-02-01
Modeling pedestrian movement is an interesting problem in both statistical and computational physics. Update schemes of cellular automaton (CA) models for pedestrian dynamics govern the schedule of pedestrian movement. Different update schemes generally make the models behave in different ways, so a model should be carefully recalibrated when the scheme changes. In this paper, we therefore investigated the influence of four update schemes, namely the parallel/synchronous, random, ordered-sequential and shuffled schemes, on pedestrian dynamics. We used a multi-velocity floor field cellular automaton (FFCA) that accounts for changes in pedestrians' moving properties along walking paths and for heterogeneity in pedestrians' walking abilities. The parallel scheme is the only one that requires collision detection and resolution, which sets it apart from the other update schemes. Under the parallel scheme, evacuation times are larger and differences in pedestrians' walking abilities are better reflected. In front of a bottleneck, such as an exit, the parallel scheme leads to a longer congestion period and a more dispersive density distribution. The exit flow and the space-time distributions of density and velocity show significant discrepancies across the four update schemes when simulating pedestrian flow with high desired velocities. Update schemes appear to have no influence on the simulated tendency of pedestrians to follow others, but the sequential and shuffled schemes may enhance the effect of pedestrians' familiarity with the environment.
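The collision detection and resolution required only by the parallel scheme can be sketched as a conflict-resolution step: all pedestrians propose target cells simultaneously, and conflicts over the same cell are settled by drawing one winner at random while the losers stay put. This is an illustrative sketch, not the paper's FFCA:

```python
import random

def parallel_update(positions, targets, rng):
    """Synchronous/parallel CA update with collision resolution.

    positions[i] is pedestrian i's current cell, targets[i] the cell it
    proposes to move to. When several pedestrians propose the same cell,
    one winner is chosen at random; the rest keep their old positions.
    Minimal sketch of the conflict-resolution step only.
    """
    proposals = {}
    for i, cell in enumerate(targets):
        proposals.setdefault(cell, []).append(i)
    new_positions = list(positions)
    for cell, movers in proposals.items():
        winner = rng.choice(movers)
        new_positions[winner] = cell
    return new_positions
```

For example, `parallel_update([0, 1], [2, 2], random.Random(0))` leaves exactly one of the two pedestrians in cell 2.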
Mining Sequential Update Summarization with Hierarchical Text Analysis
Directory of Open Access Journals (Sweden)
Chunyun Zhang
2016-01-01
The outbreak of unexpected news events such as major accidents or natural disasters creates an information access problem where traditional approaches fail. News of such events is typically sparse early on and redundant later. It is therefore important to provide individuals with timely, important updates on these incidents as they develop, especially in wireless and mobile Internet of Things (IoT) settings. In this paper, we define the problem of sequential update summarization extraction and present a new hierarchical update mining system that can broadcast useful, novel, and timely sentence-length updates about a developing event. The system uses a novel method that combines techniques from topic-level and sentence-level summarization. To evaluate its performance, we apply it to the sequential update summarization task of the temporal summarization (TS) track at the Text REtrieval Conference (TREC) 2013, computing four measures of the update mining system: expected gain, expected latency gain, comprehensiveness, and latency comprehensiveness. Experimental results show that the proposed method performs well.
A keyword searchable attribute-based encryption scheme with attribute update for cloud storage.
Wang, Shangping; Ye, Jian; Zhang, Yaling
2018-01-01
The ciphertext-policy attribute-based encryption (CP-ABE) scheme is a new type of data encryption primitive that is well suited to cloud data storage because of its fine-grained access control. Keyword-based searchable encryption enables users to quickly find data of interest stored on the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword searchable attribute-based encryption scheme with attribute update for cloud storage, combining attribute-based encryption with keyword searchable encryption. The new scheme supports user attribute updates: when a user's attribute needs to be updated, only that user's secret key components related to the attribute must be updated, while other users' secret keys and the ciphertexts related to this attribute need not be updated, with the help of the cloud server. In addition, we outsource operations with high computational cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven semantically secure against chosen ciphertext-policy and chosen plaintext attacks in the generic bilinear group model, and semantically secure against chosen keyword attacks under the bilinear Diffie-Hellman (BDH) assumption.
Automatic synthesis of sequential control schemes
International Nuclear Information System (INIS)
Klein, I.
1993-01-01
Of all hardware and software developed for industrial control purposes, the majority is devoted to sequential, or binary-valued, control and only a minor part to classical linear control. Typically, the sequential parts of the controller are invoked during startup and shutdown to bring the system into its normal operating region and into some safe standby region, respectively. Despite its importance, fairly little theoretical research has been devoted to this area, and sequential control programs are therefore still created manually, without much theoretical support for a systematic approach. We propose a method to create sequential control programs automatically. The main idea is to spend some effort off-line modelling the plant, and from this model generate the control strategy, that is, the plan. The plant is modelled using action structures, thereby concentrating on the actions instead of the states of the plant. In general, the planning problem shows exponential complexity in the number of state variables. However, by focusing on the actions, we can identify problem classes as well as algorithms such that the planning complexity is reduced to polynomial complexity. We prove that these algorithms are sound, i.e., the generated solution will solve the stated problem, and complete, i.e., if the algorithms fail, then no solution exists. The algorithms generate a plan as a set of actions together with a partial order on this set specifying the execution order. The generated plan is proven to be minimal and maximally parallel. For a larger class of problems we propose a method to split the original problem into a number of simple problems, each of which can be solved using one of the presented algorithms. It is also shown how a plan can be translated into a GRAFCET chart, and to illustrate these ideas we have implemented a planning tool, i.e., a system that is able to automatically create control schemes. Such a tool can of course also be used on-line if it is fast enough.
Sequential updating of a new dynamic pharmacokinetic model for caffeine in premature neonates.
Micallef, Sandrine; Amzal, Billy; Bach, Véronique; Chardon, Karen; Tourneux, Pierre; Bois, Frédéric Y
2007-01-01
Caffeine treatment is widely used in nursing care to reduce the risk of apnoea in premature neonates. To check the therapeutic efficacy of the treatment against apnoea, the caffeine concentration in blood is an important indicator. The present study was aimed at building a pharmacokinetic model as the basis for a medical decision support tool. In the proposed model, time dependence of physiological parameters is introduced to describe the rapid growth of neonates. To take into account the large variability in the population, the pharmacokinetic model is embedded in a population structure, and the whole model is inferred within a Bayesian framework. To update caffeine concentration predictions as data on an incoming patient are collected, we propose a fast method that can be used in a medical context, involving the sequential updating of model parameters (at individual and population levels) via a stochastic particle algorithm. Our model provides better predictions than those obtained with previously published models. We show, through an example, that sequential updating improves predictions of caffeine concentration in blood (reduced bias and shorter credibility intervals). The updating of the pharmacokinetic model using body mass and caffeine concentration data is also studied; it shows how informative caffeine concentration data are in contrast to body mass data. This study provides the methodological basis to predict caffeine concentration in blood after a given treatment when data are collected on the treated neonate.
Directory of Open Access Journals (Sweden)
SK Hafizul Islam
2014-01-01
Several certificateless short signature and multisignature schemes based on traditional public key infrastructure (PKI) or identity-based cryptosystems (IBC) have been proposed in the literature; however, no certificateless short sequential (or serial) multisignature (CL-SSMS) or short broadcast (or parallel) multisignature (CL-SBMS) schemes have been proposed. In this paper, we propose two such new CL-SSMS and CL-SBMS schemes based on elliptic curve bilinear pairing. Like any certificateless public key cryptosystem (CL-PKC), the proposed schemes are free from the public key certificate management burden and the private key escrow problem found in PKI- and IBC-based cryptosystems, respectively. In addition, the requirements of the expected security level and a fixed-length signature with constant verification time are achieved in our schemes. The schemes are communication efficient, as the length of the multisignature is equivalent to a single elliptic curve point, making them the shortest possible multisignature schemes. The proposed schemes are thus suitable for communication systems with resource-constrained devices, such as PDAs, mobile phones, RFID chips, and sensors, where communication bandwidth, battery life, computing power and storage space are limited.
Impact of the updating scheme on stationary states of networks
International Nuclear Information System (INIS)
Radicchi, F; Ahn, Y Y; Meyer-Ortmanns, H
2008-01-01
From Boolean networks it is well known that the number of attractors as a function of the system size depends on the updating scheme, which is chosen either synchronously or asynchronously. In this contribution, we report on a systematic interpolation between synchronous and asynchronous updating in a one-dimensional chain of Ising spins. The stationary state for fully synchronous updating is antiferromagnetic. The interpolation allows us to locate a phase transition between phases with an absorbing and a fluctuating stationary state. The associated universality class is that of parity conservation. We also report on a more recent study of asynchronous updates applied to the yeast cell-cycle network. Compared to the synchronous update, the basin of attraction of the largest attractor considerably shrinks and the convergence to the biological pathway slows down and is less dominant. Both examples illustrate how sensitively the stationary states and the properties of attractors can depend on the updating mode of the algorithm.
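A toy version of the interpolation between synchronous and asynchronous updating can be written for a one-dimensional chain of Ising spins with a deterministic local rule; here, as an illustrative assumption, each updated spin takes the opposite sign of its left neighbour (periodic boundaries), whose synchronous fixed points are the antiferromagnetic states. The parameter `p_sync` interpolates between the two updating modes:

```python
import random

def interpolated_update(spins, p_sync, rng, steps=1):
    """Interpolate between synchronous and asynchronous updating.

    With probability p_sync, every site updates in parallel (one
    synchronous sweep); otherwise a single randomly chosen site updates.
    Rule (illustrative): an updated spin becomes minus its left
    neighbour, so the antiferromagnetic state is a synchronous fixed
    point. Toy sketch of the interpolation idea only.
    """
    n = len(spins)
    for _ in range(steps):
        if rng.random() < p_sync:          # synchronous sweep
            spins = [-spins[(i - 1) % n] for i in range(n)]
        else:                              # asynchronous single-site update
            i = rng.randrange(n)
            spins[i] = -spins[(i - 1) % n]
    return spins
```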
Online Sequential Projection Vector Machine with Adaptive Data Mean Update.
Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei
2016-01-01
We propose a simple online learning algorithm especially suited to high-dimensional data. The algorithm, referred to as the online sequential projection vector machine (OSPVM), derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters, including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes, can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes the algorithm easy to use in real applications. Performance comparisons were made on various high-dimensional classification problems between OSPVM and other fast online algorithms, including the budgeted stochastic gradient descent (BSGD) approach, the adaptive multihyperplane machine (AMM), the primal estimated subgradient solver (Pegasos), the online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD performed before OSELM). The results demonstrate the superior generalization performance and efficiency of OSPVM.
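The adaptive data mean update needed for online data centering can be sketched independently of the rest of OSPVM: when a new chunk arrives, the stored mean is corrected by the chunk mean weighted by the chunk's relative size. This is the standard incremental-mean formula; the function name is illustrative:

```python
def update_mean(mean, count, chunk):
    """Incrementally update the data mean when a new chunk arrives.

    mean, count : running mean and number of samples seen so far.
    chunk       : list of new scalar samples (a chunk of size >= 1).
    Returns the updated mean and count, identical to recomputing the
    mean over all samples seen so far. Sketch of the mean-update idea
    only, not the full OSPVM.
    """
    m = len(chunk)
    chunk_mean = sum(chunk) / m
    new_count = count + m
    # Weight the correction by the chunk's share of all samples.
    new_mean = mean + (chunk_mean - mean) * m / new_count
    return new_mean, new_count
```

For vector data, the same formula applies componentwise.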
Yan, Y.; Barth, A.; Beckers, J. M.; Brankart, J. M.; Brasseur, P.; Candille, G.
2017-07-01
In this paper, three incremental analysis update schemes (IAU 0, IAU 50 and IAU 100) are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The difference between the three IAU schemes lies in the position of the increment update window. The relevance of each IAU scheme is evaluated through analyses of both thermohaline and dynamical variables. The assimilation results are validated with both deterministic and probabilistic metrics against different sources of observations. For deterministic validation, the ensemble mean and the ensemble spread are compared to the observations. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system in terms of reliability and resolution, and reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score. The results show that (1) the IAU 50 scheme performs as well as the IAU 100 scheme; (2) the IAU 50/100 schemes outperform the IAU 0 scheme in error covariance propagation for thermohaline variables in relatively stable regions, while the IAU 0 scheme outperforms the IAU 50/100 schemes in estimating dynamical variables in dynamically active regions; and (3) with a sufficient number of observations and good error specification, the impact of the IAU scheme is negligible. The differences between the IAU 0 scheme and the IAU 50/100 schemes are mainly due to different model integration times and the instabilities (density inversion, large vertical velocity, etc.) induced by the increment update. The longer model integration time of the IAU 50/100 schemes, especially the free model integration, allows the equilibrium model state to re-establish itself, but it also smooths the strong gradients in dynamically active regions.
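The mechanism distinguishing the IAU schemes, applying the analysis increment all at once versus spreading it over an update window during model integration, can be sketched with a toy scalar model. The relaxation model and the window handling below are illustrative assumptions, not the paper's ocean model or its IAU 50/100 configurations:

```python
def forecast(x, dt=1.0, decay=0.05):
    """Toy model integration step: slow relaxation toward zero."""
    return x * (1.0 - decay * dt)

def iau_run(x, increment, n_steps, update_window):
    """Incremental analysis update sketch.

    update_window == 0 : add the whole increment before the first step
                         (intermittent, 'IAU 0'-like behaviour).
    update_window > 0  : add the increment in equal parts over the first
                         `update_window` steps while the model integrates.
    """
    for step in range(n_steps):
        if update_window > 0 and step < update_window:
            x = x + increment / update_window
        elif update_window == 0 and step == 0:
            x = x + increment
        x = forecast(x)
    return x
```

Because later partial increments are damped for less time, the spread-out update ends at a different state than the at-once update, which is the source of the differences discussed in the abstract.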
Optimal Sales Schemes for Network Goods
DEFF Research Database (Denmark)
Parakhonyak, Alexei; Vikander, Nick
consumers simultaneously, serve them all sequentially, or employ any intermediate scheme. We show that the optimal sales scheme is purely sequential, where each consumer observes all previous sales before choosing whether to buy himself. A sequential scheme maximizes the amount of information available...
Sequential and parallel image restoration: neural network implementations.
Figueiredo, M T; Leitao, J N
1994-01-01
Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high-dimensional convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem onto the energy of a predefined network, taking advantage of its energy-minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm supported on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.
A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations
Qin, Fangjun; Jiang, Sai; Zha, Feng
2018-01-01
In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which makes the measurement model linearization more accurate for the next vector observation. This is the main difference from Murrell's variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated only after all the vector observations have been processed, to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference from the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. A numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance over a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms. PMID:29751538
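For a linear measurement model with diagonal measurement noise, the sequential-processing idea can be sketched as scalar-at-a-time Kalman updates, where each component of the observation refines the state before the next is applied. This is a generic linear sketch, not the multiplicative attitude formulation of the paper:

```python
import numpy as np

def sequential_update(x, P, H, z, R):
    """Process a vector observation one scalar component at a time.

    x : (n,) state estimate; P : (n,n) covariance; H : (m,n) measurement
    matrix; z : (m,) observation; R : (m,) diagonal measurement noise
    variances. For a linear model with diagonal R, the result equals the
    batch Kalman update, but each scalar step costs only vector algebra.
    """
    x = x.copy()
    P = P.copy()
    for k in range(len(z)):
        h = H[k]                      # k-th measurement row
        s = h @ P @ h + R[k]          # scalar innovation variance
        K = P @ h / s                 # Kalman gain (vector)
        x = x + K * (z[k] - h @ x)    # state refined before next row
        P = P - np.outer(K, h @ P)    # covariance update: (I - K h^T) P
    return x, P
```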
Veire, Van de M.; Sterk, G.; Staaij, van der M.; Ramakers, P.M.J.; Tirry, L.
2002-01-01
This paper describes a number of test methods, to be used in a sequential scheme, for testing the side-effects of plant protection products on anthocorid bugs. Orius laevigatus was used as the test species. A 'worst case' laboratory method was developed for evaluating the effect on mortality of the
Forced Sequence Sequential Decoding
DEFF Research Database (Denmark)
Jensen, Ole Riis; Paaske, Erik
1998-01-01
We describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block-interleaved outer Reed-Solomon (RS) codes with nonuniform profile. With this scheme, decoding with good performance is possible as low as Eb/N0=0.6 dB, which is about 1.25 dB below the signal-to-noise ratio (SNR) that marks the cutoff rate for the full system. Accounting for about 0.45 dB due to the outer codes, sequential decoding takes place at about 1.7 dB below the SNR cutoff rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported by simulation results.
A PSO Driven Intelligent Model Updating and Parameter Identification Scheme for Cable-Damper System
Directory of Open Access Journals (Sweden)
Danhui Dan
2015-01-01
The precise measurement of cable force is very important for monitoring and evaluating the operational status of cable structures such as cable-stayed bridges. Cable systems are often installed with lateral dampers to reduce vibration, which affects the precise measurement of the cable force and other cable parameters. This paper suggests a cable model updating scheme driven by the particle swarm optimization (PSO) algorithm. A finite element model considering static geometric nonlinearity and the stress-stiffening effect is first established, and an automatic finite element model updating procedure powered by the PSO algorithm is then proposed, with the aim of precisely identifying the cable force and relevant parameters of the cable-damper system. Both numerical case studies and full-scale cable tests indicated that, after two rounds of the updating process, the algorithm can accurately identify the cable force, moment of inertia, and damping coefficient of the cable-damper system.
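A textbook PSO loop of the kind that could drive such model updating is sketched below; in the paper's setting, the objective would wrap a finite element evaluation and return the misfit between measured and computed cable responses. All parameter values and names here are illustrative assumptions, not the paper's configuration:

```python
import random

def pso_minimize(objective, bounds, n_particles=20, n_iters=200, seed=0):
    """Minimal particle swarm optimizer.

    Each particle is a candidate parameter vector (e.g. cable force,
    moment of inertia, damping coefficient); particles move toward their
    personal best and the global best found so far. Textbook PSO sketch.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```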
Optimal updating magnitude in adaptive flat-distribution sampling.
Zhang, Cheng; Drake, Justin A; Ma, Jianpeng; Pettitt, B Montgomery
2017-11-07
We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
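The single-bin updating scheme with the inverse-time schedule can be sketched on a toy system whose target distribution over bins is flat: the bias potential of the current bin is raised by a magnitude proportional to 1/t, and proposed moves are accepted with probability exp(v_old − v_new) capped at one. A hypothetical minimal sketch, not the paper's bandpass construction:

```python
import math
import random

def wang_landau_1t(n_bins, n_steps, rng):
    """Flat-distribution sampling with a 1/t updating magnitude.

    v is the adaptive bias potential over bins; the current bin's bias
    is raised by n_bins/t at step t (inverse-time schedule), penalizing
    revisits so the visit histogram flattens. For this toy system the
    true free energy is flat, so v should converge toward a constant.
    """
    v = [0.0] * n_bins
    x = 0
    for t in range(1, n_steps + 1):
        y = rng.randrange(n_bins)                        # propose a random bin
        if rng.random() < math.exp(min(0.0, v[x] - v[y])):
            x = y                                        # biased acceptance
        v[x] += n_bins / t                               # 1/t updating magnitude
    v_mean = sum(v) / n_bins
    return [vi - v_mean for vi in v]                     # centred bias potential
```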
Multi-Stage Recognition of Speech Emotion Using Sequential Forward Feature Selection
Directory of Open Access Journals (Sweden)
Liogienė Tatjana
2016-07-01
The intensive research on speech emotion recognition has introduced a huge collection of speech emotion features, and large feature sets complicate the speech emotion recognition task. Beyond feature selection and transformation techniques for one-stage classification, multiple classifier systems have been proposed. The main idea of multiple classifiers is to arrange the emotion classification process in stages. Besides parallel and serial cases, the hierarchical arrangement of multi-stage classification is the most widely used for speech emotion recognition. In this paper, we present a sequential-forward-feature-selection-based multi-stage classification scheme. The Sequential Forward Selection (SFS) and Sequential Floating Forward Selection (SFFS) techniques were employed at every stage of the multi-stage classification scheme. Experimental testing of the proposed scheme was performed using the German and Lithuanian emotional speech datasets. Sequential-feature-selection-based multi-stage classification outperformed the single-stage scheme by 12–42% for different emotion sets. The multi-stage scheme also showed higher robustness to growth of the emotion set: the decrease in recognition rate with increasing emotion set size was 10–20% lower than in the single-stage case. Differences between SFS and SFFS for feature selection were negligible.
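Plain SFS, one building block of the scheme above, can be sketched as a greedy loop that repeatedly adds the feature giving the best value of a user-supplied criterion (SFFS would additionally test dropping previously selected features). The scoring function in the usage example is an illustrative stand-in for a classifier's validation accuracy:

```python
def sfs(features, score, k):
    """Sequential Forward Selection.

    Greedily add the feature that most improves score(subset) until k
    features are chosen or no candidate improves the score. Sketch of
    the selection loop only; `score` stands in for e.g. cross-validated
    recognition accuracy on the chosen feature subset.
    """
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break                      # no further improvement
        selected.append(best)
        remaining.remove(best)
    return selected
```

For instance, with a toy criterion rewarding coverage of a target feature pair and slightly penalizing subset size, `sfs` picks exactly that pair and stops.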
National Oceanic and Atmospheric Administration, Department of Commerce — Circular Updates are periodic sequentially numbered instructions to debriefing staff and observers informing them of changes or additions to scientific and specimen...
Dissociating Working Memory Updating and Automatic Updating: The Reference-Back Paradigm
Rac-Lubashevsky, Rachel; Kessler, Yoav
2016-01-01
Working memory (WM) updating is a controlled process through which relevant information in the environment is selected to enter the gate to WM and substitute its contents. We suggest that there is also an automatic form of updating, which influences performance in many tasks and is primarily manifested in reaction time sequential effects. The goal…
Forced Sequence Sequential Decoding
DEFF Research Database (Denmark)
Jensen, Ole Riis
In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block-interleaved outer Reed-Solomon codes with non-uniform profile. With this scheme, decoding with good performance is possible as low as Eb/No=0.6 dB, which is about 1.7 dB below the signal-to-noise ratio that marks the cut-off rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters.
Sequential sampling: a novel method in farm animal welfare assessment.
Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J
2016-02-01
Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd-level lameness prevalence can be estimated by scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic-testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events: at the first sampling event, half the Welfare Quality sample size is drawn, and then, depending on the outcome, sampling either stops or continues with the same number of animals sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only one to go beyond lameness as a binary measure; it investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed-size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed-size scheme but with much smaller average sample sizes.
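The 'basic' two-stage scheme described above can be sketched as follows. The early-stopping cut-offs (half and one-and-a-half times the pass/fail threshold) are illustrative assumptions, not the published decision boundaries:

```python
def basic_sequential_scheme(score_cow, herd_sample_size, threshold):
    """Two-stage sequential sampling for a pass/fail lameness decision.

    score_cow(i) returns 1 if cow i is scored lame, else 0. Half the
    sample is scored first; sampling stops early when the observed
    prevalence is clearly below (pass) or above (fail) the threshold,
    otherwise the second half is scored and the full sample decides.
    Returns (decision, number_of_cows_sampled).
    """
    half = herd_sample_size // 2
    first = [score_cow(i) for i in range(half)]
    prev = sum(first) / half
    if prev <= 0.5 * threshold:            # clearly below: pass early
        return "pass", half
    if prev >= 1.5 * threshold:            # clearly above: fail early
        return "fail", half
    second = [score_cow(i) for i in range(half, 2 * half)]
    prev_all = (sum(first) + sum(second)) / (2 * half)
    return ("fail" if prev_all >= threshold else "pass"), 2 * half
```

Herds with prevalence far from the threshold are classified after only half the sample, which is the source of the smaller average sample sizes reported in the abstract.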
O'Keeffe, C J; Ren, Ruichao; Orkoulas, G
2007-11-21
Spatial updating grand canonical Monte Carlo algorithms are generalizations of random and sequential updating algorithms for lattice systems to continuum fluid models. The elementary steps, insertions or removals, are constructed by generating points in space either at random (random updating) or in a prescribed order (sequential updating). These algorithms have previously been developed only for systems of impenetrable spheres for which no particle overlap occurs. In this work, spatial updating grand canonical algorithms are generalized to continuous, soft-core potentials to account for overlapping configurations. Results on two- and three-dimensional Lennard-Jones fluids indicate that spatial updating grand canonical algorithms, both random and sequential, converge faster than standard grand canonical algorithms. Spatial algorithms based on sequential updating not only exhibit the fastest convergence but also are ideal for parallel implementation due to the absence of strict detailed balance and the nature of the updating that minimizes interprocessor communication. Parallel simulation results for three-dimensional Lennard-Jones fluids show a substantial reduction of simulation time for systems of moderate and large size. The efficiency improvement by parallel processing through domain decomposition is always in addition to the efficiency improvement by sequential updating.
Metal fractionation of atmospheric aerosols via sequential chemical extraction: a review
Energy Technology Data Exchange (ETDEWEB)
Smichowski, Patricia; Gomez, Dario [Unidad de Actividad Quimica, Comision Nacional de Energia Atomica, San Martin (Argentina); Polla, Griselda [Unidad de Actividad Fisica, Comision Nacional de Energia Atomica, San Martin (Argentina)
2005-01-01
This review surveys schemes used for the sequential chemical fractionation of metals and metalloids present in airborne particulate matter. It focuses mainly on sequential chemical fractionation schemes published over the last 15 years. These schemes have been classified into five main categories: (1) based on Tessier's procedure, (2) based on Chester's procedure, (3) based on Zatka's procedure, (4) based on the BCR procedure, and (5) other procedures. The operational characteristics, as well as the state of the art in metal fractionation of airborne particulate matter, fly ashes and workroom aerosols, in terms of applications, optimizations and innovations, are also described. Many references to other works in this area are provided. (orig.)
Richardson, LaTonia Clay; Bazaco, Michael C; Parker, Cary Chen; Dewey-Mattia, Daniel; Golden, Neal; Jones, Karen; Klontz, Karl; Travis, Curtis; Kufel, Joanna Zablotsky; Cole, Dana
2017-12-01
Foodborne disease data collected during outbreak investigations are used to estimate the percentage of foodborne illnesses attributable to specific food categories. Current food categories do not reflect whether or how the food has been processed and exclude many multiple-ingredient foods. Representatives from three federal agencies worked collaboratively in the Interagency Food Safety Analytics Collaboration (IFSAC) to develop a hierarchical scheme for categorizing foods implicated in outbreaks, which accounts for the type of processing and provides more specific food categories for regulatory purposes. IFSAC also developed standard assumptions for assigning foods to specific food categories, including some multiple-ingredient foods. The number and percentage of outbreaks assignable to each level of the hierarchy were summarized. The IFSAC scheme is a five-level hierarchy for categorizing implicated foods with increasingly specific subcategories at each level, resulting in a total of 234 food categories. Subcategories allow distinguishing features of implicated foods to be reported, such as pasteurized versus unpasteurized fluid milk, shell eggs versus liquid egg products, ready-to-eat versus raw meats, and five different varieties of fruit categories. Twenty-four aggregate food categories contained a sufficient number of outbreaks for source attribution analyses. Among 9791 outbreaks reported from 1998 to 2014 with an identified food vehicle, 4607 (47%) were assignable to food categories using this scheme. Among these, 4218 (92%) were assigned to one of the 24 aggregate food categories, and 840 (18%) were assigned to the most specific category possible. Updates to the food categorization scheme and new methods for assigning implicated foods to specific food categories can help increase the number of outbreaks attributed to a single food category. The increased specificity of food categories in this scheme may help improve source attribution analyses, eventually
Further comments on the sequential probability ratio testing methods
Energy Technology Data Exchange (ETDEWEB)
Kulacsy, K. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics
1997-05-23
The Bayesian method for belief updating proposed in Racz (1996) is examined. An interpretation of the belief function introduced therein is found, and the method is compared to the classical binary Sequential Probability Ratio Testing (SPRT) method. (author)
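The classical binary SPRT that serves as the comparison baseline above is easy to sketch: accumulate the log-likelihood ratio of Bernoulli observations and stop at the Wald thresholds. The hypothesis probabilities and error rates below are illustrative.

```python
import math

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Classical binary SPRT: accumulate the log-likelihood ratio of
    Bernoulli observations under H1 (success prob p1) versus H0 (p0),
    stopping at the Wald thresholds for error rates alpha and beta."""
    upper = math.log((1 - beta) / alpha)   # cross -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross -> accept H0
    llr = 0.0
    for n, x in enumerate(samples, 1):
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

print(sprt([1, 1, 0, 1, 1, 1, 1, 1], p0=0.2, p1=0.6))  # -> ('H1', 5)
```

The early-stopping behavior (a decision after 5 observations here) is exactly what the sequential schemes in the surrounding records exploit.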
Certificateless Key-Insulated Generalized Signcryption Scheme without Bilinear Pairings
Directory of Open Access Journals (Sweden)
Caixue Zhou
2017-01-01
Generalized signcryption (GSC) can be applied as an encryption scheme, a signature scheme, or a signcryption scheme with only one algorithm and one key pair. A key-insulated mechanism can resolve the private key exposure problem. To ensure the security of cloud storage, we introduce the key-insulated mechanism into GSC and propose a concrete scheme without bilinear pairings in the certificateless cryptosystem setting. We provide a formal definition and a security model of certificateless key-insulated GSC. Then, we prove that our scheme is confidential under the computational Diffie-Hellman (CDH) assumption and unforgeable under the elliptic curve discrete logarithm (EC-DL) assumption. Our scheme also supports both random-access key update and secure key update. Finally, we evaluate the efficiency of our scheme and demonstrate that it is highly efficient. Thus, our scheme is more suitable for users who communicate with the cloud using mobile devices.
Hybrid Modulation Scheme for Cascaded H-Bridge Inverter Cells ...
African Journals Online (AJOL)
This work proposes a switching technique for cascaded H-Bridge (CHB) inverter cells. A single-carrier sinusoidal PWM (SCSPWM) scheme is employed in the generation of the gating signals. Sequential switching and base PWM circulation schemes are presented for this fundamental cascaded multilevel inverter topology.
Sequential blind identification of underdetermined mixtures using a novel deflation scheme.
Zhang, Mingjian; Yu, Simin; Wei, Gang
2013-09-01
In this brief, we consider the problem of blind identification in underdetermined instantaneous mixture cases, where there are more sources than sensors. A new blind identification algorithm, which estimates the mixing matrix in a sequential fashion, is proposed. By using the rank-1 detecting device, blind identification is reformulated as a constrained optimization problem. The identification of one column of the mixing matrix hence reduces to an optimization task for which an efficient iterative algorithm is proposed. The identification of the other columns of the mixing matrix is then carried out by a generalized eigenvalue decomposition-based deflation method. The key merit of the proposed deflation method is that it does not suffer from error accumulation. The proposed sequential blind identification algorithm provides more flexibility and better robustness than its simultaneous counterpart. Comparative simulation results demonstrate the superior performance of the proposed algorithm over the simultaneous blind identification algorithm.
Directory of Open Access Journals (Sweden)
C. Barthe
2012-01-01
The paper describes the fully parallelized electrical scheme CELLS, which is suitable for explicitly simulating electrified storm systems on parallel computers. Our motivation here is to show that a cloud electricity scheme can be developed for use on large grids with complex terrain. Large computational domains are needed to perform real-case meteorological simulations with many independent convective cells.
The scheme computes the bulk electric charge attached to each cloud particle and hydrometeor. Positive and negative ions are also taken into account. Several parametrizations of the dominant non-inductive charging process are included, as well as an inductive charging process. The electric field is obtained by inverting the Gauss equation with an extension to terrain-following coordinates. The new feature concerns the lightning flash scheme, which is a simplified version of an older detailed sequential scheme. Flashes are composed of a bidirectional leader phase (vertical extension from the triggering point) and a phase obeying a fractal law (with horizontal extension on electrically charged zones). The originality of the scheme lies in the way the branching phase is treated to get a parallel code.
The complete electrification scheme is tested for the 10 July 1996 STERAO case and for the 21 July 1998 EULINOX case. Flash characteristics are analysed in detail and additional sensitivity experiments are performed for the STERAO case. Although the simulations were run for flat terrain conditions, they show that the model behaves well on multiprocessor computers. This opens a wide area of application for this electrical scheme, with the next objective of running real meteorological cases on large domains.
Towards a multigrid scheme in SU(2) lattice gauge theory
International Nuclear Information System (INIS)
Gutbrod, F.
1992-12-01
The task of constructing a viable updating multigrid scheme for SU(2) lattice gauge theory is discussed in connection with the classical eigenvalue problem. For a nonlocal overrelaxation Monte Carlo update step, the central numerical problem is the search for the minimum of a quadratic approximation to the action under nonlocal constraints. Here approximate eigenfunctions are essential to reduce the numerical work, and these eigenfunctions are to be constructed with multigrid techniques. A simple implementation on asymmetric lattices is described, where the grids are restricted to 3-dimensional hyperplanes. The scheme is shown to be moderately successful in the early stages of the updating history (starting from a cold configuration). The main results of another, less asymmetric scheme are presented briefly. (orig.)
Directory of Open Access Journals (Sweden)
R. Sitharthan
2016-09-01
This paper aims at modelling an electronically coupled distributed energy resource with an adaptive protection scheme. The electronically coupled distributed energy resource is a microgrid framework formed by coupling the renewable energy source electronically. Further, the proposed adaptive protection scheme provides suitable protection to the microgrid for various fault conditions irrespective of the operating mode of the microgrid: namely, grid-connected mode and islanded mode. The outstanding aspect of the developed adaptive protection scheme is that it monitors the microgrid and instantly updates the relay fault current according to the variations that occur in the system. The proposed adaptive protection scheme also employs auto reclosures, through which it recovers faster from faults and thereby increases the consistency of the microgrid. The effectiveness of the proposed adaptive protection is studied through time-domain simulations carried out in the PSCAD/EMTDC software environment.
International Nuclear Information System (INIS)
Tosic, P.T.
2011-01-01
We study certain types of Cellular Automata (CA) viewed as an abstraction of large-scale Multi-Agent Systems (MAS). We argue that the classical CA model needs to be modified in several important respects in order to become a relevant and sufficiently general model for large-scale MAS, so that the generalized model can capture many important MAS properties at the level of agent ensembles and their long-term collective behavior patterns. We specifically focus on the issue of inter-agent communication in CA, and propose sequential cellular automata (SCA) as the first step, and genuinely Asynchronous Cellular Automata (ACA) as the ultimate deterministic CA-based abstract models for large-scale MAS made of simple reactive agents. We first formulate deterministic and nondeterministic versions of sequential CA, and then summarize some interesting configuration space properties (i.e., possible behaviors) of a restricted class of sequential CA. In particular, we compare and contrast those properties of sequential CA with the corresponding properties of the classical (that is, parallel and perfectly synchronous) CA with the same restricted class of update rules. We analytically demonstrate the failure of the studied sequential CA models to simulate all possible behaviors of perfectly synchronous parallel CA, even for a very restricted class of non-linear totalistic node update rules. The lesson learned is that the interleaving semantics of concurrency, when applied to sequential CA, is not refined enough to adequately capture the perfect synchrony of parallel CA updates. Last but not least, we outline what would be an appropriate CA-like abstraction for large-scale distributed computing insofar as the inter-agent communication model is concerned, and in that context we propose genuinely asynchronous CA. (author)
Nonsynchronous updating in the multiverse of cellular automata.
Reia, Sandro M; Kinouchi, Osame
2015-04-01
In this paper we study updating effects on cellular automata rule space. We consider a subset of 6144 order-3 automata from the space of 262144 bidimensional outer-totalistic rules. We compare synchronous to asynchronous and sequential updatings. Focusing on two automata, we discuss how update changes destroy typical structures of these rules. Besides, we show that the first-order phase transition in the multiverse of synchronous cellular automata, revealed with the use of a recently introduced control parameter, seems to be robust not only to changes in update schema but also to different initial densities.
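The contrast between synchronous and sequential updating that these rule-space studies (and the CA/MAS record above) examine can be made concrete on a 1D toy rule; the paper's rules are 2D outer-totalistic, but the mechanism is the same. In the sequential sweep, later cells already see their neighbours' new values, which is exactly the interleaving semantics that fails to reproduce synchronous behavior.

```python
def step_synchronous(state, rule):
    """All cells read the old configuration and update at once."""
    n = len(state)
    return [rule(state[(i - 1) % n], state[i], state[(i + 1) % n])
            for i in range(n)]

def step_sequential(state, rule, order=None):
    """Cells update one at a time in a fixed order; later cells see
    already-updated neighbour values (interleaving semantics)."""
    n = len(state)
    state = list(state)
    for i in order or range(n):
        state[i] = rule(state[(i - 1) % n], state[i], state[(i + 1) % n])
    return state

# Toy outer-totalistic rule: a cell becomes 1 iff exactly one neighbour is 1.
rule = lambda l, c, r: 1 if l + r == 1 else 0

s = [0, 0, 1, 0, 0]
print(step_synchronous(s, rule))  # -> [0, 1, 0, 1, 0]
print(step_sequential(s, rule))   # -> [0, 1, 1, 1, 1]  (different dynamics)
```

A random-order `step_sequential` (shuffling `order` each step) gives the asynchronous variant compared in the abstract.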
A Multigrid NLS-4DVar Data Assimilation Scheme with Advanced Research WRF (ARW)
Zhang, H.; Tian, X.
2017-12-01
The motions of the atmosphere have multiscale properties in space and/or time, and the background error covariance matrix (Β) should thus contain error information at different correlation scales. To obtain an optimal analysis, the multigrid three-dimensional variational data assimilation scheme is widely used to correct errors sequentially from large to small scales. However, introducing the multigrid technique into four-dimensional variational data assimilation is not easy, due to its strong dependence on the adjoint model, which is extremely costly to code, maintain, and update. In this study, the multigrid technique was introduced into the nonlinear least-squares four-dimensional variational assimilation (NLS-4DVar) method, an advanced four-dimensional ensemble-variational method that can be applied without invoking the adjoint models. The multigrid NLS-4DVar (MG-NLS-4DVar) scheme uses the number of grid points to control the scale, doubling this number when moving from a coarse to a finer grid. Furthermore, the MG-NLS-4DVar scheme not only retains the advantages of NLS-4DVar, but also sufficiently corrects multiscale errors to achieve a highly accurate analysis. The effectiveness and efficiency of the proposed MG-NLS-4DVar scheme were evaluated by several groups of observing system simulation experiments using the Advanced Research Weather Research and Forecasting Model. MG-NLS-4DVar outperformed NLS-4DVar, with a lower computational cost.
Li, Wei; Yang, Zhen; Hu, Haifeng
2014-01-01
Graphical models have been widely applied in solving distributed inference problems in wireless networks. In this paper, we formulate the cooperative localization problem in a mobile network as an inference problem on a factor graph. Using a sequential schedule of message updates, a sequential uniformly reweighted sum-product algorithm (SURW-SPA) is developed for mobile localization problems. The proposed algorithm combines the distributed nature of belief propagation (BP) with the improved p...
Connection Setup Signaling Scheme with Flooding-Based Path Searching for Diverse-Metric Network
Kikuta, Ko; Ishii, Daisuke; Okamoto, Satoru; Oki, Eiji; Yamanaka, Naoaki
Connection setup on various computer networks is now achieved by GMPLS. This technology is based on the source-routing approach, which requires the source node to store metric information for the entire network prior to computing a route. Thus, all metric information must be distributed to all network nodes and kept up to date. However, as metric information becomes more diverse and generalized, it is hard to keep all of it updated due to the huge update overhead. Emerging network services and applications require the network to support diverse metrics for achieving various communication qualities. Increasing the number of metrics supported by the network causes excessive processing of metric update messages. To reduce the number of metric update messages, another scheme is required. This paper proposes a connection setup scheme that uses flooding-based signaling rather than the distribution of metric information. The proposed scheme requires only the flooding of signaling messages carrying the requested metric information; no routing protocol is required. Evaluations confirm that the proposed scheme achieves connection establishment without excessive overhead. Our analysis shows that the proposed scheme greatly reduces the number of control messages compared to the conventional scheme, while their blocking probabilities are comparable.
Map updates in a dynamic Voronoi data structure
DEFF Research Database (Denmark)
Mioc, Darka; Antón Castro, Francesc/François; Gold, C. M.
2006-01-01
In this paper we are using local and sequential map updates in the Voronoi data structure, which allows us to automatically record each event and performed map update within the system. These map updates are executed through map construction commands that are composed of atomic actions (geometric algorithms for addition, deletion, and motion of spatial objects) on the dynamic Voronoi data structure. The formalization of map commands led to the development of a spatial language comprising a set of atomic operations or constructs on spatial primitives (points and lines), powerful enough to define...
International Nuclear Information System (INIS)
Zhang, Xiaole; Raskob, Wolfgang; Landman, Claudia; Trybushnyi, Dmytro; Li, Yu
2017-01-01
Highlights: • Sequentially reconstruct multi-nuclide emission using gamma dose rate measurements. • Incorporate a priori ratios of nuclides into the background error covariance matrix. • Sequentially augment and update the estimation and the background error covariance. • Suppress the generation of negative estimations for the sequential method. • Evaluate the new method with twin experiments based on the JRODOS system. - Abstract: In case of a nuclear accident, the source term is typically not known but is extremely important for assessing the consequences to the affected population. The assessment of the potential source term is therefore of uppermost importance for emergency response. A fully sequential method, derived from a regularized weighted least-squares problem, is proposed to reconstruct the emission and composition of a multiple-nuclide release using gamma dose rate measurements. The a priori nuclide ratios are incorporated into the background error covariance (BEC) matrix, which is dynamically augmented and sequentially updated. Negative estimations in the mathematical algorithm are suppressed by utilizing artificial zero-observations (with large uncertainties) to simultaneously update the state vector and the BEC. The method is evaluated by twin experiments based on the JRodos system. The results indicate that the new method successfully reconstructs the emission and its uncertainties. An accurate a priori ratio accelerates the analysis process and yields satisfactory results with only a limited number of measurements; otherwise, more measurements are needed to generate reasonable estimations. The suppression of negative estimations effectively improves the performance, especially in situations with poor a priori information, which are more prone to the generation of negative values.
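The sequential update and the artificial zero-observation trick described above can be sketched with a scalar-observation least-squares (Kalman-type) update; the two-nuclide numbers below are illustrative, and the dynamic augmentation of the state vector is omitted.

```python
def kalman_update(x, P, h, y, r):
    """One scalar-observation least-squares update: x holds the emission
    estimates, P the (symmetric) background error covariance, h the
    dose-rate sensitivity row, y the measured dose rate, r its variance."""
    n = len(x)
    Ph = [sum(P[i][j] * h[j] for j in range(n)) for i in range(n)]
    s = sum(h[i] * Ph[i] for i in range(n)) + r          # innovation variance
    k = [Ph[i] / s for i in range(n)]                    # gain
    innov = y - sum(h[i] * x[i] for i in range(n))
    x = [x[i] + k[i] * innov for i in range(n)]
    P = [[P[i][j] - k[i] * Ph[j] for j in range(n)] for i in range(n)]
    return x, P

def suppress_negative(x, P, var=1.0):
    """Sketch of the artificial zero-observation trick: assimilate a
    pseudo-measurement 'x_i = 0' (with variance `var`) for each negative
    component, pulling it toward zero while consistently updating P."""
    for i, v in enumerate(x):
        if v < 0:
            h = [0.0] * len(x); h[i] = 1.0
            x, P = kalman_update(x, P, h, 0.0, var)
    return x, P

x = [1.0, 1.0]                  # two nuclides; a priori ratio 1:1
P = [[1.0, 0.5], [0.5, 1.0]]    # off-diagonal term encodes the a priori ratio
x, P = kalman_update(x, P, h=[1.0, 1.0], y=4.0, r=0.1)
print(x)  # -> both estimates ~1.968, raised together by the correlation in P
```

The off-diagonal BEC entry is what makes a single dose-rate measurement update both nuclides coherently, which is the role of the a priori ratio in the abstract.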
Study of Cu and Pb partitioning in mine tailings using the Tessier sequential extraction scheme
Energy Technology Data Exchange (ETDEWEB)
Andrei, Mariana Lucia, E-mail: marianaluciaandrei@yahoo.com [National Institute for Research and Development of Isotopic and Molecular Technologies, 65-103 Donath, 400293 Cluj-Napoca (Romania); Babes-Bolyai University, Environmental Science and Engineering Faculty, 30 Fantanele, 400294, Cluj-Napoca (Romania); Senila, Marin; Hoaghia, Maria Alexandra; Levei, Erika-Andrea [INCDO-INOE 2000, Research Institute for Analytical Instrumentation, 67 Donath, 400293, Cluj-Napoca (Romania); Borodi, Gheorghe [National Institute for Research and Development of Isotopic and Molecular Technologies, 65-103 Donath, 400293 Cluj-Napoca (Romania)
2015-12-23
The Cu and Pb partitioning in nonferrous mine tailings was investigated using the Tessier sequential extraction scheme. The contents of Cu and Pb found in the five operationally defined fractions were determined by inductively coupled plasma optical emission spectrometry. The results showed different partitioning patterns for Cu and Pb in the studied tailings. The total Cu and Pb contents were higher in tailings from Brazesti than in those from Saliste, while the Cu contents in the first two fractions considered as mobile were comparable and the content of mobile Pb was the highest in Brazesti tailings. In the tailings from Saliste about 30% of Cu and 3% of Pb were found in exchangeable fraction, while in those from Brazesti no metals were found in the exchangeable fraction, but the percent of Cu and Pb found in the bound to carbonate fraction were high (20% and 26%, respectively). The highest Pb content was found in the residual fraction in Saliste tailings and in bound to Fe and Mn oxides fraction in Brazesti tailings, while the highest Cu content was found in the fraction bound to organic matter in Saliste tailings and in the residual fraction in Brazesti tailings. In case of tailings of Brazesti medium environmental risk was found both for Pb and Cu, while in case of Saliste tailings low risk for Pb and high risk for Cu were found.
An anomaly detection and isolation scheme with instance-based learning and sequential analysis
International Nuclear Information System (INIS)
Yoo, T. S.; Garcia, H. E.
2006-01-01
This paper presents an online anomaly detection and isolation technique that combines an instance-based learning method with a sequential change detection and isolation algorithm. The proposed method uses kernel density estimation techniques to build statistical models of the given empirical data (null hypothesis). The null hypothesis is associated with a set of alternative hypotheses modeling the abnormalities of the system. The decision procedure involves a sequential change detection and isolation algorithm. Notably, the proposed method enjoys asymptotic optimality, as the applied change detection and isolation algorithm is optimal in minimizing the worst mean detection/isolation delay for a given mean time before a false alarm or a false isolation. The applicability and performance of this methodology are illustrated with a redundant sensor data set. (authors)
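The two ingredients above compose naturally: a Gaussian KDE built from normal-behaviour data serves as the null model, and a sequential statistic accumulates evidence against it. The sketch below uses Page's CUSUM as a stand-in for the paper's change detection and isolation algorithm, with a single alternative hypothesis and made-up training data.

```python
import math

def kde_pdf(x, data, h=0.5):
    """Gaussian kernel density estimate built from empirical data."""
    c = 1.0 / (len(data) * h * math.sqrt(2 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data)

def cusum_detect(stream, p_null, p_alt, threshold=5.0):
    """Page's CUSUM: accumulate log-likelihood ratios of the alternative
    versus the null KDE model; alarm when the statistic crosses the
    threshold. Returns the 1-based alarm index, or None."""
    g = 0.0
    for n, x in enumerate(stream, 1):
        g = max(0.0, g + math.log(max(p_alt(x), 1e-300) / max(p_null(x), 1e-300)))
        if g > threshold:
            return n
    return None

normal_data = [-0.3, -0.1, 0.0, 0.1, 0.3]    # training data for the null model
faulty_data = [2.7, 2.9, 3.0, 3.1, 3.3]      # one modeled abnormality
p0 = lambda x: kde_pdf(x, normal_data)
p1 = lambda x: kde_pdf(x, faulty_data)
print(cusum_detect([0.0] * 10 + [3.0] * 5, p0, p1))  # -> 11 (first post-change sample)
```

Running one CUSUM per alternative hypothesis and reporting which one alarms first gives a rough analogue of the isolation step.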
International Nuclear Information System (INIS)
Herreweghe, Samuel van; Swennen, Rudy; Vandecasteele, Carlo; Cappuyns, Valerie
2003-01-01
Leaching experiments, a mineralogical survey and larger samples are preferred when arsenic is present as discrete mineral phases. - The availability, mobility, (phyto)toxicity and potential risk of contaminants are strongly affected by the manner of appearance of the elements, the so-called speciation. Operational fractionation methods like sequential extractions have long been applied to determine the solid-phase speciation of heavy metals, since direct determination of specific chemical compounds cannot always be easily achieved. The three-step sequential extraction scheme recommended by the BCR and two extraction schemes based on the phosphorus-like protocol proposed by Manful (1992, Occurrence and Ecochemical Behaviours of Arsenic in a Goldsmelter Impacted Area in Ghana, PhD dissertation, at the RUG) were applied to four standard reference materials (SRM) and to a batch of samples from industrially contaminated sites, heavily contaminated with arsenic and heavy metals. The SRM 2710 (Montana soil) was found to be the most useful reference material for metal (Mn, Cu, Zn, As, Cd and Pb) fractionation using the BCR sequential extraction procedure. Two sequential extraction schemes were developed and compared for arsenic with the aim of establishing better fractionation and recovery rates than the BCR scheme for this element in the SRM samples. The major part of the arsenic was released from the heavily contaminated samples after NaOH extraction. The inferior extraction variability and recovery in the heavily contaminated samples compared to the SRMs can mainly be attributed to subsample heterogeneity.
Sequential error concealment for video/images by weighted template matching
DEFF Research Database (Denmark)
Koloda, Jan; Østergaard, Jan; Jensen, Søren Holdt
2012-01-01
In this paper we propose a novel spatial error concealment algorithm for video and images based on convex optimization. Block-based coding schemes in packet loss environment are considered. Missing macro blocks are sequentially reconstructed by filling them with a weighted set of templates...
A Bayesian sequential processor approach to spectroscopic portal system decisions
Energy Technology Data Exchange (ETDEWEB)
Sale, K; Candy, J; Breitfeller, E; Guidry, B; Manatt, D; Gosnell, T; Chambers, D
2007-07-31
The development of faster, more reliable techniques to detect radioactive contraband in a portal-type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal, the development of a model-based Bayesian sequential data processor for the detection problem is discussed. In the sequential processor, each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data, rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, the physics and signal processing models, and the decision functions are discussed along with the first results of our research.
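The per-datum posterior update can be sketched with a deliberately simplified two-hypothesis model: interarrival times are exponential with a background count rate, or with background plus source. The rates, prior, and decision level below are illustrative, and the full parameter-space posterior of the paper is collapsed to a single presence/absence probability.

```python
import math

def sequential_bayes_portal(arrival_gaps, r_bg, r_src, prior=0.5, p_min=0.99):
    """Toy Bayesian sequential detector: each pulse interarrival time t
    updates the posterior odds that a source is present (Poisson rate
    r_bg + r_src) versus background only (rate r_bg). A detection is
    declared as soon as the posterior exceeds p_min, rather than after
    a fixed counting interval. Returns (alarm index, posterior)."""
    def exp_pdf(t, r):
        return r * math.exp(-r * t)
    odds = prior / (1 - prior)
    for n, t in enumerate(arrival_gaps, 1):
        odds *= exp_pdf(t, r_bg + r_src) / exp_pdf(t, r_bg)
        post = odds / (1 + odds)
        if post >= p_min:
            return n, post
    return None, odds / (1 + odds)

# Steady high-rate stream: 1 count every 1/30 s against a 10 counts/s background
gaps = [1.0 / 30.0] * 20
print(sequential_bayes_portal(gaps, r_bg=10.0, r_src=30.0))  # alarms at n = 12
```

The key property, that the alarm index depends on the data rather than on a fixed counting window, is preserved even in this reduced model.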
The composite sequential clustering technique for analysis of multispectral scanner data
Su, M. Y.
1972-01-01
The clustering technique consists of two parts: (1) a sequential statistical clustering which is essentially a sequential variance analysis, and (2) a generalized K-means clustering. In this composite clustering technique, the output of (1) is a set of initial clusters which are input to (2) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that by traditional supervised maximum likelihood classification techniques. The mathematical algorithms for the composite sequential clustering program and a detailed computer program description with job setup are given.
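The two-part composite can be sketched as follows; the distance threshold used to spawn new clusters in part (1) is a hypothetical stand-in for the sequential variance analysis, and part (2) is plain K-means seeded with those initial clusters.

```python
def sequential_clusters(points, d_max):
    """Part 1 sketch: scan the points once; start a new cluster whenever a
    point is farther than d_max from every existing cluster mean, else
    assign it to the nearest cluster and update that running mean."""
    means, counts = [], []
    for p in points:
        best, bi = None, -1
        for i, m in enumerate(means):
            d = sum((a - b) ** 2 for a, b in zip(p, m)) ** 0.5
            if best is None or d < best:
                best, bi = d, i
        if best is None or best > d_max:
            means.append(list(p)); counts.append(1)
        else:
            counts[bi] += 1
            k = counts[bi]
            means[bi] = [(m * (k - 1) + x) / k for m, x in zip(means[bi], p)]
    return means

def kmeans(points, means, iters=10):
    """Part 2: refine the initial clusters with standard K-means."""
    for _ in range(iters):
        groups = [[] for _ in means]
        for p in points:
            i = min(range(len(means)),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, means[i])))
            groups[i].append(p)
        means = [[sum(c) / len(g) for c in zip(*g)] if g else m
                 for g, m in zip(groups, means)]
    return means

pts = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
print(kmeans(pts, sequential_clusters(pts, d_max=1.0)))
# -> two cluster means, near (0.1, 0.05) and (5.05, 4.95)
```

Because part (1) fixes both the number of clusters and their seeds, the iterative part (2) only has to polish them, which is what makes the composite usable without supervision.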
Multicore-Optimized Wavefront Diamond Blocking for Optimizing Stencil Updates
Malas, T.; Hager, G.; Ltaief, Hatem; Stengel, H.; Wellein, G.; Keyes, David E.
2015-07-02
The importance of stencil-based algorithms in computational science has focused attention on optimized parallel implementations for multilevel cache-based processors. Temporal blocking schemes leverage the large bandwidth and low latency of caches to accelerate stencil updates and approach theoretical peak performance. A key ingredient is the reduction of data traffic across slow data paths, especially the main memory interface. In this work we combine the ideas of multicore wavefront temporal blocking and diamond tiling to arrive at stencil update schemes that show large reductions in memory pressure compared to existing approaches. The resulting schemes show performance advantages in bandwidth-starved situations, which are exacerbated by the high bytes per lattice update case of variable coefficients. Our thread groups concept provides a controllable trade-off between concurrency and memory usage, shifting the pressure between the memory interface and the CPU. We present performance results on a contemporary Intel processor.
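The core idea of temporal blocking, producing several time levels per pass over memory, can be illustrated in 1D with a trapezoidal variant that recomputes a small halo redundantly; the paper's wavefront diamond scheme avoids that redundant work, so this is only a correctness sketch, not a performance model.

```python
def naive_steps(a, t):
    """Reference: t sweeps of a 3-point averaging stencil (fixed ends)."""
    a = list(a)
    for _ in range(t):
        a = ([a[0]] + [(a[i - 1] + a[i] + a[i + 1]) / 3
                       for i in range(1, len(a) - 1)] + [a[-1]])
    return a

def blocked_steps(a, t, tile=4):
    """Temporal blocking sketch: apply the steps two at a time; each tile
    reads a 2-cell halo and recomputes it redundantly, so two time levels
    are produced per pass over its data."""
    a = list(a)
    steps = t
    while steps > 0:
        k = min(2, steps)           # time levels fused in this pass
        n = len(a)
        out = list(a)
        for s in range(1, n - 1, tile):
            e = min(s + tile, n - 1)
            lo, hi = max(0, s - k), min(n, e + k)   # halo of width k
            block = a[lo:hi]
            for _ in range(k):      # advance the tile k steps locally
                block = ([block[0]] + [(block[i - 1] + block[i] + block[i + 1]) / 3
                                       for i in range(1, len(block) - 1)] + [block[-1]])
            out[s:e] = block[s - lo:e - lo]         # keep only the valid interior
        a = out
        steps -= k
    return a
```

Each tile touches memory once per two time steps instead of once per step, which is the traffic reduction that wavefront/diamond tiling pushes much further without the halo recomputation.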
Multicore-Optimized Wavefront Diamond Blocking for Optimizing Stencil Updates
Malas, T.; Hager, G.; Ltaief, Hatem; Stengel, H.; Wellein, G.; Keyes, David E.
2015-01-01
The importance of stencil-based algorithms in computational science has focused attention on optimized parallel implementations for multilevel cache-based processors. Temporal blocking schemes leverage the large bandwidth and low latency of caches to accelerate stencil updates and approach theoretical peak performance. A key ingredient is the reduction of data traffic across slow data paths, especially the main memory interface. In this work we combine the ideas of multicore wavefront temporal blocking and diamond tiling to arrive at stencil update schemes that show large reductions in memory pressure compared to existing approaches. The resulting schemes show performance advantages in bandwidth-starved situations, which are exacerbated by the high bytes per lattice update case of variable coefficients. Our thread groups concept provides a controllable trade-off between concurrency and memory usage, shifting the pressure between the memory interface and the CPU. We present performance results on a contemporary Intel processor.
Systolic array processing of the sequential decoding algorithm
Chang, C. Y.; Yao, K.
1989-01-01
A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.
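As a software analogue of the systolic priority queue, a binary heap can drive the stack algorithm: the best partial path is repeatedly popped and extended. The sketch below is a toy best-path search, not the paper's VLSI design; the received word and branch metric are illustrative assumptions.

```python
import heapq

def stack_decode(received, depth, metric):
    """Toy stack-algorithm search: always extend the highest-metric
    partial path; heapq plays the role of the hardware priority queue.
    `metric(path)` scores a partial path against `received`."""
    # Python's heapq is a min-heap, so metrics are stored negated.
    heap = [(-metric(()), ())]
    while heap:
        neg_m, path = heapq.heappop(heap)
        if len(path) == depth:           # reached a leaf: best full path
            return list(path)
        for bit in (0, 1):               # extend the top path by one branch
            new = path + (bit,)
            heapq.heappush(heap, (-metric(new), new))

received = [1, 0, 1, 1]
# Branch metric: +1 per agreement with the received bit, -1 otherwise.
m = lambda p: sum(1 if b == r else -1 for b, r in zip(p, received))
print(stack_decode(received, 4, m))  # -> [1, 0, 1, 1]
```

Because the pop always returns the current best path, the decoder moves along the optimal path, which is the property the systolic priority queue preserves in hardware.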
Desiraju, Naveen Kumar; Doclo, Simon; Wolff, Tobias
2017-12-01
Acoustic echo cancellation (AEC) is a key speech enhancement technology in speech communication and voice-enabled devices. AEC systems employ adaptive filters to estimate the acoustic echo paths between the loudspeakers and the microphone(s). In applications involving surround sound, the computational complexity of an AEC system may become demanding due to the multiple loudspeaker channels and the necessity of using long filters in reverberant environments. In order to reduce the computational complexity, the approach of partially updating the AEC filters is considered in this paper. In particular, we investigate tap selection schemes which exploit the sparsity present in the loudspeaker channels for partially updating subband AEC filters. The potential for exploiting signal sparsity across three dimensions, namely time, frequency, and channels, is analyzed. A thorough analysis of different state-of-the-art tap selection schemes is performed and insights about their limitations are gained. A novel tap selection scheme is proposed which overcomes these limitations by exploiting signal sparsity while not ignoring any filters for update in the different subbands and channels. Extensive simulation results using both artificial as well as real-world multichannel signals show that the proposed tap selection scheme outperforms state-of-the-art tap selection schemes in terms of echo cancellation performance. In addition, it yields almost identical echo cancellation performance as compared to updating all filter taps at a significantly reduced computational cost.
Comments on the sequential probability ratio testing methods
Energy Technology Data Exchange (ETDEWEB)
Racz, A. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics
1996-07-01
In this paper the classical sequential probability ratio testing method (SPRT) is reconsidered. Every individual boundary-crossing event of the SPRT is regarded as a new piece of evidence about the problem under hypothesis testing. The Bayes method is applied for belief updating, i.e. for integrating these individual decisions. The procedure is recommended when the user (1) would like to be informed about the tested hypothesis continuously and (2) would like to reach a final conclusion with a high confidence level. (Author).
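The two ingredients named above can be sketched together: a classical SPRT run, and a Bayes update that treats each run's decision as one piece of evidence. The thresholds, error rates and Bernoulli hypotheses below are illustrative assumptions, not values from the paper.

```python
import math

def sprt(samples, llr, log_a, log_b):
    """Classical SPRT: accumulate log-likelihood ratios until a
    boundary is crossed; returns 'H1', 'H0', or None (undecided)."""
    s = 0.0
    for x in samples:
        s += llr(x)
        if s >= log_a:
            return "H1"
        if s <= log_b:
            return "H0"
    return None

def bayes_update(prior_h1, decision, alpha=0.05, beta=0.05):
    """Integrate one SPRT decision into the belief P(H1) by Bayes'
    rule; alpha and beta are the nominal error rates of a single run."""
    like_h1 = (1 - beta) if decision == "H1" else beta
    like_h0 = alpha if decision == "H1" else (1 - alpha)
    return like_h1 * prior_h1 / (like_h1 * prior_h1 + like_h0 * (1 - prior_h1))

# Bernoulli example: H1 says p = 0.7, H0 says p = 0.5.
llr = lambda x: math.log(0.7 / 0.5) if x else math.log(0.3 / 0.5)
decision = sprt([1] * 20, llr, math.log(19), -math.log(19))

belief = 0.5
for _ in range(3):          # three concordant runs drive the belief up
    belief = bayes_update(belief, "H1")
```

Each concordant decision multiplies the odds for H1 by (1-beta)/alpha, so the user's belief approaches certainty continuously, run by run.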
A node linkage approach for sequential pattern mining.
Directory of Open Access Journals (Sweden)
Osvaldo Navarro
Sequential Pattern Mining is a widely addressed problem in data mining, with applications such as analyzing Web usage, examining purchase behavior, and text mining, among others. Nevertheless, with the dramatic increase in data volume, current approaches prove inefficient when dealing with large input datasets, a large number of different symbols and low minimum supports. In this paper, we propose a new sequential pattern mining algorithm, which follows a pattern-growth scheme to discover sequential patterns. Unlike most pattern-growth algorithms, our approach does not build a data structure to represent the input dataset, but instead accesses the required sequences through pseudo-projection databases, achieving better runtime and reducing memory requirements. Our algorithm traverses the search space in a depth-first fashion and only preserves in memory a pattern node linkage and the pseudo-projections required for the branch being explored at the time. Experimental results show that our new approach, the Node Linkage Depth-First Traversal algorithm (NLDFT), has better performance and scalability in comparison with state-of-the-art algorithms.
Chung, Yun Won; Kwon, Jae Kyun; Park, Suwon
2014-01-01
One of the key technologies to support mobility of mobile station (MS) in mobile communication systems is location management which consists of location update and paging. In this paper, an improved movement-based location management scheme with two movement thresholds is proposed, considering bursty data traffic characteristics of packet-switched (PS) services. The analytical modeling for location update and paging signaling loads of the proposed scheme is developed thoroughly and the performance of the proposed scheme is compared with that of the conventional scheme. We show that the proposed scheme outperforms the conventional scheme in terms of total signaling load with an appropriate selection of movement thresholds.
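The counting logic behind a movement-based scheme with two thresholds can be sketched as follows. This is a toy model: the threshold values and the session-activity predicate are illustrative assumptions, and the paper's analytical traffic model is not reproduced.

```python
def movement_based_lu(cell_changes, d_idle, d_active, session_active):
    """Movement-based location update with two movement thresholds:
    count cell-boundary crossings and report the location when the
    counter reaches the threshold for the current traffic state
    (a smaller threshold while a packet session is active).
    Returns the number of location updates performed."""
    count, updates = 0, 0
    for i in range(cell_changes):
        count += 1
        threshold = d_active if session_active(i) else d_idle
        if count >= threshold:
            updates += 1   # location update message sent; counter reset
            count = 0
    return updates
```

A smaller threshold during bursty active periods keeps the paging area tight when a call or data burst is likely, while the larger idle threshold saves update signaling the rest of the time.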
Physics-based, Bayesian sequential detection method and system for radioactive contraband
Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E
2014-03-18
A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy), low-count radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing that represents a radionuclide as a decomposition into monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not; if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
Efficient Anonymous Authenticated Key Agreement Scheme for Wireless Body Area Networks
Directory of Open Access Journals (Sweden)
Tong Li
2017-01-01
Wireless body area networks (WBANs) are widely used in telemedicine, which can be utilized for real-time patient monitoring and home health-care. The sensor nodes in WBANs collect the client's physiological data and transmit it to the medical center. However, the clients' personal information is sensitive and there are many security threats in the extra-body communication. Therefore, the security and privacy of the client's physiological data need to be ensured. Many authentication protocols for WBANs have been proposed in recent years. However, the existing protocols fail to consider the key update phase. In this paper, we propose an efficient authenticated key agreement scheme for WBANs and add a key update phase to enhance the security of the proposed scheme. In addition, session keys are generated during the registration phase and kept secret, thus reducing the computation cost of the authentication phase. The performance analysis demonstrates that our scheme is more efficient than currently popular related schemes.
Handling data redundancy and update anomalies in fuzzy relational databases
International Nuclear Information System (INIS)
Chen, G.; Kerre, E.E.
1996-01-01
This paper discusses various data redundancy and update anomaly problems that may occur with fuzzy relational databases. In coping with these problems to avoid undesirable consequences when fuzzy databases are updated via data insertion, deletion and modification, a number of fuzzy normal forms (e.g., F1NF, 0-F2NF, 0-F3NF, 0-FBCNF) are used to guide the design of relation schemes such that partial and transitive fuzzy functional dependencies (FFDs) between relation attributes are restricted. Based upon FFDs and related concepts, particular attention is paid to 0-F3NF and 0-FBCNF, and to the corresponding decomposition algorithms. These algorithms not only produce relation schemes which are either in 0-F3NF or in 0-FBCNF, but also guarantee that the information (data content and FFDs) of the original schemes can be recovered from the resultant schemes.
Second-generation speed limit map updating applications
DEFF Research Database (Denmark)
Tradisauskas, Nerius; Agerholm, Niels; Juhl, Jens
2011-01-01
Intelligent Speed Adaptation is an Intelligent Transport System developed to significantly improve road safety by helping car drivers maintain appropriate driving behaviour. The system works in connection with the speed limits on the road network. It is thus essential to keep the speed limit map used in the Intelligent Speed Adaptation scheme updated. The traditional method of updating speed limit maps on the basis of long-interval observations needed to be replaced by a more efficient speed limit updating tool, and in a Danish Intelligent Speed Adaptation trial a web-based tool was therefore developed. It is concluded that the basis for map updating should preferably be a commercial map provider, such as Google Maps, and that the real challenge is to oblige road authorities to carry out updates.
TCAM-based High Speed Longest Prefix Matching with Fast Incremental Table Updates
DEFF Research Database (Denmark)
Rasmussen, Anders; Kragelund, A.; Berger, Michael Stübert
2013-01-01
The proposed longest prefix matching (LPM) scheme allows fast incremental table updates and consequently a higher throughput of the network search engine, since the TCAM down time caused by incremental updates is eliminated. The LPM scheme is described in HDL for FPGA implementation and compared to an existing scheme for customized CAM circuits. The paper shows that the proposed scheme can process...
A Temporal Domain Decomposition Algorithmic Scheme for Large-Scale Dynamic Traffic Assignment
Directory of Open Access Journals (Sweden)
Eric J. Nava
2012-03-01
This paper presents a temporal decomposition scheme for large spatial- and temporal-scale dynamic traffic assignment, in which the entire analysis period is divided into epochs. Vehicle assignment is performed sequentially in each epoch, thus improving the model scalability and confining the peak run-time memory requirement regardless of the total analysis period. A proposed self-tuning scheme adaptively searches for the run-time-optimal epoch setting during iterations regardless of the characteristics of the modeled network. Extensive numerical experiments confirm the promising performance of the proposed algorithmic schemes.
Felder, Thomas; Gambogi, William; Stika, Katherine; Yu, Bao-Ling; Bradley, Alex; Hu, Hongjie; Garreau-Iles, Lucie; Trout, T. John
2016-09-01
DuPont has been working steadily to develop accelerated backsheet tests that correlate with solar panels observations in the field. This report updates efforts in sequential testing. Single exposure tests are more commonly used and can be completed more quickly, and certain tests provide helpful predictions of certain backsheet failure modes. DuPont recommendations for single exposure tests are based on 25-year exposure levels for UV and humidity/temperature, and form a good basis for sequential test development. We recommend a sequential exposure of damp heat followed by UV then repetitions of thermal cycling and UVA. This sequence preserves 25-year exposure levels for humidity/temperature and UV, and correlates well with a large body of field observations. Measurements can be taken at intervals in the test, although the full test runs 10 months. A second, shorter sequential test based on damp heat and thermal cycling tests mechanical durability and correlates with loss of mechanical properties seen in the field. Ongoing work is directed toward shorter sequential tests that preserve good correlation to field data.
International Nuclear Information System (INIS)
Liu, Aihua; Thumm, Uwe
2015-01-01
We study two-photon double ionization of helium by short XUV pulses by numerically solving the time-dependent Schrödinger equation in full dimensionality within a finite-element discrete-variable-representation scheme. Based on the emission asymmetries in joint photoelectron angular distributions, we identify sequential and non-sequential contributions to two-photon double ionization for ultrashort pulses whose spectrum overlaps the sequential (ħω > 54.4 eV) and non-sequential (39.5 eV < ħω < 54.4 eV) double-ionization regimes. (paper)
Moving mesh generation with a sequential approach for solving PDEs
DEFF Research Database (Denmark)
In moving mesh methods, physical PDEs and a mesh equation derived from the equidistribution of an error metric (the so-called monitor function) are solved simultaneously, and meshes are dynamically concentrated on steep regions (Lim et al., 2001). However, the simultaneous solution procedure makes it difficult to obtain a simple and robust moving mesh algorithm in one or more dimensions. In this study, we propose a sequential solution procedure with two separate parts: a prediction step to obtain an approximate solution at the next time level (integration of the physical PDEs) and a regridding step at the next time level (mesh generation and solution interpolation). Convection terms, which appear in the physical PDEs and the mesh equation, are discretized by a WENO (Weighted Essentially Non-Oscillatory) scheme in conservative form. This sequential approach keeps the advantages of robustness and simplicity for the static...
Sequential segmental classification of feline congenital heart disease.
Scansen, Brian A; Schneider, Matthias; Bonagura, John D
2015-12-01
Feline congenital heart disease is less commonly encountered in veterinary medicine than acquired feline heart diseases such as cardiomyopathy. Understanding the wide spectrum of congenital cardiovascular disease demands a familiarity with a variety of lesions, occurring both in isolation and in combination, along with an appreciation of complex nomenclature and variable classification schemes. This review begins with an overview of congenital heart disease in the cat, including proposed etiologies and prevalence, examination approaches, and principles of therapy. Specific congenital defects are presented and organized by a sequential segmental classification with respect to their morphologic lesions. Highlights of diagnosis, treatment options, and prognosis are offered. It is hoped that this review will provide a framework for approaching congenital heart disease in the cat, and more broadly in other animal species based on the sequential segmental approach, which represents an adaptation of the common methodology used in children and adults with congenital heart disease. Copyright © 2015 Elsevier B.V. All rights reserved.
Fine-Grained Forward-Secure Signature Schemes without Random Oracles
DEFF Research Database (Denmark)
Camenisch, Jan; Koprowski, Maciej
2006-01-01
We propose the concept of fine-grained forward-secure signature schemes. Such signature schemes not only provide nonrepudiation w.r.t. past time periods the way ordinary forward-secure signature schemes do but, in addition, allow the signer to specify which signatures of the current time period remain valid when revoking the public key. This is an important advantage if the signer produces many signatures per time period, as otherwise the signer would have to re-issue those signatures (and possibly re-negotiate the respective messages) with a new key. Apart from a formal model for fine-grained forward-secure signature schemes, we present practical schemes and prove them secure under the strong RSA assumption only, i.e., we do not resort to the random oracle model to prove security. As a side result, we provide an ordinary forward-secure scheme whose key-update time is significantly smaller than...
A new parallelization algorithm of ocean model with explicit scheme
Fu, X. D.
2017-08-01
This paper focuses on the parallelization of an ocean model with an explicit scheme, one of the most commonly used schemes in the discretization of the governing equations of ocean models. The characteristic of an explicit scheme is that the calculation is simple and that the value at a given grid point depends only on values from the previous time step, which means that one does not need to solve sparse linear equations when solving the governing equations of the ocean model. Aiming at these characteristics, this paper designs a parallel algorithm, named halo-cell update, that requires only tiny modifications of the original ocean model and little change to its space and time steps, and parallelizes the model through a transmission module between sub-domains. The paper takes GRGO (Global Reduced Gravity Ocean model) as an example to implement the parallelization with halo update. The results demonstrate that higher speedups can be achieved at different problem sizes.
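A minimal, single-process illustration of the halo-cells idea for a 1-D explicit update; in the actual model the copies would be MPI messages between sub-domain processes, and the diffusion stencil here is only a stand-in for the ocean model's governing equations.

```python
import numpy as np

def halo_exchange(subdomains):
    """Refresh each sub-domain's ghost (halo) cells from its
    neighbours' interior boundary cells -- the transmission module
    (an MPI send/receive pair in a real parallel model)."""
    for i, u in enumerate(subdomains):
        if i > 0:
            u[0] = subdomains[i - 1][-2]    # left halo <- left neighbour
        if i < len(subdomains) - 1:
            u[-1] = subdomains[i + 1][1]    # right halo <- right neighbour

def explicit_step(u, c=0.25):
    """One explicit update on interior cells only; the halo cells
    supply the values owned by the neighbouring sub-domain."""
    u[1:-1] += c * (u[2:] - 2.0 * u[1:-1] + u[:-2])
```

With one halo exchange per time step, the split computation reproduces the serial single-domain result exactly, which is why the scheme needs only tiny modifications of the original model.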
On Lattice Sequential Decoding for Large MIMO Systems
Ali, Konpal S.
2014-04-01
Due to their ability to provide high data rates, Multiple-Input Multiple-Output (MIMO) wireless communication systems have become increasingly popular. Decoding of these systems with acceptable error performance is computationally very demanding. In the case of large overdetermined MIMO systems, we employ the Sequential Decoder using the Fano Algorithm. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity and vice versa for higher bias values. We attempt to bound the error by bounding the bias, using the minimum distance of a lattice. Also, a particular trend is observed with increasing SNR: a region of low complexity and high error, followed by a region of high complexity and error falling, and finally a region of low complexity and low error. For lower bias values, the stages of the trend are incurred at lower SNR than for higher bias values. This has the important implication that a low enough bias value, at low to moderate SNR, can result in low error and low complexity even for large MIMO systems. Our work is compared against Lattice Reduction (LR) aided Linear Decoders (LDs). Another impressive observation for low bias values that satisfy the error bound is that the Sequential Decoder's error is seen to fall with increasing system size, while it grows for the LR-aided LDs. For the case of large underdetermined MIMO systems, Sequential Decoding with two preprocessing schemes is proposed – 1) Minimum Mean Square Error Generalized Decision Feedback Equalization (MMSE-GDFE) preprocessing 2) MMSE-GDFE preprocessing, followed by Lattice Reduction and Greedy Ordering. Our work is compared against previous work which employs Sphere Decoding preprocessed using MMSE-GDFE, Lattice Reduction and Greedy Ordering. For the case of large systems, this results in high complexity and difficulty in choosing the sphere radius. Our schemes...
Khaki, M.
2017-07-06
The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering all of Australia. To evaluate the filters' performance and analyze their impact on model simulations, their estimates are validated by independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, i.e. the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), respectively improving the model groundwater estimation errors by 34% and 31% compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
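Of the building blocks named above, Systematic Resampling is the simplest to show in isolation. The sketch below is the standard textbook algorithm, not code from the study; the weights are illustrative.

```python
import numpy as np

def systematic_resampling(weights, rng):
    """Draw particle indices using a single uniform offset stratified
    over [0, 1): the low-variance resampling step of a particle filter."""
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0          # guard against floating-point round-off
    return np.searchsorted(cumulative, positions)
```

Because the sample positions are evenly spaced with one shared random offset, a particle with normalized weight w is copied between ⌊nw⌋ and ⌈nw⌉ times, which is what makes this resampler lower-variance than multinomial resampling.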
International Nuclear Information System (INIS)
Ding, Yi; Wang, Peng; Goel, Lalit; Billinton, Roy; Karki, Rajesh
2007-01-01
This paper presents a technique to evaluate reliability of a restructured power system with a bilateral market. The proposed technique is based on the combination of the reliability network equivalent and pseudo-sequential simulation approaches. The reliability network equivalent techniques have been implemented in the Monte Carlo simulation procedure to reduce the computational burden of the analysis. Pseudo-sequential simulation has been used to increase the computational efficiency of the non-sequential simulation method and to model the chronological aspects of market trading and system operation. Multi-state Markov models for generation and transmission systems are proposed and implemented in the simulation. A new load shedding scheme is proposed during generation inadequacy and network congestion to minimize the load curtailment. The IEEE reliability test system (RTS) is used to illustrate the technique. (author)
A double-loop adaptive sampling approach for sensitivity-free dynamic reliability analysis
International Nuclear Information System (INIS)
Wang, Zequn; Wang, Pingfeng
2015-01-01
Dynamic reliability measures the reliability of an engineered system considering time-variant operating conditions and component deterioration. Due to high computational costs, conducting dynamic reliability analysis at an early system design stage remains challenging. This paper presents a confidence-based meta-modeling approach, referred to as double-loop adaptive sampling (DLAS), for efficient sensitivity-free dynamic reliability analysis. The DLAS builds a Gaussian process (GP) model sequentially to approximate extreme system responses over time, so that Monte Carlo simulation (MCS) can be employed directly to estimate dynamic reliability. A generic confidence measure is developed to evaluate the accuracy of dynamic reliability estimation while using the MCS approach based on developed GP models. A double-loop adaptive sampling scheme is developed to efficiently update the GP model in a sequential manner, by considering system input variables and time concurrently in two sampling loops. The model updating process using the developed sampling scheme can be terminated once the user-defined confidence target is satisfied. The developed DLAS approach eliminates the computationally expensive sensitivity analysis process, thus substantially improving the efficiency of dynamic reliability analysis. Three case studies are used to demonstrate the efficacy of DLAS for dynamic reliability analysis. - Highlights: • Developed a novel adaptive sampling approach for dynamic reliability analysis. • Developed a new metric to quantify the accuracy of dynamic reliability estimation. • Developed a new sequential sampling scheme to efficiently update surrogate models. • Three case studies were used to demonstrate the efficacy of the new approach. • Case study results showed substantially enhanced efficiency with high accuracy.
Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.
2015-05-01
Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme on Intel MIC hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of Xeon Phi requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the original code on a Xeon Phi 7120P by a factor of 1.3x.
A chaotic cryptography scheme for generating short ciphertext
International Nuclear Information System (INIS)
Wong, Kwok-Wo; Ho, Sun-Wah; Yung, Ching-Ki
2003-01-01
Recently, we have proposed a chaotic cryptographic scheme based on iterating the logistic map and updating the look-up table dynamically. The encryption and decryption processes become faster as the number of iterations required is reduced. However, the length of the ciphertext is still at least twice that of the original message. This may result in huge ciphertext files and hence long transmission times when encrypting large multimedia files. In this Letter, we modify the chaotic cryptographic scheme proposed previously so as to reduce the length of the ciphertext to a level only slightly longer than that of the original message. Moreover, a session key is introduced in the cryptographic scheme so that the ciphertext length for a given message is not fixed.
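The core mechanics, iterating the logistic map and deriving a keystream from its states, can be sketched as below. This is a toy illustration of the idea only: chaotic stream ciphers of this form have known cryptanalytic weaknesses, and the Letter's dynamic look-up table and exact session-key construction are not reproduced here.

```python
def logistic_keystream(x0, r, n, skip=100):
    """Iterate x <- r*x*(1-x) and quantise each state to one byte;
    the first `skip` iterations discard the transient."""
    x = x0
    for _ in range(skip):
        x = r * x * (1.0 - x)
    out = bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def chaotic_xor(message, x0, r=3.99):
    """Encrypt or decrypt by XOR with the chaotic keystream; here the
    session key is simply the initial condition x0."""
    ks = logistic_keystream(x0, r, len(message))
    return bytes(m ^ k for m, k in zip(message, ks))
```

Note that XOR-style combination keeps the ciphertext exactly as long as the plaintext, illustrating the length property the Letter aims at (its actual construction differs and yields a slightly longer ciphertext).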
An Efficient V2I Authentication Scheme for VANETs
Directory of Open Access Journals (Sweden)
Yousheng Zhou
2018-01-01
The advent of intelligent transportation systems has a crucial impact on traffic safety and efficiency. To cope with security issues such as spoofing and forgery attacks, many authentication schemes for vehicular ad hoc networks (VANETs) have been developed, which are based on the hypothesis that secret keys are kept perfectly secure. However, key exposure is inevitable on account of the openness of the VANET environment. To address this problem, key insulation is introduced in our proposed scheme. With a helper device, vehicles can periodically update their own secret keys. In this way, forward and backward secrecy is achieved. In addition, elliptic curve operations have been integrated to improve performance. The random oracle model is adopted to prove the security of the proposed scheme, and an experiment has been conducted to demonstrate the comparison between our scheme and existing similar schemes.
DEFF Research Database (Denmark)
Kirkelund, Gunvor Marie; Ottosen, Lisbeth M.; Villumsen, Arne
2010-01-01
remediation time. A three-step sequential extraction scheme (BCR), with an extra residual step, was used to evaluate the heavy metal distribution in the sediments before and after electrodialytic remediation. Cu was mainly associated with the oxidisable phase of the sediment, both before and after remediation...
A Semi-Potential for Finite and Infinite Sequential Games (Extended Abstract)
Directory of Open Access Journals (Sweden)
Stéphane Le Roux
2016-09-01
We consider a dynamical approach to sequential games. By restricting the convertibility relation over strategy profiles, we obtain a semi-potential (in the sense of Kukushkin), and we show that in finite games the corresponding restriction of better-response dynamics will converge to a Nash equilibrium in quadratic time. Convergence happens on a per-player basis, and even in the presence of players with cyclic preferences, the players with acyclic preferences will stabilize. Thus, we obtain a candidate notion of rationality in the presence of irrational agents. Moreover, the restriction of convertibility can be justified by a conservative updating of beliefs about the other players' strategies. For infinite sequential games we can retain convergence to a Nash equilibrium (in some sense) if the preferences are given by continuous payoff functions, or obtain transfinite convergence if the outcome sets of the game are Delta^0_2 sets.
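For the finite case, plain better-response dynamics can be sketched as follows. This is a simplified illustration: it omits the paper's convertibility restriction and uses a toy coordination game, where the dynamics are guaranteed to converge because the game admits a potential.

```python
def better_response_dynamics(payoff, n_strats, start, max_steps=1000):
    """Repeatedly let some player switch to a strictly better strategy;
    a profile with no profitable deviation is a pure Nash equilibrium.
    payoff(p, profile) is player p's payoff at the given profile."""
    profile = list(start)
    for _ in range(max_steps):
        improved = False
        for p in range(len(profile)):
            for s in range(n_strats[p]):
                trial = profile[:]
                trial[p] = s
                if payoff(p, tuple(trial)) > payoff(p, tuple(profile)):
                    profile = trial
                    improved = True
                    break
        if not improved:
            return tuple(profile)   # pure Nash equilibrium reached
    return None                     # dynamics cycled (possible in general)

# Two-player coordination game: payoff 1 when the strategies match.
coord = lambda p, prof: 1 if prof[0] == prof[1] else 0
```

In general games better-response dynamics can cycle; the paper's contribution is a restriction of the convertibility relation under which convergence is recovered, and in quadratic time for finite games.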
A Comprehensive Study of Data Collection Schemes Using Mobile Sinks in Wireless Sensor Networks
Khan, Abdul Waheed; Abdullah, Abdul Hanan; Anisi, Mohammad Hossein; Bangash, Javed Iqbal
2014-01-01
Recently sink mobility has been exploited in numerous schemes to prolong the lifetime of wireless sensor networks (WSNs). Contrary to traditional WSNs where sensory data from sensor field is ultimately sent to a static sink, mobile sink-based approaches alleviate energy-holes issues thereby facilitating balanced energy consumption among nodes. In mobility scenarios, nodes need to keep track of the latest location of mobile sinks for data delivery. However, frequent propagation of sink topological updates undermines the energy conservation goal and therefore should be controlled. Furthermore, controlled propagation of sinks' topological updates affects the performance of routing strategies thereby increasing data delivery latency and reducing packet delivery ratios. This paper presents a taxonomy of various data collection/dissemination schemes that exploit sink mobility. Based on how sink mobility is exploited in the sensor field, we classify existing schemes into three classes, namely path constrained, path unconstrained, and controlled sink mobility-based schemes. We also organize existing schemes based on their primary goals and provide a comparative study to aid readers in selecting the appropriate scheme in accordance with their particular intended applications and network dynamics. Finally, we conclude our discussion with the identification of some unresolved issues in pursuit of data delivery to a mobile sink. PMID:24504107
Sequential detection of influenza epidemics by the Kolmogorov-Smirnov test
Directory of Open Access Journals (Sweden)
Closas Pau
2012-10-01
Abstract Background Influenza is a well known and common human respiratory infection, causing significant morbidity and mortality every year. Despite influenza variability, fast and reliable outbreak detection is required for health resource planning. Clinical health records, as published by the Diagnosticat database in Catalonia, host useful data for probabilistic detection of influenza outbreaks. Methods This paper proposes a statistical method to detect influenza epidemic activity. Non-epidemic incidence rates are modeled against the exponential distribution, and the maximum likelihood estimate for the decaying factor λ is calculated. The sequential detection algorithm updates the parameter as new data become available. Binary epidemic detection of weekly incidence rates is assessed by a Kolmogorov-Smirnov test on the absolute difference between the empirical distribution function and the cumulative distribution function of the estimated exponential distribution, with significance level 0 ≤ α ≤ 1. Results The main advantage with respect to other approaches is the adoption of a statistically meaningful test, which provides an indicator of epidemic activity with an associated probability. The detection algorithm was initiated with parameter λ0 = 3.8617, estimated from the training sequence (corresponding to non-epidemic incidence rates of the 2008-2009 influenza season), and sequentially updated. The Kolmogorov-Smirnov test detected the following weeks as epidemic for each influenza season: weeks 50-10 (2008-2009 season), weeks 38-50 (2009-2010 season), weeks 50-9 (2010-2011 season) and weeks 3 to 12 for the current 2011-2012 season. Conclusions Real medical data was used to assess the validity of the approach, as well as to construct a realistic statistical model of weekly influenza incidence rates in non-epidemic periods. For the tested data, the results confirmed the ability of the algorithm to detect the start and the end of epidemic periods. In general, the proposed test could
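The update-and-test loop described in the Methods can be sketched in pure Python. The function names and the epidemic-flagging step are illustrative assumptions, not taken from the paper; the λ update is the standard exponential MLE (the reciprocal of the sample mean).

```python
import math

def sequential_update(rates, new_rate):
    """Append a new non-epidemic weekly incidence rate and
    re-estimate the exponential decay factor lambda by MLE."""
    rates.append(new_rate)
    return len(rates) / sum(rates)   # MLE for Exp(lambda): 1 / sample mean

def ks_statistic(rates, lam):
    """One-sample Kolmogorov-Smirnov distance between the empirical
    distribution of the rates and the fitted Exp(lam) CDF."""
    xs = sorted(rates)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = 1.0 - math.exp(-lam * x)
        # the empirical CDF jumps from i/n to (i+1)/n at x
        d = max(d, abs(cdf - i / n), abs(cdf - (i + 1) / n))
    return d
```

A week would then be flagged as epidemic when the statistic for its incidence exceeds the critical value at the chosen significance level α.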
Update of the NNLO PDFs in the 3-, 4- and 5-flavour schemes
International Nuclear Information System (INIS)
Alekhin, Sergey; Bluemlein, Johannes; Moch, Sven-Olaf
2010-07-01
We report on an update of the next-to-next-to-leading order (NNLO) ABKM09 parton distribution functions. They are obtained with the use of the combined HERA collider Run I inclusive deep-inelastic scattering (DIS) data and with the partial NNLO corrections to heavy quark electro-production taken into account. The value of the strong coupling constant α_s^NNLO(M_Z) = 0.1147(12) is obtained. The standard candle cross sections for the Tevatron collider and the LHC estimated with the updated PDFs are provided. (orig.)
A Dual Key-Based Activation Scheme for Secure LoRaWAN
Directory of Open Access Journals (Sweden)
Jaehyu Kim
2017-01-01
With the advent of the Internet of Things (IoT) era, we are experiencing rapid technological progress. Billions of devices are connected to each other, and our homes, cities, hospitals, and schools are getting smarter and smarter. However, to realize the IoT, several challenging issues, such as connecting resource-constrained devices to the Internet, must be resolved. Recently introduced Low Power Wide Area Network (LPWAN) technologies have been devised to resolve this issue. Among many LPWAN candidates, Long Range (LoRa) is one of the most promising technologies. The Long Range Wide Area Network (LoRaWAN) is a communication protocol for LoRa that provides basic security mechanisms. However, some security loopholes exist in LoRaWAN's key update and session key generation. In this paper, we propose a dual key-based activation scheme for LoRaWAN. It resolves the problem of key updates not being fully supported. In addition, our scheme facilitates each layer in generating its own session key directly, which ensures the independence of all layers. Real-world experimental results compared with the original scheme show that the proposed scheme is feasible in terms of delay and battery consumption.
About efficient quasi-Newtonian schemes for variational calculations in nuclear structure
International Nuclear Information System (INIS)
Puddu, G.
2009-01-01
The Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newtonian scheme is known as the most efficient scheme for variational calculations of energies. This scheme is actually a member of a one-parameter family of variational methods, known as the Broyden β-family. In some applications to light nuclei, using microscopically derived effective Hamiltonians starting from accurate nucleon-nucleon potentials, we actually found other members of the same family which have better performance than the BFGS method. We also extend the Broyden β-family of algorithms to a two-parameter family of rank-three updates which has even better performance. (orig.)
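The one-parameter Broyden β-family mentioned above interpolates between the DFP and BFGS updates of the inverse-Hessian approximation. A NumPy sketch of the generic quasi-Newton formulas (the family parameter is written as `phi` here; this is the textbook construction, not the paper's rank-three extension):

```python
import numpy as np

def dfp_update(H, s, y):
    """DFP update of the inverse-Hessian approximation H,
    with s = x_new - x_old and y = grad_new - grad_old."""
    Hy = H @ y
    return H - np.outer(Hy, Hy) / (y @ Hy) + np.outer(s, s) / (y @ s)

def bfgs_update(H, s, y):
    """BFGS update of the inverse-Hessian approximation H."""
    rho = 1.0 / (y @ s)
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def broyden_family_update(H, s, y, phi):
    """One-parameter Broyden family: phi = 0 gives DFP, phi = 1 gives
    BFGS; other values give the remaining family members."""
    Hy = H @ y
    yHy = y @ Hy
    v = np.sqrt(yHy) * (s / (y @ s) - Hy / yHy)
    return dfp_update(H, s, y) + phi * np.outer(v, v)
```

The rank-one vector `v` is exactly the difference direction between the BFGS and DFP updates, which is why the family interpolates between them.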
DEFF Research Database (Denmark)
Wang, Jianhua; Hansen, Elo Harald
2005-01-01
Flow injection (FI) analysis, the first generation of this technique, was supplemented in the 1990s by its second generation, sequential injection (SI), and most recently by the third generation (i.e., Lab-on-Valve). The dominant role played by FI in automatic, on-line, sample pretreatments in ...
Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation.
Yuan, Lifeng; Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi
2016-01-01
After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t', n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations, such as a strict limit on the threshold values, a large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use a two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate that our schemes can adjust the threshold safely.
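The polynomial-interpolation machinery such schemes build on is classical (t, n) Shamir secret sharing. A minimal sketch over a prime field (this illustrates only the underlying construction, not the threshold-changing or two-variable one-way function parts of the proposed schemes; the modulus is an arbitrary choice):

```python
import random

P = 2_147_483_647  # a Mersenne prime used as the field modulus

def make_shares(secret, t, n):
    """Split secret into n shadows; any t of them recover it (Shamir)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any t shares reconstruct the degree-(t-1) polynomial and hence its constant term, the secret; fewer than t reveal nothing about it.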
Ruotolo, Francesco; Ruggiero, Gennaro; Vinciguerra, Michela; Iachini, Tina
2012-02-01
The aim of this research is to assess whether the crucial factor in determining the characteristics of blind people's spatial mental images is the visual impairment per se or the processing style imposed by the dominant perceptual modalities used to acquire spatial information, i.e. simultaneous (vision) vs sequential (kinaesthesis). Participants were asked to learn six positions in a large parking area via movement alone (congenitally blind, adventitiously blind, blindfolded sighted) or with vision plus movement (simultaneous sighted, sequential sighted), and then to mentally scan between positions in the path. The crucial manipulation concerned the sequential sighted group: their visual exploration was made sequential by putting visual obstacles within the pathway in such a way that they could not see the positions along the pathway simultaneously. The results revealed a significant time/distance linear relation in all tested groups. However, the linear component was lower in sequential sighted and blind participants, especially congenitally blind ones. Sequential sighted and congenitally blind participants showed an almost overlapping performance. Differences between groups became evident when mentally scanning farther distances (more than 5 m). This threshold effect may reveal processing limitations due to the need to integrate and update spatial information. Overall, the results suggest that the characteristics of the processing style, rather than the visual impairment per se, affect blind people's spatial mental images. Copyright © 2011 Elsevier B.V. All rights reserved.
Sequential Ensembles Tolerant to Synthetic Aperture Radar (SAR) Soil Moisture Retrieval Errors
Directory of Open Access Journals (Sweden)
Ju Hyoung Lee
2016-04-01
Due to complicated and undefined systematic errors in satellite observation, data assimilation integrating model states with satellite observations is more complicated than field measurement-based data assimilation at a local scale. In the case of Synthetic Aperture Radar (SAR) soil moisture, the systematic errors arising from uncertainties in roughness conditions are significant and unavoidable, but current satellite bias correction methods do not resolve the problems very well. Thus, apart from the bias correction process of satellite observation, it is important to assess the inherent capability of satellite data assimilation in such sub-optimal but more realistic observational error conditions. To this end, the time-evolving sequential ensembles of the Ensemble Kalman Filter (EnKF) are compared with the stationary ensemble of the Ensemble Optimal Interpolation (EnOI) scheme, which does not evolve the ensembles over time. As the sensitivity analysis demonstrated that the SAR retrievals are more sensitive to surface roughness than to measurement errors, one aim of this study is to monitor how data assimilation alters the effects of roughness on SAR soil moisture retrievals. In the results, both data assimilation schemes provided intermediate values between the SAR overestimation and the model underestimation. However, under the same SAR observational error conditions, the sequential ensembles approached a calibrated model, showing the lowest Root Mean Square Error (RMSE), while the stationary ensemble converged towards the SAR observations, exhibiting the highest RMSE. As compared to stationary ensembles, sequential ensembles have a better tolerance to SAR retrieval errors. Such inherent nature of the EnKF suggests an operational merit as a satellite data assimilation system, given the limitation of currently available bias correction methods.
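The contrast between evolving and stationary ensembles can be made concrete for a scalar state observed directly. This is a toy illustration under stated assumptions (direct observation, Gaussian errors, illustrative numbers), not the paper's SAR assimilation setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ens, obs, obs_var):
    """Stochastic EnKF analysis: the gain uses the *current* ensemble
    spread, and each member assimilates a perturbed observation."""
    bg_var = ens.var(ddof=1)
    gain = bg_var / (bg_var + obs_var)
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=ens.shape)
    return ens + gain * (perturbed - ens)

def enoi_update(ens, obs, bg_var, obs_var):
    """EnOI analysis: the background variance is prescribed once and
    held stationary; the ensemble itself is never evolved."""
    gain = bg_var / (bg_var + obs_var)
    return ens + gain * (obs - ens)
```

In the EnKF the gain shrinks automatically as the ensemble tightens around the truth, which is the mechanism behind the tolerance to observation error reported above; the EnOI gain is fixed by the prescribed background variance.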
Optimisation of beryllium-7 gamma analysis following BCR sequential extraction
Energy Technology Data Exchange (ETDEWEB)
Taylor, A. [Plymouth University, School of Geography, Earth and Environmental Sciences, 8 Kirkby Place, Plymouth PL4 8AA (United Kingdom); Blake, W.H., E-mail: wblake@plymouth.ac.uk [Plymouth University, School of Geography, Earth and Environmental Sciences, 8 Kirkby Place, Plymouth PL4 8AA (United Kingdom); Keith-Roach, M.J. [Plymouth University, School of Geography, Earth and Environmental Sciences, 8 Kirkby Place, Plymouth PL4 8AA (United Kingdom); Kemakta Konsult, Stockholm (Sweden)
2012-03-30
Graphical abstract: Showing decrease in analytical uncertainty using the optimal (combined preconcentrated sample extract) method. nv (no value) where extract activities were
Adaptive Online Sequential ELM for Concept Drift Tackling
Directory of Open Access Journals (Sweden)
Arif Budiman
2016-01-01
A machine learning method needs to adapt to changes in the environment over time. Such changes are known as concept drift. In this paper, we propose a concept drift tackling method as an enhancement of the Online Sequential Extreme Learning Machine (OS-ELM) and Constructive Enhancement OS-ELM (CEOS-ELM) by adding adaptive capability for classification and regression problems. The scheme is named adaptive OS-ELM (AOS-ELM). It is a single-classifier scheme that works well to handle real drift, virtual drift, and hybrid drift. The AOS-ELM also works well for sudden drift and recurrent context change types. The scheme is a simple unified method implemented in a few lines of code. We evaluated AOS-ELM on regression and classification problems by using public concept drift data sets (SEA and STAGGER) and other public data sets such as MNIST, USPS, and IDS. Experiments show that our method gives a higher kappa value compared to the multiclassifier ELM ensemble. Even though AOS-ELM in practice does not need the hidden nodes to increase, we address some issues related to increasing the hidden nodes, such as error conditions and rank values. We propose taking the rank of the pseudoinverse matrix as an indicator parameter to detect the “underfitting” condition.
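The sequential update at the heart of OS-ELM-style methods is a recursive least-squares step on the hidden-layer output matrix. A minimal sketch (the standard OS-ELM/RLS recursion; variable names are illustrative, and the random-feature hidden layer is omitted):

```python
import numpy as np

def oselm_init(H0, T0):
    """Initial batch: beta from ordinary least squares, P = (H0^T H0)^-1."""
    P = np.linalg.inv(H0.T @ H0)
    beta = P @ H0.T @ T0
    return beta, P

def oselm_update(beta, P, H, T):
    """Recursive least-squares update for a new data chunk (H, T);
    no old data is revisited, which is what makes the method online."""
    K = np.linalg.inv(np.eye(H.shape[0]) + H @ P @ H.T)
    P = P - P @ H.T @ K @ H @ P
    beta = beta + P @ H.T @ (T - H @ beta)
    return beta, P
```

Processing the data in chunks this way reproduces the batch least-squares solution exactly, which is the property the AOS-ELM enhancements build on.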
Gleason-Busch theorem for sequential measurements
Flatt, Kieran; Barnett, Stephen M.; Croke, Sarah
2017-12-01
Gleason's theorem is a statement that, given some reasonable assumptions, the Born rule used to calculate probabilities in quantum mechanics is essentially unique [A. M. Gleason, Indiana Univ. Math. J. 6, 885 (1957), 10.1512/iumj.1957.6.56050]. We show that Gleason's theorem contains within it also the structure of sequential measurements, and along with this the state update rule. We give a small set of axioms, which are physically motivated and analogous to those in Busch's proof of Gleason's theorem [P. Busch, Phys. Rev. Lett. 91, 120403 (2003), 10.1103/PhysRevLett.91.120403], from which the familiar Kraus operator form follows. An axiomatic approach has practical relevance as well as fundamental interest, in making clear those assumptions which underlie the security of quantum communication protocols. Interestingly, the two-time formalism is seen to arise naturally in this approach.
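The state-update rule recovered along with the Kraus operator form can be written compactly; a standard statement of it (notation assumed here, not quoted from the paper):

```latex
p(m) = \operatorname{Tr}\!\left(K_m \rho K_m^{\dagger}\right), \qquad
\rho \;\longmapsto\; \rho_m
  = \frac{K_m \rho K_m^{\dagger}}{\operatorname{Tr}\!\left(K_m \rho K_m^{\dagger}\right)},
\qquad \sum_m K_m^{\dagger} K_m = \mathbb{1},
```

so that a sequential measurement yielding outcome $m$ and then $n$ occurs with joint probability $\operatorname{Tr}\!\left(K_n K_m \rho K_m^{\dagger} K_n^{\dagger}\right)$, which is the structure the axiomatic argument recovers.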
Directory of Open Access Journals (Sweden)
Ufnalski Bartlomiej
2014-12-01
In this paper two different update schemes for the recently developed plug-in direct particle swarm repetitive controller (PDPSRC) are investigated and compared. The proposed approach employs the particle swarm optimizer (PSO) to solve in on-line mode a dynamic optimization problem (DOP) related to the control task in the constant-amplitude constant-frequency voltage-source inverter (CACF VSI) with an LC output filter. The effectiveness of synchronous and asynchronous update rules, both commonly used in static optimization problems (SOPs), is assessed and compared in the case of the PDPSRC. The performance of the controller, when synthesized using each of the update schemes, is studied numerically.
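The synchronous/asynchronous distinction can be made concrete with a minimal PSO sweep. This is a generic sketch of the two update rules (standard PSO with illustrative coefficients), not the PDPSRC controller itself:

```python
import random

def pso_step(pos, vel, pbest, gbest, f, sync=True, w=0.7, c1=1.5, c2=1.5):
    """One sweep over the swarm (positions as lists of floats).

    sync=True : gbest stays frozen during the sweep, refreshed afterwards.
    sync=False: gbest is refreshed immediately after each particle moves."""
    g = gbest
    for i in range(len(pos)):
        r1, r2 = random.random(), random.random()
        vel[i] = [w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
                  for v, x, pb, gb in zip(vel[i], pos[i], pbest[i], g)]
        pos[i] = [x + v for x, v in zip(pos[i], vel[i])]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i]
        if not sync and f(pos[i]) < f(g):
            g = pos[i]          # asynchronous: later particles see it at once
    if sync:
        g = min(pbest + [g], key=f)
    return g
```

In the asynchronous variant, information about a newly found best spreads within the same sweep, which is what distinguishes the two rules when the optimum itself drifts, as in the DOP above.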
OPTIMIZATION OF AGGREGATION AND SEQUENTIAL-PARALLEL EXECUTION MODES OF INTERSECTING OPERATION SETS
Directory of Open Access Journals (Sweden)
G. M. Levin
2016-01-01
A mathematical model and a method are proposed for the problem of optimizing the aggregation and the sequential-parallel execution modes of intersecting operation sets. The proposed method is based on a two-level decomposition scheme. At the top level the variant of aggregation for groups of operations is selected, and at the lower level the execution modes of operations are optimized for a fixed variant of aggregation.
Dancing Twins: Stellar Hierarchies That Formed Sequentially?
Tokovinin, Andrei
2018-04-01
This paper draws attention to the class of resolved triple stars with moderate ratios of inner and outer periods (possibly in a mean motion resonance) and nearly circular, mutually aligned orbits. Moreover, stars in the inner pair are twins with almost identical masses, while the mass sum of the inner pair is comparable to the mass of the outer component. Such systems could be formed either sequentially (inside-out) by disk fragmentation with subsequent accretion and migration, or by a cascade hierarchical fragmentation of a rotating cloud. Orbits of the outer and inner subsystems are computed or updated in four such hierarchies: LHS 1070 (GJ 2005, periods 77.6 and 17.25 years), HIP 9497 (80 and 14.4 years), HIP 25240 (1200 and 47.0 years), and HIP 78842 (131 and 10.5 years).
International Nuclear Information System (INIS)
Hung, Shih-Yu; Shen, Ming-Ho; Chang, Ying-Pin
2009-01-01
The sequential neural-network approximation and orthogonal array (SNAOA) approach was used in this study to shorten the cooling time for the rapid cooling process such that the normalized maximum resolved stress in the silicon wafer always remained below one. An orthogonal array was first conducted to obtain the initial solution set. The initial solution set was treated as the initial training sample. Next, a back-propagation sequential neural network was trained to simulate the feasible domain and obtain the optimal parameter setting. The size of the training sample was greatly reduced due to the use of the orthogonal array. In addition, a restart strategy was incorporated into the SNAOA so that the search process has a better opportunity to reach a near-global optimum. In this work, we considered three different cooling control schemes during the rapid thermal process: (1) a downward axial gas flow cooling scheme; (2) an upward axial gas flow cooling scheme; (3) a dual axial gas flow cooling scheme. Based on the maximum shear stress failure criterion, other control factors such as flow rate, inlet diameter, outlet width, chamber height and chamber diameter were also examined with respect to cooling time. The results showed that the cooling time could be significantly reduced using the SNAOA approach.
Seghouane, Abd-Krim; Iqbal, Asif
2017-09-01
Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with the notion of temporal smoothness in the column direction. This prior information, which can be converted into a constraint of smoothness on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis by accounting for this prior information. These algorithms differ from the existing ones in their dictionary update stage. The steps of this stage are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms via the solution of a left regularized rank-one matrix approximation problem where temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Applications on synthetic data experiments and real fMRI data sets illustrating the performance of the proposed algorithms are provided.
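The dictionary-update stage described above is a rank-one fit computed power-method style. A bare sketch of the unregularized variant (the smoothness and basis-expansion regularizers of the proposed algorithms are omitted; names are illustrative):

```python
import numpy as np

def rank_one_dict_update(E, d0, n_iter=50):
    """Fit E ≈ d x^T by alternating updates (a power-method variant
    for the leading singular pair), as in sequential dictionary learning.

    E  : residual data matrix for the atom being updated.
    d0 : initial guess for the dictionary atom (kept unit-norm).
    """
    d = d0 / np.linalg.norm(d0)
    for _ in range(n_iter):
        x = E.T @ d                # code row for this atom
        d = E @ x
        d /= np.linalg.norm(d)     # atoms stay unit-norm
    return d, E.T @ d
```

The regularized algorithms modify the `d = E @ x` step so the atom is drawn toward a smooth basis expansion rather than the raw least-squares direction.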
Directory of Open Access Journals (Sweden)
Sheng Li
BACKGROUND: Holmium laser enucleation (HoLEP) in the surgical treatment of benign prostate hyperplasia (BPH) potentially offers advantages over transurethral resection of the prostate (TURP). METHODS: Published randomized controlled trials (RCTs) were identified from PubMed, EMBASE, Science Citation Index, and the Cochrane Library up to October 10, 2013 (updated on February 5, 2014). After methodological quality assessment and data extraction, meta-analysis was performed using STATA 12.0 and Trial Sequential Analysis (TSA) 0.9 software. RESULTS: Fifteen studies including 8 RCTs involving 855 patients met the criteria. The results of the meta-analysis showed that: (a) efficacy indicators: there was no significant difference in quality of life between the two groups (P>0.05), but compared with the TURP group, Qmax was better at 3 months and 12 months, PVR was less at 6 and 12 months, and IPSS was lower at 12 months in the HoLEP group; (b) safety indicators: compared with TURP, HoLEP had less blood transfusion (RR 0.17, 95% CI 0.06 to 0.47), but there was no significant difference in early and late postoperative complications (P>0.05); and (c) perioperative indicators: HoLEP was associated with longer operation time (WMD 14.19 min, 95% CI 6.30 to 22.08 min), shorter catheterization time (WMD -19.97 h, 95% CI -24.24 to -15.70 h) and shorter hospital stay (WMD -25.25 h, 95% CI -29.81 to -20.68 h). CONCLUSIONS: In conventional meta-analyses, there is no clinically relevant difference in early and late postoperative complications between the two techniques, but HoLEP is preferable due to its advantage in curative effect, lower blood transfusion rate, and shorter catheterization time and hospital stay. However, trial sequential analysis does not allow us to draw any solid conclusion on the overall clinical benefit comparison between the two approaches. Further large, well-designed, multicentre/international RCTs with long-term data are needed, and the comparison between the two approaches remains open.
Butterfly Encryption Scheme for Resource-Constrained Wireless Networks
Directory of Open Access Journals (Sweden)
Raghav V. Sampangi; Srinivas Sampalli
2015-09-01
Resource-constrained wireless networks are emerging networks, such as Radio Frequency Identification (RFID) and Wireless Body Area Networks (WBAN), that might have restrictions on the available resources and the computations that can be performed. These emerging technologies are increasing in popularity, particularly in defence, anti-counterfeiting, logistics and medical applications, and in consumer applications with the growing popularity of the Internet of Things. With communication over wireless channels, it is essential to focus attention on securing data. In this paper, we present an encryption scheme called the Butterfly encryption scheme. We first discuss a seed update mechanism for pseudorandom number generators (PRNG), and employ this technique to generate keys and authentication parameters for resource-constrained wireless networks. Our scheme is lightweight, in that it requires fewer resources when implemented, and offers high security through increased unpredictability, owing to continuously changing parameters. Our work focuses on accomplishing high security through simplicity and reuse. We evaluate our encryption scheme using simulation, key similarity assessment, key sequence randomness assessment, protocol analysis and security analysis.
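The seed-update idea, continuously refreshing the PRNG seed so that derived keys keep changing, can be sketched with a generic one-way hash chain. This is an assumption-laden illustration, not the actual Butterfly construction; the domain-separation labels and the SHA-256 choice are placeholders:

```python
import hashlib

def update_seed(seed: bytes, nonce: bytes) -> bytes:
    """One-way seed refresh: the previous seed cannot be recovered
    from the new one, so past keys stay safe if a device is read out."""
    return hashlib.sha256(b"update" + seed + nonce).digest()

def derive_key(seed: bytes) -> bytes:
    """Session key derived from, but never equal to, the current seed."""
    return hashlib.sha256(b"key" + seed).digest()
```

Two endpoints that share the initial seed and agree on the nonce sequence derive the same continuously changing keys without ever transmitting them.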
The QKD network: model and routing scheme
Yang, Chao; Zhang, Hongqi; Su, Jinhai
2017-11-01
Quantum key distribution (QKD) technology can establish unconditionally secure keys between two communicating parties. Although this technology has some inherent constraints, such as the distance and point-to-point mode limits, building a QKD network with multiple point-to-point QKD devices can overcome these constraints. Considering the development level of current technology, the trust-relaying QKD network is the first choice for building a practical QKD network. However, previous research did not address a routing method for the trust-relaying QKD network in detail. This paper focuses on the routing issues, builds a model of the trust-relaying QKD network for easily analysing and understanding this network, and proposes a dynamical routing scheme for this network. From the viewpoint of designing a dynamical routing scheme in a classical network, the proposed scheme consists of three components: a Hello protocol that helps share the network topology information, a routing algorithm that selects a set of suitable paths and establishes the routing table, and a link state update mechanism that helps keep the routing table up to date. Experiments and evaluation demonstrate the validity and effectiveness of the proposed routing scheme.
Al Jarro, Ahmed; Salem, Mohamed; Bagci, Hakan; Benson, Trevor; Sewell, Phillip D.; Vuković, Ana
2012-11-01
An explicit marching-on-in-time (MOT) scheme for solving the time domain volume integral equation is presented. The proposed method achieves its stability by employing, at each time step, a corrector scheme, which updates/corrects fields computed by the explicit predictor scheme. The proposed method is computationally more efficient when compared to the existing filtering techniques used for the stabilization of explicit MOT schemes. Numerical results presented in this paper demonstrate that the proposed method maintains its stability even when applied to the analysis of electromagnetic wave interactions with electrically large structures meshed using approximately half a million discretization elements.
Global evaluation of ammonia bidirectional exchange and livestock diurnal variation schemes
Bidirectional air–surface exchange of ammonia (NH3) has been neglected in many air quality models. In this study, we implement the bidirectional exchange of NH3 in the GEOS-Chem global chemical transport model. We also introduce an updated diurnal variability scheme for NH3...
Sequential Power-Dependence Theory
Buskens, Vincent; Rijt, Arnout van de
2008-01-01
Existing methods for predicting resource divisions in laboratory exchange networks do not take into account the sequential nature of the experimental setting. We extend network exchange theory by considering sequential exchange. We prove that Sequential Power-Dependence Theory—unlike
Two approaches for sequential extraction of radionuclides in soils: batch and column methods
International Nuclear Information System (INIS)
Vidal, M.; Rauret, G.
1993-01-01
A three-step sequential extraction scheme designed by the Community Bureau of Reference (BCR) is applied to two types of soil (sandy and sandy-loam) which had previously been contaminated with a radionuclide aerosol containing 134Cs, 85Sr and 110mAg. This scheme is applied using both batch and column methods. The radionuclide distribution obtained with this scheme depends both on the method and on the soil type. Compared with the batch method, column extraction is inadvisable. Kinetic aspects seem to be important, especially in the first and third fractions. The radionuclide distribution shows that radiostrontium has high mobility, radiocaesium is highly retained by clay minerals, whereas Fe/Mn oxides and organic matter play an important role in radiosilver retention. (Author)
Status Review of Renewable and Energy Efficiency Support Schemes in Europe
Energy Technology Data Exchange (ETDEWEB)
NONE
2012-09-15
This document forms the latest update to the regular CEER Status Review of Renewable Energy and Energy Efficiency Support Schemes in Europe and builds on the previous CEER report C10-SDE-19-04a. The purpose of Status Review publications is to collect comparable data on RES support in Europe in order to provide policy-makers, regulators and industry participants with information on support schemes for electricity from renewable energy sources, by technology and type of instrument (e.g. Feed-in tariffs and Green Certificates). To collect this data, a questionnaire was circulated to CEER members in July 2012, to explore the renewable electricity support schemes currently in place in Member States across Europe.
Das, Ashok Kumar
2015-03-01
Recent advanced technology enables the telecare medicine information system (TMIS) for patients to gain health monitoring facilities at home and also to access medical services over the Internet of mobile networks. Several remote user authentication schemes have been proposed in the literature for TMIS. However, most of them are either insecure against various known attacks or inefficient. Recently, Tan proposed an efficient user anonymity preserving three-factor authentication scheme for TMIS. In this paper, we show that though Tan's scheme is efficient, it has several security drawbacks: (1) it fails to provide proper authentication during the login phase, (2) it fails to provide correct updating of a user's password and biometric during the password and biometric update phase, and (3) it fails to protect against replay attack. In addition, Tan's scheme lacks formal security analysis and verification. Later, Arshad and Nikooghadam also pointed out some security flaws in Tan's scheme and then presented an improvement on it. However, we show that Arshad and Nikooghadam's scheme is still insecure against the privileged-insider attack through the stolen smart-card attack, and it also lacks formal security analysis and verification. In order to withstand the security loopholes found in both Tan's scheme and Arshad and Nikooghadam's scheme, we aim to propose an effective and more secure three-factor remote user authentication scheme for TMIS. Our scheme provides the user anonymity property. Through rigorous informal and formal security analysis using random oracle models and the widely-accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool, we show that our scheme is secure against various known attacks, including the replay and man-in-the-middle attacks. Furthermore, our scheme is also efficient as compared to other related schemes.
IP lookup with low memory requirement and fast update
DEFF Research Database (Denmark)
Berger, Michael Stübert
2003-01-01
The paper presents an IP address lookup algorithm with low memory requirement and fast updates. The scheme, which is denoted prefix-tree, uses a combination of a trie and a tree search, which is efficient in memory usage because the tree contains exactly one node for each prefix in the routing...
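The trie-based idea can be illustrated with a plain binary trie doing longest-prefix match: storing exactly one node per inserted prefix keeps memory proportional to the routing table, and an update touches only a single node. A generic sketch (dict-based, addresses as bit strings; not the paper's exact prefix-tree structure):

```python
class PrefixTree:
    """Binary trie for IP longest-prefix match."""

    def __init__(self):
        self.root = {}

    def insert(self, prefix_bits: str, next_hop: str):
        node = self.root
        for b in prefix_bits:
            node = node.setdefault(b, {})
        node["hop"] = next_hop          # an update rewrites one node

    def lookup(self, addr_bits: str):
        node, best = self.root, None
        for b in addr_bits:
            if "hop" in node:
                best = node["hop"]      # remember longest match so far
            if b not in node:
                return best
            node = node[b]
        return node.get("hop", best)
```

Lookup walks at most one node per address bit and falls back to the longest prefix seen on the way down, which is the standard longest-prefix-match semantics of IP forwarding.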
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo
McDaniel, T.; D'Azevedo, E. F.; Li, Y. W.; Wong, K.; Kent, P. R. C.
2017-11-01
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.
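The delayed update amounts to replacing K rank-1 Sherman-Morrison steps with one block step via the Woodbury identity. A NumPy sketch of the linear-algebra core (generic formulas; the QMC bookkeeping around move acceptance is omitted):

```python
import numpy as np

def sherman_morrison(Ainv, u, v):
    """Rank-1 update: inverse of (A + u v^T) from A^{-1}, O(n^2) work
    dominated by matrix-vector products."""
    Au = Ainv @ u
    vA = v @ Ainv
    return Ainv - np.outer(Au, vA) / (1.0 + v @ Au)

def delayed_update(Ainv, U, V):
    """Apply K accumulated rank-1 updates en bloc (Woodbury identity):
    inverse of (A + U V^T) with U, V of shape (n, K). The work is now
    matrix-matrix, with much higher arithmetic intensity."""
    K = U.shape[1]
    S = np.linalg.inv(np.eye(K) + V.T @ Ainv @ U)
    return Ainv - Ainv @ U @ S @ V.T @ Ainv
```

Both routes produce the same inverse; the block form simply reorganizes the arithmetic into matrix-matrix products, which is where the reported speedups on CPUs and GPUs come from.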
Sequential decisions: a computational comparison of observational and reinforcement accounts.
Directory of Open Access Journals (Sweden)
Nazanin Mohammadi Sepahvand
Full Text Available Right brain damaged patients show impairments in sequential decision making tasks for which healthy people show no difficulty. We hypothesized that this difficulty could be due to the failure of right brain damaged patients to develop well-matched models of the world. Our motivation is the idea that, to navigate uncertainty, humans use models of the world to direct the decisions they make when interacting with their environment. The better the model is, the better their decisions are. To explore the model building and updating process in humans and the basis for impairment after brain injury, we used a computational model of non-stationary sequence learning. RELPH (Reinforcement and Entropy Learned Pruned Hypothesis space) was able to qualitatively and quantitatively reproduce the results of left and right brain damaged patient groups and healthy controls playing a sequential version of Rock, Paper, Scissors. Our results suggest that, in general, humans employ a sub-optimal reinforcement-based learning method rather than an objectively better statistical learning approach, and that differences between right brain damaged and healthy control groups can be explained by different exploration policies, rather than qualitatively different learning mechanisms.
Optimal update with multiple out-of-sequence measurements
Zhang, Shuo; Bar-Shalom, Yaakov
2011-06-01
In multisensor target tracking systems, receiving out-of-sequence measurements (OOSMs) from local sensors is a common situation. In the last decade many algorithms have been proposed to update a target state with an OOSM optimally or suboptimally. However, what one faces in the real world is multiple OOSMs, which, in general, arrive at the fusion center in arbitrary orders, e.g., in succession or interleaved with in-sequence measurements. A straightforward approach to this multi-OOSM problem is to apply a given OOSM algorithm sequentially; however, this simple solution does not guarantee an optimal update under the multi-OOSM scenario. The present paper discusses the differences between single-OOSM processing and multi-OOSM processing, and presents a general solution to the multi-OOSM problem, called the complete in-sequence information (CISI) approach. Given an OOSM, in addition to updating the target state at the most recent time, the CISI approach also updates the states between the OOSM time and the most recent time, including the state at the OOSM time. Three novel CISI methods are developed in this paper: the information filter-equivalent measurement (IF-EqM) method, the CISI fixed-point smoothing (CISI-FPS) method and the CISI fixed-interval smoothing (CISI-FIS) method. Numerical examples are given to show the optimality of these CISI methods under various multi-OOSM scenarios.
SMR-Based Adaptive Mobility Management Scheme in Hierarchical SIP Networks
Directory of Open Access Journals (Sweden)
KwangHee Choi
2014-10-01
Full Text Available In hierarchical SIP networks, paging is performed to reduce the location update signaling cost for mobility management. However, the cost efficiency largely depends on each mobile node's session-to-mobility ratio (SMR), which is defined as the ratio of the session arrival rate to the movement rate. In this paper, we propose an adaptive mobility management scheme that determines the policy according to each mobile node's SMR. Each mobile node determines whether paging is applied after comparing its SMR with a threshold; that is, paging is applied to a mobile node when its SMR is less than the threshold. Therefore, the proposed scheme provides a way to minimize signaling costs according to each mobile node's SMR. We find the optimal threshold through performance analysis, and show that the proposed scheme can reduce signaling cost compared to the existing SIP and paging schemes in hierarchical SIP networks.
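The per-node policy decision described above reduces to a single threshold comparison. A minimal sketch, with illustrative function and parameter names (the paper itself derives the optimal threshold analytically):

```python
def use_paging(session_arrival_rate, movement_rate, smr_threshold):
    """Decide whether paging should be applied for a mobile node.

    Paging is applied when the node's session-to-mobility ratio
    (SMR = session arrival rate / movement rate) is below the threshold;
    nodes with high SMR fall back to plain location updates.
    """
    smr = session_arrival_rate / movement_rate
    return smr < smr_threshold

# A node that moves often relative to incoming sessions (low SMR) is paged.
assert use_paging(session_arrival_rate=0.5, movement_rate=5.0, smr_threshold=1.0)
# A node with frequent sessions and little movement (high SMR) is not.
assert not use_paging(session_arrival_rate=4.0, movement_rate=2.0, smr_threshold=1.0)
```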
Simultaneous optimization of sequential IMRT plans
International Nuclear Information System (INIS)
Popple, Richard A.; Prellop, Perri B.; Spencer, Sharon A.; Santos, Jennifer F. de los; Duan, Jun; Fiveash, John B.; Brezovich, Ivan A.
2005-01-01
Radiotherapy often comprises two phases, in which irradiation of a volume at risk for microscopic disease is followed by a sequential dose escalation to a smaller volume either at a higher risk for microscopic disease or containing only gross disease. This technique is difficult to implement with intensity modulated radiotherapy, as the tolerance doses of critical structures must be respected over the sum of the two plans. Techniques that include an integrated boost have been proposed to address this problem. However, clinical experience with such techniques is limited, and many clinicians are uncomfortable prescribing nonconventional fractionation schemes. To solve this problem, we developed an optimization technique that simultaneously generates sequential initial and boost IMRT plans. We have developed an optimization tool that uses a commercial treatment planning system (TPS) and a high level programming language for technical computing. The tool uses the TPS to calculate the dose deposition coefficients (DDCs) for optimization. The DDCs were imported into external software and the treatment ports duplicated to create the boost plan. The initial, boost, and tolerance doses were specified and used to construct cost functions. The initial and boost plans were optimized simultaneously using a gradient search technique. Following optimization, the fluence maps were exported to the TPS for dose calculation. Seven patients treated using sequential techniques were selected from our clinical database. The initial and boost plans used to treat these patients were developed independently of each other by dividing the tolerance doses proportionally between the initial and boost plans and then iteratively optimizing the plans until a summation that met the treatment goals was obtained. We used the simultaneous optimization technique to generate plans that met the original planning goals. The coverage of the initial and boost target volumes in the simultaneously optimized
The Bacterial Sequential Markov Coalescent.
De Maio, Nicola; Wilson, Daniel J
2017-05-01
Bacteria can exchange and acquire new genetic material from other organisms directly and via the environment. This process, known as bacterial recombination, has a strong impact on the evolution of bacteria, for example, leading to the spread of antibiotic resistance across clades and species, and to the avoidance of clonal interference. Recombination hinders phylogenetic and transmission inference because it creates patterns of substitutions (homoplasies) inconsistent with the hypothesis of a single evolutionary tree. Bacterial recombination is typically modeled as statistically akin to gene conversion in eukaryotes, i.e., using the coalescent with gene conversion (CGC). However, this model can be very computationally demanding as it needs to account for the correlations of evolutionary histories of even distant loci. So, with the increasing popularity of whole genome sequencing, the need has emerged for a faster approach to model and simulate bacterial genome evolution. We present a new model that approximates the coalescent with gene conversion: the bacterial sequential Markov coalescent (BSMC). Our approach is based on a similar idea to the sequential Markov coalescent (SMC)-an approximation of the coalescent with crossover recombination. However, bacterial recombination poses hurdles to a sequential Markov approximation, as it leads to strong correlations and linkage disequilibrium across very distant sites in the genome. Our BSMC overcomes these difficulties, and shows a considerable reduction in computational demand compared to the exact CGC, and very similar patterns in simulated data. We implemented our BSMC model within new simulation software FastSimBac. In addition to the decreased computational demand compared to previous bacterial genome evolution simulators, FastSimBac provides more general options for evolutionary scenarios, allowing population structure with migration, speciation, population size changes, and recombination hotspots. FastSimBac is
International Nuclear Information System (INIS)
2007-05-01
WIMS-D (Winfrith Improved Multigroup Scheme-D) is the name of a family of software packages for reactor lattice calculations and is one of the few reactor lattice codes in the public domain and available on noncommercial terms. WIMSD-5B has recently been released from the OECD Nuclear Energy Agency Data Bank, and features major improvements in machine portability, as well as incorporating a few minor corrections. This version supersedes WIMS-D/4, which was released by the Winfrith Technology Centre in the United Kingdom for IBM machines and has been adapted for various other computer platforms in different laboratories. The main weakness of the WIMS-D package is the multigroup constants library, which is based on very old data. The relatively good performance of WIMS-D is attributed to a series of empirical adjustments to the multigroup data. However, the adjustments are not always justified on the basis of more accurate and recent experimental measurements. Following the release of new and revised evaluated nuclear data files, it was felt that the performance of WIMS-D could be improved by updating the associated library. The WIMS-D Library Update Project (WLUP) was initiated in the early 1990s with the support of the IAEA. This project consisted of voluntary contributions from a large number of participants. Several benchmarks for testing the library were identified and analysed, the WIMSR module of the NJOY code system was upgraded and the author of NJOY accepted the proposed updates for the official code system distribution. A detailed parametric study was performed to investigate the effects of various data processing input options on the integral results. In addition, the data processing methods for the main reactor materials were optimized. Several partially updated libraries were produced for testing purposes. The final stage of the WLUP was organized as a coordinated research project (CRP) in order to speed up completion of the fully updated library
Joint multiuser switched diversity and adaptive modulation schemes for spectrum sharing systems
Qaraqe, Marwa; Abdallah, Mohamed M.; Serpedin, Erchin; Alouini, Mohamed-Slim; Alnuweiri, Hussein M.
2012-12-01
In this paper, we develop multiuser access schemes for spectrum sharing systems whereby secondary users are allowed to share the spectrum with primary users under the condition that the interference observed at the primary receiver is below a predetermined threshold. In particular, we devise two schemes for selecting a user among those that satisfy the interference constraint and achieve an acceptable signal-to-noise ratio level. The first scheme selects the user that reports the best channel quality. In order to alleviate the high feedback load associated with the first scheme, we develop a second scheme based on the concept of switched diversity where the base station scans the users in a sequential manner until an acceptable user is found. In addition to these two selection schemes, we consider two power adaptive settings at the secondary users based on the amount of interference available at the secondary transmitter. In the On/Off power setting, users are allowed to transmit based on whether the interference constraint is met or not, while in the full power adaptive setting, the users are allowed to vary their transmission power to satisfy the interference constraint. Finally, we present numerical results for our proposed algorithms where we show the trade-off between the average spectral efficiency and average feedback load for both schemes. © 2012 IEEE.
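The switched-diversity selection described above can be sketched as a sequential scan that stops at the first acceptable user, trading selection quality for feedback load. Field and parameter names here are illustrative, not from the paper:

```python
def switched_selection(users, snr_min, interference_max):
    """Scan users sequentially; return the first acceptable one.

    Each entry is (user_id, snr, interference_at_primary). The base
    station stops at the first user meeting both the interference
    constraint and the minimum SNR, so the feedback count equals the
    number of users probed rather than the full user population.
    """
    feedback = 0
    for user_id, snr, interference in users:
        feedback += 1
        if interference <= interference_max and snr >= snr_min:
            return user_id, feedback
    return None, feedback

users = [("u1", 3.0, 0.9), ("u2", 8.0, 0.2), ("u3", 12.0, 0.1)]
# u1 violates the interference constraint; u2 is the first acceptable
# user, so only two feedback messages are needed even though u3 is better.
assert switched_selection(users, snr_min=5.0, interference_max=0.5) == ("u2", 2)
```

The best-channel scheme would instead poll all users and pick the maximum, which is the feedback/performance trade-off the abstract quantifies.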
Information/disturbance trade-off in single and sequential measurements on a qudit signal
Energy Technology Data Exchange (ETDEWEB)
Genoni, Marco G; Paris, Matteo G A [Dipartimento di Fisica, Universita degli studi di Milano (Italy)
2007-05-15
We address the trade-off between information gain and state disturbance in measurements performed on qudit systems and devise a class of optimal measurement schemes that saturate the ultimate bound imposed by quantum mechanics on estimation and transmission fidelities. The schemes are minimal, i.e. they involve a single additional probe qudit, and optimal, i.e. they provide the maximum amount of information compatible with a given level of disturbance. The performances of optimal single-user schemes in extracting information by sequential measurements in an N-user transmission line are also investigated, and the optimality is analyzed by explicit evaluation of fidelities. We found that the estimation fidelity does not depend on the number of users, neither for single-measure inference nor for a collective one, whereas the transmission fidelity decreases with N. The resulting trade-off is no longer optimal and degrades with increasing N. We found that optimality can be restored by an effective preparation of the probe states and present explicit calculations for the 2-user case.
Update on markets for forestry offsets
International Nuclear Information System (INIS)
Neeff, T.; Eichler, L.; Deecke, I.; Fehse, J.
2007-01-01
This guide is an update of the book 'Guidebook to Markets and Commercialization of CDM Forestry Projects'. The document provides information on the development of CDM methodologies, registered projects and markets since the publication of the first version. In addition it introduces the emerging non-Kyoto markets, presents a classification of the existing developments, describes each market including the buyers' preferences, and discusses the use of standards, quality criteria and transaction costs. We focus on markets for offsets from developing countries, rather than domestic offsets in developed countries. Section 1 is an introduction to the topic and an overview of the most recent developments. Sections 2 and 3 look at recent experiences and market developments for CDM reforestation projects. These sections are meant to be an update of the above-mentioned guidebook and thus refrain from an exhaustive description. Section 4 assesses non-Kyoto markets for carbon offsets from forestry projects. It includes a description of the various market schemes and types of buyers. The section attempts to provide the project developer with useful information for developing a project following buyers' requirements. Finally, section 5 puts the assessment of opportunities for forestry in the broader context of the larger carbon markets. The report then concludes with a comparison of advantages and disadvantages of the CDM and non-Kyoto schemes from the project developer's point of view
DEFF Research Database (Denmark)
Buanuam, Janya; Miró, Manuel; Hansen, Elo Harald
2006-01-01
Sequential injection microcolumn extraction (SI-MCE), based on the implementation of a soil-containing microcartridge as external reactor in a sequential injection network, is, for the first time, proposed for dynamic fractionation of macronutrients in environmental solids, as exemplified by the partitioning of inorganic phosphorus in agricultural soils. The on-line fractionation method capitalises on the accurate metering and sequential exposure of the various extractants to the solid sample by application of programmable flow as precisely coordinated by a syringe pump. Three different soil phase associations for phosphorus, that is, exchangeable, Al- and Fe-bound, and Ca-bound fractions, were elucidated by accommodation in the flow manifold of the three steps of the Hietjles-Litjkema (HL) scheme, involving the use of 1.0 M NH4Cl, 0.1 M NaOH and 0.5 M HCl, respectively, as sequential leaching reagents…
Modelling sequentially scored item responses
Akkermans, W.
2000-01-01
The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is
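The sequential (step) scoring model mentioned above can be sketched in a common textbook parameterisation, assumed here for illustration: each step k is passed with probability logistic(theta - b_k) conditional on all earlier steps, and scoring stops at the first failure:

```python
import math

def sequential_category_probs(theta, steps):
    """Category probabilities under a sequential (step) IRT model.

    theta: person ability; steps: list of step difficulties b_1..b_m.
    P(score = x) is the probability of passing the first x steps and
    failing step x+1; passing all m steps gives the maximum score.
    """
    p = [1 / (1 + math.exp(-(theta - b))) for b in steps]
    probs = []
    reach = 1.0  # probability of having passed all earlier steps
    for pk in p:
        probs.append(reach * (1 - pk))  # fail this step -> stop here
        reach *= pk
    probs.append(reach)  # all steps passed -> maximum score
    return probs

probs = sequential_category_probs(theta=0.5, steps=[-1.0, 0.0, 1.0])
assert abs(sum(probs) - 1.0) < 1e-12  # a proper distribution over scores 0..3
```

The partial credit and graded response models compared in the paper assign category probabilities differently (adjacent-category and cumulative logits, respectively), which is what the investigation contrasts.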
High-Order Multioperator Compact Schemes for Numerical Simulation of Unsteady Subsonic Airfoil Flow
Savel'ev, A. D.
2018-02-01
On the basis of high-order schemes, the viscous gas flow over the NACA2212 airfoil is numerically simulated at a free-stream Mach number of 0.3 and Reynolds numbers ranging from 10^3 to 10^7. Flow regimes sequentially varying due to variations in the free-stream viscosity are considered. Vortex structures developing on the airfoil surface are investigated, and a physical interpretation of this phenomenon is given.
Energy Technology Data Exchange (ETDEWEB)
Touma, Rony [Department of Computer Science & Mathematics, Lebanese American University, Beirut (Lebanon); Zeidan, Dia [School of Basic Sciences and Humanities, German Jordanian University, Amman (Jordan)
2016-06-08
In this paper we extend a central finite volume method on nonuniform grids to the case of drift-flux two-phase flow problems. The numerical base scheme is an unstaggered, non-oscillatory, second-order accurate finite volume scheme that evolves a piecewise linear numerical solution on a single grid and uses dual cells intermediately while updating the numerical solution to avoid the resolution of the Riemann problems arising at the cell interfaces. We then apply the numerical scheme and solve a classical drift-flux problem. The obtained results are in good agreement with corresponding ones appearing in the recent literature, thus confirming the potential of the proposed scheme.
Programming scheme based optimization of hybrid 4T-2R OxRAM NVSRAM
Majumdar, Swatilekha; Kingra, Sandeep Kaur; Suri, Manan
2017-09-01
In this paper, we present a novel single-cycle programming scheme for 4T-2R NVSRAM, exploiting pulse-engineered input signals. OxRAM devices based on a 3 nm thick bi-layer active switching oxide and a 90 nm CMOS technology node were used for all simulations. The cell design is implemented for real-time non-volatility rather than last-bit, or power-down non-volatility. Detailed analysis of the proposed single-cycle, parallel RRAM device programming scheme is presented in comparison to the two-cycle sequential RRAM programming used for similar 4T-2R NVSRAM bit-cells. The proposed single-cycle programming scheme coupled with the 4T-2R architecture leads to several benefits such as the possibility of unconventional transistor sizing, 50% lower latency, 20% improvement in SNM and ∼20× reduced energy requirements, when compared against the two-cycle programming approach.
Fade detector for the FODA-TDMA access scheme
Celandroni, Nedo; Ferro, Erina; Marzoli, Antonio
1989-05-01
The First-in-first-out Ordered Demand Assignment-Time Division Multiple Access (FODA-TDMA) satellite access scheme, designed for simultaneous transmission of real-time data, such as packetized voice and slow-scan images (stream traffic), and data coming from standard EDP applications, such as bulk data transfer, interactive computer access, mailing, database enquiry and updating (datagram traffic), is described. When deep fades are experienced due to rain attenuation, the system is able to counter the fade. Techniques to detect the fade are presented.
Deciphering Intrinsic Inter-subunit Couplings that Lead to Sequential Hydrolysis of F1-ATPase Ring
Dai, Liqiang; Flechsig, Holger; Yu, Jin
2017-10-01
The rotary sequential hydrolysis of metabolic machine F1-ATPase is a prominent feature to reveal high coordination among multiple chemical sites on the stator F1 ring, which also contributes to tight coupling between the chemical reaction and central γ-shaft rotation. High-speed AFM experiments discovered that the sequential hydrolysis was maintained on the F1 ring even in the absence of the γ rotor. To explore how the intrinsic sequential performance arises, we computationally investigated essential inter-subunit couplings on the hexameric ring of mitochondrial and bacterial F1. We first reproduced the sequential hydrolysis schemes as experimentally detected, by simulating tri-site ATP hydrolysis cycles on the F1 ring upon kinetically imposing inter-subunit couplings to substantially promote the hydrolysis products release. We found that it is key for certain ATP binding and hydrolysis events to facilitate the neighbor-site ADP and Pi release to support the sequential hydrolysis. The kinetically feasible couplings were then scrutinized through atomistic molecular dynamics simulations as well as coarse-grained simulations, in which we enforced targeted conformational changes for the ATP binding or hydrolysis. Notably, we detected the asymmetrical neighbor-site opening that would facilitate the ADP release upon the enforced ATP binding, and computationally captured the complete Pi release through charge hopping upon the enforced neighbor-site ATP hydrolysis. The ATP-hydrolysis triggered Pi release revealed in the current TMD simulation confirms a recent prediction made from statistical analyses of single molecule experimental data in regard to the role ATP hydrolysis plays. Our studies, therefore, elucidate both the concerted chemical kinetics and underlying structural dynamics of the inter-subunit couplings that lead to the rotary sequential hydrolysis of the F1 ring.
Biswas, Samir Kumar; Kanhirodan, Rajan; Vasu, Ram Mohan; Roy, Debasish
2011-08-01
We explore a pseudodynamic form of the quadratic parameter update equation for diffuse optical tomographic reconstruction from noisy data. A few explicit and implicit strategies for obtaining the parameter updates via a semianalytical integration of the pseudodynamic equations are proposed. Despite the ill-posedness of the inverse problem associated with diffuse optical tomography, adoption of the quadratic update scheme combined with the pseudotime integration appears not only to yield higher convergence, but also a muted sensitivity to the regularization parameters, which include the pseudotime step size for integration. These observations are validated through reconstructions with both numerically generated and experimentally acquired data.
Privacy-Preserving Billing Scheme against Free-Riders for Wireless Charging Electric Vehicles
Directory of Open Access Journals (Sweden)
Xingwen Zhao
2017-01-01
Full Text Available Recently, scientists in South Korea developed the on-line electric vehicle (OLEV), a kind of electric vehicle that can be charged wirelessly while it is moving on the road. The battery in the vehicle can absorb electric energy from the power transmitters buried under the road without any contact with them. Several billing schemes have been presented to offer privacy-preserving billing for OLEV owners. However, they did not consider the existence of free-riders. When some vehicles are being charged after showing their tokens, vehicles running ahead or behind can switch on their systems and drive close by for free charging. We describe a billing scheme against free-riders using several cryptographic tools. Each vehicle must authenticate with a compensation-prepaid token before it can drive on the wireless-charging-enabled road. The service provider can obtain compensation if it can prove that a certain vehicle is a free-rider. Our scheme is privacy-preserving, so the charging will not disclose the locations and routine routes of each vehicle. In fact, our scheme is a fast authentication scheme that anonymously authenticates each user on accessing a sequence of services. Thus, it can be applied to sequential data delivering services in future 5G systems.
Multi-agent sequential hypothesis testing
Kim, Kwang-Ki K.
2014-12-15
This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.
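A single-agent building block for the sequential hypothesis testing above is Wald's sequential probability ratio test (SPRT): accumulate the log-likelihood ratio sample by sample and stop as soon as it crosses a threshold. This sketch is purely illustrative (Bernoulli observations, standard Wald thresholds), not the paper's multi-agent formulation:

```python
import math

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: rate = p0 vs H1: rate = p1 on 0/1 samples.

    Stops as soon as the accumulated log-likelihood ratio crosses the
    Wald thresholds, which bound the false-positive rate by alpha and
    the false-negative rate by beta (approximately).
    """
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

# A run of ones quickly favours the high-rate hypothesis, well before
# all 20 samples are consumed; that early stopping is the point of SPRT.
decision, n_used = sprt([1] * 20, p0=0.2, p1=0.8)
assert decision == "H1" and n_used < 20
```

The multi-agent setting in the paper adds costs for measurements, delay, and disagreement on top of this per-agent stopping logic.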
Development of a sustainability reporting scheme for biofuels: A UK case study
International Nuclear Information System (INIS)
Chalmers, Jessica; Archer, Greg
2011-01-01
In 2008, the UK launched the first regulatory sustainability reporting scheme for biofuels. The development of the scheme, managed by the Low Carbon Vehicle Partnership for the Department for Transport, involved extensive stakeholder engagement. The scheme has significantly increased understanding by policy-makers, the biofuels industry and its supply chains on how to monitor and manage the sustainability risks of biofuels and increase their greenhouse-gas benefits. It is providing a practical model for similar developments globally. To receive certificates in order to meet volume obligations under the Renewable Transport Fuel Obligation (RTFO), suppliers must provide a monthly carbon and sustainability report on individual batches of renewable fuels they supply into the UK. The Renewable Fuels Agency produces aggregate monthly reports of overall performance and quarterly updates of individual supplier performance. This scheme is an important first step to assist the biofuels industry to demonstrate its environmental credentials and justify the subsidies received. The paper provides a case study of the development of the scheme, its initial outcomes and outstanding challenges.
Son, Seungsik; Jeong, Jongpil
2014-01-01
In this paper, a mobility-aware Dual Pointer Forwarding scheme (mDPF) is applied in Proxy Mobile IPv6 (PMIPv6) networks. The movement of a Mobile Node (MN) is classified as intra-domain or inter-domain handoff. When the MN moves, this scheme can reduce the high signaling overhead for intra-handoff/inter-handoff, because the Local Mobility Anchor (LMA) and Mobile Access Gateway (MAG) are connected by pointer chains. In other words, a handoff between the previously attached MAG (pMAG) and the newly attached MAG (nMAG) is treated as low mobility, while a handoff between the previously attached LMA (pLMA) and the newly attached LMA (nLMA) is treated as high mobility. Based on these mobility-aware binding updates, the overhead of packet delivery can be reduced. We also analyse the binding update cost and packet delivery cost for route optimization, based on a mathematical analytic model. Analytical results show that our mDPF outperforms PMIPv6 and the other pointer forwarding schemes, in terms of reducing the total cost of signaling.
Privacy-Preserving Outsourced Auditing Scheme for Dynamic Data Storage in Cloud
Directory of Open Access Journals (Sweden)
Tengfei Tu
2017-01-01
Full Text Available As information technology develops, cloud storage has been widely accepted for keeping volumes of data. A remote data auditing scheme enables a cloud user to confirm the integrity of her outsourced file via auditing against cloud storage, without downloading the file from the cloud. In view of the significant computational cost caused by the auditing process, the outsourced auditing model is proposed to let the user outsource the heavy auditing task to a third party auditor (TPA). Although the first outsourced auditing scheme can protect against a malicious TPA, it gives the TPA read access over the user's outsourced data, which is a potential risk for user data privacy. In this paper, we introduce the notion of User Focus for outsourced auditing, which emphasizes the idea of letting the user control her own data. Based on User Focus, our proposed scheme not only can prevent the user's data from leaking to the TPA without depending on data encryption, but also can avoid the use of an additional independent random source, which is very difficult to obtain in practice. We also describe how to make our scheme support dynamic updates. According to the security analysis and experimental evaluations, our proposed scheme is provably secure and significantly efficient.
Sequential charged particle reaction
International Nuclear Information System (INIS)
Hori, Jun-ichi; Ochiai, Kentaro; Sato, Satoshi; Yamauchi, Michinori; Nishitani, Takeo
2004-01-01
The effective cross sections for producing the sequential reaction products in F82H, pure vanadium and LiF with respect to 14.9-MeV neutrons were obtained and compared with estimated values. Since the sequential reactions depend on the behavior of the secondary charged particles, the effective cross sections depend on the target nuclei and the material composition. The effective cross sections were also estimated using the EAF libraries and compared with the experimental ones; there were large discrepancies between the estimated and experimental values. Additionally, we show the contribution of the sequential reactions to the induced activity and dose rate in the boundary region with water. From the present study, it has been clarified that the sequential reactions are of great importance for evaluating the dose rates around the surface of cooling pipes and the activated corrosion products. (author)
Dobolyi, David G; Dodson, Chad S
2013-12-01
Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PsycINFO Database Record (c) 2013 APA, all rights reserved.
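The signal-detection sensitivity measure underlying analyses like the one above can be sketched as follows; this is a generic d' computation, not the study's own analysis pipeline, and it assumes hit and false-alarm rates strictly between 0 and 1 (corrections for extreme rates are omitted):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity index d'.

    d' = z(hit rate) - z(false-alarm rate), where z is the inverse of
    the standard normal CDF. Larger d' means better discrimination of
    guilty-present from guilty-absent lineups, independent of the
    (conservative or liberal) identification criterion.
    """
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Equal hit and false-alarm rates mean no discrimination at all.
assert abs(d_prime(0.5, 0.5)) < 1e-12
# More hits with fewer false alarms yields positive sensitivity.
assert d_prime(0.8, 0.2) > 0
```

The ROC analyses in the paper go further by tracing hit and false-alarm rates across confidence levels rather than collapsing them into a single d'.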
Innovative process scheme for removal of organic matter, phosphorus and nitrogen from pig manure
DEFF Research Database (Denmark)
Karakashev, Dimitar Borisov; Schmidt, Jens Ejbye; Angelidaki, Irini
2008-01-01
blanket (UASB) reactor, partial oxidation), nitrogen (oxygen-limited autotrophic nitrification-denitrification, OLAND) and phosphorus (removal by precipitation as struvite, PRS) from pig manure were tested. Results showed that microfiltration was unsuitable for pig manure treatment. ... PRS-treated effluent negatively affected the further processing of the pig manure in the UASB reactor and was therefore not included in the final process flow scheme. In the final scheme (PIGMAN concept), the following successive process steps were combined: thermophilic anaerobic digestion ... with sequential separation by decanter centrifuge, post-digestion in a UASB reactor, partial oxidation and finally the OLAND process. This combination reduced the total organic, nitrogen and phosphorus contents by 96%, 88% and 81%, respectively.
Modeling two-phase ferroelectric composites by sequential laminates
International Nuclear Information System (INIS)
Idiart, Martín I
2014-01-01
Theoretical estimates are given for the overall dissipative response of two-phase ferroelectric composites with complex particulate microstructures under arbitrary loading histories. The ferroelectric behavior of the constituent phases is described via a stored energy density and a dissipation potential in accordance with the theory of generalized standard materials. An implicit time-discretization scheme is used to generate a variational representation of the overall response in terms of a single incremental potential. Estimates are then generated by constructing sequentially laminated microgeometries of particulate type whose overall incremental potential can be computed exactly. Because they are realizable, by construction, these estimates are guaranteed to conform with any material constraints, to satisfy all pertinent bounds and to exhibit the required convexity properties with no duality gap. Predictions for representative composite and porous systems are reported and discussed in the light of existing experimental data. (paper)
Optimisation of beryllium-7 gamma analysis following BCR sequential extraction
International Nuclear Information System (INIS)
Taylor, A.; Blake, W.H.; Keith-Roach, M.J.
2012-01-01
Graphical abstract: decrease in analytical uncertainty using the optimal (combined, preconcentrated sample extract) method; nv = no value. Highlights: ► Understanding of 7Be geochemical behaviour is required to support tracer studies. ► Sequential extraction with natural 7Be returns high analytical uncertainties. ► Preconcentrating extracts from a large sample mass improved analytical uncertainty. ► This optimised method can be readily employed in studies using low activity samples. - Abstract: The application of cosmogenic 7Be as a sediment tracer at the catchment scale requires an understanding of its geochemical associations in soil to underpin the assumption of irreversible adsorption. Sequential extractions offer a readily accessible means of determining the associations of 7Be with operationally defined soil phases. However, the subdivision of the low activity concentrations of fallout 7Be in soils into geochemical fractions can introduce high gamma-counting uncertainties. Extending the analysis time significantly is not always an option for batches of samples, owing to the ongoing decay of 7Be (t1/2 = 53.3 days). Here, three different methods of preparing and quantifying 7Be extracted using the optimised BCR three-step scheme have been evaluated and compared, with a focus on reducing analytical uncertainties. The optimal method involved carrying out the BCR extraction in triplicate, sub-sampling each set of triplicates for stable Be analysis before combining each set and coprecipitating the 7Be with metal oxyhydroxides to produce a thin source for gamma analysis. This method was applied to BCR extractions of natural 7Be in four agricultural soils. The approach gave good counting statistics from a 24 h analysis period (∼10% (2σ) where extract activity >40% of total activity) and generated statistically useful sequential extraction profiles. Total recoveries of 7Be fell between 84% and 112%. The stable Be data demonstrated that the ...
Directory of Open Access Journals (Sweden)
Chulhee Cho
Full Text Available Lately, we see that the Internet of things (IoT) is introduced in medical services for global connection among patients, sensors, and all nearby things. The principal purpose of this global connection is to provide context awareness for the purpose of bringing convenience to a patient's life and more effectively implementing clinical processes. In health care, monitoring of biosignals of a patient has to be continuously performed while the patient moves inside and outside the hospital. Also, to monitor the accurate location and biosignals of the patient, appropriate mobility management is necessary to maintain connection between the patient and the hospital network. In this paper, a binding update scheme on PMIPv6, which reduces signal traffic during location updates by a Virtual LMA (VLMA) on top of the original Local Mobility Anchor (LMA) domain, is proposed to reduce the total cost. If a Mobile Node (MN) moves to a Mobile Access Gateway (MAG)-located boundary of an adjacent LMA domain, the MN changes itself into a virtual mode, and this movement will be assumed to be a part of the VLMA domain. In the proposed scheme, MAGs eliminate global binding updates for MNs between LMA domains and significantly reduce the packet loss and latency by eliminating the handoff between LMAs. In conclusion, the performance analysis results show that the proposed scheme improves performance significantly versus PMIPv6 and HMIPv6 in terms of the binding update rate per user and average handoff latency.
Cho, Chulhee; Choi, Jae-Young; Jeong, Jongpil; Chung, Tai-Myoung
2017-01-01
The CLIC Multi-Drive Beam Scheme
Corsini, R
1998-01-01
The CLIC study of an e+ / e- linear collider in the TeV energy range is based on Two-Beam Acceleration (TBA) in which the RF power needed to accelerate the beam is extracted from high intensity relativistic electron beams, the so-called drive beams. The generation, acceleration and transport of the high-intensity drive beams in an efficient and reliable way constitute a challenging task. An overview of a potentially very effective scheme is presented. It is based on the generation of trains of short bunches, accelerated sequentially in low frequency superconducting cavities in a c.w. mode, stored in an isochronous ring and combined at high energy by funnelling before injection by sectors into the drive linac for RF power production. The various systems of the complex are discussed.
Sequential assessment of prey through the use of multiple sensory cues by an eavesdropping bat
Page, Rachel A.; Schnelle, Tanja; Kalko, Elisabeth K. V.; Bunge, Thomas; Bernal, Ximena E.
2012-06-01
Predators are often confronted with a broad diversity of potential prey. They rely on cues associated with prey quality and palatability to optimize their hunting success and to avoid consuming toxic prey. Here, we investigate a predator's ability to assess prey cues during capture, handling, and consumption when confronted with conflicting information about prey quality. We used advertisement calls of a preferred prey item (the túngara frog) to attract fringe-lipped bats, Trachops cirrhosus, then offered palatable, poisonous, and chemically manipulated anurans as prey. Advertisement calls elicited an attack response, but as bats approached, they used additional sensory cues in a sequential manner to update their information about prey size and palatability. While both palatable and poisonous small anurans were readily captured, large poisonous toads were approached but not contacted, suggesting the use of echolocation for assessment of prey size at close range. Once prey was captured, bats used chemical cues to make final, post-capture decisions about whether to consume the prey. Bats dropped small, poisonous toads as well as palatable frogs coated in toad toxins either immediately or shortly after capture. Our study suggests that echolocation and chemical cues obtained at close range supplement information obtained from acoustic cues at long range. Updating information about prey quality minimizes the occurrence of costly errors and may be advantageous in tracking temporal and spatial fluctuations of prey and exploiting novel food sources. These findings emphasize the sequential, complex nature of prey assessment that may allow exploratory and flexible hunting behaviors.
Sequentially optimized reconstruction strategy: A meta-strategy for perimetry testing.
Directory of Open Access Journals (Sweden)
Şerife Seda Kucur
Full Text Available Perimetry testing is an automated method to measure visual function and is heavily used for diagnosing ophthalmic and neurological conditions. Its working principle is to sequentially query a subject about perceived light using different brightness levels at different visual field locations. At a given location, this query-patient-feedback process is expected to converge at a perceived sensitivity, such that a shown stimulus intensity is observed and reported 50% of the time. Given this inherently time-intensive and noisy process, fast testing strategies are necessary in order to measure existing regions more effectively and reliably. In this work, we present a novel meta-strategy which relies on the correlative nature of visual field locations in order to strongly reduce the necessary number of locations that need to be examined. To do this, we sequentially determine locations that most effectively reduce visual field estimation errors in an initial training phase. We then exploit these locations at examination time and show that our approach can easily be combined with existing perceived sensitivity estimation schemes to speed up the examinations. Compared to state-of-the-art strategies, our approach shows marked performance gains with a better accuracy-speed trade-off regime for both mixed and sub-populations.
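The training-phase idea, greedily selecting the visual-field locations that best predict the remaining ones, can be sketched as below. This is an illustrative toy on synthetic low-rank data with least-squares reconstruction; the function name, data shapes, and selection criterion are assumptions, not the authors' implementation.

```python
import numpy as np

def select_locations(fields, n_select):
    """Greedily pick measurement locations that best reconstruct the
    full field by least squares on training data (toy sketch)."""
    n_subj, n_loc = fields.shape
    chosen = []
    for _ in range(n_select):
        best, best_err = None, np.inf
        for cand in range(n_loc):
            if cand in chosen:
                continue
            idx = chosen + [cand]
            X = fields[:, idx]
            # least-squares reconstruction of all locations from the subset
            coef, *_ = np.linalg.lstsq(X, fields, rcond=None)
            err = np.mean((X @ coef - fields) ** 2)
            if err < best_err:
                best, best_err = cand, err
        chosen.append(best)
    return chosen

rng = np.random.default_rng(1)
latent = rng.standard_normal((50, 3))
mixing = rng.standard_normal((3, 12))
fields = latent @ mixing            # 12 correlated "locations", rank 3
print(select_locations(fields, 3))  # 3 locations suffice for rank-3 data
```

Because the synthetic fields are rank 3, three well-chosen locations reconstruct the rest almost exactly, which mirrors how the correlative structure of real visual fields lets a few examined locations stand in for many.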
Effect of asynchronous updating on the stability of cellular automata
International Nuclear Information System (INIS)
Baetens, J.M.; Van der Weeën, P.; De Baets, B.
2012-01-01
Highlights: ► An upper bound on the Lyapunov exponent of asynchronously updated CA is established. ► The employed update method has repercussions on the stability of CAs. ► A decision on the employed update method should be taken with care. ► Substantial discrepancies arise between synchronously and asynchronously updated CA. ► Discrepancies between different asynchronous update schemes are less pronounced. - Abstract: Although cellular automata (CAs) were conceptualized as fully discrete mathematical models in which the states of all their spatial entities are updated simultaneously at every consecutive time step, i.e. synchronously, various CA-based models that rely on so-called asynchronous update methods have been constructed in order to overcome the limitations that are tied up with the classical way of evolving CAs. So far, only a few researchers have addressed the consequences of this way of updating on the evolved spatio-temporal patterns and the reachable stationary states. In this paper, we exploit Lyapunov exponents to determine to what extent the stability of the rules within a family of totalistic CAs is affected by the underlying update method. For that purpose, we derive an upper bound on the maximum Lyapunov exponent of asynchronously iterated CAs and show its validity, after which we present a comparative study between the Lyapunov exponents obtained for five different update methods, namely one synchronous method and four well-established asynchronous methods. It is found that the stability of CAs is seriously affected if one of the latter methods is employed, whereas the discrepancies arising between the different asynchronous methods are far less pronounced. Finally, we discuss the repercussions of our findings on the development of CA-based models.
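As a minimal illustration of why the update method matters (a toy one-dimensional totalistic rule, not the CA family studied in the paper), the same rule applied synchronously and asynchronously from one configuration can already produce different successors:

```python
import random

def step_sync(state, rule):
    # synchronous: every cell reads the same old configuration
    n = len(state)
    return [rule(state[(i - 1) % n], state[i], state[(i + 1) % n])
            for i in range(n)]

def step_async(state, rule, order):
    # asynchronous (sequential): cells updated one at a time in `order`,
    # each seeing the partially updated configuration
    n = len(state)
    state = state[:]
    for i in order:
        state[i] = rule(state[(i - 1) % n], state[i], state[(i + 1) % n])
    return state

# a totalistic rule: the new state depends only on the neighbourhood sum
rule = lambda left, c, right: 1 if left + c + right == 1 else 0

init = [0, 0, 1, 0, 0, 0, 1, 0]
print(step_sync(init, rule))                    # one synchronous step
print(step_async(init, rule, list(range(8))))   # fixed-order sequential step
random.seed(0)
print(step_async(init, rule, random.sample(range(8), 8)))  # random-order step
```

Already after one step the synchronous and sequential trajectories diverge; quantifying how such discrepancies grow is exactly what the Lyapunov-exponent analysis does.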
Acoustic classification schemes in Europe – Applicability for new, existing and renovated housing
DEFF Research Database (Denmark)
Rasmussen, Birgit
2016-01-01
The first acoustic classification schemes for dwellings were published in the 1990’es as national standards, with the main purpose of introducing the possibility of easily specifying stricter acoustic criteria for new-build housing than the minimum requirements found in building regulations. Since then, more countries have introduced acoustic classification schemes, the first countries have updated theirs several times, and some countries have introduced acoustic classification for other building categories as well. However, the classification schemes continued to focus on new buildings and have in general limited applicability for existing buildings from before the implementation of acoustic regulations, typically in the 1950’es or later. The paper summarizes the main characteristics, differences and similarities of the current national quality classes for housing in ten countries in Europe. In addition, the status and challenges ...
High-efficiency wavefunction updates for large scale Quantum Monte Carlo
Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed
Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order-of-magnitude speedups can be obtained on both multi-core CPUs and on GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
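The rank-1 scheme that the proposed rank-k delayed update generalizes can be sketched in a few lines of NumPy. This is an illustrative toy (matrix size, seed, and variable names are assumptions, not the paper's code): for a single-particle move that replaces row k of the Slater matrix, the acceptance probability needs only a determinant ratio, and the stored inverse is patched by a Sherman-Morrison update once the move is accepted.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2
A = rng.standard_normal((n, n))      # stand-in for a Slater matrix
Ainv = np.linalg.inv(A)              # kept up to date across moves
u = rng.standard_normal(n)           # proposed new row k (a "move")

# determinant ratio for replacing row k by u: det(A')/det(A) = u . Ainv[:, k]
ratio = u @ Ainv[:, k]

A_new = A.copy()
A_new[k] = u
assert np.isclose(ratio, np.linalg.det(A_new) / np.linalg.det(A))

# rank-1 Sherman-Morrison update of the inverse after accepting the move:
# A' = A + e_k v^T  with  v = u - (old row k)
v = u - A[k]
Ainv_new = Ainv - np.outer(Ainv[:, k], v @ Ainv) / (1.0 + v @ Ainv[:, k])
assert np.allclose(Ainv_new, np.linalg.inv(A_new))
```

The delayed rank-k variant accumulates several accepted v vectors and applies them en bloc as a matrix-matrix update, which is what raises the arithmetic intensity.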
Metzger, Christoph
2016-01-01
Due to demographic change, the fiscal sustainability of pension schemes financed on a pay-as-you-go (PAYGO) basis is of more interest for policy makers than ever. Unsustainable financing brings along a future burden to pensioners through pension cuts and/or to the working population through increasing contribution rates. With comparable data about the unfunded accrued-to-date pension liabilities of social security pension schemes soon being available due to a recent update of the internationa...
Remarks on sequential designs in risk assessment
International Nuclear Information System (INIS)
Seidenfeld, T.
1982-01-01
The special merits of sequential designs are reviewed in light of the particular challenges that attend risk assessment for human populations. The kinds of "statistical inference" are distinguished, and the design problem pursued is the clash between the Neyman-Pearson and Bayesian programs of sequential design. The value of sequential designs is discussed, and Neyman-Pearson versus Bayesian sequential designs are probed in particular. Finally, warnings regarding sequential designs are considered, especially in relation to utilitarianism.
Sequential lineup laps and eyewitness accuracy.
Steblay, Nancy K; Dietrich, Hannah L; Ryan, Shannon L; Raczynski, Jeanette L; James, Kali A
2011-08-01
Police practice of double-blind sequential lineups prompts a question about the efficacy of repeated viewings (laps) of the sequential lineup. Two laboratory experiments confirmed the presence of a sequential lap effect: an increase in witness lineup picks from the first to the second lap when the culprit was a stranger. The second lap produced more errors than correct identifications. In Experiment 2, lineup diagnosticity was significantly higher for sequential lineup procedures that employed a single lap versus double laps. Witnesses who elected to view a second lap made significantly more errors than witnesses who chose to stop after one lap or those who were required to view two laps. Witnesses with prior exposure to the culprit did not exhibit a sequential lap effect.
Directory of Open Access Journals (Sweden)
Shahid Iqbal
2012-10-01
Full Text Available A sequential solvent extraction scheme was employed for the extraction of antioxidant compounds from kenaf (Hibiscus cannabinus L.) seeds. The yield of extracts varied widely among the solvents and was highest for the hexane extract (16.6% on a dry weight basis), while the water extract exhibited the highest total phenolic content (18.78 mg GAE/g extract), total flavonoid content (2.49 mg RE/g extract), and antioxidant activities (p < 0.05). DPPH and hydroxyl radical scavenging, β-carotene bleaching, metal chelating activity, ferric thiocyanate and thiobarbituric acid reactive substances assays were employed to comprehensively assess the antioxidant potential of the different solvent extracts prepared sequentially. Besides water, the methanolic extract also exhibited high retardation of the formation of hydroperoxides and thiobarbituric acid reactive substances in the total antioxidant activity tests (p < 0.05). In conclusion, water and methanol extracts of kenaf seed may potentially serve as new sources of antioxidants for food and nutraceutical applications.
Robustness of the Sequential Lineup Advantage
Gronlund, Scott D.; Carlson, Curt A.; Dailey, Sarah B.; Goodsell, Charles A.
2009-01-01
A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup…
Energy Technology Data Exchange (ETDEWEB)
Li Shaoping, E-mail: shaoping_li_2000@yahoo.com [Western Digital Inc. 1250 Reliance Way, Fremont, CA 94539 (United States); Mendez, Hector; Terrill, Dave; Liu Feng; Bai, Daniel; Mao Sining [Western Digital Inc. 1250 Reliance Way, Fremont, CA 94539 (United States)
2012-02-15
A systematic experimental study of the reverse overwrite (ReOVW) process in the shingled recording scheme has been conducted in conjunction with characterization of the corresponding recording performance of recording heads with different geometries. It was found that there is no ReOVW reduction as the track density increases in a strict shingled recording fashion. Nonetheless, ReOVW does decrease slightly from 300 to 700 kpi in a so-called one-write shingled recording process. Overall, our data suggest that conventional magnetic recording technology might be able to extend all the way beyond an areal density of one Tbit/in² by using the shingled recording scheme. - Research Highlights: ► This paper discusses the most advanced recording scheme, i.e., the shingled recording process, for next-generation magnetic data storage devices. ► The paper shows that the write-ability of magnetic recording is sufficient in the shingled recording scheme even when the areal density is beyond 1.0 Tb/in². ► Our results also show that the writer's edge write-ability is essential for reducing noise during the write process in the shingled recording scheme. ► The paper also demonstrates that a multiple and sequential write process ensures normal erasure-ability in the shingled recording scheme. ► Our results also indicate that the noise in the write process can still be attributed to the hard-easy transition and imprint effect.
Multi-agent sequential hypothesis testing
Kim, Kwang-Ki K.; Shamma, Jeff S.
2014-01-01
incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well
Sequential stochastic optimization
Cairoli, Renzo
1996-01-01
Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. This book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-parameter martingales.
International Nuclear Information System (INIS)
Drahota, Petr; Grösslová, Zuzana; Kindlová, Helena
2014-01-01
Highlights: • Extraction efficiency and selectivity of phosphate and oxalate were tested. • Pure As-bearing mineral phases and mine wastes were used. • The reagents were found to be specific and selective for most major forms of As. • An optimized sequential extraction scheme for mine wastes has been developed. • It has been tested over model mineral mixtures and natural mine waste materials. - Abstract: An optimized sequential extraction (SE) scheme for mine waste materials has been developed and tested for As partitioning over a range of pure As-bearing mineral phases, their model mixtures, and natural mine waste materials. This optimized SE procedure employs five extraction steps: (1) nitrogen-purged deionized water, 10 h; (2) 0.01 M NH4H2PO4, 16 h; (3) 0.2 M NH4-oxalate in the dark, pH 3, 2 h; (4) 0.2 M NH4-oxalate, pH 3/80 °C, 4 h; (5) KClO3/HCl/HNO3 digestion. Selectivity and specificity tests on natural mine wastes and major pure As-bearing mineral phases showed that these As fractions appear to be primarily associated with: (1) readily soluble; (2) adsorbed; (3) amorphous and poorly-crystalline arsenates, oxides and hydroxosulfates of Fe; (4) well-crystalline arsenates, oxides, and hydroxosulfates of Fe; and (5) sulfides and arsenides. The specificity and selectivity of the extractants, and the reproducibility of the optimized SE procedure, were further verified with artificial model mineral mixtures and different natural mine waste materials. Partitioning data for extraction steps 3, 4, and 5 showed good agreement with those calculated for the model mineral mixtures (<15% difference), as well as with those expected in different natural mine waste materials. The sum of the As recovered in the different extractant pools was not significantly different (89–112%) from the results of acid digestion. This suggests that the optimized SE scheme can reliably be employed for As partitioning in mine waste materials.
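For readers implementing the mass-balance check described above, here is a hypothetical sketch (the step labels and all activity numbers are invented for illustration, not data from the study) of comparing the summed fraction recoveries against a single acid digestion:

```python
# Hypothetical encoding of a five-step SE scheme like the one above;
# the keys paraphrase the step targets, the values are made-up amounts.
steps = {
    "readily soluble":          1.2,
    "adsorbed":                 3.4,
    "amorphous Fe phases":      7.9,
    "crystalline Fe phases":    2.2,
    "sulfides/arsenides":       0.8,
}

def recovery(extracted, total_digest):
    """Mass balance: sum of fractions as a percentage of one acid digestion."""
    return 100.0 * sum(extracted.values()) / total_digest

print(f"{recovery(steps, 16.1):.1f}%")  # should fall near the 89-112% window
```

A recovery far outside that window would flag either an unselective extractant or a phase not covered by the scheme.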
Christofides, Stelios; Isidoro, Jorge; Pesznyak, Csilla; Bumbure, Lada; Cremers, Florian; Schmidt, Werner F O
2016-01-01
This EFOMP Policy Statement is an update of Policy Statement No. 6, first published in 1994. The present version takes into account European Union Parliament and Council Directive 2013/55/EU, which amends Directive 2005/36/EU on the recognition of professional qualifications, and European Union Council Directive 2013/59/EURATOM, laying down the basic safety standards for protection against the dangers arising from exposure to ionising radiation. The European Commission Radiation Protection Report No. 174, Guidelines on Medical Physics Expert, and EFOMP Policy Statement No. 12.1, Recommendations on Medical Physics Education and Training in Europe 2014, are also taken into consideration. The EFOMP National Member Organisations are encouraged to update their Medical Physics registration schemes where these exist, or to develop registration schemes, taking into account the present version of this EFOMP Policy Statement (Policy Statement No. 6.1, "Recommended Guidelines on National Registration Schemes for Medical Physicists"). Copyright © 2016. Published by Elsevier Ltd.
Additive operator-difference schemes splitting schemes
Vabishchevich, Petr N
2013-01-01
Applied mathematical modeling is concerned with solving unsteady problems. This book shows how to construct additive difference schemes to solve approximately unsteady multi-dimensional problems for PDEs. Two classes of schemes are highlighted: methods of splitting with respect to spatial variables (alternating direction methods) and schemes of splitting into physical processes. Regionally additive schemes (domain decomposition methods) and unconditionally stable additive schemes of multi-component splitting are also considered for evolutionary equations of first and second order, as well as for systems of equations.
Exploring the sequential lineup advantage using WITNESS.
Goodsell, Charles A; Gronlund, Scott D; Carlson, Curt A
2010-12-01
Advocates claim that the sequential lineup is an improvement over simultaneous lineup procedures, but no formal (quantitatively specified) explanation exists for why it is better. The computational model WITNESS (Clark, Appl Cogn Psychol 17:629-654, 2003) was used to develop theoretical explanations for the sequential lineup advantage. In its current form, WITNESS produced a sequential advantage only by pairing conservative sequential choosing with liberal simultaneous choosing. However, this combination failed to approximate four extant experiments that exhibited large sequential advantages. Two of these experiments became the focus of our efforts because the data were uncontaminated by likely suspect position effects. Decision-based and memory-based modifications to WITNESS approximated the data and produced a sequential advantage. The next step is to evaluate the proposed explanations and modify public policy recommendations accordingly.
Sequential and simultaneous choices: testing the diet selection and sequential choice models.
Freidin, Esteban; Aw, Justine; Kacelnik, Alex
2009-03-01
We investigate simultaneous and sequential choices in starlings, using Charnov's Diet Choice Model (DCM) and Shapiro, Siller and Kacelnik's Sequential Choice Model (SCM) to integrate function and mechanism. During a training phase, starlings encountered one food-related option per trial (A, B or R) in random sequence and with equal probability. A and B delivered food rewards after programmed delays (shorter for A), while R ('rejection') moved directly to the next trial without reward. In this phase we measured latencies to respond. In a later, choice, phase, birds encountered the pairs A-B, A-R and B-R, the first implementing a simultaneous choice and the second and third sequential choices. The DCM predicts when R should be chosen to maximize intake rate, and SCM uses latencies of the training phase to predict choices between any pair of options in the choice phase. The predictions of both models coincided, and both successfully predicted the birds' preferences. The DCM does not deal with partial preferences, while the SCM does, and experimental results were strongly correlated to this model's predictions. We believe that the SCM may expose a very general mechanism of animal choice, and that its wider domain of success reflects the greater ecological significance of sequential over simultaneous choices.
DEFF Research Database (Denmark)
Rasmussen, Birgit
2018-01-01
Building regulations specify minimum requirements, and more than ten countries in Europe have published national acoustic classification schemes with quality classes, the main purpose being to introduce easy specification of stricter acoustic criteria than defined in regulations. The very first classification schemes were published in the mid 1990’es and for dwellings only. Since then, more countries have introduced such schemes, some including also other building categories like e.g. schools, hospitals and office buildings, and the first countries have made updates more times. Acoustic classification schemes define limit values for a number of acoustic performance areas, typically airborne and impact sound insulation, service equipment noise, traffic noise and reverberation time, i.e. the same as in regulations. Comparative studies of the national acoustic classification schemes in Europe show main ...
Sequential memory: Binding dynamics
Afraimovich, Valentin; Gong, Xue; Rabinovich, Mikhail
2015-10-01
Temporal order memories are critical for everyday animal and human functioning. Experiments and our own experience show that the binding or association of various features of an event together and the maintaining of multimodality events in sequential order are the key components of any sequential memories—episodic, semantic, working, etc. We study a robustness of binding sequential dynamics based on our previously introduced model in the form of generalized Lotka-Volterra equations. In the phase space of the model, there exists a multi-dimensional binding heteroclinic network consisting of saddle equilibrium points and heteroclinic trajectories joining them. We prove here the robustness of the binding sequential dynamics, i.e., the feasibility phenomenon for coupled heteroclinic networks: for each collection of successive heteroclinic trajectories inside the unified networks, there is an open set of initial points such that the trajectory going through each of them follows the prescribed collection staying in a small neighborhood of it. We show also that the symbolic complexity function of the system restricted to this neighborhood is a polynomial of degree L - 1, where L is the number of modalities.
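The generalized Lotka-Volterra form described above can be sketched in a few lines. The sketch below is a hypothetical toy: three competing modes with asymmetric inhibition, chosen (illustratively, not from the paper) so that the phase space contains a heteroclinic sequence of saddles, and activity passes from mode 1 to 2 to 3 in order.

```python
# Sketch of sequential (winnerless-competition) dynamics in a generalized
# Lotka-Volterra model: dx_i/dt = x_i * (sigma_i - sum_j rho[i][j] * x_j).
# All parameter values here are illustrative assumptions.

def glv_step(x, rho, sigma, dt):
    """One forward-Euler step of the generalized Lotka-Volterra equations."""
    return [
        max(0.0, xi + dt * xi * (sigma[i] - sum(rho[i][j] * xj
                                                for j, xj in enumerate(x))))
        for i, xi in enumerate(x)
    ]

def simulate(x0, rho, sigma, dt=0.01, steps=30000):
    traj, x = [list(x0)], list(x0)
    for _ in range(steps):
        x = glv_step(x, rho, sigma, dt)
        traj.append(list(x))
    return traj

# Asymmetric inhibition (rho[i+1][i] < 1 < rho[i-1][i]) creates a cycle of
# saddle points: each mode can be invaded by the next but suppresses the
# previous one, so activity visits the modes sequentially.
rho = [[1.00, 1.33, 0.51],
       [0.51, 1.00, 1.33],
       [1.33, 0.51, 1.00]]
sigma = [1.0, 1.0, 1.0]
traj = simulate([0.9, 0.05, 0.02], rho, sigma)
```

Each mode in turn approaches its saddle value near 1 before handing activity to the next, which is the "binding sequential dynamics" the abstract refers to in a minimal setting.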
Energy Technology Data Exchange (ETDEWEB)
Dai, Xiubin [College of Geographic and Biologic Information, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu 210015, China and IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, 130 Mason Farm Road, Chapel Hill, North Carolina 27510 (United States); Gao, Yaozong [IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, 130 Mason Farm Road, Chapel Hill, North Carolina 27510 (United States); Shen, Dinggang, E-mail: dgshen@med.unc.edu [IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, 130 Mason Farm Road, Chapel Hill, North Carolina 27510 and Department of Brain and Cognitive Engineering, Korea University, Seoul (Korea, Republic of)
2015-05-15
Purpose: In image guided radiation therapy, it is crucial to quickly and accurately localize the prostate in the daily treatment images. To this end, the authors propose an online update scheme for landmark-guided prostate segmentation, which can fully exploit valuable patient-specific information contained in the previous treatment images and can achieve improved performance in landmark detection and prostate segmentation. Methods: To localize the prostate in the daily treatment images, the authors first automatically detect six anatomical landmarks on the prostate boundary by adopting a context-aware landmark detection method. Specifically, in this method, a two-layer regression forest is trained as a detector for each target landmark. Once all the newly detected landmarks from new treatment images are reviewed or adjusted (if necessary) by clinicians, they are further included into the training pool as new patient-specific information to update all the two-layer regression forests for the next treatment day. As more and more treatment images of the current patient are acquired, the two-layer regression forests can be continually updated by incorporating the patient-specific information into the training procedure. After all target landmarks are detected, a multiatlas random sample consensus (multiatlas RANSAC) method is used to segment the entire prostate by fusing multiple previously segmented prostates of the current patient after they are aligned to the current treatment image. Subsequently, the segmented prostate of the current treatment image is again reviewed (or even adjusted if needed) by clinicians before including it as a new shape example into the prostate shape dataset for helping localize the entire prostate in the next treatment image. Results: The experimental results on 330 images of 24 patients show the effectiveness of the authors' proposed online update scheme in improving the accuracies of both landmark detection and prostate segmentation.
Sequential Probability Ratio Tests: Conservative and Robust
Kleijnen, J.P.C.; Shi, Wen
2017-01-01
In practice, most computers generate simulation outputs sequentially, so it is attractive to analyze these outputs through sequential statistical methods such as sequential probability ratio tests (SPRTs). We investigate several SPRTs for choosing between two hypothesized values for the mean output
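The classic SPRT the abstract refers to is easy to state concretely. Below is a minimal sketch of Wald's SPRT for choosing between two hypothesized values of a Gaussian mean with known standard deviation; the error targets alpha and beta set the two log-likelihood-ratio boundaries. Function and parameter names are my own, and this is the textbook test, not the conservative/robust variants the paper investigates.

```python
import math

def sprt_normal_mean(samples, mu0, mu1, sigma=1.0, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: mean == mu0 vs H1: mean == mu1 (known sigma).

    Returns (decision, n_used); decision is None if the data ran out
    before either boundary was crossed.
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 when llr >= upper
    lower = math.log(beta / (1 - alpha))   # accept H0 when llr <= lower
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood-ratio increment contributed by one observation.
        llr += (mu1 - mu0) / sigma ** 2 * (x - (mu0 + mu1) / 2.0)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return None, len(samples)

decision, n = sprt_normal_mean([1.0] * 20, mu0=0.0, mu1=1.0)
```

With mu0=0, mu1=1, sigma=1 and alpha=beta=0.05, every observation equal to 1.0 adds 0.5 to the LLR, so the upper boundary ln(19) is crossed at the sixth sample; the appeal over fixed-sample tests is exactly this early stopping.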
Sequential lineup presentation: Patterns and policy
Lindsay, R C L; Mansour, Jamal K; Beaudry, J L; Leach, A-M; Bertrand, M I
2009-01-01
Sequential lineups were offered as an alternative to the traditional simultaneous lineup. Sequential lineups reduce incorrect lineup selections; however, the accompanying loss of correct identifications has resulted in controversy regarding adoption of the technique. We discuss the procedure and research relevant to (1) the pattern of results found using sequential versus simultaneous lineups; (2) reasons (theory) for differences in witness responses; (3) two methodological issues; and (4) im...
Sequential Product of Quantum Effects: An Overview
Gudder, Stan
2010-12-01
This article presents an overview for the theory of sequential products of quantum effects. We first summarize some of the highlights of this relatively recent field of investigation and then provide some new results. We begin by discussing sequential effect algebras which are effect algebras endowed with a sequential product satisfying certain basic conditions. We then consider sequential products of (discrete) quantum measurements. We next treat transition effect matrices (TEMs) and their associated sequential product. A TEM is a matrix whose entries are effects and whose rows form quantum measurements. We show that TEMs can be employed for the study of quantum Markov chains. Finally, we prove some new results concerning TEMs and vector densities.
Optimal Sequential Rules for Computer-Based Instruction.
Vos, Hans J.
1998-01-01
Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…
Weighted-Bit-Flipping-Based Sequential Scheduling Decoding Algorithms for LDPC Codes
Directory of Open Access Journals (Sweden)
Qing Zhu
2013-01-01
Low-density parity-check (LDPC) codes can be applied in many different scenarios, such as video broadcasting and satellite communications. LDPC codes are commonly decoded by an iterative algorithm called belief propagation (BP) over the corresponding Tanner graph. The original BP updates all the variable-nodes simultaneously, followed by all the check-nodes simultaneously as well. We propose a sequential scheduling algorithm based on the weighted bit-flipping (WBF) algorithm for the sake of improving the convergence speed. Notably, WBF is a simple, low-complexity algorithm. We combine it with BP to obtain the advantages of both algorithms: the flipping function used in WBF determines the priority of scheduling. Simulation results show that the proposed algorithm provides a good tradeoff between FER performance and computational complexity for short-length LDPC codes.
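The flipping function borrowed from WBF can be made concrete with a small sketch. For compactness this uses the tiny (7,4) Hamming code instead of a long LDPC code, and the standard WBF flipping metric E_n = sum over checks containing bit n of (2*s_m - 1) * w_m, where s_m is the syndrome bit and w_m is the smallest channel reliability participating in check m; the BP-scheduling combination proposed in the paper is not reproduced.

```python
# Simplified weighted bit-flipping (WBF) decoding on a (7,4) Hamming code.
# BPSK mapping assumed: bit 0 -> +1, bit 1 -> -1, so a negative sample is a
# hard-decision 1, and |y_n| is the channel reliability of bit n.

H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def wbf_decode(y, H, max_iters=20):
    z = [1 if yi < 0 else 0 for yi in y]                 # hard decisions
    # w_m: least reliable channel value taking part in check m.
    w = [min(abs(y[n]) for n in range(len(y)) if row[n]) for row in H]
    for _ in range(max_iters):
        s = [sum(z[n] for n in range(len(z)) if row[n]) % 2 for row in H]
        if not any(s):
            return z                                     # all checks satisfied
        # Flipping function: large E_n means bit n sits in many unreliable,
        # failed checks, so it is the best candidate to flip.
        E = [sum((2 * s[m] - 1) * w[m] for m in range(len(H)) if H[m][n])
             for n in range(len(z))]
        z[E.index(max(E))] ^= 1
    return z

# All-zeros codeword sent; position 2 received with a sign error.
decoded = wbf_decode([0.8, 0.9, -0.6, 1.0, 0.7, 0.9, 1.1], H)
```

A single iteration identifies bit 2 (it has the largest weighted failed-check support) and flipping it satisfies all parity checks.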
DEFF Research Database (Denmark)
Cai, Junping; Stoustrup, Jakob; Rasmussen, Bjarne Dindler
2008-01-01
This paper introduces food quality as a new parameter, together with energy, to determine an optimal cooling time between defrost cycles. A new defrost-on-demand scheme is proposed. It uses a feedback loop consisting of on-line model updating and estimation as well as a model based optimization. ...
Lu, Qiang; Han, Qing-Long; Zhang, Botao; Liu, Dongliang; Liu, Shirong
2017-12-01
This paper deals with the problem of environmental monitoring by developing an event-triggered finite-time control scheme for mobile sensor networks. The proposed control scheme can be executed by each sensor node independently and consists of two parts: one part is a finite-time consensus algorithm while the other part is an event-triggered rule. The consensus algorithm is employed to enable the positions and velocities of sensor nodes to quickly track the position and velocity of a virtual leader in finite time. The event-triggered rule is used to reduce the updating frequency of controllers in order to save the computational resources of sensor nodes. Some stability conditions are derived for mobile sensor networks with the proposed control scheme under both a fixed communication topology and a switching communication topology. Finally, simulation results illustrate the effectiveness of the proposed control scheme for the problem of environmental monitoring.
Accelerated successive substitution schemes for bubble-point and dew-point calculations
Energy Technology Data Exchange (ETDEWEB)
Peng, D.-Y. (Univ. of Saskatchewan, Saskatoon, SK (Canada))
1991-08-01
Phase equilibrium calculations form an important part of the process design operations in the hydrocarbon and petroleum industry. The accelerated successive substitution (SS) algorithms developed by Mehra et al. (1983) for flash calculations have been extended to the prediction of saturation points. A transformation matrix which is used to calculate the acceleration parameter has been rewritten in a form that is applicable at the saturation conditions. Simple equations for estimating the initial values and recursive formulae according to which the iterates can be updated are presented. The proposed schemes were compared with the conventional SS method and a multivariate Newton's method. The comparison suggests that the accelerated SS schemes are more tolerant of poor initial values and sometimes more efficient than Newton's method. The features of the acceleration schemes and those of the empirical equations developed in this study are illustrated using three hydrocarbon mixtures: a 5-component mixture of n-alkanes, a typical natural gas system, and a volatile oil. 19 refs., 6 figs., 6 tabs.
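The acceleration idea above — reusing successive substitution iterates to extrapolate toward the fixed point — has a simple scalar analogue. The sketch below contrasts plain SS with Aitken delta-squared (Steffensen) extrapolation on a toy fixed-point problem; Mehra et al.'s scheme uses a matrix acceleration parameter tailored to flash and saturation-point calculations, which is not reproduced here.

```python
# Plain successive substitution vs. an Aitken-accelerated variant on the
# scalar fixed point x = cos(x) (illustrative problem, not a flash calc).
import math

def plain_ss(g, x0, tol=1e-10, max_iters=200):
    x, n = x0, 0
    while abs(g(x) - x) > tol and n < max_iters:
        x, n = g(x), n + 1
    return x, n

def accelerated_ss(g, x0, tol=1e-10, max_iters=200):
    x, n = x0, 0
    while abs(g(x) - x) > tol and n < max_iters:
        x1, x2 = g(x), g(g(x))
        denom = x2 - 2.0 * x1 + x
        # Aitken delta-squared extrapolation from two SS sub-iterates.
        x = x if denom == 0 else x - (x1 - x) ** 2 / denom
        n += 1
    return x, n

g = math.cos                       # fixed point near 0.739085
x_acc, n_acc = accelerated_ss(g, 1.0)
x_ss, n_ss = plain_ss(g, 1.0)
```

Plain SS converges linearly (contraction factor about 0.67 here, roughly 55 iterations to 1e-10), while the accelerated variant reaches the same tolerance in a handful of iterations — the same efficiency gap the abstract reports between SS and its accelerated forms.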
Quantum Inequalities and Sequential Measurements
International Nuclear Information System (INIS)
Candelpergher, B.; Grandouz, T.; Rubinx, J.L.
2011-01-01
In this article, the peculiar context of sequential measurements is chosen in order to analyze the quantum specificity in the two most famous examples of Heisenberg and Bell inequalities: Results are found at some interesting variance with customary textbook materials, where the context of initial state re-initialization is described. A key-point of the analysis is the possibility of defining Joint Probability Distributions for sequential random variables associated to quantum operators. Within the sequential context, it is shown that Joint Probability Distributions can be defined in situations where not all of the quantum operators (corresponding to random variables) do commute two by two. (authors)
Anonymous authentication and location privacy preserving schemes for LTE-A networks
Directory of Open Access Journals (Sweden)
Zaher Jabr Haddad
2017-11-01
Long Term Evolution Advanced (LTE-A) is the 3rd Generation Partnership Project cellular network standard that allows subscribers to roam into networks (i.e., the Internet and wireless connections) using special-purpose base-stations, such as wireless access points and Home Node Bs. In such LTE-A based networks, neither base-stations nor the Internet and wireless connections are trusted, because base-stations are operated by un-trusted subscribers. Attackers may exploit these vulnerabilities to violate the privacy of LTE-A subscribers. On the other hand, the tradeoff between privacy and authentication is another challenge in such networks. Therefore, in this paper, we propose two anonymous authentication schemes based on one-time pseudonyms and Schnorr zero-knowledge protocols. Instead of the international mobile subscriber identity, these schemes enable the user equipment, base-stations and mobility management entity to mutually authenticate each other and update the location of the user equipment without involving the home subscriber server. The security analysis demonstrates that the proposed schemes thwart security and privacy attacks, such as malicious, international-mobile-subscriber-identity catching, and tracking attacks. Additionally, the proposed schemes preserve the location privacy of the user equipment, since no entity except the mobility management entity and the Gateway Mobile Location Center can link the pseudonyms to the international mobile subscriber identity. Attackers also gain no knowledge of the international mobile subscriber identity; hence, the proposed schemes achieve backward/forward secrecy. Furthermore, the performance evaluation shows that the proposed handover schemes impose a small overhead on the mobile nodes and have smaller computation and communication overheads than other schemes.
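The Schnorr zero-knowledge protocol mentioned above can be sketched in its textbook identification form: the prover shows knowledge of a discrete logarithm x without revealing it. The group parameters below are deliberately tiny for illustration (real deployments need large primes), and the anonymity layer built on one-time pseudonyms is not reproduced.

```python
# Toy Schnorr identification round over a subgroup of order q in Z_p*.
# Here 2 has order 11 modulo 23 (2^11 mod 23 == 1); parameters are
# illustrative only and far too small for any real use.

p, q, g = 23, 11, 2
x = 7                      # prover's secret key
y = pow(g, x, p)           # public key y = g^x mod p

def schnorr_round(secret, r, c):
    """One identification round: commitment nonce r, verifier challenge c."""
    t = pow(g, r, p)               # prover commits t = g^r
    s = (r + c * secret) % q       # prover responds s = r + c*x mod q
    return pow(g, s, p) == (t * pow(y, c, p)) % p   # verifier's check

# Completeness: an honest prover passes for every nonce/challenge pair.
complete = all(schnorr_round(x, r, c) for r in range(q) for c in range(q))
```

The verification works because g^s = g^(r + c*x) = t * y^c in the group; a prover who does not know x passes only if it guesses the challenge, which is what makes the round zero-knowledge evidence of key possession.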
Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin
2011-01-01
Objective: This article presents a new computerized scheme that aims to accurately and robustly separate the left and right lungs on CT examinations. Methods: We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points, especially for severe and multiple connections. Results: The scheme successfully identified and separated all 827 connections on the total of 4034 CT images in an independent testing dataset of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that of traditional dynamic programming and avoided the permeation of the separation boundary into normal lung tissue. Conclusions: The proposed method is able to robustly and accurately disconnect all connections between the left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing. PMID:21412104
Reinforcement Learning Based Data Self-Destruction Scheme for Secured Data Management
Directory of Open Access Journals (Sweden)
Young Ki Kim
2018-04-01
As technologies and services that leverage cloud computing have evolved, the number of businesses and individuals who use them is increasing rapidly. In the course of using cloud services, as users store and use data that include personal information, research on privacy protection models to protect sensitive information in the cloud environment is becoming more important. As a solution to this problem, a self-destructing scheme has been proposed that prevents the decryption of encrypted user data after a certain period of time using a Distributed Hash Table (DHT) network. However, the existing self-destructing scheme does not specify how to set the number of key shares and the threshold value considering the environment of the dynamic DHT network. This paper proposes a method to set the parameters used to generate the key shares needed for the self-destructing scheme, considering the availability and security of data. The proposed method defines the state, action, and reward of the reinforcement learning model based on the similarity of the graph, and applies the self-destructing scheme process by updating the parameters based on the reinforcement learning model. Through the proposed technique, key-sharing parameters can be set in consideration of data availability and security in dynamic DHT network environments.
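The "number of key shares and threshold value" above are the (k, n) parameters of threshold secret sharing; the sketch below shows the standard Shamir construction over a prime field that such self-destructing schemes distribute across the DHT. The field size and API names are illustrative assumptions, and the reinforcement-learning parameter tuning proposed in the paper is not reproduced.

```python
# Minimal Shamir (k, n) secret sharing over GF(P): any k of the n shares
# reconstruct the secret; fewer than k reveal nothing about it.
import random

P = 2 ** 31 - 1          # a Mersenne prime; large enough for a toy demo

def split(secret, k, n, rng):
    """Hide `secret` in a random degree-(k-1) polynomial; shares are points."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):        # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

rng = random.Random(42)
shares = split(123456789, k=3, n=5, rng=rng)
recovered = reconstruct(shares[:3])
```

In a self-destructing scheme the shares expire out of the DHT over time; once fewer than k survive, the key (and hence the ciphertext) becomes unrecoverable, which is why the choice of k and n trades availability against security.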
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies showed that data assimilation could reduce the parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state update with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.
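The SIR particle filter that the proposed smoother builds on can be sketched in a few lines. This is a minimal bootstrap filter for a hypothetical 1-D random-walk state with Gaussian observation noise; the model, parameters and names are illustrative assumptions, not the storage function model or the smoothing window of the contribution.

```python
# Minimal SIR ("bootstrap") particle filter: propagate, weight by the
# observation likelihood, estimate, then resample back to uniform weights.
import math
import random

def sir_filter(obs, n_particles=500, q=0.1, r=1.0, rng=None):
    rng = rng or random.Random(0)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for z in obs:
        # 1. Propagate each particle through the random-walk dynamics.
        particles = [x + rng.gauss(0.0, q) for x in particles]
        # 2. Weight by the Gaussian observation likelihood p(z | x).
        weights = [math.exp(-0.5 * ((z - x) / r) ** 2) for x in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # 3. Resample (multinomial) so every particle carries equal weight.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

rng = random.Random(1)
truth, x = [], 0.0
for _ in range(100):
    x += rng.gauss(0.0, 0.1)
    truth.append(x)
obs = [t + rng.gauss(0.0, 1.0) for t in truth]
est = sir_filter(obs, rng=random.Random(2))
```

A smoother such as the one proposed above differs from this filter in step 3: instead of resampling on the current observation alone, particle weights accumulate over a window of time steps, which damps the influence of a single outlying observation.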
Update of the BIG 1-98 Trial: where do we stand?
Joerger, Markus; Thürlimann, Beat
2009-10-01
There is accumulating data on the clinical benefit of aromatase inhibitors in the adjuvant treatment of early-stage breast cancer in postmenopausal women. The Breast International Group (BIG) 1-98 study is a randomized, phase 3, double-blind trial comparing four adjuvant endocrine treatments of 5 years duration in postmenopausal women with hormone-receptor-positive breast cancer: letrozole or tamoxifen monotherapy, sequential treatment with tamoxifen followed by letrozole, or vice versa. This article summarizes data presented at the 2009 St. Gallen early breast cancer conference: an update on the monotherapy arms of the BIG 1-98 study, and results from the sequential treatment arms. Implications for daily practice from BIG 1-98 and from other adjuvant trials will be discussed. Despite cross-over from tamoxifen to letrozole by 25% of the patients after unblinding of the tamoxifen monotherapy arm, the improvement of disease-free survival (HR 0.88, 0.78-0.99, p = 0.03) and time to distant recurrence (HR 0.85, 0.72-1.00, p = 0.05) for letrozole monotherapy as compared to tamoxifen monotherapy remained significant in the intention-to-treat (ITT) analysis. A trend for an overall survival advantage for letrozole was seen in the ITT analysis (HR 0.87, 0.75-1.02, p = 0.08). No statistically significant differences were found for the sequential treatment arms versus letrozole monotherapy, with respect to disease-free survival, time to distant recurrence or overall survival. Cumulative incidence analysis of breast cancer recurrence favors the initiation of adjuvant endocrine treatment with letrozole instead of tamoxifen, especially in patients at higher risk for early recurrence. Similarly, data suggest that patients commenced on letrozole can be switched to tamoxifen after 2 years, if required. The BIG 1-98 study update, with a median follow-up of 76 months, confirms a significant reduction in the risk of breast cancer recurrence and a trend towards improved overall survival.
Directory of Open Access Journals (Sweden)
Simon Heru Prassetyo
2018-04-01
Explicit solution techniques have been widely used in geotechnical engineering for simulating the coupled hydro-mechanical (H-M) interaction of fluid flow and deformation induced by structures built above and under saturated ground, e.g. a circular footing or a deep tunnel. However, the technique is only conditionally stable and requires small time steps, portending its inefficiency for simulating large-scale H-M problems. To improve its efficiency, the unconditionally stable alternating direction explicit (ADE) scheme could be used to solve the flow problem. The standard ADE scheme, however, is only moderately accurate and is restricted to uniform grids and plane strain flow conditions. This paper aims to remove these drawbacks by developing a novel high-order ADE scheme capable of solving flow problems in non-uniform grids and under axisymmetric conditions. The new scheme is derived by performing a fourth-order finite difference (FD) approximation to the spatial derivatives of the axisymmetric fluid-diffusion equation in a non-uniform grid configuration. The implicit Crank-Nicolson technique is then applied to the resulting approximation, and the subsequent equation is split into two alternating direction sweeps, giving rise to a new axisymmetric ADE scheme. The pore pressure solutions from the new scheme are then sequentially coupled with an existing geomechanical simulator in the computer code fast Lagrangian analysis of continua (FLAC). This coupling procedure is called the sequentially-explicit coupling technique based on the fourth-order axisymmetric ADE scheme, or SEA-4-AXI. Application of SEA-4-AXI to the axisymmetric consolidation of a circular footing and of an advancing tunnel in deep saturated ground shows that SEA-4-AXI reduces computer runtime to 42%-50% of that of FLAC's basic scheme without numerical instability. In addition, it produces high numerical accuracy of the H-M solutions, with an average percentage difference of only 0.5%.
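The ADE idea of splitting an update into two directional sweeps can be shown on the simplest possible case. The sketch below is only the classic second-order ADE scheme (Saul'yev sweeps, averaged) for 1-D diffusion on a uniform grid, under assumed illustrative parameters; it is not the fourth-order axisymmetric scheme derived in the paper. Each sweep is explicit in its marching direction (it reuses the value just computed at the previous node), yet the combined update remains stable for large time steps.

```python
# Averaged Saul'yev ADE sweeps for the 1-D diffusion equation u_t = u_xx
# on [0, 1] with u = 0 at both ends; r = dt / dx^2.
import math

def ade_step(u, r):
    n = len(u)
    lr = u[:]                       # left-to-right sweep: uses fresh lr[i-1]
    for i in range(1, n - 1):
        lr[i] = ((1 - r) * u[i] + r * u[i + 1] + r * lr[i - 1]) / (1 + r)
    rl = u[:]                       # right-to-left sweep: uses fresh rl[i+1]
    for i in range(n - 2, 0, -1):
        rl[i] = ((1 - r) * u[i] + r * u[i - 1] + r * rl[i + 1]) / (1 + r)
    return [(a + b) / 2.0 for a, b in zip(lr, rl)]   # average both sweeps

# Initial condition sin(pi x) decays exactly as exp(-pi^2 t).
nx, dt, steps = 21, 2.5e-4, 400
dx = 1.0 / (nx - 1)
r = dt / dx ** 2
u = [math.sin(math.pi * i * dx) for i in range(nx)]
for _ in range(steps):
    u = ade_step(u, r)
exact = [math.sin(math.pi * i * dx) * math.exp(-math.pi ** 2 * dt * steps)
         for i in range(nx)]
```

Comparing the numerical solution against the analytical decay of the sine mode gives an error well below one percent on this grid, while each step costs only two explicit passes over the nodes.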
PF-WFS Shell Inspection Update December 2016
Energy Technology Data Exchange (ETDEWEB)
Vigil, Anthony Eugene [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ledoux, Reina Rebecca [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gonzales, Antonio R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Montano, Joshua Daniel [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Savage, Lowell Curtis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Randles, Wayne Alfred [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-01-26
Since the last project update in FY16:Q2, PF-WFS personnel have advanced their understanding of shell inspection on Coordinate Measuring Machines (CMMs) and refined the PF-WFS process to the point that it was decided to convert shell inspection from the Sheffield #1 gage to Lietz CMMs. As part of introspection on the quality of this process, many sets of data have been reviewed and analyzed. This analysis included Sheffield-to-CMM comparisons, CMM inspection repeatability, fixturing differences, quality check development, and probing approach changes. This update report will touch on these improvements, which have built the confidence in this process needed to mainstream it for inspecting shells. In addition to the CMM programming advancements, the continued refinement of inputs and outputs for the CMM program has created an archiving scheme, input spline files, an output metafile, and an inspection report package. This project will continue to mature. Part designs may require program modifications to accommodate "new to this process" part designs. Technology limitations tied to security and performance are requiring possible changes to computer configurations to support an automated process.
Sequential Generalized Transforms on Function Space
Directory of Open Access Journals (Sweden)
Jae Gil Choi
2013-01-01
We define two sequential transforms on a function space C_{a,b}[0,T] induced by a generalized Brownian motion process. We then establish the existence of the sequential transforms for functionals in a Banach algebra of functionals on C_{a,b}[0,T]. We also establish that each of these transforms acts like an inverse transform of the other. Finally, we give some remarks about certain relations between our sequential transforms and other well-known transforms on C_{a,b}[0,T].
Sequential probability ratio controllers for safeguards radiation monitors
International Nuclear Information System (INIS)
Fehlau, P.E.; Coop, K.L.; Nixon, K.V.
1984-01-01
Sequential hypothesis tests applied to nuclear safeguards accounting methods make the methods more sensitive to detecting diversion. The sequential tests also improve transient signal detection in safeguards radiation monitors. This paper describes three microprocessor control units with sequential probability-ratio tests for detecting transient increases in radiation intensity. The control units are designed for three specific applications: low-intensity monitoring with Poisson probability ratios, higher-intensity gamma-ray monitoring where fixed counting intervals are shortened by sequential testing, and monitoring moving traffic where the sequential technique responds to variable-duration signals. The fixed-interval controller shortens a customary 50-s monitoring time to an average of 18 s, making the monitoring delay less bothersome. The controller for monitoring moving vehicles benefits from the sequential technique by maintaining more than half its sensitivity when the normal passage speed doubles.
Method for updating pipelined, single port Z-buffer by segments on a scan line
International Nuclear Information System (INIS)
Hannah, M.R.
1990-01-01
This patent describes, for a raster-scan, computer-controlled video display system that presents an image to an observer and has a Z-buffer for storing Z values and a frame buffer for storing pixel values, a method for updating the Z-buffer with new Z values to replace old Z values. The method comprises: calculating a new pixel value and a new Z value for each pixel location in a plurality of pixel locations; performing a Z comparison for each new Z value by comparing the old Z value with the new Z value for each pixel location, the Z comparison being performed sequentially in one direction through the plurality of pixel locations; and updating the Z-buffer only after the Z comparison produces a combination of a fail condition for a current pixel location subsequent to producing a pass condition for the pixel location immediately preceding the current pixel location.
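The claimed segment-wise update can be sketched as follows: scan the line once, comparing new Z against stored Z; a contiguous run of passing pixels is written back as a single segment when the first failing pixel (or the end of the line) is reached, which suits a pipelined, single-port buffer. Names and the pass test (smaller Z means closer) are illustrative assumptions of this sketch.

```python
# Segment-wise Z-buffer update for one scan line: buffer passing runs and
# flush each run in one burst at the pass-to-fail boundary.

def update_scanline(zbuf, fbuf, new_z, new_pix):
    def flush(start, end):              # write one passing segment back
        for j in range(start, end):
            zbuf[j] = new_z[j]
            fbuf[j] = new_pix[j]
    seg_start = None
    for i in range(len(zbuf)):
        if new_z[i] < zbuf[i]:          # Z comparison passes: pixel is closer
            if seg_start is None:
                seg_start = i           # a new passing segment begins
        elif seg_start is not None:     # fail after a run of passes: flush it
            flush(seg_start, i)
            seg_start = None
    if seg_start is not None:           # segment still open at end of line
        flush(seg_start, len(zbuf))

zbuf = [5, 5, 5, 5, 5]
fbuf = ["bg"] * 5
update_scanline(zbuf, fbuf, [3, 4, 6, 2, 7], list("ABCDE"))
```

The net effect is identical to a per-pixel depth test, but writes to the single-port buffer are grouped into segments, which is the point of deferring the update until a fail follows a run of passes.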
Qaraqe, Marwa
2014-04-01
This paper focuses on the development of multiuser access schemes for spectrum sharing systems whereby secondary users are allowed to share the spectrum with primary users under the condition that the interference observed at the primary receiver is below a predetermined threshold. In particular, two scheduling schemes are proposed for selecting a user among those that satisfy the interference constraint and achieve an acceptable signal-to-noise ratio level. The first scheme focuses on optimizing the average spectral efficiency by selecting the user that reports the best channel quality. In order to alleviate the relatively high feedback required by the first scheme, a second scheme based on the concept of switched diversity is proposed, where the base station (BS) scans the secondary users in a sequential manner until a user whose channel quality is above an acceptable predetermined threshold is found. We develop expressions for the statistics of the signal-to-interference and noise ratio as well as the average spectral efficiency, average feedback load, and the delay at the secondary BS. We then present numerical results for the effect of the number of users and the interference constraint on the optimal switching threshold and the system performance and show that our analysis results are in perfect agreement with the numerical results. © 2014 John Wiley & Sons, Ltd.
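The second, switched-diversity scheme above reduces feedback by polling users in order and stopping at the first acceptable one. The sketch below captures that scan under stated assumptions: tuple layout, threshold names, and the fallback to the best admissible user when nobody crosses the switching threshold are all illustrative choices of this sketch, not details taken from the paper.

```python
# Sequential switched-diversity scan for a spectrum-sharing uplink:
# a secondary user is admissible only if its interference at the primary
# receiver is below i_max; the scan stops at the first admissible user
# whose SNR clears the switching threshold snr_th.

def select_user(users, i_max, snr_th):
    """users: list of (snr, interference_at_primary) tuples."""
    admissible = [k for k, (snr, i) in enumerate(users) if i <= i_max]
    for k in admissible:                  # sequential scan keeps feedback low
        if users[k][0] >= snr_th:
            return k                      # first acceptable user wins
    if admissible:                        # fallback: best admissible SNR
        return max(admissible, key=lambda k: users[k][0])
    return None                           # no user may transmit

users = [(4.0, 0.9), (6.0, 0.2), (9.0, 0.3), (7.5, 0.1)]
chosen = select_user(users, i_max=0.5, snr_th=5.0)
```

With the sample numbers, user 0 violates the interference constraint, and user 1 is the first admissible user above the threshold, so the scan stops there even though user 2 has a better channel — the feedback saving traded against average spectral efficiency that the paper quantifies.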
Biased lineups: sequential presentation reduces the problem.
Lindsay, R C; Lea, J A; Nosworthy, G J; Fulford, J A; Hector, J; LeVan, V; Seabrook, C
1991-12-01
Biased lineups have been shown to increase significantly false, but not correct, identification rates (Lindsay, Wallbridge, & Drennan, 1987; Lindsay & Wells, 1980; Malpass & Devine, 1981). Lindsay and Wells (1985) found that sequential lineup presentation reduced false identification rates, presumably by reducing reliance on relative judgment processes. Five staged-crime experiments were conducted to examine the effect of lineup biases and sequential presentation on eyewitness recognition accuracy. Sequential lineup presentation significantly reduced false identification rates from fair lineups as well as from lineups biased with regard to foil similarity, instructions, or witness attire, and from lineups biased in all of these ways. The results support recommendations that police present lineups sequentially.
A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements.
Mohsenzadeh, Yalda; Dash, Suryadeep; Crawford, J Douglas
2016-01-01
In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon, and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement: saccades and smooth pursuit. Our proposed model is a non-linear SSM implemented through a recurrent radial-basis-function neural network in a dual extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks.
Lineup composition, suspect position, and the sequential lineup advantage.
Carlson, Curt A; Gronlund, Scott D; Clark, Steven E
2008-06-01
N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate in the simultaneous lineup, and no sequential lineup advantage was found. This led the authors to hypothesize that protection from a sequential lineup might emerge only when an innocent suspect stands out from the other lineup members. In Experiment 2, participants viewed a simultaneous or sequential lineup with either the guilty suspect or 1 of 3 innocent suspects. Lineup fairness was varied to influence the degree to which a suspect stood out. A sequential lineup advantage was found only for the unfair lineups. Additional analyses of suspect position in the sequential lineups showed an increase in the diagnosticity of suspect identifications as the suspect was placed later in the sequential lineup. These results suggest that the sequential lineup advantage is dependent on lineup composition and suspect position. (c) 2008 APA, all rights reserved
Xanthones of Lichen Source: A 2016 Update.
Le Pogam, Pierre; Boustie, Joël
2016-03-02
An update of xanthones encountered in lichens is proposed as more than 20 new xanthones have been described since the publication of the compendium of lichen metabolites by Huneck and Yoshimura in 1996. The last decades witnessed major advances regarding the elucidation of biosynthetic schemes leading to these fascinating compounds, accounting for the unique substitution patterns of a very vast majority of lichen xanthones. Besides a comprehensive analysis of the structures of xanthones described in lichens, their bioactivities and the emerging analytical strategies used to pinpoint them within lichens are presented here together with physico-chemical properties (including NMR data) as reported since 1996.
Aisenberg, D; Sapir, A; Close, A; Henik, A; d'Avossa, G
2018-01-31
Participants are slower to report a feature, such as color, when the target appears on the side opposite the instructed response than when the target appears on the same side. This finding suggests that target location, even when task-irrelevant, interferes with response selection. This effect is magnified in older adults. Lengthening the inter-trial interval, however, suffices to normalize the congruency effect in older adults, by re-establishing young-like sequential effects (Aisenberg et al., 2014). We examined the neurological correlates of age-related changes by comparing BOLD signals in young and old participants performing a visual version of the Simon task. Participants reported the color of a peripheral target by a left- or right-hand keypress. Generally, BOLD responses were greater following incongruent than congruent targets. Also, they were delayed and of smaller amplitude in old than young participants. BOLD responses in visual and motor regions were also affected by the congruency of the previous target, suggesting that sequential effects may reflect remapping of stimulus location onto the hand used to make a response. Crucially, young participants showed larger BOLD responses in the right anterior cerebellum to incongruent targets when the previous target was congruent, but smaller BOLD responses to incongruent targets when the previous target was incongruent. Old participants, however, showed larger BOLD responses to congruent than incongruent targets, irrespective of the previous target's congruency. We conclude that aging may interfere with the trial-by-trial updating of the mapping between the task-irrelevant target location and the response, which takes place during the inter-trial interval in the cerebellum and underlies sequential effects in the Simon task. Copyright © 2017 Elsevier Ltd. All rights reserved.
Effortless assignment with 4D covariance sequential correlation maps.
Harden, Bradley J; Mishra, Subrata H; Frueh, Dominique P
2015-11-01
Traditional Nuclear Magnetic Resonance (NMR) assignment procedures for proteins rely on preliminary peak-picking to identify and label NMR signals. However, such an approach has severe limitations when signals are erroneously labeled or completely neglected. The consequences are especially grave for proteins with substantial peak overlap, and mistakes can often thwart entire projects. To overcome these limitations, we previously introduced an assignment technique that bypasses traditional peak-picking altogether. Covariance Sequential Correlation Maps (COSCOMs) transform the indirect connectivity information provided by multiple 3D backbone spectra into direct (H, N) to (H, N) correlations. Here, we present an updated method that utilizes a single four-dimensional spectrum rather than a suite of three-dimensional spectra. We demonstrate the advantages of 4D-COSCOMs relative to their 3D counterparts. We introduce improvements accelerating their calculation. We discuss practical considerations affecting their quality. Finally, we showcase their utility in the context of a 52 kDa cyclization domain from a non-ribosomal peptide synthetase. Copyright © 2015 Elsevier Inc. All rights reserved.
El Gharamti, Mohamad; Valstar, Johan R.; Hoteit, Ibrahim
2014-01-01
Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires a clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.
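The dual state-parameter estimation described above can be illustrated with a generic stochastic EnKF toy, not the paper's hybrid EnKF-OI formulation: an augmented ensemble stacks the contaminant concentration and the sorption coefficient, so an observation of the concentration alone also corrects the parameter through the ensemble cross-covariance. All numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_dual_update(ens, obs, H, obs_err):
    """One stochastic EnKF analysis step on an augmented ensemble.

    Each column of `ens` stacks a state (concentration) and a
    parameter (sorption); observing only the state also shifts the
    parameter via the ensemble cross-covariance. Toy sketch with a
    single scalar observation.
    """
    n = ens.shape[1]
    A = ens - ens.mean(axis=1, keepdims=True)        # ensemble anomalies
    HA = H @ A                                        # observed anomalies
    P_hh = (HA @ HA.T) / (n - 1) + obs_err**2         # innovation covariance
    K = (A @ HA.T) / (n - 1) / P_hh                   # Kalman gain
    perturbed_obs = obs + obs_err * rng.standard_normal(n)
    return ens + K @ (perturbed_obs - H @ ens)

# Prior: sorption ~ N(0.3, 0.2); concentration depends on it linearly.
n = 500
sorption = 0.3 + 0.2 * rng.standard_normal(n)
conc = 4.0 + 2.0 * sorption + 0.05 * rng.standard_normal(n)
ens = np.vstack([conc, sorption])
H = np.array([[1.0, 0.0]])            # only the concentration is observed
post = enkf_dual_update(ens, 5.0, H, obs_err=0.1)
```

Because concentration and sorption are correlated in the prior ensemble, pulling the concentration toward the observation of 5.0 also raises the sorption estimate.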
Tradable permit allocations and sequential choice
Energy Technology Data Exchange (ETDEWEB)
MacKenzie, Ian A. [Centre for Economic Research, ETH Zuerich, Zurichbergstrasse 18, 8092 Zuerich (Switzerland)
2011-01-15
This paper investigates initial allocation choices in an international tradable pollution permit market. For two sovereign governments, we compare allocation choices that are either simultaneously or sequentially announced. We show sequential allocation announcements result in higher (lower) aggregate emissions when announcements are strategic substitutes (complements). Whether allocation announcements are strategic substitutes or complements depends on the relationship between the follower's damage function and governments' abatement costs. When the marginal damage function is relatively steep (flat), allocation announcements are strategic substitutes (complements). For quadratic abatement costs and damages, sequential announcements provide a higher level of aggregate emissions. (author)
Applying the minimax principle to sequential mastery testing
Vos, Hendrik J.
2002-01-01
The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master, a nonmaster, or to continue sampling and administering another random item. The framework of minimax sequential decision theory (minimum
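The continue-or-classify structure of a sequential mastery test can be sketched with a simple Bayesian stopping rule. The response probabilities and thresholds below are illustrative placeholders, not the minimax-optimal cutoffs derived in the paper:

```python
def sequential_mastery(responses, p_master=0.8, p_non=0.5,
                       prior=0.5, upper=0.95, lower=0.05):
    """Sequential mastery decision sketch (Bayesian stopping rule).

    After each item, the posterior probability that the examinee is a
    master is updated; sampling stops once it crosses a threshold.
    All probabilities and thresholds are illustrative assumptions.
    """
    post = prior
    for correct in responses:
        like_m = p_master if correct else 1 - p_master   # master model
        like_n = p_non if correct else 1 - p_non         # nonmaster model
        post = post * like_m / (post * like_m + (1 - post) * like_n)
        if post >= upper:
            return "master"
        if post <= lower:
            return "nonmaster"
    return "continue"   # undecided: administer another random item
```

With these placeholder values, a run of correct answers classifies the examinee as a master after seven items, while a short mixed run leaves the decision open.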
Qi, Haikun; Huang, Feng; Zhou, Hongmei; Chen, Huijun
2017-03-01
k-t principal component analysis (k-t PCA) is a distinguished method for high spatiotemporal resolution dynamic MRI. To further improve the accuracy of k-t PCA, a combination with partial parallel imaging (PPI), k-t PCA/SENSE, has been tested. However, k-t PCA/SENSE suffers from long reconstruction time and limited improvement. This study aims to improve the combination of k-t PCA and PPI on both reconstruction speed and accuracy. A sequential combination scheme called k-t PCA GROWL (GRAPPA operator for wider readout line) was proposed. The GRAPPA operator was performed before k-t PCA to extend each readout line into a wider band, which improved the condition of the encoding matrix in the following k-t PCA reconstruction. k-t PCA GROWL was tested and compared with k-t PCA and k-t PCA/SENSE on cardiac imaging. k-t PCA GROWL consistently resulted in better image quality compared with k-t PCA/SENSE at high acceleration factors for both retrospectively and prospectively undersampled cardiac imaging, with a much lower computation cost. The improvement in image quality became greater with the increase of acceleration factor. By sequentially combining the GRAPPA operator and k-t PCA, the proposed k-t PCA GROWL method outperformed k-t PCA/SENSE in both reconstruction speed and accuracy, suggesting that k-t PCA GROWL is a better combination scheme than k-t PCA/SENSE. Magn Reson Med 77:1058-1067, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Amin, Ruhul; Islam, S K Hafizul; Biswas, G P; Khan, Muhammad Khurram; Kumar, Neeraj
2015-11-01
In the last few years, numerous remote user authentication and session key agreement schemes have been put forward for the Telecare Medical Information System, where the patient and medical server exchange medical information using the Internet. We have found that most of the schemes are not usable for practical applications due to known security weaknesses. It is also worth noting that an unrestricted number of patients log in to the single medical server across the globe. Therefore, the computation and maintenance overhead would be high and the server may fail to provide services. In this article, we have designed a medical system architecture and a standard mutual authentication scheme for a single medical server, where the patient can securely exchange medical data with the doctor(s) via a trusted central medical server over any insecure network. We then explored the security of the scheme and its resilience to attacks. Moreover, we formally validated the proposed scheme through simulation using the Automated Validation of Internet Security Protocols and Applications software, whose outcomes confirm that the scheme is protected against active and passive attacks. The performance comparison demonstrated that the proposed scheme has lower communication cost than the existing schemes in the literature. In addition, the computation cost of the proposed scheme is nearly equal to that of the existing schemes. The proposed scheme is not only efficient against different security attacks, but also provides efficient login, mutual authentication, session key agreement and verification, and password update phases, along with password recovery.
Classical and sequential limit analysis revisited
Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi
2018-04-01
Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity - in the absence of hardening and within a linearized geometrical framework -, sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity - although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.
Simultaneous versus sequential penetrating keratoplasty and cataract surgery.
Hayashi, Ken; Hayashi, Hideyuki
2006-10-01
To compare the surgical outcomes of simultaneous penetrating keratoplasty and cataract surgery with those of sequential surgery. Thirty-nine eyes of 39 patients scheduled for simultaneous keratoplasty and cataract surgery and 23 eyes of 23 patients scheduled for sequential keratoplasty and secondary phacoemulsification surgery were recruited. Refractive error, regular and irregular corneal astigmatism determined by Fourier analysis, and endothelial cell loss were studied at 1 week and 3, 6, and 12 months after combined surgery in the simultaneous surgery group or after subsequent phacoemulsification surgery in the sequential surgery group. At 3 and more months after surgery, mean refractive error was significantly greater in the simultaneous surgery group than in the sequential surgery group, although no difference was seen at 1 week. The refractive error at 12 months was within 2 D of the target in 15 eyes (39%) in the simultaneous surgery group and in 16 eyes (70%) in the sequential surgery group; the incidence was significantly greater in the sequential group (P = 0.0344). Regular and irregular astigmatism did not differ significantly between the groups at 3 and more months after surgery, nor did the percentage of endothelial cell loss. Although corneal astigmatism and endothelial cell loss were not different, refractive error from the target refraction was greater after simultaneous keratoplasty and cataract surgery than after sequential surgery, indicating a better outcome with sequential than with simultaneous surgery.
Tank waste remediation system optimized processing strategy with an altered treatment scheme
International Nuclear Information System (INIS)
Slaathaug, E.J.
1996-03-01
This report provides an alternative strategy evolved from the current Hanford Site Tank Waste Remediation System (TWRS) programmatic baseline for accomplishing the treatment and disposal of the Hanford Site tank wastes. This optimized processing strategy with an altered treatment scheme performs the major elements of the TWRS Program, but modifies the deployment of selected treatment technologies to reduce the program cost. The present program for development of waste retrieval, pretreatment, and vitrification technologies continues, but the optimized processing strategy reuses a single facility to accomplish the separations/low-activity waste (LAW) vitrification and the high-level waste (HLW) vitrification processes sequentially, thereby eliminating the need for a separate HLW vitrification facility
DEFF Research Database (Denmark)
Hansen, Elo Harald
Determination of low or trace-level amounts of metals by electrothermal atomic absorption spectrometry (ETAAS) often requires the use of suitable preconcentration and/or separation procedures in order to attain the necessary sensitivity and selectivity. Such schemes are advantageously executed...... by superior performance and versatility. In fact, two approaches are conceivable: The analyte-loaded ion-exchange beads might either be transported directly into the graphite tube where they are pyrolized and the measurand is atomized and quantified; or the loaded beads can be eluted and the eluate forwarded...
Trial Sequential Methods for Meta-Analysis
Kulinskaya, Elena; Wood, John
2014-01-01
Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…
Sequentially pulsed traveling wave accelerator
Caporaso, George J [Livermore, CA; Nelson, Scott D [Patterson, CA; Poole, Brian R [Tracy, CA
2009-08-18
A sequentially pulsed traveling wave compact accelerator having two or more pulse forming lines each with a switch for producing a short acceleration pulse along a short length of a beam tube, and a trigger mechanism for sequentially triggering the switches so that a traveling axial electric field is produced along the beam tube in synchronism with an axially traversing pulsed beam of charged particles to serially impart energy to the particle beam.
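The synchronism requirement described above reduces to a simple timing rule: each pulse-forming-line switch must fire when the beam head reaches its cell. The cell spacing, beam speed, and constant-velocity assumption below are illustrative, not taken from the patent:

```python
def trigger_times(cell_positions_m, beam_speed_mps, t0=0.0):
    """Sequential switch firing schedule for a traveling-wave accelerator.

    For a beam entering the tube at time t0 with (assumed constant)
    speed v, the switch feeding the cell at axial position z must close
    at t0 + z / v so the short acceleration pulse coincides with the
    particles' arrival at that cell.
    """
    return [t0 + z / beam_speed_mps for z in cell_positions_m]

# Four cells spaced 0.2 m apart, beam at half the speed of light:
times = trigger_times([0.0, 0.2, 0.4, 0.6], 1.5e8)
```

The monotonically increasing schedule is what makes the axial electric field appear to travel along the beam tube in step with the particles.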
Collection of sequential imaging events for research in breast cancer screening
Patel, M. N.; Young, K.; Halling-Brown, M. D.
2016-03-01
Due to the huge amount of research involving medical images, there is a widely accepted need for comprehensive collections of medical images to be made available for research. This demand led to the design and implementation of a flexible image repository, which retrospectively collects images and data from multiple sites throughout the UK. The OPTIMAM Medical Image Database (OMI-DB) was created to provide a centralized, fully annotated dataset for research. The database contains both processed and unprocessed images, associated data, annotations and expert-determined ground truths. Collection has been ongoing for over three years, providing the opportunity to collect sequential imaging events. Extensive alterations to the identification, collection, processing and storage arms of the system have been undertaken to support the introduction of sequential events, including interval cancers. These updates to the collection systems allow the acquisition of many more images, but more importantly, allow one to build on the existing high-dimensional data stored in the OMI-DB. A research dataset of this scale, which includes original normal and subsequent malignant cases along with expert-derived and clinical annotations, is currently unique. These data provide a powerful resource for future research and have initiated new research projects, among which is the quantification of normal cases by applying a large number of quantitative imaging features, with a priori knowledge that these cases eventually develop a malignancy. This paper describes extensions to the OMI-DB collection systems and tools and discusses the prospective applications of having such a rich dataset for future research.
Huang, Xia; Li, Chunqiang; Xiao, Chuan; Sun, Wenqing; Qian, Wei
2017-03-01
The temporal focusing two-photon microscope (TFM) is developed to perform depth-resolved wide-field fluorescence imaging by capturing frames sequentially. However, due to strong, non-negligible noise and diffraction rings surrounding particles, further research is extremely difficult without a precise particle-localization technique. In this paper, we developed a fully automated scheme to locate particle positions with high noise tolerance. Our scheme includes the following procedures: noise reduction using a hybrid Kalman filter method, particle segmentation based on a multiscale kernel graph-cuts global and local segmentation algorithm, and a kinematic-estimation-based particle tracking method. Both isolated and partially overlapped particles can be accurately identified with removal of unrelated pixels. Based on our quantitative analysis, 96.22% of isolated particles and 84.19% of partially overlapped particles were successfully detected.
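The Kalman noise-reduction stage of such a pipeline can be sketched, in highly simplified form, as per-pixel temporal smoothing across the sequentially captured frames. The static-scene model and variance values are illustrative assumptions, not the paper's hybrid filter:

```python
def kalman_denoise(signal, q=0.01, r=0.5):
    """Per-pixel temporal Kalman smoothing over sequential frames.

    A simplified stand-in for a hybrid Kalman noise-reduction stage:
    one pixel's intensity is tracked across frames with a static-scene
    model. Process/measurement variances q and r are illustrative.
    """
    x, p = signal[0], 1.0
    out = [x]
    for z in signal[1:]:
        p += q                 # predict (intensity assumed constant)
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # correct with the new frame's value
        p *= (1 - k)
        out.append(x)
    return out

# A constant intensity of 5.0 corrupted by alternating +/-0.4 noise:
noisy = [5.0 + (0.4 if i % 2 == 0 else -0.4) for i in range(20)]
out = kalman_denoise(noisy)
```

After a few frames the filtered trace hugs the true intensity far more tightly than the raw measurements, which is what makes the subsequent segmentation step tractable.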
On Converting Secret Sharing Scheme to Visual Secret Sharing Scheme
Directory of Open Access Journals (Sweden)
Wang Daoshun
2010-01-01
Traditional Secret Sharing (SS) schemes reconstruct the secret exactly the same as the original but involve complex computation. Visual Secret Sharing (VSS) schemes decode the secret without computation, but each share is m times as big as the original and the quality of the reconstructed secret image is reduced. Probabilistic visual secret sharing (Prob. VSS) schemes for a binary image use only one subpixel to share the secret image; however, the probability of white pixels in a white area is higher than that in a black area in the reconstructed secret image. SS schemes, VSS schemes, and Prob. VSS schemes have various construction methods and advantages. This paper first presents an approach to convert (transform) a -SS scheme to a -VSS scheme for greyscale images. The generation of the shadow images (shares) is based on the Boolean XOR operation. The secret image can be reconstructed directly by performing the Boolean OR operation, as in most conventional VSS schemes. Its pixel expansion is significantly smaller than that of VSS schemes. The quality of the reconstructed images, measured by average contrast, is the same as for VSS schemes. Then a novel matrix-concatenation approach is used to extend the greyscale -SS scheme to the more general case of a greyscale -VSS scheme.
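The Boolean-XOR share generation mentioned above can be illustrated with the simplest possible case, an (n, n) XOR sharing of a byte string. This sketch shows only the XOR mechanism, not the paper's full greyscale (k, n) matrix-concatenation construction:

```python
import secrets
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_shares(secret, n):
    """(n, n) XOR secret sharing: n - 1 random pads plus one dependent
    share. XOR-ing all n shares recovers the secret exactly, while any
    fewer than n shares are statistically independent of it.
    """
    pads = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    return pads + [reduce(xor_bytes, pads, secret)]

def reconstruct(shares):
    # Reconstruction is exact, unlike the probabilistic OR-based decoding
    # of conventional VSS schemes.
    return reduce(xor_bytes, shares)

shares = make_shares(b"secret image row", 4)
```

Note that each share here is the same size as the secret, which is the "no pixel expansion" property that makes XOR-based generation attractive compared with classical VSS shares that are m times larger.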
Directory of Open Access Journals (Sweden)
S K Hafizul Islam
Over the past few years, secure and privacy-preserving user authentication schemes have become an integral part of the applications of healthcare systems. Recently, Wen designed an improved user authentication system over Lee et al.'s scheme for the integrated electronic patient record (EPR) information system, which has been analyzed in this study. We have found that Wen's scheme still has the following inefficiencies: (1) the correctness of the identity and password is not verified during the login and password change phases; (2) it is vulnerable to impersonation attack and privileged-insider attack; (3) it is designed without the revocation of lost/stolen smart cards; (4) the explicit key confirmation and no key control properties are absent; and (5) the user cannot update his/her password without the help of the server and a secure channel. We therefore propose an enhanced two-factor user authentication system based on the intractability assumption of the quadratic residue problem (QRP) in the multiplicative group. Our scheme offers more security features and functionality than other schemes found in the literature.
Directory of Open Access Journals (Sweden)
Yoo-Geun Ham
2016-01-01
This study introduces a modified version of the incremental analysis updates (IAU), called the nonstationary IAU (NIAU) method, to improve the assimilation accuracy of the IAU while keeping the continuity of the analysis. Similar to the IAU, the NIAU is designed to add analysis increments at every model time step to improve continuity in intermittent data assimilation. However, unlike the IAU, the NIAU procedure uses time-evolved forcing, computed with the forward operator, as corrections to the model. The solution of the NIAU is superior, in terms of the accuracy of the analysis field, to that of the forward IAU, in which the analysis is performed at the beginning of the time window for adding the IAU forcing. This is because, in linear systems, the NIAU solution equals that of an intermittent data assimilation method at the end of the assimilation interval. To give the NIAU a filtering property, the forward operator used to propagate the increment is reconstructed with only the dominant singular vectors. An illustration of these advantages of the NIAU is given using the simple 40-variable Lorenz model.
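The abstract's central claim for the linear case, that the NIAU end-of-window state equals that of an intermittent analysis, can be checked on a scalar toy model. The model operator M, increment, and window length below are arbitrary illustrative choices, not values from the study:

```python
def forward_iau(x0, M, dx, steps):
    """Forward IAU on a linear scalar model x <- M*x: the analysis
    increment dx, computed at the start of the window, is added in
    equal fractions at every time step."""
    x = x0
    for _ in range(steps):
        x = M * x + dx / steps
    return x

def niau(x0, M, dx, steps):
    """NIAU sketch: each fractional increment is first time-evolved
    with the forward operator before being added, so that in the
    linear case the end-of-window state matches an intermittent
    analysis (x0 + dx propagated over the whole window) exactly."""
    x = x0
    for k in range(steps):
        x = M * x + (M ** (k + 1)) * dx / steps
    return x

M, x0, dx, steps = 0.9, 1.0, 0.5, 4
intermittent = M**steps * (x0 + dx)   # analysis at window start, then propagated
```

The forward IAU, by contrast, under-propagates the early fractions of the increment and ends the window at a visibly different state, which is the accuracy gap the NIAU closes.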
A Single Multilocus Sequence Typing (MLST) Scheme for Seven Pathogenic Leptospira Species
Amornchai, Premjit; Wuthiekanun, Vanaporn; Bailey, Mark S.; Holden, Matthew T. G.; Zhang, Cuicai; Jiang, Xiugao; Koizumi, Nobuo; Taylor, Kyle; Galloway, Renee; Hoffmaster, Alex R.; Craig, Scott; Smythe, Lee D.; Hartskeerl, Rudy A.; Day, Nicholas P.; Chantratita, Narisara; Feil, Edward J.; Aanensen, David M.; Spratt, Brian G.; Peacock, Sharon J.
2013-01-01
Background: The available Leptospira multilocus sequence typing (MLST) scheme supported by an MLST website is limited to L. interrogans and L. kirschneri. Our aim was to broaden the utility of this scheme to incorporate a total of seven pathogenic species. Methodology and Findings: We modified the existing scheme by replacing one of the seven MLST loci (fadD was changed to caiB), as the former gene did not appear to be present in some pathogenic species. Comparison of the original and modified schemes using data for L. interrogans and L. kirschneri demonstrated that the discriminatory power of the two schemes was not significantly different. The modified scheme was used to further characterize 325 isolates (L. alexanderi [n = 5], L. borgpetersenii [n = 34], L. interrogans [n = 222], L. kirschneri [n = 29], L. noguchii [n = 9], L. santarosai [n = 10], and L. weilii [n = 16]). Phylogenetic analysis using concatenated sequences of the 7 loci demonstrated that each species corresponded to a discrete clade, and that no strains were misclassified at the species level. Comparison between genotype and serovar was possible for 254 isolates. Of the 31 sequence types (STs) represented by at least two isolates, 18 STs included isolates assigned to two or three different serovars. Conversely, 14 serovars were identified that contained between 2 and 10 different STs. New observations were made on the global phylogeography of Leptospira spp., and the utility of MLST in making associations between human disease and specific maintenance hosts was demonstrated. Conclusion: The new MLST scheme, supported by an updated MLST website, allows the characterization and species assignment of isolates of the seven major pathogenic species associated with leptospirosis. PMID:23359622
An Efficient System Based On Closed Sequential Patterns for Web Recommendations
Utpala Niranjan; R.B.V. Subramanyam; V-Khana
2010-01-01
Sequential pattern mining, since its introduction, has received considerable attention among researchers, with broad applications. Sequential pattern algorithms generally face problems when mining long sequential patterns or when using a very low support threshold. One possible solution to such problems is mining closed sequential patterns, which are a condensed representation of sequential patterns. Recently, several researchers have utilized sequential pattern discovery for d...
DEFF Research Database (Denmark)
Sueviriyapan, Natthapong; Suriyapraphadilok, Uthaiporn; Siemanond, Kitipat
2015-01-01
a generic model-based synthesis and design framework for retrofit wastewater treatment networks (WWTN) of an existing industrial process. The developed approach is suitable for grassroots and retrofit systems and adaptable to a wide range of wastewater treatment problems. A sequential solution procedure...... is employed to solve a network superstructure-based optimization problem formulated as Mixed Integer Linear and/or Non-Linear Programming (MILP/MINLP). Data from a petroleum refinery effluent treatment plant together with special design constraints are employed to formulate different design schemes based...... for the future development of the existing wastewater treatment process....
International Nuclear Information System (INIS)
Papassiopi, Nymphodora; Kontoyianni, Athina; Vaxevanidou, Katerina; Xenidis, Anthimos
2009-01-01
The iron-reducing microorganism Desulfuromonas palmitatis was evaluated as a potential biostabilization agent for the remediation of chromate-contaminated soils. D. palmitatis was used for the treatment of soil samples artificially contaminated with Cr(VI) at two levels, i.e. 200 and 500 mg kg⁻¹. The efficiency of the treatment was evaluated by applying several standard extraction techniques to the soil samples before and after treatment, such as the EN12457 standard leaching test, the US EPA 3060A alkaline digestion method and the BCR sequential extraction procedure. The water-soluble chromium, as evaluated with the EN leaching test, was found to decrease after the biostabilization treatment from 13 to less than 0.5 mg kg⁻¹ and from 120 to 5.6 mg kg⁻¹ for the soil samples contaminated with 200 and 500 mg Cr(VI) per kg soil, respectively. The BCR sequential extraction scheme, although not providing accurate estimates of the initial chromium speciation in contaminated soils, proved to be a useful tool for monitoring the relative changes in element partitioning as a consequence of the stabilization treatment. After bioreduction, the percentage of chromium retained in the two least soluble BCR fractions, i.e. the 'oxidizable' and 'residual' fractions, increased from 54 and 73% to more than 96% in both soils.
Model Predictive Engine Air-Ratio Control Using Online Sequential Relevance Vector Machine
Directory of Open Access Journals (Sweden)
Hang-cheong Wong
2012-01-01
Engine power, brake-specific fuel consumption, and emissions relate closely to the air ratio (i.e., lambda) among all the engine variables. An accurate and adaptive model for lambda prediction is essential to effective long-term lambda control. This paper utilizes an emerging technique, the relevance vector machine (RVM), to build a reliable time-dependent lambda model which can be continually updated whenever a sample is added to, or removed from, the estimated lambda model. The paper also presents a new model predictive control (MPC) algorithm for air-ratio regulation based on RVM. This study shows that the accuracy, training time, and updating time of the RVM model are superior to the latest modelling methods, such as the diagonal recurrent neural network (DRNN) and the decremental least-squares support vector machine (DLSSVM). Moreover, the control algorithm has been implemented on a real car for testing. Experimental results reveal that the control performance of the proposed relevance vector machine model predictive controller (RVMMPC) is also superior to DRNNMPC, support vector machine-based MPC, and the conventional proportional-integral (PI) controller in production cars. Therefore, the proposed RVMMPC is a promising scheme to replace the conventional PI controller for engine air-ratio control.
A splitting integration scheme for the SPH simulation of concentrated particle suspensions
Bian, Xin; Ellero, Marco
2014-01-01
Simulating nearly contacting solid particles in suspension is a challenging task due to the diverging behavior of short-range lubrication forces, which pose a serious time-step limitation for explicit integration schemes. This general difficulty limits severely the total duration of simulations of concentrated suspensions. Inspired by the ideas developed in [S. Litvinov, M. Ellero, X.Y. Hu, N.A. Adams, J. Comput. Phys. 229 (2010) 5457-5464] for the simulation of highly dissipative fluids, we propose in this work a splitting integration scheme for the direct simulation of solid particles suspended in a Newtonian liquid. The scheme separates the contributions of different forces acting on the solid particles. In particular, intermediate- and long-range multi-body hydrodynamic forces, which are computed from the discretization of the Navier-Stokes equations using the smoothed particle hydrodynamics (SPH) method, are taken into account using an explicit integration; for short-range lubrication forces, velocities of pairwise interacting solid particles are updated implicitly by sweeping over all the neighboring pairs iteratively, until convergence in the solution is obtained. By using the splitting integration, simulations can be run stably and efficiently up to very large solid particle concentrations. Moreover, the proposed scheme is not limited to the SPH method presented here, but can be easily applied to other simulation techniques employed for particulate suspensions.
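The core of the scheme, an explicit update for the smooth long-range forces followed by implicit pairwise sweeps over the stiff lubrication-like terms, can be sketched in toy form. This is a minimal sketch, not the authors' SPH implementation: the force model, the damping constant `gamma`, and the Gauss-Seidel convergence loop are illustrative assumptions.

```python
import numpy as np

def split_step(v, pairs, f_long, gamma, dt, n_sweeps=500, tol=1e-12):
    """One splitting step for particle velocities v, shape (n, dim).

    1) Explicit update with the smooth long-range forces f_long (in the
       paper these come from the SPH discretization of Navier-Stokes).
    2) Implicit Gauss-Seidel sweeps over the stiff pairwise damping terms,
       iterated until convergence; this stays stable for gamma * dt values
       that would make a plain explicit update blow up.
    """
    s = v + dt * f_long(v)          # explicit part
    x = s.copy()
    a = gamma * dt
    for _ in range(n_sweeps):
        x_old = x.copy()
        for i, j in pairs:
            # implicit solve of the pair equations, one pair at a time
            x[i] = (s[i] + a * x[j]) / (1.0 + a)
            x[j] = (s[j] + a * x[i]) / (1.0 + a)
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Two approaching particles with stiff pairwise damping (gamma * dt = 5;
# an explicit update would amplify the relative velocity by |1 - 2*5| = 9).
v0 = np.array([[1.0], [-1.0]])
v1 = split_step(v0, pairs=[(0, 1)], f_long=lambda v: 0.0 * v, gamma=5.0, dt=1.0)
```

At convergence the pairwise solve conserves momentum exactly while damping the relative velocity by the implicit factor 1/(1 + 2*gamma*dt), mirroring the stability argument made in the abstract.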
International Nuclear Information System (INIS)
Wang Jianhua; Hansen, Elo Harald; Miro, Manuel
2003-01-01
This communication presents an overview of the state-of-the-art of the exploitation of sequential injection (SI)-bead injection (BI)-lab-on-valve (LOV) schemes for automatic on-line sample pre-treatments interfaced with ETAAS and ICPMS detection as conducted in the authors' group. The discussion is focused on the applications of SI-BI-LOV protocols for on-line microcolumn based solid phase extraction of ultra-trace levels of heavy metals, employing the so-called renewable surface separation and preconcentration manipulatory scheme. Two types of sorbents have been employed as packing material, that is, the hydrophilic SP Sephadex C-25 cation exchange and iminodiacetate based Muromac A-1 chelating resins, and the hydrophobic poly(tetrafluoroethylene) (PTFE) and poly(styrene-divinylbenzene) copolymer alkylated with octadecyl groups (C18-PS/DVB). Using ETAAS as the detection device, the easy-to-handle hydrophilic renewable reactors offer improved R.S.D.s and LODs compared to those operated in the conventional, permanent mode, in addition to the elimination of flow resistance. The hydrophobic columns fall into two categories: the renewable one packed with C18-PS/DVB beads entails R.S.D.s and LODs analogous to the conventional approach, while those with PTFE beads result in slightly inferior R.S.D.s and LODs by the same comparison, yet offer a wider dynamic range than an external permanent column. Moreover, the hydrophilic materials result in much higher enrichment of the analyte than the hydrophobic ones, although PTFE is the packing material that exhibits the best retention efficiency.
Directory of Open Access Journals (Sweden)
Laurent Dewasme
2017-02-01
Hybridoma cells are commonly grown for the production of monoclonal antibodies (MAb). For monitoring and control purposes of the bioreactors, dynamic models of the cultures are required. However, these models are difficult to infer from the usually limited amount of available experimental data and do not focus on target protein production optimization. This paper explores an experimental case study where hybridoma cells are grown in a sequential batch reactor. The simplest macroscopic reaction scheme translating the data is first derived using a maximum likelihood principal component analysis. Subsequently, nonlinear least-squares estimation is used to determine the kinetic laws. The resulting dynamic model reproduces the experimental data quite satisfactorily, as evidenced in direct and cross-validation tests. Furthermore, model predictions can also be used to predict optimal medium renewal time and composition.
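The identification step can be sketched as a small nonlinear least-squares problem. Monod-type kinetics and all numbers below are illustrative assumptions, not the paper's actual reaction scheme or kinetic laws; a linearized fit supplies the starting guess, which Gauss-Newton then refines on the untransformed residuals.

```python
import numpy as np

# Synthetic "measured" specific rates at several substrate concentrations,
# generated from Monod kinetics with mu_max = 0.5 and K_s = 2.0
# (illustrative values only).
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
mu_obs = 0.5 * S / (2.0 + S)

# Step 1: linearized (Lineweaver-Burk) fit, 1/mu = 1/mu_max + (K_s/mu_max)/S,
# gives a starting guess by ordinary linear least squares.
X = np.column_stack([np.ones_like(S), 1.0 / S])
beta, *_ = np.linalg.lstsq(X, 1.0 / mu_obs, rcond=None)
theta = np.array([1.0 / beta[0], beta[1] / beta[0]])  # (mu_max, K_s)

# Step 2: Gauss-Newton nonlinear least squares refines the estimate.
for _ in range(20):
    mu_max, K_s = theta
    pred = mu_max * S / (K_s + S)
    J = np.column_stack([S / (K_s + S),                  # d pred / d mu_max
                         -mu_max * S / (K_s + S) ** 2])  # d pred / d K_s
    step, *_ = np.linalg.lstsq(J, mu_obs - pred, rcond=None)
    theta = theta + step

mu_max_hat, K_s_hat = theta
```

On noise-free data both parameters are recovered exactly; with real culture data the Gauss-Newton refinement is what does the work.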
A sequential threshold cure model for genetic analysis of time-to-event data
DEFF Research Database (Denmark)
Ødegård, J; Madsen, Per; Labouriau, Rodrigo S.
2011-01-01
In analysis of time-to-event data, classical survival models ignore the presence of potential nonsusceptible (cured) individuals, which, if present, will invalidate the inference procedures. Existence of nonsusceptible individuals is particularly relevant under challenge testing with specific pathogens, which is a common procedure in aquaculture breeding schemes. A cure model is a survival model accounting for a fraction of nonsusceptible individuals in the population. This study proposes a mixed cure model for time-to-event data, measured as sequential binary records. In a simulation study, survival data were generated through 2 underlying traits: susceptibility and endurance (risk of dying per time-unit), associated with 2 sets of underlying liabilities. Despite considerable phenotypic confounding, the proposed model was largely able to distinguish the 2 traits. Furthermore, if selection …
Discrimination between sequential and simultaneous virtual channels with electrical hearing.
Landsberger, David; Galvin, John J
2011-09-01
In cochlear implants (CIs), simultaneous or sequential stimulation of adjacent electrodes can produce intermediate pitch percepts between those of the component electrodes. However, it is unclear whether simultaneous and sequential virtual channels (VCs) can be discriminated. In this study, CI users were asked to discriminate simultaneous and sequential VCs; discrimination was measured for monopolar (MP) and bipolar + 1 (BP + 1) stimulation, i.e., relatively broad and focused stimulation modes. For sequential VCs, the interpulse interval (IPI) varied between 0.0 and 1.8 ms. All stimuli were presented at comfortably loud, loudness-balanced levels at a 250 pulses per second per electrode (ppse) stimulation rate. On average, CI subjects were able to reliably discriminate between sequential and simultaneous VCs. While there was no significant effect of IPI or stimulation mode on VC discrimination, some subjects exhibited better VC discrimination with BP + 1 stimulation. Subjects' discrimination between sequential and simultaneous VCs was correlated with electrode discrimination, suggesting that spatial selectivity may influence perception of sequential VCs. To maintain equal loudness, sequential VC amplitudes were nearly double those of simultaneous VCs, presumably resulting in a broader spread of excitation. These results suggest that perceptual differences between simultaneous and sequential VCs might be explained by differences in the spread of excitation. © 2011 Acoustical Society of America.
Islam, SK Hafizul; Khan, Muhammad Khurram; Li, Xiong
2015-01-01
Over the past few years, secure and privacy-preserving user authentication schemes have become an integral part of applications of healthcare systems. Recently, Wen designed an improved user authentication system over the Lee et al. scheme for integrated electronic patient record (EPR) information systems, which has been analyzed in this study. We have found that Wen's scheme still has the following inefficiencies: (1) the correctness of identity and password is not verified during the login and password change phases; (2) it is vulnerable to impersonation attack and privileged-insider attack; (3) it is designed without revocation of lost/stolen smart cards; (4) the explicit key confirmation and no key control properties are absent; and (5) the user cannot update his/her password without the help of the server and a secure channel. We therefore propose an enhanced two-factor user authentication system based on the intractability of the quadratic residue problem (QRP) in the multiplicative group. Our scheme offers more security features and functionality than other schemes found in the literature. PMID:26263401
Li, Xiong; Niu, Jianwei; Karuppiah, Marimuthu; Kumari, Saru; Wu, Fan
2016-12-01
Benefiting from the development of network and communication technologies, E-health care systems and telemedicine have developed rapidly. By using E-health care systems, patients can receive remote medical services provided by the medical server. Medical data are important private information for patients, so it is an important issue to ensure the security of medical data transmitted over public networks. An authentication scheme can thwart unauthorized users from accessing services via insecure network environments, so user authentication with privacy protection is an important mechanism for the security of E-health care systems. Recently, a user authentication scheme for E-health care systems based on three factors (password, biometric and smart card) was proposed by Amin et al., who claimed that their scheme can withstand most common attacks. Unfortunately, we find that their scheme cannot achieve the untraceability feature for the patient. Besides, their scheme lacks a password check mechanism, so it is inefficient at detecting an unauthorized login caused by mistakenly entering a wrong password. For the same reason, their scheme is vulnerable to a Denial of Service (DoS) attack if the patient mistakenly updates the password using a wrong password. In order to improve the security level of authentication schemes for E-health care applications, a robust user authentication scheme with privacy protection is proposed for E-health care systems, and its security is analysed. Security and performance analyses show that our scheme is more powerful and secure for E-health care systems when compared with other related schemes.
Sequential versus simultaneous market delineation
DEFF Research Database (Denmark)
Haldrup, Niels; Møllgaard, Peter; Kastberg Nielsen, Claus
2005-01-01
Delineation of the relevant market forms a pivotal part of most antitrust cases. The standard approach is sequential: first the product market is delineated, then the geographical market is defined. Demand and supply substitution in both the product dimension and the geographical dimension … and geographical markets. Using a unique data set for prices of Norwegian and Scottish salmon, we propose a methodology for simultaneous market delineation and we demonstrate that, compared to a sequential approach, conclusions will be reversed. JEL: C3, K21, L41, Q22. Keywords: relevant market, econometric delineation …
Amezcua, Javier
This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. Therefore, we present a suitable integration scheme that handles the stiffening of the differential equations involved and does not add further computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in the ensemble space instead of in the state space. Advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation using deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models. Namely, an M-member ensemble detaches into an outlier and a cluster of M-1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reverted by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble for different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model. The RAW filter is an improvement to the widely popular Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding any distortion …
Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No
2015-11-01
One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation. Copyright © 2015 Elsevier Ltd. All rights reserved.
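The classify-then-update loop described above can be sketched as follows. This is a hedged simplification: per-class least-squares residuals stand in for the paper's sparse l1 coding over the full dictionary, and the residual threshold used for the unsupervised dictionary update is an assumed parameter.

```python
import numpy as np

def src_classify(D, labels, x):
    """Simplified SRC: score each class by the least-squares residual of x
    against that class's dictionary columns (a nearest-subspace rule that
    keeps this sketch dependency-free; the paper uses an l1 sparse code)."""
    best_c, best_r = None, np.inf
    for c in np.unique(labels):
        Dc = D[:, labels == c]
        coef, *_ = np.linalg.lstsq(Dc, x, rcond=None)
        r = np.linalg.norm(x - Dc @ coef)
        if r < best_r:
            best_c, best_r = c, r
    return best_c, best_r

def unsupervised_update(D, labels, x, r_thresh=0.3):
    """Unsupervised dictionary update: append x under its predicted label
    when the residual is low enough (threshold is an assumed parameter)."""
    c, r = src_classify(D, labels, x)
    if r < r_thresh:
        D = np.column_stack([D, x])
        labels = np.append(labels, c)
    return D, labels

# Toy dictionary in R^3: class 0 lives on e1, class 1 on e2.
D = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
labels = np.array([0, 1])
x = np.array([0.95, 0.05, 0.02])   # a slightly noisy class-0 sample
D2, labels2 = unsupervised_update(D, labels, x)
```

No classifier re-training is needed: a confidently classified test sample simply becomes a new dictionary column, which is the property the abstract emphasizes.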
Tan, Maxine; Pu, Jiantao; Zheng, Bin
2014-08-01
Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes for medical images. The objective of this study is to investigate a new approach to significantly improve the efficacy of image feature selection and classifier optimization in developing a CAD scheme for mammographic masses. An image dataset including 1600 regions of interest (ROIs), in which 800 are positive (depicting malignant masses) and 800 are negative (depicting CAD-generated false positive regions), was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories including shape, texture, contrast, isodensity, spiculation, local topological features, as well as features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. In order to select optimal features from this initial feature pool and build a highly performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) Phased Searching with NEAT in a Time-Scaled Framework, (2) a sequential floating forward selection (SFFS) method, (3) a genetic algorithm (GA), and (4) a sequential forward selection (SFS) method. Performances of the four approaches were assessed using a tenfold cross-validation method. Among these four methods, SFFS has the highest efficacy: it takes 3%-5% of the computational time of the GA approach and yields the highest performance level, with an area under the receiver operating characteristic curve (AUC) of 0.864 ± 0.034. The results also demonstrated that, except when using GA, including the new texture features computed from the dilated mass segments improved the AUC results of the optimized ANNs.
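Of the four methods compared, plain sequential forward selection (SFS) is the simplest to sketch; SFFS additionally attempts conditional removals after each addition, a backtracking step omitted here. The scoring function below is an assumed stand-in for the ANN-based cross-validated evaluation used in the study.

```python
import numpy as np

def sequential_forward_selection(n_features, score, k):
    """Greedy SFS: start from the empty set and repeatedly add the single
    feature that maximizes score(subset). (SFFS would also try removing
    previously chosen features after each addition.)"""
    selected = []
    for _ in range(k):
        remaining = [f for f in range(n_features) if f not in selected]
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected

# Toy additive score: each feature contributes a fixed weight, so SFS
# should pick the two highest-weight features, highest first.
w = np.array([0.1, 0.9, 0.5])
score = lambda subset: float(np.sum(w[subset]))
selected = sequential_forward_selection(3, score, k=2)
```

With a real classifier in place of `score`, each candidate subset would be evaluated by cross-validated AUC, which is what makes the greedy wrappers so much cheaper than a GA over the full feature pool.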
Delay-Aware Program Codes Dissemination Scheme in Internet of Everything
Directory of Open Access Journals (Sweden)
Yixuan Xu
2016-01-01
Due to recent advancements in big data, connection technologies, and smart devices, our environment is transforming into an "Internet of Everything" (IoE) environment. These smart devices can obtain new or special functions by reprogramming: upgrading their software through receiving new versions of program codes. However, bulk code dissemination suffers from large delay, energy consumption, and number of retransmissions because of the unreliability of wireless links. In this paper, a delay-aware program dissemination (DAPD) scheme is proposed to disseminate program codes in a fast, reliable, and energy-efficient manner. We observe that although total energy is limited in a wireless sensor network, there exists residual energy in nodes deployed far from the base station. Therefore, the DAPD scheme improves the performance of bulk code dissemination through the following two aspects. (1) Since a high transmitting power can significantly improve the quality of wireless links, the transmitting power of sensors with more residual energy is increased to improve link quality. (2) Since the performance of correlated dissemination tends to degrade in a highly dynamic environment, link correlation is autonomously updated in DAPD during code dissemination to maintain the improvements brought by correlated dissemination. Theoretical analysis and experimental results show that, compared with previous work, the DAPD scheme improves dissemination performance in terms of completion time, transmission cost, and the efficiency of energy utilization.
Sequential logic analysis and synthesis
Cavanagh, Joseph
2007-01-01
Until now, there was no single resource for actual digital system design. Using both basic and advanced concepts, Sequential Logic: Analysis and Synthesis offers a thorough exposition of the analysis and synthesis of both synchronous and asynchronous sequential machines. With 25 years of experience in designing computing equipment, the author stresses the practical design of state machines. He clearly delineates each step of the structured and rigorous design principles that can be applied to practical applications. The book begins by reviewing the analysis of combinatorial logic and Boolean algebra …
Directory of Open Access Journals (Sweden)
Jin-Woo Jung
2013-08-01
One emerging biometric identification method is the use of the human footprint. However, previous research faced some limitations resulting from the spatial resolution of sensors. One possible way to overcome this limitation is to use additional information, such as dynamic walking information in sequential walking footprints. In this study, we suggest a new person recognition scheme based on both the overlapped foot shape and the COP (Center of Pressure) trajectory during one-step walking. We show the usefulness of the suggested method, obtaining a 98.6% recognition rate in our experiment with eleven people. In addition, we show an application of the suggested method: an automatic door-opening system for intelligent residential spaces.
Structural Consistency, Consistency, and Sequential Rationality.
Kreps, David M; Ramey, Garey
1987-01-01
Sequential equilibria comprise consistent beliefs and a sequentially rational strategy profile. Consistent beliefs are limits of Bayes rational beliefs for sequences of strategies that approach the equilibrium strategy. Beliefs are structurally consistent if they are rationalized by some single conjecture concerning opponents' strategies. Consistent beliefs are not necessarily structurally consistent, notwithstanding a claim by Kreps and Robert Wilson (1982). Moreover, the spirit of structural consistency …
AdOn HDP-HMM: An Adaptive Online Model for Segmentation and Classification of Sequential Data.
Bargi, Ava; Xu, Richard Yi Da; Piccardi, Massimo
2017-09-21
Recent years have witnessed an increasing need for the automated classification of sequential data, such as activities of daily living, social media interactions, financial series, and others. With the continuous flow of new data, it is critical to classify the observations on-the-fly and without being limited by a predetermined number of classes. In addition, a model should be able to update its parameters in response to a possible evolution in the distributions of the classes. This compelling problem, however, does not seem to have been adequately addressed in the literature, since most studies focus on offline classification over predefined class sets. In this paper, we present a principled solution for this problem based on an adaptive online system leveraging Markov switching models and hierarchical Dirichlet process priors. This adaptive online approach is capable of classifying the sequential data over an unlimited number of classes while meeting the memory and delay constraints typical of streaming contexts. In this paper, we introduce an adaptive ''learning rate'' that is responsible for balancing the extent to which the model retains its previous parameters or adapts to new observations. Experimental results on stationary and evolving synthetic data and two video data sets, TUM Assistive Kitchen and collated Weizmann, show a remarkable performance in terms of segmentation and classification, particularly for sequences from evolutionary distributions and/or those containing previously unseen classes.
Ilik, Semih C.; Arsoy, Aysen B.
2017-07-01
The integration of distributed generation (DG), such as renewable energy sources, into the electrical network has become more prevalent in recent years. Grid connection of DG affects load flow directions, the voltage profile, short-circuit power and, especially, protection selectivity. Applying a traditional overcurrent protection scheme is inadequate when system reliability and sustainability are considered. If a fault happens in a DG-connected network, the short-circuit contribution of the DG creates an additional branch element feeding the fault current, which compels the use of a directional overcurrent (OC) protection scheme. Protection coordination might be lost under changing working conditions when DG sources are connected. Directional overcurrent relay parameters are determined for downstream and upstream relays for different combinations of DG connected singly or in combination on a radial test system. With the help of the proposed flow chart, relay parameters are updated and coordination between relays is sustained for different working conditions in the DigSILENT PowerFactory program.
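The coordination check behind such a scheme can be illustrated with the IEC 60255 standard-inverse characteristic: for a given fault current, the downstream relay must trip before the upstream backup with an adequate margin. The pickup currents and time-multiplier settings below are illustrative assumptions, not values from the paper.

```python
def iec_si_time(i_fault, i_pickup, tms):
    """IEC 60255 standard-inverse trip time in seconds:
    t = tms * 0.14 / ((I / I_pickup)**0.02 - 1)."""
    return tms * 0.14 / ((i_fault / i_pickup) ** 0.02 - 1.0)

# A 1000 A fault seen by both relays: the downstream relay (pickup 200 A,
# TMS 0.1) must clear the fault before the upstream backup (pickup 400 A,
# TMS 0.2), with a coordination margin of roughly 0.3 s or more.
t_downstream = iec_si_time(1000.0, 200.0, tms=0.1)
t_upstream = iec_si_time(1000.0, 400.0, tms=0.2)
margin = t_upstream - t_downstream
```

When DG changes the fault-current contributions, recomputing these times per relay and direction is exactly the kind of parameter update the proposed flow chart automates.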
Sequential determination of actinides in a variety of matrices
International Nuclear Information System (INIS)
Olsen, S.C.
2002-01-01
A large number of analytical procedures for the actinides have been published, each catering for a specific need. Due to the bioassay programme in our laboratory, a need arose for a method to determine natural (Th and U) and anthropogenic actinides (Np, Pu and Am/Cm) together in a variety of samples. The method would have to be suitable for routine application: simple, inexpensive, rapid and robust. In some cases, the amount of material available is not sufficient for the determination of separate groups of actinides, and a sequential separation and measurement of the analytes would therefore be required. The types of matrices vary from aqueous samples to radiological surveillance (urine and faeces) to environmental studies (soil, sediment and fish), but the separation procedure should be able to service all of these. The working range of the method would have to cater for lower levels of the transuranium actinides in sample types containing higher levels of the natural actinides (U and Th). The first analytical problem to be discussed is how to get the different sample types into the same loading solution required by a single separation approach. This entails sample dissolution or decomposition in some cases, and pre-concentration or pre-separation in others. A separation scheme is presented for the clean separation of all the actinides in a form suitable for alpha spectrometry. The development of a single-column separation of the analytes of interest is examined, as well as observations made during the development of the separation scheme, such as concentration effects. Results for test samples and certified reference materials are presented. (author)
Generalized infimum and sequential product of quantum effects
International Nuclear Information System (INIS)
Li Yuan; Sun Xiuhong; Chen Zhengli
2007-01-01
The quantum effects for a physical system can be described by the set E(H) of positive operators on a complex Hilbert space H that are bounded above by the identity operator I. For A, B ∈ E(H), the operation of sequential product A∘B = A^(1/2) B A^(1/2) was proposed as a model for sequential quantum measurements. A nice investigation of the properties of the sequential product was carried out in [Gudder, S. and Nagy, G., 'Sequential quantum measurements', J. Math. Phys. 42, 5212 (2001)]. In this note, we extend some results of that reference. In particular, a gap in the proof of Theorem 3.2 in that reference is overcome. In addition, some properties of the generalized infimum A ⊓ B are studied.
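The sequential product is easy to compute numerically, which also illustrates two of its basic properties: A∘B is again an effect, and the operation is noncommutative. A minimal sketch, in which the two matrices are arbitrary illustrative effects:

```python
import numpy as np

def psd_sqrt(A):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def seq_prod(A, B):
    """Sequential product A o B = A^(1/2) B A^(1/2)."""
    s = psd_sqrt(A)
    return s @ B @ s

# Two effects (positive operators bounded above by I) on a 2-d space.
A = np.diag([0.5, 0.25])
B = np.array([[0.5, 0.25],
              [0.25, 0.5]])
AB = seq_prod(A, B)
BA = seq_prod(B, A)
eigs = np.linalg.eigvalsh(AB)   # A o B is again an effect: 0 <= eigs <= 1
```

Note that A∘B and B∘A share the same trace but differ as operators whenever A and B do not commute, which is the asymmetry that makes the order of sequential measurements matter.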
Sequential analysis in neonatal research-systematic review.
Lava, Sebastiano A G; Elie, Valéry; Ha, Phuong Thi Viet; Jacqz-Aigrain, Evelyne
2018-05-01
As more new drugs are discovered, traditional designs come to their limits. Ten years after the adoption of the European Paediatric Regulation, we performed a systematic review of sequential trials involving newborns in the US National Library of Medicine and the Excerpta Medica database. Out of 326 identified scientific reports, 21 trials were included. They enrolled 2832 patients, of whom 2099 were analyzed; the median number of neonates included per trial was 48 (IQR 22-87), and median gestational age was 28.7 (IQR 27.9-30.9) weeks. Eighteen trials used sequential techniques to determine sample size, while 3 used continual reassessment methods for dose finding. In the 16 studies reporting sufficient data, the sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674) with respect to a traditional trial. When the number of neonates finally included in the analysis was considered, the difference became significant: 35 (57%) patients (IQR 10 to 136.5, p = 0.0033). Sequential trial designs have not been frequently used in neonatology. They might potentially reduce the number of patients in drug trials, although this is not always the case. What is known: • In evaluating rare diseases in fragile populations, traditional designs come to their limits. About 20% of pediatric trials are discontinued, mainly because of recruitment problems. What is new: • Sequential trials involving newborns were infrequently used and only a few (n = 21) are available for analysis. • The sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674).
Adaptive Reference Levels in a Level-Crossing Analog-to-Digital Converter
Directory of Open Access Journals (Sweden)
Andrew C. Singer
2008-11-01
Level-crossing analog-to-digital converters (LC ADCs) have been considered in the literature and have been shown to efficiently sample certain classes of signals. One important aspect of their implementation is the placement of reference levels in the converter. The levels need to be appropriately located within the input dynamic range in order to obtain samples efficiently. In this paper, we study optimization of the performance of such an LC ADC by providing several sequential algorithms that adaptively update the ADC reference levels. The accompanying performance analysis and simulation results show that, as the signal length grows, the performance of the sequential algorithms asymptotically approaches that of the best choice that could only have been made in hindsight within a family of possible schemes.
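The simplest member of such a family of schemes re-centers the reference levels around the most recent sample. A sketch follows; the fixed level spacing `delta` is an illustrative assumption, whereas the paper's sequential algorithms adapt level placement to the signal itself.

```python
import numpy as np

def lc_adc(signal, delta):
    """Level-crossing sampler whose two reference levels are re-centered at
    last_sample +/- delta after every captured sample. Returns (index, value)
    pairs; only crossings are stored, not uniform samples."""
    samples = [(0, float(signal[0]))]
    ref = float(signal[0])
    for n in range(1, len(signal)):
        if abs(float(signal[n]) - ref) >= delta:
            samples.append((n, float(signal[n])))
            ref = float(signal[n])
    return samples

# A slow ramp from 0 to 1 on a uniform grid of 11 points: with delta = 0.25
# the converter fires only at every third grid point, illustrating how the
# sample rate tracks signal activity rather than a fixed clock.
samples = lc_adc(np.linspace(0.0, 1.0, 11), delta=0.25)
```

An adaptive scheme of the kind studied in the paper would additionally tune where the levels sit within the dynamic range as it observes the signal, rather than keeping a fixed `delta`.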
Su, Cheng-Kuan; Tseng, Po-Jen; Chiu, Hsien-Ting; Del Vall, Andrea; Huang, Yu-Fen; Sun, Yuh-Chang
2017-03-01
Probing tumor extracellular metabolites is a vitally important issue in current cancer biology. In this study an analytical system was constructed for the in vivo monitoring of mouse tumor extracellular hydrogen peroxide (H2O2), lactate, and glucose by means of microdialysis (MD) sampling and fluorescence determination in conjunction with a smart sequential enzymatic derivatization scheme, involving a loading sequence of fluorogenic reagent/horseradish peroxidase, microdialysate, lactate oxidase, pyruvate, and glucose oxidase, for step-by-step determination of sampled H2O2, lactate, and glucose in mouse tumor microdialysate. After optimization of the overall experimental parameters, the system's detection limit reached as low as 0.002 mM for H2O2, 0.058 mM for lactate, and 0.055 mM for glucose, based on 3 μL of microdialysate, suggesting great potential for determining tumor extracellular concentrations of lactate and glucose. Spike analyses of offline-collected mouse tumor microdialysate and monitoring of the basal concentrations of mouse tumor extracellular H2O2, lactate, and glucose, as well as those after imparting metabolic disturbance through intra-tumor administration of a glucose solution through a prior-implanted cannula, were conducted to demonstrate the system's applicability. Our results evidently indicate that hyphenation of an MD sampling device with an optimized sequential enzymatic derivatization scheme and a fluorescence spectrometer can be used successfully for multi-analyte monitoring of tumor extracellular metabolites in living animals. Copyright © 2016 Elsevier B.V. All rights reserved.
Group-sequential analysis may allow for early trial termination
DEFF Research Database (Denmark)
Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich
2017-01-01
BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic, but rarely in diagnostic, research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT measurements. … The differences were assumed to be normally distributed, and sequential one-sided hypothesis tests on the population standard deviation of the differences against a hypothesised value of 1.5 were performed, employing an alpha spending function. The fixed-sample analysis (N = 45) was compared with group-sequential analysis strategies comprising one (at N = 23), two (at N = 15, 30), or three interim analyses (at N = 11, 23, 34), respectively, which were defined post hoc. RESULTS: When performing interim analyses with one third and two thirds of patients, sufficient agreement could be concluded after the first interim analysis …
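The early-termination mechanism can be illustrated with a small simulation: a z-test at each interim look against a constant Pocock-style boundary. The boundary value, effect size, and stage size below are illustrative assumptions, not the one-sided alpha-spending design on standard deviations used in the paper.

```python
import numpy as np

def group_sequential_trial(rng, mu, n_per_stage=30, k_looks=3, c_pocock=2.289):
    """One simulated one-sample trial with interim looks: a z-statistic on
    the accumulating mean is compared at each look against a constant
    Pocock-style two-sided boundary (2.289 is the nominal K = 3 value for
    overall alpha = 0.05). Returns the look at which the trial stopped,
    or None if the boundary was never crossed."""
    data = np.empty(0)
    for look in range(1, k_looks + 1):
        data = np.append(data, rng.normal(mu, 1.0, n_per_stage))
        z = data.mean() * np.sqrt(len(data))
        if abs(z) > c_pocock:
            return look
    return None

rng = np.random.default_rng(0)
# With a large true effect (1.5 standard deviations), essentially every
# simulated trial crosses the boundary at the very first interim look,
# saving two thirds of the planned sample size.
looks = [group_sequential_trial(rng, mu=1.5) for _ in range(50)]
```

The price of repeated looks is the inflated per-look critical value (2.289 instead of 1.96), which is exactly the trade-off alpha-spending functions formalize.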
Robust sequential learning of feedforward neural networks in the presence of heavy-tailed noise.
Vuković, Najdan; Miljković, Zoran
2015-03-01
Feedforward neural networks (FFNN) are among the most widely used neural networks for modeling various nonlinear problems in engineering. In sequential and especially real-time processing, all neural network models fail when faced with outliers, which are found across a wide range of engineering problems. Recent research in the field has shown that, to avoid overfitting or divergence of the model, a new approach is needed, especially if the FFNN is to run sequentially or in real time. To accommodate the limitations of FFNNs when the training data contain a certain number of outliers, this paper presents a new learning algorithm based on an improvement of the conventional extended Kalman filter (EKF). The extended Kalman filter robust to outliers (EKF-OR) is a probabilistic generative model in which the measurement noise covariance is not constant; the sequence of noise covariances is modeled as a stochastic process over the set of symmetric positive-definite matrices, with an inverse Wishart prior. In each iteration, EKF-OR simultaneously estimates the noise covariance and the current best estimate of the FFNN parameters. The Bayesian framework enables the expressions to be derived mathematically, while the analytical intractability of the Bayes update step is resolved by using a structured variational approximation. All mathematical expressions in the paper are derived from first principles. An extensive experimental study shows that an FFNN trained with the developed learning algorithm achieves low prediction error and good generalization quality regardless of the presence of outliers in the training data. Copyright © 2014 Elsevier Ltd. All rights reserved.
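The core idea, down-weighting measurements whose residuals are implausibly large, can be illustrated for a scalar state. This is a simple residual-gated reweighting sketch, not the paper's variational inverse-Wishart scheme:

```python
def robust_kalman_update(x, P, z, H=1.0, R=1.0, k=3.0):
    """One scalar Kalman measurement update; inflate R when the residual is an outlier.
    (A simple re-weighting stand-in for EKF-OR's variational noise estimation.)"""
    y = z - H * x                       # innovation
    S = H * P * H + R                   # innovation variance
    if y * y > k * k * S:               # residual far outside the k-sigma band: outlier
        R = R * (y * y) / (k * k * S)   # inflate the measurement noise accordingly
        S = H * P * H + R
    K = P * H / S                       # Kalman gain
    return x + K * y, (1 - K * H) * P
```

An inlier measurement is absorbed with the usual gain, while a gross outlier barely moves the state, which is exactly the failure mode that breaks a conventional EKF.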
Comparison of ablation centration after bilateral sequential versus simultaneous LASIK.
Lin, Jane-Ming; Tsai, Yi-Yu
2005-01-01
To compare ablation centration after bilateral sequential and simultaneous myopic LASIK. A retrospective randomized case series was performed of 670 eyes of 335 consecutive patients who had undergone either bilateral sequential (group 1) or simultaneous (group 2) myopic LASIK between July 2000 and July 2001 at the China Medical University Hospital, Taichung, Taiwan. The ablation centrations of the first and second eyes in the two groups were compared 3 months postoperatively. Of 670 eyes, 274 eyes (137 patients) comprised the sequential group and 396 eyes (198 patients) comprised the simultaneous group. Three months post-operatively, 220 eyes of 110 patients (80%) in the sequential group and 236 eyes of 118 patients (60%) in the simultaneous group provided topographic data for centration analysis. For the first eyes, mean decentration was 0.39 +/- 0.26 mm in the sequential group and 0.41 +/- 0.19 mm in the simultaneous group (P = .30). For the second eyes, mean decentration was 0.28 +/- 0.23 mm in the sequential group and 0.30 +/- 0.21 mm in the simultaneous group (P = .36). Decentration in the second eyes significantly improved in both groups (group 1, P = .02; group 2, P sequential group and 0.32 +/- 0.18 mm in the simultaneous group (P = .33). The difference of ablation center angles between the first and second eyes was 43.2 sequential group and 45.1 +/- 50.8 degrees in the simultaneous group (P = .42). Simultaneous bilateral LASIK is comparable to sequential surgery in ablation centration.
A Survey of Multi-Objective Sequential Decision-Making
Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.
2013-01-01
Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential
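In the multi-objective setting there is generally no single optimal policy, but a set of policies whose value vectors are Pareto-optimal. A minimal sketch of computing that set (the value vectors are toy numbers, assuming both objectives are maximised):

```python
def pareto_front(values):
    """Return the Pareto-optimal subset of value vectors (maximisation in every objective)."""
    def dominates(a, b):
        # a dominates b: at least as good everywhere, strictly better somewhere
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return [v for v in values if not any(dominates(w, v) for w in values)]

# Toy value vectors (objective 1, objective 2) for four candidate policies
policies = [(1.0, 0.2), (0.8, 0.9), (0.5, 0.5), (0.9, 0.1)]
front = pareto_front(policies)
```

Here (0.5, 0.5) and (0.9, 0.1) are dominated, so a multi-objective planner would only need to retain the two remaining policies.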
Sequential lineups: shift in criterion or decision strategy?
Gronlund, Scott D
2004-04-01
R. C. L. Lindsay and G. L. Wells (1985) argued that a sequential lineup enhanced discriminability because it elicited use of an absolute decision strategy. E. B. Ebbesen and H. D. Flowe (2002) argued that a sequential lineup led witnesses to adopt a more conservative response criterion, thereby affecting bias, not discriminability. Height was encoded as absolute (e.g., 6 ft [1.83 m] tall) or relative (e.g., taller than). If a sequential lineup elicited an absolute decision strategy, the principle of transfer-appropriate processing predicted that performance should be best when height was encoded absolutely. Conversely, if a simultaneous lineup elicited a relative decision strategy, performance should be best when height was encoded relatively. The predicted interaction was observed, providing direct evidence for the decision strategies explanation of what happens when witnesses view a sequential lineup.
Predicting FLDs Using a Multiscale Modeling Scheme
Wu, Z.; Loy, C.; Wang, E.; Hegadekatte, V.
2017-09-01
The measurement of a single forming limit diagram (FLD) requires significant resources and is time consuming. We have developed a multiscale modeling scheme to predict FLDs using a combination of limited laboratory testing, crystal plasticity (VPSC) modeling, and dual sequential-stage finite element (ABAQUS/Explicit) modeling with the Marciniak-Kuczynski (M-K) criterion to determine the limit strain. We have established a means to work around existing limitations in ABAQUS/Explicit by using an anisotropic yield locus (e.g., BBC2008) in combination with the M-K criterion. We further apply a VPSC model to reduce the number of laboratory tests required to characterize the anisotropic yield locus. In the present work, we show that the predicted FLD is in excellent agreement with the measured FLD for AA5182 in the O temper. Instead of 13 different tests as for a traditional FLD determination within Novelis, our technique uses just four measurements: tensile properties in three orientations; plane strain tension; biaxial bulge; and the sheet crystallographic texture. The turnaround time is consequently far less than for the traditional laboratory measurement of the FLD.
How to Read the Tractatus Sequentially
Directory of Open Access Journals (Sweden)
Tim Kraft
2016-11-01
One of the unconventional features of Wittgenstein's Tractatus Logico-Philosophicus is its use of an elaborate and detailed numbering system. Recently, Bazzocchi, Hacker and Kuusela have argued that the numbering system means that the Tractatus must be read and interpreted not as a sequentially ordered book, but as a text with a two-dimensional, tree-like structure. Apart from being able to explain how the Tractatus was composed, the tree reading allegedly solves exegetical issues both on the local level (e.g. how 4.02 fits into the series of remarks surrounding it) and on the global level (e.g. the relation between ontology and picture theory, solipsism and the eye analogy, resolute and irresolute readings). This paper defends the sequential reading against the tree reading. After presenting the challenges generated by the numbering system and the two accounts as attempts to solve them, it is argued that Wittgenstein's own explanation of the numbering system, anaphoric references within the Tractatus and the exegetical issues mentioned above do not favour the tree reading, but a version of the sequential reading. This reading maintains that the remarks of the Tractatus form a sequential chain: the role of the numbers is to indicate how remarks on different levels are interconnected to form a concise, surveyable and unified whole.
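The tree reading can be stated precisely: each remark's parent is obtained by dropping the final digit of its number, whereas the sequential reading simply follows document order. A small sketch of this parent relation (our illustration of the numbering convention, not code from the paper):

```python
def tractatus_parent(number):
    """Parent of a remark under the tree reading: drop the last digit of the
    numbering, so '4.02' -> '4.0' and '4.0' -> '4'. Whole-number remarks
    ('1' through '7') are roots and have no parent."""
    digits = number.replace('.', '')
    if len(digits) == 1:
        return None
    digits = digits[:-1]
    return digits[0] + ('.' + digits[1:] if len(digits) > 1 else '')

def tree_depth(number):
    """Depth of a remark in the tree: the number of digits in its numbering."""
    return len(number.replace('.', ''))
```

Under the sequential reading the same numbers merely cross-reference levels within one linear chain, so no parent function is needed to read the book in order.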
A minimax procedure in the context of sequential mastery testing
Vos, Hendrik J.
1999-01-01
The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master or a nonmaster, or to continue sampling and administering another random test item. The framework of minimax sequential decision theory
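The paper derives minimax-optimal rules; as a simpler classical reference point, a sequential mastery decision of the same shape (classify as master, nonmaster, or continue testing) can be sketched with Wald's SPRT. The success probabilities and error rates below are illustrative assumptions, not values from the paper:

```python
from math import log

def sprt_mastery(responses, p0=0.5, p1=0.8, alpha=0.05, beta=0.05):
    """Wald's SPRT as a sequential mastery rule: p1 is a master's success rate,
    p0 a nonmaster's. Returns 'master', 'nonmaster', or 'continue' (administer
    another item)."""
    upper = log((1 - beta) / alpha)     # cross above: classify as master
    lower = log(beta / (1 - alpha))     # cross below: classify as nonmaster
    llr = 0.0                           # running log-likelihood ratio
    for correct in responses:
        llr += log(p1 / p0) if correct else log((1 - p1) / (1 - p0))
        if llr >= upper:
            return 'master'
        if llr <= lower:
            return 'nonmaster'
    return 'continue'
```

A run of clearly correct (or clearly incorrect) answers terminates early, which is the practical appeal of sequential over fixed-length mastery tests.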
Improving precipitation simulation from updated surface characteristics in South America
Pereira, Gabriel; Silva, Maria Elisa Siqueira; Moraes, Elisabete Caria; Chiquetto, Júlio Barboza; da Silva Cardozo, Francielle
2017-07-01
Land use and land cover maps and their physical-chemical and biological properties are important variables in the numerical modeling of Earth systems. In this context, the main objective of this study is to analyze the improvements resulting from the land use and land cover map update in numerical simulations performed using the Regional Climate Model system version 4 (RegCM4), as well as the seasonal variations of physical parameters used by the Biosphere Atmosphere Transfer Scheme (BATS). In general, the update of the South America 2007 land use and land cover map used by BATS improved the simulation of precipitation by 10 %, increasing the mean temporal correlation coefficient with observed data from 0.84 to 0.92 (statistically significant), and altered the simulated positioning of the South Atlantic convergence zone (SACZ), presenting a spatial pattern of alternating areas with higher and lower precipitation rates. These important differences occur due to the replacement of tropical rainforest by pasture and agriculture and the replacement of agricultural areas by pasture, scrubland, and deciduous forest.
Directory of Open Access Journals (Sweden)
Bhawna Mallick
2013-04-01
Sequential pattern mining is a vital data mining task for discovering the frequently occurring patterns in sequence databases. As databases grow, the problem of maintaining sequential patterns over an extensively long period becomes essential, since a large number of new records may be added to a database. To reflect the current state of the database, where previous sequential patterns may become irrelevant and new sequential patterns may appear, there is a need for efficient algorithms to update, maintain and manage the information discovered. Several efficient algorithms for maintaining sequential patterns have been developed. Here, we present an efficient algorithm to handle the maintenance problem of CFM-sequential patterns (Compact, Frequent, Monetary-constraint based sequential patterns). In order to efficiently capture the dynamic nature of data addition and deletion in the mining problem, we initially construct the updated CFM-tree using the CFM patterns obtained from the static database. Then, the database is updated from the distributed sources, whose data may be static, inserted, or deleted. Whenever the database is updated from the multiple sources, the CFM-tree is also updated by including the updated sequences. The updated CFM-tree is then used to mine the progressive CFM-patterns using the proposed tree pattern mining algorithm. Finally, experimentation is carried out using the synthetic and real-life distributed databases that are given to the progressive CFM-miner. The experimental results and analysis show better results in terms of the number of generated sequential patterns, execution time and memory usage over the existing IncSpan algorithm.
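The maintenance idea, updating support counts as new sequences arrive instead of re-mining the whole database, can be sketched in miniature. This toy miner tracks only length-2 patterns and ignores the compactness and monetary constraints of the actual CFM algorithm:

```python
from itertools import combinations

class IncrementalSequenceMiner:
    """Toy incremental miner: maintains support counts of length-2 sequential
    patterns and updates them as new sequences are added, without rescanning
    previously processed data."""
    def __init__(self):
        self.support = {}
        self.n_sequences = 0

    def add(self, sequence):
        """Incorporate one new sequence into the support counts."""
        self.n_sequences += 1
        seen = set()
        # combinations() yields ordered pairs, i.e. length-2 subsequences
        for pat in combinations(sequence, 2):
            if pat not in seen:            # count each pattern once per sequence
                seen.add(pat)
                self.support[pat] = self.support.get(pat, 0) + 1

    def frequent(self, min_support):
        return {p for p, c in self.support.items() if c >= min_support}
```

Each arriving sequence costs work proportional to its own pattern count, which is the point of maintenance algorithms: the old database never has to be rescanned.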
Seluge++: a secure over-the-air programming scheme in wireless sensor networks.
Doroodgar, Farzan; Abdur Razzaque, Mohammad; Isnin, Ismail Fauzi
2014-03-11
Over-the-air dissemination of code updates in wireless sensor networks has been a point of interest for researchers in the last few years and, more importantly, the security challenges of remote code propagation have occupied the majority of efforts in this context. Many security models have been proposed to establish a balance between energy consumption and security strength, concentrating on the constrained nature of wireless sensor network (WSN) nodes. For authentication purposes, most of them have used a Merkle hash tree to avoid multiple public-key cryptography operations. These models have mostly assumed an environment in which security has to be at a standard level, and therefore have not investigated the tree structure for mission-critical situations in which security has to be at the maximum possible level (e.g., military applications, healthcare). Considering this, we investigate existing security models used in over-the-air dissemination of code updates for possible vulnerabilities, and then provide a set of countermeasures, correspondingly named Security Model Requirements. Based on the investigation, we concentrate on Seluge, one of the existing over-the-air programming schemes, and propose an improved version of it, named Seluge++, which complies with the Security Model Requirements and replaces the use of the inefficient Merkle tree with a novel method. Analytical and simulation results show the improvements in Seluge++ compared to Seluge.
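The Merkle hash tree that most of these schemes use for packet authentication is straightforward to sketch: leaves are hashes of code-image pages, and a receiver authenticates any page against the signed root using a logarithmic-size proof. A minimal sketch (SHA-256 and the duplicate-last-node convention for odd levels are our assumptions, not Seluge's exact construction):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root hash of a Merkle tree over pages (last node duplicated on odd levels)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes needed to authenticate leaves[index] against the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))  # (sibling hash, am-I-right-child)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the path from a leaf to the root and compare."""
    node = h(leaf)
    for sibling, is_right in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root
```

A sensor node only needs the authenticated root and one such proof per packet, which is why the tree is attractive on constrained hardware despite the inefficiencies Seluge++ targets.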
Multichannel, sequential or combined X-ray spectrometry
International Nuclear Information System (INIS)
Florestan, J.
1979-01-01
X-ray spectrometer qualities and defects are evaluated for the sequential and multichannel categories. The multichannel X-ray spectrometer has the advantage of time coherency and its results can be more reproducible; on the other hand, some spatial incoherency limits low-percentage and trace applications, especially when backgrounds are very variable. In this last case, the sequential X-ray spectrometer would again prove very useful [fr]
Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah
2018-04-01
This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation using the Dykstra-Parson coefficient (V DP) and autocorrelation lengths to generate 2D stochastic permeability values, which were also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). On the other hand, many studies have reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory scheme (WENO), and the monotone upstream-centered schemes for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-order scheme results match well with the Buckley-Leverett (BL) analytical solution without any oscillatory solutions. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique and the iterative implicit pressure and explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative numerical study of flow transport through the proposed method, TBM and USRM permeability fields revealed detailed subsurface instabilities with their corresponding ultimate recovery factors. Also, the impact of autocorrelation lengths on immiscible fluid flow transport was analyzed and quantified. A finite number of lines used in the TBM resulted in visual
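Of the high-resolution schemes mentioned, the SUPERBEE flux limiter is the simplest to state: it scales the anti-diffusive flux by a function of r, the ratio of consecutive solution gradients, staying within the second-order TVD region:

```python
def superbee(r):
    """SUPERBEE flux limiter: phi(r) = max(0, min(2r, 1), min(r, 2)).
    phi = 0 switches off the anti-diffusive flux at extrema (r <= 0),
    while values up to 2 sharpen fronts such as the Buckley-Leverett shock."""
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))
```

In a TVD update the limited flux is the low-order (upwind) flux plus phi(r) times the anti-diffusive correction, which is how the scheme suppresses the smearing of first-order upwinding without introducing oscillations.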
Das, Ashok Kumar; Odelu, Vanga; Goswami, Adrijit
2015-09-01
The telecare medicine information system (TMIS) helps patients gain health monitoring facilities at home and access medical services over the Internet or mobile networks. Recently, Amin and Biswas presented a smart-card-based user authentication and key agreement security protocol for TMIS using a cryptographic one-way hash function and a biohashing function, and claimed that their scheme is secure against all possible attacks. Though their scheme is efficient due to its use of a one-way hash function, we show that it has several security pitfalls and design flaws: (1) it fails to protect against privileged-insider attack, (2) it fails to protect against strong replay attack, (3) it fails to protect against strong man-in-the-middle attack, (4) it has a design flaw in the user registration phase, (5) it has a design flaw in the login phase, (6) it has a design flaw in the password change phase, (7) it lacks support for a biometric update phase, and (8) it has flaws in its formal security analysis. In order to withstand these security pitfalls and design flaws, we propose a secure and robust user-authenticated key agreement scheme for the hierarchical multi-server environment suitable for TMIS using a cryptographic one-way hash function and a fuzzy extractor. Through rigorous security analysis, including formal security analysis using the widely accepted Burrows-Abadi-Needham (BAN) logic, formal security analysis under the random oracle model and informal security analysis, we show that our scheme is secure against possible known attacks. Furthermore, we simulate our scheme using the most widely accepted and used Automated Validation of Internet Security Protocols and Applications (AVISPA) tool. The simulation results show that our scheme is also secure. Our scheme is more efficient in computation and communication as compared to Amin-Biswas's scheme and other related schemes. In addition, our scheme supports extra functionality features as compared to
Induction of simultaneous and sequential malolactic fermentation in durian wine.
Taniasuri, Fransisca; Lee, Pin-Rou; Liu, Shao-Quan
2016-08-02
This study represented for the first time the impact of malolactic fermentation (MLF) induced by Oenococcus oeni and its inoculation strategies (simultaneous vs. sequential) on the fermentation performance as well as aroma compound profile of durian wine. There was no negative impact of simultaneous inoculation of O. oeni and Saccharomyces cerevisiae on the growth and fermentation kinetics of S. cerevisiae as compared to sequential fermentation. Simultaneous MLF did not lead to an excessive increase in volatile acidity as compared to sequential MLF. The kinetic changes of organic acids (i.e. malic, lactic, succinic, acetic and α-ketoglutaric acids) varied with simultaneous and sequential MLF relative to yeast alone. MLF, regardless of inoculation mode, resulted in higher production of fermentation-derived volatiles as compared to control (alcoholic fermentation only), including esters, volatile fatty acids, and terpenes, except for higher alcohols. Most indigenous volatile sulphur compounds in durian were decreased to trace levels with little differences among the control, simultaneous and sequential MLF. Among the different wines, the wine with simultaneous MLF had higher concentrations of terpenes and acetate esters while sequential MLF had increased concentrations of medium- and long-chain ethyl esters. Relative to alcoholic fermentation only, both simultaneous and sequential MLF reduced acetaldehyde substantially with sequential MLF being more effective. These findings illustrate that MLF is an effective and novel way of modulating the volatile and aroma compound profile of durian wine. Copyright © 2016 Elsevier B.V. All rights reserved.
Bizer, David S; DeMarzo, Peter M
1992-01-01
The authors study environments in which agents may borrow sequentially from more than one lender. Although debt is prioritized, additional lending imposes an externality on prior debt because, with moral hazard, the probability of repayment of prior loans decreases. Equilibrium interest rates are higher than they would be if borrowers could commit to borrow from at most one bank. Even though the loan terms are less favorable than they would be under commitment, the indebtedness of borrowers i...
Directory of Open Access Journals (Sweden)
A. G. Xia
2011-07-01
A new method is proposed to simplify complex atmospheric chemistry reaction schemes, while preserving SOA formation properties, using genetic algorithms. The method is first applied in this study to the gas-phase α-pinene oxidation scheme. The simple unified volatility-based scheme (SUVS) reflects the multi-generation evolution of chemical species from a near-explicit master chemical mechanism (MCM) and, at the same time, uses the volatility-basis-set speciation for condensable products. The SUVS also unifies reactions between SOA precursors with different oxidants under different atmospheric conditions. A total of 412 unknown parameters (product yields of parameterized products, reaction rates, etc.) of the SUVS are estimated by using genetic algorithms operating on the detailed mechanism. The number of organic species was reduced from 310 in the detailed mechanism to 31 in the SUVS. Output species profiles, obtained from the original subset of the MCM reaction scheme for α-pinene oxidation, are reproduced with a maximum fractional error of 0.10 for scenarios under a wide range of ambient HC/NOx conditions. Ultimately, the same SUVS with updated parameters could be used to describe SOA formation from different precursors.
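The parameter-estimation step can be sketched with a minimal real-coded genetic algorithm fitting a toy two-parameter "mechanism" to target outputs by least squares. The population size, operators and the toy model are all illustrative assumptions, not the paper's 412-parameter setup:

```python
import random

def fit_ga(target, model, n_params, pop=40, gens=60, seed=1):
    """Minimal real-coded GA: tournament selection, midpoint crossover,
    Gaussian mutation, elitist survivor selection. Minimizes squared error
    between model(params) and target."""
    rng = random.Random(seed)
    def err(p):
        return sum((m - t) ** 2 for m, t in zip(model(p), target))
    popn = [[rng.uniform(0, 2) for _ in range(n_params)] for _ in range(pop)]
    for _ in range(gens):
        children = []
        for _ in range(pop):
            a = min(rng.sample(popn, 3), key=err)   # tournament parent 1
            b = min(rng.sample(popn, 3), key=err)   # tournament parent 2
            children.append([(x + y) / 2 + rng.gauss(0, 0.05) for x, y in zip(a, b)])
        popn = sorted(popn + children, key=err)[:pop]  # elitist survivor selection
    return popn[0]

# Toy "mechanism": two rate-like parameters producing outputs at three conditions
true_params = [0.7, 1.3]
def model(p):
    return [p[0] + p[1], p[0] * p[1], 2 * p[0] - p[1]]
target = model(true_params)
best = fit_ga(target, model, 2)
```

The fitness is evaluated against outputs of the reference model only, which mirrors the paper's setup of tuning the reduced scheme's parameters against the detailed mechanism's species profiles.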
Directory of Open Access Journals (Sweden)
Ying-Qun Zhou
2012-01-01
Objective. Antimicrobial resistance has decreased eradication rates for Helicobacter pylori infection worldwide. To observe the eradication of Helicobacter pylori (H. pylori) and the treatment of duodenal ulcer by two kinds of modified sequential therapy in comparison with 10-day standard triple therapy. Methods. A total of 210 patients confirmed by gastroscopy to have duodenal ulcer in the active or healing stage, and confirmed H. pylori positive by rapid urease test, serum anti-H. pylori antibody (ELISA), or histological examination, were enrolled in the study. All the patients were randomly divided into three groups: group A (70 cases) and group B (70 cases) were provided 10-day modified sequential therapy; group C (70 cases) was provided 10-day standard triple therapy. Patients of group A received 20 mg of Esomeprazole and 500 mg of Clarithromycin for the first 5 days, followed by 20 mg of Esomeprazole, 500 mg of Clarithromycin, and 1000 mg of Amoxicillin for the remaining 5 days. Group B received 20 mg of Esomeprazole and 1000 mg of Amoxicillin for the first 5 days, followed by 20 mg of Esomeprazole, 500 mg of Clarithromycin, and 1000 mg of Amoxicillin for the remaining 5 days. Group C received 20 mg of Esomeprazole, 500 mg of Clarithromycin, and 1000 mg of Amoxicillin for the standard 10-day therapy. All drugs were given twice daily. The H. pylori eradication rate was checked four to eight weeks after taking the medicine by using a 13C urea breath test. On the first, second, third, seventh, twenty-first, and thirty-fifth days, symptoms such as epigastric gnawing, burning pain, and acidity were evaluated simultaneously. Results. Overall, 210 patients accomplished all therapy schemes; 9 patients were excluded. The examination results indicated that the H. pylori eradication rate of each group was as follows: group A 92.5% (62/67), group B 86.8% (59/68), and group C 78.8% (52/66). The H. pylori
Directory of Open Access Journals (Sweden)
Yanjiao Li
2017-08-01
Gas utilization ratio (GUR) is an important indicator used to measure the operating status and energy consumption of blast furnaces (BFs). In this paper, we present a soft-sensor approach, i.e., a novel online sequential extreme learning machine (OS-ELM) named DU-OS-ELM, to establish a data-driven model for GUR prediction. In DU-OS-ELM, the old collected data are discarded gradually and the newly acquired data are given more attention through a novel dynamic forgetting factor (DFF) that depends on the estimation errors, to enhance the dynamic tracking ability. Furthermore, we develop an update selection strategy (USS) to judge whether the model needs to be updated with the newly arriving data, so that the proposed approach is more in line with the actual production situation. A convergence analysis of the proposed DU-OS-ELM is presented to ensure that the estimate of the output weights converges to the true value as new data arrive. The proposed DU-OS-ELM is then applied to build a soft-sensor model to predict GUR. Experimental results on real production data from a BF demonstrate that the proposed DU-OS-ELM obtains better generalization performance and higher prediction accuracy compared with a number of existing related approaches, and the created GUR prediction model can provide effective guidance for further optimization of operation.
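The role of a forgetting factor in a sequential least-squares update can be sketched with classical recursive least squares. Here the factor is fixed, whereas DU-OS-ELM adapts it from the estimation error, and OS-ELM applies this update to hidden-layer output weights rather than raw features:

```python
def rls_update(w, P, x, y, lam=0.98):
    """One recursive-least-squares step with forgetting factor lam < 1, which
    discounts old samples (DU-OS-ELM adapts this factor dynamically).
    w: weights, P: inverse correlation matrix, x: feature vector, y: target."""
    n = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [v / denom for v in Px]                      # gain vector
    e = y - sum(w[i] * x[i] for i in range(n))       # a-priori prediction error
    w = [w[i] + k[i] * e for i in range(n)]
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    return w, P

# Track y = 2*x + 1 online with features [x, 1]
w, P = [0.0, 0.0], [[100.0, 0.0], [0.0, 100.0]]
for x0 in [0.0, 1.0, 2.0, 3.0, 4.0, 0.5, 1.5, 2.5, 3.5, 4.5]:
    w, P = rls_update(w, P, [x0, 1.0], 2 * x0 + 1)
```

Because each sample's influence decays by lam per step, the estimator keeps tracking a drifting process, which is the behaviour the dynamic forgetting factor is designed to control.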
Equivalence between quantum simultaneous games and quantum sequential games
Kobayashi, Naoki
2007-01-01
A framework for discussing relationships between different types of games is proposed. Within the framework, quantum simultaneous games, finite quantum simultaneous games, quantum sequential games, and finite quantum sequential games are defined. In addition, a notion of equivalence between two games is defined. Finally, the following three theorems are shown: (1) For any quantum simultaneous game G, there exists a quantum sequential game equivalent to G. (2) For any finite quantum simultaneo...
Accounting for Heterogeneous Returns in Sequential Schooling Decisions
Zamarro, G.
2006-01-01
This paper presents a method for estimating returns to schooling that takes into account that returns may be heterogeneous among agents and that educational decisions are made sequentially. A sequential decision model is interesting because it explicitly considers that the level of education of each
Simultaneous Versus Sequential Ptosis and Strabismus Surgery in Children.
Revere, Karen E; Binenbaum, Gil; Li, Jonathan; Mills, Monte D; Katowitz, William R; Katowitz, James A
The authors sought to compare the clinical outcomes of simultaneous versus sequential ptosis and strabismus surgery in children. Retrospective, single-center cohort study of children requiring both ptosis and strabismus surgery on the same eye. Simultaneous surgeries were performed during a single anesthetic event; sequential surgeries were performed at least 7 weeks apart. Outcomes were ptosis surgery success (margin reflex distance 1 ≥ 2 mm, good eyelid contour, and good eyelid crease); strabismus surgery success (ocular alignment within 10 prism diopters of orthophoria and/or improved head position); surgical complications; and reoperations. Fifty-six children were studied, 38 had simultaneous surgery and 18 sequential. Strabismus surgery was performed first in 38/38 simultaneous and 6/18 sequential cases. Mean age at first surgery was 64 months, with mean follow up 27 months. A total of 75% of children had congenital ptosis; 64% had comitant strabismus. A majority of ptosis surgeries were frontalis sling (59%) or Fasanella-Servat (30%) procedures. There were no significant differences between simultaneous and sequential groups with regards to surgical success rates, complications, or reoperations (all p > 0.28). In the first comparative study of simultaneous versus sequential ptosis and strabismus surgery, no advantage for sequential surgery was seen. Despite a theoretical risk of postoperative eyelid malposition or complications when surgeries were performed in a combined manner, the rate of such outcomes was not increased with simultaneous surgeries. Performing ptosis and strabismus surgery together appears to be clinically effective and safe, and reduces anesthesia exposure during childhood.
A hierarchical updating method for finite element model of airbag buffer system under landing impact
Directory of Open Access Journals (Sweden)
He Huan
2015-12-01
In this paper, we propose an impact finite element (FE) model for an airbag landing buffer system. First, an impact FE model is formulated for a typical airbag landing buffer system. We use the independence of the structure FE model from the full impact FE model to develop a hierarchical updating scheme for the recovery module FE model and the airbag system FE model. Second, we define impact responses at key points to compare the computational and experimental results, resolving the inconsistency between the experimental data sampling frequency and experimental triggering. To determine the typical characteristics of the impact dynamics response of the airbag landing buffer system, we present impact response confidence factors (IRCFs) to evaluate how consistent the computational and experimental results are. An error function is defined between the experimental and computational results at key points of the impact response (KPIR) to serve as a modified objective function. A radial basis function (RBF) is introduced to construct updating variables for a surrogate model of the objective function, thereby converting the FE model updating problem into a solvable optimization problem. Finally, the developed method is validated using an experimental and computational study on the impact dynamics of a classic airbag landing buffer system.
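The RBF surrogate step can be sketched for a one-dimensional updating variable: fit a Gaussian RBF interpolant through sampled objective values, then optimize over the cheap interpolant instead of the expensive FE model. The Gaussian kernel, its shape parameter and the sample data are illustrative assumptions:

```python
import math

def rbf_surrogate(centers, values, eps=1.0):
    """Fit a Gaussian RBF interpolant through (centers, values) by solving the
    small dense system with naive Gaussian elimination; returns a callable."""
    n = len(centers)
    phi = lambda r: math.exp(-(eps * r) ** 2)
    A = [[phi(abs(centers[i] - centers[j])) for j in range(n)] for i in range(n)]
    # Solve A w = values with partial pivoting (augmented matrix M)
    M = [row[:] + [values[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][k] * w[k] for k in range(r + 1, n))) / M[r][r]
    return lambda x: sum(w[j] * phi(abs(x - centers[j])) for j in range(n))
```

By construction the surrogate reproduces the sampled objective values exactly, so the optimizer can iterate on it and only occasionally fall back to the full FE model to refresh the samples.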
Reading Remediation Based on Sequential and Simultaneous Processing.
Gunnison, Judy; And Others
1982-01-01
The theory postulating a dichotomy between sequential and simultaneous processing is reviewed, along with its implications for remediating reading problems. Research is cited on sequential-simultaneous processing for early and advanced reading. A list of remedial strategies based on the processing dichotomy addresses decoding and lexical…
Mauz, Elvira; von der Lippe, Elena; Allen, Jennifer; Schilling, Ralph; Müters, Stephan; Hoebel, Jens; Schmich, Patrick; Wetzstein, Matthias; Kamtsiuris, Panagiotis; Lange, Cornelia
2018-01-01
Population-based surveys currently face the problem of decreasing response rates. Mixed-mode designs are now being implemented more often to account for this, to improve sample composition and to reduce overall costs. This study examines whether a concurrent or sequential mixed-mode design achieves better results on a number of indicators of survey quality. Data were obtained from a population-based health interview survey of adults in Germany that was conducted as a methodological pilot study as part of the German Health Update (GEDA). Participants were randomly allocated to one of two surveys; each of the surveys had a different design. In the concurrent mixed-mode design (n = 617), two types of self-administered questionnaires (SAQ-Web and SAQ-Paper) and computer-assisted telephone interviewing were offered simultaneously to the respondents along with the invitation to participate. In the sequential mixed-mode design (n = 561), SAQ-Web was initially provided, followed by SAQ-Paper, with an option for a telephone interview being sent out together with the reminders at a later date. Finally, this study compared the response rates, sample composition, health indicators, item non-response, the scope of fieldwork and the costs of both designs. No systematic differences were identified between the two mixed-mode designs in terms of response rates, the socio-demographic characteristics of the achieved samples, or the prevalence rates of the health indicators under study. The sequential design gained a higher rate of online respondents. Very few telephone interviews were conducted for either design. With regard to data quality, the sequential design (which had more online respondents) showed less item non-response. There were minor differences between the designs in terms of their costs. Postage and printing costs were lower in the concurrent design, but labour costs were lower in the sequential design. No differences in health indicators were found between
International Nuclear Information System (INIS)
Wu, Xiangjun; Fu, Zhengye; Kurths, Jürgen
2015-01-01
In this paper, a new five-dimensional hyperchaotic system is proposed based on the Lü hyperchaotic system. Some of its basic dynamical properties, such as equilibria, Lyapunov exponents, bifurcations and various attractors are investigated. Furthermore, a new secure communication scheme based on generalized function projective synchronization (GFPS) of this hyperchaotic system with an uncertain parameter is presented. The communication scheme is composed of the modulation, the chaotic transmitter, the chaotic receiver and the demodulation. The modulation mechanism is to modulate the message signal into the system parameter. The chaotic signals are then sent to the receiver via a public channel. At the receiver end, by designing the controllers and the parameter update rule, GFPS between the transmitter and receiver systems is achieved and the unknown parameter is estimated simultaneously. The message signal can finally be recovered using the identified parameter and the corresponding demodulation method. There is no limitation on the message size. Numerical simulations are performed to show the validity and feasibility of the presented secure communication scheme. (paper)
Directory of Open Access Journals (Sweden)
Xiaoqiang Di
Both symmetric and asymmetric color image encryption have advantages and disadvantages. In order to combine their advantages and to overcome their disadvantages, chaos synchronization is used to avoid key transmission in the proposed semi-symmetric image encryption scheme. Our scheme is a hybrid chaotic encryption algorithm consisting of a scrambling stage and a diffusion stage. The control law and the update rule of function projective synchronization between the 3-cell quantum cellular neural network (QCNN) response system and the 6th-order cellular neural network (CNN) drive system are formulated. Since function projective synchronization is used to synchronize the response and drive systems, Alice and Bob obtain the key independently from the two chaotic systems, avoiding key transmission over extra security links and thus preventing key leakage during transmission. Both numerical simulations and security analyses, such as information entropy analysis and differential attack analysis, are conducted to verify the feasibility, security, and efficiency of the proposed scheme.
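As a toy illustration of the scramble-then-diffuse structure (not the QCNN/CNN scheme itself), the sketch below draws a keystream from a plain logistic map; the seed `x0` and parameter `r` are arbitrary placeholders standing in for the synchronized chaotic state that Alice and Bob would share.

```python
def logistic_stream(x0, r, n):
    """Generate n chaotic values from the logistic map x <- r*x*(1-x)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def encrypt(pixels, x0=0.3571, r=3.99):
    n = len(pixels)
    chaos = logistic_stream(x0, r, 2 * n)
    # Scrambling stage: permute pixel positions by ranking chaotic values.
    perm = sorted(range(n), key=lambda i: chaos[i])
    scrambled = [pixels[p] for p in perm]
    # Diffusion stage: XOR each pixel with a keystream byte and the
    # previous ciphertext byte, so one plaintext change propagates forward.
    key = [int(c * 256) % 256 for c in chaos[n:]]
    out, prev = [], 0
    for s, k in zip(scrambled, key):
        c = s ^ k ^ prev
        out.append(c)
        prev = c
    return out, perm, key

def decrypt(cipher, perm, key):
    prev, scrambled = 0, []
    for c, k in zip(cipher, key):
        scrambled.append(c ^ k ^ prev)
        prev = c
    plain = [0] * len(cipher)
    for i, p in enumerate(perm):
        plain[p] = scrambled[i]
    return plain
```

In the semi-symmetric setting, both parties would regenerate `perm` and `key` locally from their synchronized chaotic trajectories instead of transmitting them.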
Analysis of Adaptive Control Scheme in IEEE 802.11 and IEEE 802.11e Wireless LANs
Lee, Bih-Hwang; Lai, Hui-Cheng
In order to achieve the prioritized quality of service (QoS) guarantee, the IEEE 802.11e EDCAF (the enhanced distributed channel access function) provides distinguished services by configuring different QoS parameters for different access categories (ACs). An admission control scheme is needed to maximize the utilization of the wireless channel. Most papers study throughput improvement by solving a complicated multidimensional Markov-chain model. In this paper, we introduce a back-off model to study the transmission probability for different values of the arbitration interframe space number (AIFSN) and the minimum contention window size (CWmin). We propose an adaptive control scheme (ACS) to dynamically update AIFSN and CWmin based on periodic monitoring of the current channel status and QoS requirements, so as to achieve the specified service differentiation at access points (APs). This paper provides an effective tuning mechanism for improving QoS in WLANs. Analytical and simulation results show that the proposed scheme outperforms the basic EDCAF in terms of throughput and service differentiation, especially at high collision rates.
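The adaptive idea, monitor the channel each period and then nudge the EDCA parameters, can be sketched as follows. The doubling/halving steps, the 10% target collision rate, and the window bounds are illustrative choices, not the control law of the paper.

```python
def update_cwmin(cwmin, collision_rate, target=0.1,
                 cw_floor=15, cw_ceiling=1023):
    """Nudge the minimum contention window toward a target collision rate.

    A high collision rate widens the window (more backoff, fewer
    collisions); a low rate narrows it (less idle time). Values follow
    the usual 2^k - 1 contention-window pattern.
    """
    if collision_rate > target:
        cwmin = min(2 * cwmin + 1, cw_ceiling)   # e.g. 15 -> 31 -> 63
    elif collision_rate < target / 2:
        cwmin = max((cwmin - 1) // 2, cw_floor)  # inverse of the doubling step
    return cwmin
```

An AP would run one such update per AC at every monitoring period, with per-AC targets to preserve the intended differentiation.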
2012-01-01
Background Systematic Reviews (SRs) are an essential part of evidence-based medicine, providing support for clinical practice and policy on a wide range of medical topics. However, producing SRs is resource-intensive, and progress in the research they review leads to SRs becoming outdated, requiring updates. Although the question of how and when to update SRs has been studied, the best method for determining when to update is still unclear, necessitating further research. Methods In this work we study the potential impact of a machine learning-based automated system for providing alerts when new publications become available within an SR topic. Some of these new publications are especially important, as they report findings that are more likely to initiate a review update. To this end, we have designed a classification algorithm to identify articles that are likely to be included in an SR update, along with an annotation scheme designed to identify the most important publications in a topic area. Using an SR database containing over 70,000 articles, we annotated articles from 9 topics that had received an update during the study period. The algorithm was then evaluated in terms of the overall correct and incorrect alert rate for publications meeting the topic inclusion criteria, as well as in terms of its ability to identify important, update-motivating publications in a topic area. Results Our initial approach, based on our previous work in topic-specific SR publication classification, identifies over 70% of the most important new publications, while maintaining a low overall alert rate. Conclusions We performed an initial analysis of the opportunities and challenges in aiding the SR update planning process with an informatics-based machine learning approach. Alerts could be a useful tool in the planning, scheduling, and allocation of resources for SR updates, providing an improvement in timeliness and coverage for the large number of medical topics needing SRs
Shahhosseini, Zohreh; Hamzehgardeshi, Zeinab
2014-11-30
Since several factors affect nurses' participation in Continuing Education, and nurses' Continuing Education in turn affects patients' and community health status, it is essential to know the facilitators of and barriers to participation in Continuing Education programs and to plan accordingly. This mixed-approach study aimed to investigate the facilitators and barriers of nurses' participation and to explore nurses' perception of the most common facilitators and barriers. An explanatory sequential mixed-methods design with a follow-up explanations variant was used: quantitative data were collected first (361 nurses), and the quantitative results were then explained through in-depth interviews in a qualitative study. The results showed that the mean score of facilitators to nurses' participation in Continuing Education was significantly higher than the mean score of barriers (61.99 ± 10.85 versus 51.17 ± 12.83). The most common facilitator of participation in Continuing Education was related to "Update my knowledge". By reviewing the interview transcripts in the qualitative phase, two main themes, updating information and updating professional skills, were extracted as the most common facilitators, and lack of support emerged as the most common barrier to nurses' participation in Continuing Education programs. Given the important role of Continuing Education in professional skills, nurse managers should facilitate nurses' participation in Continuing Education.
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2015-10-01
The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. Thus, we have optimized the speed of this important part of WRF. The Intel Many Integrated Core (MIC) architecture ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel MIC hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers, although getting maximum performance out of MICs requires some novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on the Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.
Exact shock profile for the ASEP with sublattice-parallel update
International Nuclear Information System (INIS)
Jafarpour, F H; Ghafari, F E; Masharian, S R
2005-01-01
We analytically study the one-dimensional asymmetric simple exclusion process with open boundaries under the sublattice-parallel updating scheme. We investigate the stationary state properties of this model conditioned on finding a given particle number in the system. Recent numerical investigations have shown that the model possesses three different phases in this case. Using a matrix product method we calculate both the exact canonical partition function and the density profiles of the particles in each phase. Application of the Yang-Lee theory reveals that the model undergoes two second-order phase transitions at critical points. These results confirm the correctness of our previous numerical studies.
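A minimal simulation of the sublattice-parallel scheme, updating even bonds together, then odd bonds together with the boundaries, might look like the sketch below; the rates `alpha` (injection), `beta` (extraction) and `p` (hopping) are generic open-boundary ASEP parameters, not tied to the phase diagram studied in the paper.

```python
import random

def sublattice_parallel_step(sites, p=1.0, alpha=0.5, beta=0.5, rng=random):
    """One sweep of the sublattice-parallel scheme on a 0/1 occupation
    list: bonds (0,1),(2,3),... are updated simultaneously, then bonds
    (1,2),(3,4),... together with the two boundary sites."""
    n = len(sites)
    for parity in (0, 1):
        if parity == 1:
            # boundary updates belong to the odd sub-step
            if sites[0] == 0 and rng.random() < alpha:
                sites[0] = 1                      # inject at the left edge
            if sites[-1] == 1 and rng.random() < beta:
                sites[-1] = 0                     # extract at the right edge
        for i in range(parity, n - 1, 2):
            if sites[i] == 1 and sites[i + 1] == 0 and rng.random() < p:
                sites[i], sites[i + 1] = 0, 1     # particle hops right
    return sites
```

With `p = alpha = beta = 1` the sweep is deterministic, which makes the bond ordering of the scheme easy to verify by hand.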
An update on the BQCD Hybrid Monte Carlo program
Haar, Taylor Ryan; Nakamura, Yoshifumi; Stüben, Hinnerk
2018-03-01
We present an update of BQCD, our Hybrid Monte Carlo program for simulating lattice QCD. BQCD is one of the main production codes of the QCDSF collaboration and is used by CSSM and in some Japanese finite temperature and finite density projects. Since the first publication of the code at Lattice 2010 the program has been extended in various ways. New features of the code include: dynamical QED, action modification in order to compute matrix elements by using Feynman-Hellmann theory, more trace measurements (like Tr(D^{-n}) for kappa, c_SW and chemical potential reweighting), a more flexible integration scheme, polynomial filtering, term-splitting for RHMC, and a portable implementation of performance-critical parts employing SIMD.
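The core of any Hybrid Monte Carlo program is leapfrog integration of Hamiltonian dynamics followed by a Metropolis accept/reject step. A one-dimensional toy version (not BQCD's lattice implementation; step size and trajectory length are illustrative) can be sketched as:

```python
import math
import random

def leapfrog(q, p, grad_u, eps, steps):
    """Leapfrog integration: half-step momentum, alternating full steps,
    half-step momentum. Symplectic and time-reversible, as HMC requires."""
    p = p - 0.5 * eps * grad_u(q)
    for _ in range(steps - 1):
        q = q + eps * p
        p = p - eps * grad_u(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_u(q)
    return q, p

def hmc_step(q, u, grad_u, eps=0.1, steps=20, rng=random):
    """One HMC update targeting the density exp(-u(q)) in one dimension."""
    p0 = rng.gauss(0.0, 1.0)                      # refresh momentum
    q_new, p_new = leapfrog(q, p0, grad_u, eps, steps)
    h_old = u(q) + 0.5 * p0 * p0
    h_new = u(q_new) + 0.5 * p_new * p_new
    if rng.random() < math.exp(min(0.0, h_old - h_new)):
        return q_new                              # accept the proposal
    return q                                      # reject: keep old state
```

For a standard normal target, u(q) = q²/2, long chains reproduce mean 0 and variance 1.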
Matthes, J. H.; Pederson, N.; David, O.; Martin-Benito, D.
2017-12-01
Understanding the effects of climate change and biotic disturbance within diverse temperate mesic forests is complicated by the need to scale between impacts within individuals and across species in the community. It is not clear how these impacts within individuals and across a community influence the stand- and regional-scale response. Furthermore, co-occurring or sequential disturbances can make it challenging to interpret forest responses from observational data. In the northeastern United States, the 1960s drought was perhaps the most severe period of climatic stress within the past 300 years and negatively, but unevenly, impacted the growth of individual trees across all species. Additionally, in 1981 the northeast experienced an outbreak of the defoliator Lymantria dispar, which preferentially consumes oak leaves but in that year impacted a high proportion of other species as well. To investigate the effects of drought (across functional groups) and defoliation (within a functional group), we combined a long-term tree-ring dataset from an old-growth forest within the Palmaghatt Ravine in New York with a version of the Ecosystem Demography model that includes a scheme for representing forest insects and pathogens. We explored the sequential impacts of severe drought and defoliation on tree growth, community composition, and ecosystem-atmosphere interactions (carbon, water, and heat flux). We also conducted a set of modeling experiments with climate and defoliation disturbance scenarios to bound the potential long-term response of this forest to co-occurring and sequential drought-defoliator disturbances over the next fifty years.
International Nuclear Information System (INIS)
Bhunia, C.T.
2007-07-01
The packet combining scheme is a well-defined, simple error correction scheme for the detection and correction of errors at the receiver. Although it permits a higher throughput than other basic ARQ protocols, packet combining (PC) fails to correct errors when errors occur in the same bit locations of the copies. In a previous work, a scheme known as the Packet Reversed Packet Combining (PRPC) scheme, which corrects errors occurring at the same bit location of erroneous copies, was studied; however, PRPC does not handle a situation where a packet has more than one erroneous bit. The Modified Packet Combining (MPC) scheme, which can correct double or higher bit errors, was studied elsewhere. Both the PRPC and MPC schemes were believed to offer higher throughput in previous studies; however, neither adequate investigation nor exact analysis was done to substantiate this claim. In this work, an exact analysis of both PRPC and MPC is carried out and the results are reported. A combined protocol (PRPC and MPC) is proposed, and the analysis shows that it is capable of offering even higher throughput and better error correction capability at high bit error rate (BER) and larger packet size. (author)
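The combining idea can be illustrated with a bit-wise majority vote across received copies, which is one simple way (not necessarily the paper's exact PRPC/MPC construction) to correct errors that plain packet combining cannot:

```python
def majority_combine(copies):
    """Bit-wise majority vote over an odd number of received copies.

    A bit is decoded correctly whenever fewer than half of the copies
    are wrong at that position -- the failure mode of plain packet
    combining is precisely all copies erring at the same position."""
    n_bits = len(copies[0])
    out = []
    for i in range(n_bits):
        ones = sum(c[i] for c in copies)
        out.append(1 if ones > len(copies) / 2 else 0)
    return out
```

PRPC improves on this by also transmitting a bit-reversed copy, so that errors hitting the same channel positions land on different packet bits.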
C-quence: a tool for analyzing qualitative sequential data.
Duncan, Starkey; Collier, Nicholson T
2002-02-01
C-quence is a software application that matches sequential patterns of qualitative data specified by the user and calculates the rate of occurrence of these patterns in a data set. Although it was designed to facilitate analyses of face-to-face interaction, it is applicable to any data set involving categorical data and sequential information. C-quence queries are constructed using a graphical user interface. The program does not limit the complexity of the sequential patterns specified by the user.
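The core query, how often a user-specified sequential pattern occurs in a stream of categorical events, can be sketched in a few lines (C-quence itself adds a graphical query builder and richer pattern syntax):

```python
def count_pattern(events, pattern):
    """Count (possibly overlapping) occurrences of a contiguous
    sequential pattern in a list of categorical events, and return
    both the count and the rate per event."""
    k, hits = len(pattern), 0
    for i in range(len(events) - k + 1):
        if events[i:i + k] == pattern:
            hits += 1
    return hits, hits / max(len(events), 1)
```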
Everstine, Karen; Abt, Eileen; McColl, Diane; Popping, Bert; Morrison-Rowe, Sara; Lane, Richard W; Scimeca, Joseph; Winter, Carl; Ebert, Andrew; Moore, Jeffrey C; Chin, Henry B
2018-01-01
Food fraud, the intentional misrepresentation of the true identity of a food product or ingredient for economic gain, is a threat to consumer confidence and public health and has received increased attention from both regulators and the food industry. Following updates to food safety certification standards and publication of new U.S. regulatory requirements, we undertook a project to (i) develop a scheme to classify food fraud-related adulterants based on their potential health hazard and (ii) apply this scheme to the adulterants in a database of 2,970 food fraud records. The classification scheme was developed by a panel of experts in food safety and toxicology from the food industry, academia, and the U.S. Food and Drug Administration. Categories and subcategories were created through an iterative process of proposal, review, and validation using a subset of substances known to be associated with the fraudulent adulteration of foods. Once developed, the scheme was applied to the adulterants in the database. The resulting scheme included three broad categories: 1, potentially hazardous adulterants; 2, adulterants that are unlikely to be hazardous; and 3, unclassifiable adulterants. Categories 1 and 2 consisted of seven subcategories intended to further define the range of hazard potential for adulterants. Application of the scheme to the 1,294 adulterants in the database resulted in 45% of adulterants classified in category 1 (potentially hazardous). Twenty-seven percent of the 1,294 adulterants had a history of causing consumer illness or death, were associated with safety-related regulatory action, or were classified as allergens. These results reinforce the importance of including a consideration of food fraud-related adulterants in food safety systems. This classification scheme supports food fraud mitigation efforts and hazard identification as required in the U.S. Food Safety Modernization Act Preventive Controls Rules.
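The decision logic of such a scheme can be sketched as a small classifier. The attribute names below are invented for illustration, and the real scheme refines categories 1 and 2 into seven subcategories:

```python
def classify_adulterant(record):
    """Assign an adulterant record (a dict of boolean attributes) to one
    of the three broad categories described above.

    1: potentially hazardous, 2: unlikely to be hazardous,
    3: unclassifiable (identity of the adulterant unknown)."""
    if not record.get("identity_known", True):
        return 3
    if (record.get("caused_illness") or record.get("regulatory_action")
            or record.get("allergen")):
        return 1
    return 2
```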
Radaydeh, Redha Mahmoud Mesleh; Alouini, Mohamed-Slim
2012-09-01
This paper proposes a collaborative-based scheme for a transmit antenna channel assignment in overloaded multiantenna femtocells, with the aim of reducing the overhead load. It is assumed that multiple femtocell access points (FAPs) are deployed to sequentially allocate the available resources to scheduled users while reducing the interference experienced by each active user. The FAPs operate concurrently and each of them is allocated an orthogonal channel and employs a transmit array of arbitrary size. The suitable FAP and its associated transmit channel are then identified based on the noncoherently predicted interference power levels on available channels when feedback links are capacity limited. The effect of possible FAP failure or infeasibility to collaborate is characterized for different operating conditions. The applicability of the proposed scheme for specific cases, such as the use of directional antennas in each FAP or shared channels among different FAPs, is also discussed. For arbitrary statistical models of interference power levels on different channels, the average numbers of collaboration requests and examined transmit antenna channels are quantified. Moreover, the statistics of the resulting interference power are derived, which are then used to study various system performance measures. The effect of the interference threshold on the aforementioned measures for processing load and achieved performance is investigated. Numerical and simulations results are presented to support the analytical development and to clarify the tradeoff between the achieved performance enhancement using the proposed scheme and the required processing load for different operating scenarios. © 1967-2012 IEEE.
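The sequential examine-until-acceptable logic can be sketched as follows; the threshold test and the fallback-to-minimum rule are a simplified stand-in for the paper's collaborative FAP protocol, and the return of the examined-channel count mirrors the processing-load measure the paper quantifies:

```python
def assign_channel(predicted_interference, threshold, available=None):
    """Sequentially examine candidate transmit-antenna channels and take
    the first whose predicted interference power falls below the
    threshold; fall back to the overall minimum if none qualifies.

    Returns (channel_index, number_of_channels_examined)."""
    channels = (available if available is not None
                else range(len(predicted_interference)))
    examined = 0
    best, best_power = None, float("inf")
    for ch in channels:
        examined += 1
        power = predicted_interference[ch]
        if power < best_power:
            best, best_power = ch, power
        if power < threshold:
            return ch, examined   # acceptable channel found: stop early
    return best, examined         # no channel met the threshold
```

Loosening the threshold reduces the number of channels examined (lower processing load) at the cost of accepting more interference, which is the tradeoff studied in the paper.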
International Nuclear Information System (INIS)
Chen, Hanying; Gao, Puzhen; Tan, Sichao; Tang, Jiguo; Yuan, Hongsheng
2017-01-01
Highlights: •An online condition prediction method for natural circulation systems in NPPs was proposed based on EOS-ELM. •The proposed online prediction method was validated using experimental data. •The training speed of the proposed method is very fast. •The proposed method can achieve good accuracy in a wide parameter range. -- Abstract: Natural circulation design is widely used in the passive safety systems of advanced nuclear power reactors. Irregular and chaotic flow oscillations are often observed in boiling natural circulation systems, so it is difficult for operators to monitor and predict the condition of these systems. An online condition forecasting method for natural circulation systems is proposed in this study as an assisting technique for plant operators. The proposed prediction approach was developed based on the Ensemble of Online Sequential Extreme Learning Machine (EOS-ELM) and phase space reconstruction. Online Sequential Extreme Learning Machine (OS-ELM) is an online sequential learning neural network algorithm, and EOS-ELM is its ensemble variant. The proposed condition prediction method can be initiated by a small chunk of monitoring data and can be updated by newly arrived data at very fast speed during online prediction. Simulation experiments were conducted on the data of two natural circulation loops to validate the performance of the proposed method. The simulation results show that the proposed prediction model can successfully recognize different types of flow oscillations and accurately forecast the trend of monitored plant variables. The influence of the number of hidden nodes and neural network inputs on prediction performance was studied, and the proposed model can achieve good accuracy in a wide parameter range. Moreover, the comparison results show that the proposed condition prediction method has much faster online learning speed and better prediction accuracy than a conventional neural network model.
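At the heart of OS-ELM is a recursive least-squares (RLS) update of the output weights, which is what lets the model start from a small data chunk and then absorb each new observation cheaply. A pure-Python sketch of that recursion for a linear read-out; in an actual ELM, the hidden-layer activation vector would simply take the place of `x`:

```python
def rls_init(dim, delta=1e6):
    """Start recursive least squares with P = delta * I (a weak prior)."""
    P = [[delta if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    w = [0.0] * dim
    return P, w

def rls_update(P, w, x, y):
    """Incorporate one observation (x, y) into the RLS state in place.

    This rank-one update is the same recursion OS-ELM applies to its
    output weights: no retraining from scratch, cost O(dim^2) per sample."""
    d = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(d)) for i in range(d)]
    denom = 1.0 + sum(x[i] * Px[i] for i in range(d))
    gain = [Px[i] / denom for i in range(d)]
    err = y - sum(w[i] * x[i] for i in range(d))
    for i in range(d):
        w[i] += gain[i] * err          # correct weights toward the new sample
    for i in range(d):
        for j in range(d):
            P[i][j] -= gain[i] * Px[j]  # shrink the covariance estimate
    return P, w
```

Sequentially feeding noiseless samples of a linear target recovers its coefficients, because the recursion reproduces the batch (regularized) least-squares solution exactly.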
Top-down attention affects sequential regularity representation in the human visual system.
Kimura, Motohiro; Widmann, Andreas; Schröger, Erich
2010-08-01
Recent neuroscience studies using visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the visual sensory system, have shown that although sequential regularities embedded in successive visual stimuli can be automatically represented in the visual sensory system, the existence of a sequential regularity does not itself guarantee that it will be automatically represented. In the present study, we investigated the effects of top-down attention on sequential regularity representation in the visual sensory system. Our results showed that a sequential regularity (SSSSD) embedded in a modified oddball sequence, in which infrequent deviant (D) and frequent standard stimuli (S) differing in luminance were regularly presented (SSSSDSSSSDSSSSD...), was represented in the visual sensory system only when participants attended to the sequential regularity in luminance, but not when participants ignored the stimuli or simply attended to the dimension of luminance per se. This suggests that top-down attention affects sequential regularity representation in the visual sensory system and that top-down attention is a prerequisite for particular sequential regularities to be represented.
Mining compressing sequential problems
Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.
2012-01-01
Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and
Fast sequential Monte Carlo methods for counting and optimization
Rubinstein, Reuven Y; Vaisman, Radislav
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
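A classic example of sequential Monte Carlo applied to a counting problem is the Rosenbluth estimator for self-avoiding walks: grow each sample step by step and carry the product of the number of available choices as an importance weight. This is a standard textbook instance of the technique, not an example taken from the book:

```python
import random

def rosenbluth_saw_estimate(n_steps, n_walks, rng=random):
    """Sequential importance sampling estimate of the number of
    self-avoiding walks of length n_steps on the square lattice.

    Each walk is grown among the currently free neighbours; the weight
    is multiplied by the number of choices at every step, making the
    average weight an unbiased estimate of the walk count."""
    total = 0.0
    for _ in range(n_walks):
        pos, visited, weight = (0, 0), {(0, 0)}, 1.0
        for _ in range(n_steps):
            x, y = pos
            free = [p for p in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                    if p not in visited]
            if not free:
                weight = 0.0          # walk trapped: contributes zero
                break
            weight *= len(free)       # importance weight accumulates choices
            pos = rng.choice(free)
            visited.add(pos)
        total += weight
    return total / n_walks
```

For lengths up to 3 the estimator is exact by construction (every walk has the same weight); for longer walks it converges to the true count, e.g. 100 walks of length 4.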
Computing sequential equilibria for two-player games
DEFF Research Database (Denmark)
Miltersen, Peter Bro
2006-01-01
Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall using linear programming and avoiding conversion to normal form. Their algorithm has been used by AI researchers for constructing prescriptive strategies for concrete, often fairly large games. Koller and Pfeffer pointed out that the strategies obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for the practical applications. We show how to remove this deficiency by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial time.
Computing Sequential Equilibria for Two-Player Games
DEFF Research Database (Denmark)
Miltersen, Peter Bro; Sørensen, Troels Bjerre
2006-01-01
Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall using linear programming and avoiding conversion to normal form. Koller and Pfeffer pointed out that the strategies obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for the practical applications. We show how to remove this deficiency by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial time. In addition, the equilibrium we find is normal-form perfect. Our technique generalizes to general-sum games…
A distributed authentication and authorization scheme for in-network big data sharing
Directory of Open Access Journals (Sweden)
Ruidong Li
2017-11-01
Big data has a strong demand for a network infrastructure with the capability to support data sharing and retrieval efficiently. Information-centric networking (ICN) is an emerging approach to satisfy this demand, where big data is cached ubiquitously in the network and retrieved using data names. However, existing authentication and authorization schemes rely mostly on centralized servers to provide certification and mediation services for data retrieval. This causes considerable traffic overhead for the secure distributed sharing of data. To solve this problem, we employ identity-based cryptography (IBC) to propose a Distributed Authentication and Authorization Scheme (DAAS), where an identity-based signature (IBS) is used to achieve distributed verifications of the identities of publishers and users. Moreover, Ciphertext-Policy Attribute-based encryption (CP-ABE) is used to enable the distributed and fine-grained authorization. DAAS consists of three phases: initialization, secure data publication, and secure data retrieval, which seamlessly integrate authentication and authorization with the interest/data communication paradigm in ICN. In particular, we propose trustworthy registration and Network Operator and Authority Manifest (NOAM) dissemination to provide initial secure registration and enable efficient authentication for global data retrieval. Meanwhile, Attribute Manifest (AM) distribution coupled with automatic attribute update is proposed to reduce the cost of attribute retrieval. We examine the performance of the proposed DAAS, which shows that it can achieve a lower bandwidth cost than existing schemes.
International Nuclear Information System (INIS)
Huang, Ying; Fang, Xia; Xiao, Hai; Bevans, Wesley James; Chen, Genda; Zhou, Zhi
2013-01-01
Steel buildings are subjected to fire hazards during or immediately after a major earthquake. Under combined gravity and thermal loads, they have non-uniformly distributed stiffness and strength, and thus collapse progressively with large deformation. In this study, large-strain optical fiber sensors for high temperature applications and a temperature-dependent finite element model updating method are proposed for accurate prediction of structural behavior in real time. The optical fiber sensors can measure strains up to 10% at approximately 700 °C. Their measurements are in good agreement with those from strain gauges up to 0.5%. In comparison with the experimental results, the proposed model updating method can reduce the predicted strain errors from over 75% to below 20% at 800 °C. The minimum number of sensors in a fire zone that can properly characterize the vertical temperature distribution of heated air due to the gravity effect should be included in the proposed model updating scheme to achieve a predetermined simulation accuracy. (paper)
Real-time numerical shake prediction and updating for earthquake early warning
Wang, Tianyun; Jin, Xing; Wei, Yongxiang; Huang, Yandan
2017-12-01
Ground motion prediction is important for earthquake early warning systems, because a region's peak ground motion indicates the potential disaster. In order to predict the peak ground motion quickly and precisely with limited station wave records, we propose a real-time numerical shake prediction and updating method. Our method first predicts the ground motion based on a ground motion prediction equation after P-wave detection at several stations, denoted as the initial prediction. In order to correct the error of the initial prediction, an updating scheme based on real-time simulation of wave propagation is designed. A data assimilation technique is incorporated to predict the distribution of seismic wave energy precisely. Radiative transfer theory and Monte Carlo simulation are used for modeling wave propagation in 2-D space, and the peak ground motion is calculated as quickly as possible. Our method has the potential to predict the shakemap, allowing the potential disaster to be anticipated before it happens. The 2008 Ms 8.0 Wenchuan earthquake is studied as an example to show the validity of the proposed method.
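The two-stage idea, an immediate GMPE-based estimate that is later corrected as station data arrive, can be sketched as follows. The GMPE coefficients and the correction weight below are invented placeholders, and the mean-residual correction is a crude stand-in for the paper's simulation-based assimilation:

```python
import math

def gmpe_log_pga(magnitude, distance_km, a=0.5, b=-1.3, c=0.3):
    """Illustrative ground-motion prediction equation of the usual form
    log(PGA) = a*M + b*log10(R) + c. The coefficients are made up, not
    those of any operational early-warning system."""
    return a * magnitude + b * math.log10(max(distance_km, 1.0)) + c

def updated_prediction(magnitude, distance_km, residuals, weight=0.5):
    """Initial GMPE prediction corrected by the mean log-residual
    (observed minus predicted) at stations that have already recorded
    strong motion; with no observations yet, return the initial value."""
    initial = gmpe_log_pga(magnitude, distance_km)
    if not residuals:
        return initial
    bias = sum(residuals) / len(residuals)
    return initial + weight * bias
```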
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
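The multivariate idea, sample the uncertain parameters, re-solve the decision problem per draw, and report how often the base-case policy remains optimal, can be sketched on a toy one-shot decision. A real MDP analysis would re-run value iteration for each draw; the rewards and the parameter prior below are invented for illustration:

```python
import random

def optimal_action(p_success, reward_treat=10.0, reward_wait=6.0):
    """Toy one-shot decision standing in for an MDP policy choice:
    treat if the expected benefit beats the certain value of waiting."""
    return "treat" if p_success * reward_treat > reward_wait else "wait"

def policy_confidence(base_action, p_mean=0.65, p_spread=0.15,
                      n_draws=2000, rng=random):
    """Probabilistic sensitivity analysis: sample the uncertain success
    probability, re-solve the decision, and report the fraction of
    draws in which the base-case policy stays optimal."""
    agree = 0
    for _ in range(n_draws):
        p = min(1.0, max(0.0, rng.gauss(p_mean, p_spread)))
        if optimal_action(p) == base_action:
            agree += 1
    return agree / n_draws
```

Sweeping a willingness-to-accept threshold over such confidence values, draw by draw, is what traces out a policy acceptability curve.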
Parallel-Sequential Texture Analysis
van den Broek, Egon; Singh, Sameer; Singh, Maneesha; van Rikxoort, Eva M.; Apte, Chid; Perner, Petra
2005-01-01
Color-induced texture analysis is explored using two texture analysis techniques, the co-occurrence matrix and the color correlogram, as well as color histograms. Several quantization schemes for six color spaces and the human-based 11-color quantization scheme have been applied. The VisTex texture
The sequential structure of brain activation predicts skill.
Anderson, John R; Bothell, Daniel; Fincham, Jon M; Moon, Jungaa
2016-01-29
In an fMRI study, participants were trained to play a complex video game. They were scanned early and then again after substantial practice. While better players showed greater activation in one region (right dorsal striatum) their relative skill was better diagnosed by considering the sequential structure of whole brain activation. Using a cognitive model that played this game, we extracted a characterization of the mental states that are involved in playing a game and the statistical structure of the transitions among these states. There was a strong correspondence between this measure of sequential structure and the skill of different players. Using multi-voxel pattern analysis, it was possible to recognize, with relatively high accuracy, the cognitive states participants were in during particular scans. We used the sequential structure of these activation-recognized states to predict the skill of individual players. These findings indicate that important features about information-processing strategies can be identified from a model-based analysis of the sequential structure of brain activation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Racz, A.; Lux, I. [Hungarian Academy of Sciences, Budapest (Hungary). Atomic Energy Research Inst.
1996-04-16
The applicability of the classical sequential probability ratio testing (SPRT) for early failure detection problems is limited by the fact that there is an extra time delay between the occurrence of the failure and its first recognition. Chien and Adams developed a method to minimize this time for the case when the problem can be formulated as testing the mean value of a Gaussian signal. In our paper we propose a procedure that can be applied for both mean and variance testing and that minimizes the time delay. The method is based on a special parametrization of the classical SPRT. The one-sided sequential tests (OSST) can reproduce the results of the Chien-Adams test when applied for mean values. (author).
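For readers unfamiliar with the classical SPRT that the proposed parametrization builds on, a minimal sketch for testing the mean of a Gaussian signal follows. The thresholds come from Wald's standard approximations; this is the textbook test, not the Chien-Adams or OSST variant described above.

```python
import math

def sprt_gaussian_mean(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    # Wald's SPRT for H0: mean == mu0 vs H1: mean == mu1, with known sigma.
    # Returns ("H0" | "H1" | "continue", number of samples consumed).
    upper = math.log((1 - beta) / alpha)   # crossing above: accept H1
    lower = math.log(beta / (1 - alpha))   # crossing below: accept H0
    llr, n = 0.0, 0
    for n, x in enumerate(samples, 1):
        # log-likelihood-ratio increment for one Gaussian observation
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma ** 2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "continue", n
```

The delay the paper minimizes is precisely the gap between the onset of the change and the random stopping time `n` at which the cumulative log-likelihood ratio crosses a threshold.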
Sman, van der R.G.M.
2006-01-01
In the special case of relaxation parameter = 1 lattice Boltzmann schemes for (convection) diffusion and fluid flow are equivalent to finite difference/volume (FD) schemes, and are thus coined finite Boltzmann (FB) schemes. We show that the equivalence is inherent to the homology of the
Mining Emerging Sequential Patterns for Activity Recognition in Body Sensor Networks
DEFF Research Database (Denmark)
Gu, Tao; Wang, Liang; Chen, Hanhua
2010-01-01
Body Sensor Networks offer many applications in healthcare, well-being and entertainment. One of the emerging applications is recognizing activities of daily living. In this paper, we introduce a novel knowledge pattern named Emerging Sequential Pattern (ESP), a sequential pattern that discovers...... significant class differences, to recognize both simple (i.e., sequential) and complex (i.e., interleaved and concurrent) activities. Based on ESPs, we build our complex activity models directly upon the sequential model to recognize both activity types. We conduct comprehensive empirical studies to evaluate......
Discrimination between sequential and simultaneous virtual channels with electrical hearing
Landsberger, David; Galvin, John J.
2011-01-01
In cochlear implants (CIs), simultaneous or sequential stimulation of adjacent electrodes can produce intermediate pitch percepts between those of the component electrodes. However, it is unclear whether simultaneous and sequential virtual channels (VCs) can be discriminated. In this study, CI users were asked to discriminate simultaneous and sequential VCs; discrimination was measured for monopolar (MP) and bipolar + 1 stimulation (BP + 1), i.e., relatively broad and focused stimulation mode...
Hybrid Computerized Adaptive Testing: From Group Sequential Design to Fully Sequential Design
Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff
2016-01-01
Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Though most designs of CAT and MST exhibit strength and weakness in recent large-scale implementations, there is no simple answer to the question of which design is better because different…
Sequential dependencies in magnitude scaling of loudness
DEFF Research Database (Denmark)
Joshi, Suyash Narendra; Jesteadt, Walt
2013-01-01
Ten normally hearing listeners used a programmable sone-potentiometer knob to adjust the level of a 1000-Hz sinusoid to match the loudness of numbers presented to them in a magnitude production task. Three different power-law exponents (0.15, 0.30, and 0.60) and a log-law with equal steps in dB were used to program the sone-potentiometer. The knob settings systematically influenced the form of the loudness function. Time series analysis was used to assess the sequential dependencies in the data, which increased with increasing exponent and were greatest for the log-law. It would be possible, therefore, to choose knob properties that minimized these dependencies. When the sequential dependencies were removed from the data, the slope of the loudness functions did not change, but the variability decreased. Sequential dependencies were only present when the level of the tone on the previous trial...
Visual short-term memory for sequential arrays.
Kumar, Arjun; Jiang, Yuhong
2005-04-01
The capacity of visual short-term memory (VSTM) for a single visual display has been investigated in past research, but VSTM for multiple sequential arrays has been explored only recently. In this study, we investigate the capacity of VSTM across two sequential arrays separated by a variable stimulus onset asynchrony (SOA). VSTM for spatial locations (Experiment 1), colors (Experiments 2-4), orientations (Experiments 3 and 4), and conjunction of color and orientation (Experiment 4) were tested, with the SOA across the two sequential arrays varying from 100 to 1,500 msec. We find that VSTM for the trailing array is much better than VSTM for the leading array, but when averaged across the two arrays VSTM has a constant capacity independent of the SOA. We suggest that multiple displays compete for retention in VSTM and that separating information into two temporally discrete groups does not enhance the overall capacity of VSTM.
The target-to-foils shift in simultaneous and sequential lineups.
Clark, Steven E; Davey, Sherrie L
2005-04-01
A theoretical cornerstone in eyewitness identification research is the proposition that witnesses, in making decisions from standard simultaneous lineups, make relative judgments. The present research considers two sources of support for this proposal. An experiment by G. L. Wells (1993) showed that if the target is removed from a lineup, witnesses shift their responses to pick foils, rather than rejecting the lineups, a result we will term a target-to-foils shift. Additional empirical support is provided by results from sequential lineups which typically show higher accuracy than simultaneous lineups, presumably because of a decrease in the use of relative judgments in making identification decisions. The combination of these two lines of research suggests that the target-to-foils shift should be reduced in sequential lineups relative to simultaneous lineups. Results of two experiments showed an overall advantage for sequential lineups, but also showed a target-to-foils shift equal in size for simultaneous and sequential lineups. Additional analyses indicated that the target-to-foils shift in sequential lineups was moderated in part by an order effect and was produced with (Experiment 2) or without (Experiment 1) a shift in decision criterion. This complex pattern of results suggests that more work is needed to understand the processes which underlie decisions in simultaneous and sequential lineups.
Staged Optimization Design for Updating Urban Drainage Systems in a City of China
Directory of Open Access Journals (Sweden)
Kui Xu
2018-01-01
Flooding has been reported more often than in the past in most cities of China in recent years. In response, China's State Council has urged the 36 largest cities to upgrade their preparedness to handle 50-year rainfall, which will be a massive project requiring large investments. We propose a staged optimization design for updating urban drainage that is not only a flexible option against environmental changes, but also an effective way to reduce the cost of the project. The staged cost optimization model, coupled with a hydraulic model, was developed for Fuzhou City, China. The model was established to minimize the total present costs, including intervention costs and flooding costs, with full consideration of the constraints of specific local conditions. The results show that considerable financial savings could be achieved by a staged design rather than an implement-once scheme. The model's sensitivity to four parameters was analyzed: rainfall increase rate, flood unit cost, storage unit cost, and discount rate. The results confirm the applicability and robustness of the model for updating drainage systems to meet the requirements. The findings of this study may have important implications for urban flood management in cities of developing countries with limited construction investments.
International Nuclear Information System (INIS)
Kanelis, Voula; Donaldson, Logan; Muhandiram, D.R.; Rotin, Daniela; Forman-Kay, Julie D.; Kay, Lewis E.
2000-01-01
Many protein-protein interactions involve amino acid sequences containing proline-rich motifs and even poly-proline stretches. The lack of amide protons in such regions complicates assignment, since 1HN-based triple-resonance assignment strategies cannot be employed. Two such systems that we are currently studying include an SH2 domain from the protein Crk with a region containing 9 prolines in a 14 amino acid sequence, as well as a WW domain that interacts with a proline-rich target. A modified version of the HACAN pulse scheme, originally described by Bax and co-workers [Wang et al. (1995) J. Biomol. NMR, 5, 376-382], and an experiment which correlates the intra-residue 1Hα, 13Cα/13Cβ chemical shifts with the 15N shift of the subsequent residue are presented and applied to the two systems listed above, allowing sequential assignment of the molecules.
International Nuclear Information System (INIS)
Molloy, Janelle A.
2010-01-01
Purpose: Improvements in delivery techniques for total body irradiation (TBI) using Tomotherapy and intensity modulated radiation therapy have been proven feasible. Despite the promise of improved dose conformality, the application of these "sequential" techniques has been hampered by concerns over dose heterogeneity to circulating blood. The present study was conducted to provide quantitative evidence regarding the potential clinical impact of this heterogeneity. Methods: Blood perfusion was modeled analytically as possessing linear, sinusoidal motion in the craniocaudal dimension. The average perfusion period for human circulation was estimated to be approximately 78 s. Sequential treatment delivery was modeled as a Gaussian-shaped dose cloud with a 10 cm length that traversed a 183 cm patient length at a uniform speed. Total dose to circulating blood voxels was calculated via numerical integration and normalized to 2 Gy per fraction. Dose statistics and equivalent uniform dose (EUD) were calculated for relevant treatment times, radiobiological parameters, blood perfusion rates, and fractionation schemes. The model was then refined to account for random dispersion superimposed onto the underlying periodic blood flow. Finally, a fully stochastic model was developed using binomial and trinomial probability distributions. These models allowed for the analysis of nonlinear sequential treatment modalities and treatment designs that incorporate deliberate organ sparing. Results: The dose received by individual blood voxels exhibited asymmetric behavior that depended on the coherence among the blood velocity, circulation phase, and the spatiotemporal characteristics of the irradiation beam. Heterogeneity increased with the perfusion period and decreased with the treatment time. Notwithstanding, heterogeneity was less than ±10% for perfusion periods less than 150 s. The EUD was compromised for radiosensitive cells, long perfusion periods, and short treatment times.
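The analytic model in the Methods can be sketched numerically: a blood voxel oscillating sinusoidally along the craniocaudal axis accumulates dose from a Gaussian cloud that sweeps the patient at uniform speed, and heterogeneity appears as the spread of accumulated doses across circulation phases. The circulation period (78 s) and patient length (183 cm) come from the abstract; the beam sigma, treatment time, time step, and voxel count below are illustrative guesses, not the study's parameters.

```python
import math, random

def blood_voxel_dose(phase, period=78.0, patient_len=183.0, beam_sigma=4.25,
                     treat_time=600.0, dt=0.1):
    # Unnormalized dose to one blood voxel.  The voxel oscillates sinusoidally
    # along the craniocaudal axis with the given circulation period; the beam
    # is a Gaussian dose cloud sweeping the patient length at uniform speed.
    speed = patient_len / treat_time
    dose, t = 0.0, 0.0
    while t < treat_time:
        beam_pos = speed * t
        voxel_pos = patient_len / 2 * (1 + math.sin(2 * math.pi * t / period + phase))
        dose += math.exp(-0.5 * ((voxel_pos - beam_pos) / beam_sigma) ** 2) * dt
        t += dt
    return dose

def dose_spread(n_voxels=100, seed=0):
    # Normalize the mean dose to 2 Gy per fraction and report the extremes.
    rng = random.Random(seed)
    doses = [blood_voxel_dose(rng.uniform(0.0, 2 * math.pi)) for _ in range(n_voxels)]
    mean = sum(doses) / len(doses)
    scaled = [2.0 * d / mean for d in doses]
    return min(scaled), max(scaled)
```

The min/max spread returned by `dose_spread` is the phase-coherence effect the study quantifies: voxels whose oscillation happens to track the beam accumulate more dose than those moving against it.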
UD-WCMA: An Energy Estimation and Forecast Scheme for Solar Powered Wireless Sensor Networks
Dehwah, Ahmad H.
2017-04-11
Energy estimation and forecasting play an important role in energy management for solar-powered wireless sensor networks (WSNs). In general, the energy in such networks is managed over a finite time horizon based on input solar power forecasts, to enable continuous operation of the WSN and achieve the sensing objectives while ensuring that no node runs out of energy. In this article, we propose a dynamic version of the weather-conditioned moving average technique (UD-WCMA) to estimate and predict the variations of solar power in a wireless sensor network. The presented approach combines information from real-time measurement data with a set of stored profiles representing the energy patterns at the WSN's location to update the prediction model. The UD-WCMA scheme is based on adaptive weighting parameters that depend on weather changes, which makes it flexible compared to existing estimation schemes and requires no precalibration. A performance analysis was carried out on real irradiance profiles to assess the UD-WCMA's prediction accuracy. Comparative numerical tests against standard forecasting schemes (EWMA, WCMA, and Pro-Energy) show that the new algorithm outperforms them. Experimental validation on real-time low-power sensor nodes confirms these features.
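As a point of reference for the baselines mentioned above, a minimal EWMA forecaster is easy to sketch: each time slot's energy is predicted from an exponentially weighted average of the same slot on previous days. WCMA and UD-WCMA refine this by conditioning on the current day's weather; the sketch below shows only the simple baseline, with an invented two-day example.

```python
def ewma_forecast(history, alpha=0.5):
    # Classical EWMA baseline: predict each time slot's solar energy as an
    # exponentially weighted average of the same slot on previous days.
    # history: list of days, each a list of per-slot harvested energy values.
    est = list(history[0])
    for day in history[1:]:
        est = [alpha * prev + (1 - alpha) * obs for prev, obs in zip(est, day)]
    return est
```

For example, `ewma_forecast([[1.0, 2.0], [3.0, 4.0]])` blends the two days slot by slot. WCMA additionally scales such an estimate by a weather factor computed from the current day's most recent slots, and UD-WCMA makes those weights adaptive.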
An update on the BQCD Hybrid Monte Carlo program
Directory of Open Access Journals (Sweden)
Haar Taylor Ryan
2018-01-01
We present an update of BQCD, our Hybrid Monte Carlo program for simulating lattice QCD. BQCD is one of the main production codes of the QCDSF collaboration and is used by CSSM and in some Japanese finite temperature and finite density projects. Since the first publication of the code at Lattice 2010 the program has been extended in various ways. New features of the code include: dynamical QED; action modification in order to compute matrix elements by using the Feynman-Hellmann theorem; more trace measurements (like Tr(D^-n)) for K, cSW and chemical potential reweighting; a more flexible integration scheme; polynomial filtering; term-splitting for RHMC; and a portable implementation of performance-critical parts employing SIMD.
Dynamics-based sequential memory: Winnerless competition of patterns
International Nuclear Information System (INIS)
Seliger, Philip; Tsimring, Lev S.; Rabinovich, Mikhail I.
2003-01-01
We introduce a biologically motivated dynamical principle of sequential memory which is based on winnerless competition (WLC) of event images. This mechanism is implemented in a two-layer neural model of sequential spatial memory. We present the learning dynamics which leads to the formation of a WLC network. After learning, the system is capable of associative retrieval of prerecorded sequences of patterns
Sequential, progressive, equal-power, reflective beam-splitter arrays
Manhart, Paul K.
2017-11-01
The equations to calculate equal-power reflectivity of a sequential series of beam splitters are presented. Non-sequential optical design examples are offered for uniform illumination using diode lasers. Objects created using Boolean operators and swept surfaces can reflect light into predefined elevation and azimuth angles. Analysis of the illumination patterns for the array is also presented.
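In the lossless case, the equal-power condition has a simple closed form: the k-th of N splitters must reflect 1/(N-k+1) of the power that reaches it, so the final element is a full mirror. The sketch below illustrates this standard result under the lossless assumption; the paper's own equations may include losses or coatings not modeled here.

```python
def equal_power_reflectivities(n_outputs):
    # k-th splitter (0-indexed) reflects 1/(n_outputs - k) of the power that
    # reaches it; the final element is a full mirror (reflectivity 1).
    return [1.0 / (n_outputs - k) for k in range(n_outputs)]

def tapped_powers(reflectivities, power_in=1.0):
    # Power emerging from each tap of the sequential array (lossless splitters).
    out, remaining = [], power_in
    for r in reflectivities:
        out.append(remaining * r)
        remaining *= 1.0 - r
    return out
```

For four outputs the reflectivities are 1/4, 1/3, 1/2, 1, and each tap carries exactly a quarter of the input power.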
Basal ganglia and cortical networks for sequential ordering and rhythm of complex movements
Directory of Open Access Journals (Sweden)
Jeffery G. Bednark
2015-07-01
Voluntary actions require the concurrent engagement and coordinated control of complex temporal (e.g. rhythm) and ordinal motor processes. Using high-resolution functional magnetic resonance imaging (fMRI) and multi-voxel pattern analysis (MVPA), we sought to determine the degree to which these complex motor processes are dissociable in basal ganglia and cortical networks. We employed three different finger-tapping tasks that differed in the demand on the sequential temporal rhythm or sequential ordering of submovements. Our results demonstrate that sequential rhythm and sequential order tasks were partially dissociable based on activation differences. The sequential rhythm task activated a widespread network centered around the SMA and basal-ganglia regions including the dorsomedial putamen and caudate nucleus, while the sequential order task preferentially activated a fronto-parietal network. There was also extensive overlap between sequential rhythm and sequential order tasks, with both tasks commonly activating bilateral premotor, supplementary motor, and superior/inferior parietal cortical regions, as well as regions of the caudate/putamen of the basal ganglia and the ventro-lateral thalamus. Importantly, within the cortical regions that were active for both complex movements, MVPA could accurately classify different patterns of activation for the sequential rhythm and sequential order tasks. In the basal ganglia, however, overlapping activation for the sequential rhythm and sequential order tasks, which was found in classic motor circuits of the putamen and ventro-lateral thalamus, could not be accurately differentiated by MVPA. Overall, our results highlight the convergent architecture of the motor system, where complex motor information that is spatially distributed in the cortex converges into a more compact representation in the basal ganglia.
The sequential price of anarchy for atomic congestion games
de Jong, Jasper; Uetz, Marc Jochen; Liu, Tie-Yan; Qi, Qi; Ye, Yinyu
2014-01-01
In situations without central coordination, the price of anarchy relates the quality of any Nash equilibrium to the quality of a global optimum. Instead of assuming that all players choose their actions simultaneously, we consider games where players choose their actions sequentially. The sequential
Native Frames: Disentangling Sequential from Concerted Three-Body Fragmentation
Rajput, Jyoti; Severt, T.; Berry, Ben; Jochim, Bethany; Feizollah, Peyman; Kaderiya, Balram; Zohrabi, M.; Ablikim, U.; Ziaee, Farzaneh; Raju P., Kanaka; Rolles, D.; Rudenko, A.; Carnes, K. D.; Esry, B. D.; Ben-Itzhak, I.
2018-03-01
A key question concerning the three-body fragmentation of polyatomic molecules is the distinction of sequential and concerted mechanisms, i.e., the stepwise or simultaneous cleavage of bonds. Using laser-driven fragmentation of OCS into O^+ + C^+ + S^+ and employing coincidence momentum imaging, we demonstrate a novel method that enables the clear separation of sequential and concerted breakup. The separation is accomplished by analyzing the three-body fragmentation in the native frame associated with each step and taking advantage of the rotation of the intermediate molecular fragment, CO^2+ or CS^2+, before its unimolecular dissociation. This native-frame method works for any projectile (electrons, ions, or photons), provides details on each step of the sequential breakup, and enables the retrieval of the relevant spectra for sequential and concerted breakup separately. Specifically, this allows the determination of the branching ratio of all these processes in OCS^3+ breakup. Moreover, we find that the first step of sequential breakup is tightly aligned along the laser polarization and identify the likely electronic states of the intermediate dication that undergo unimolecular dissociation in the second step. Finally, the separated concerted breakup spectra show clearly that the central carbon atom is preferentially ejected perpendicular to the laser field.
International Nuclear Information System (INIS)
Kim, Hyun Keol; Hielscher, Andreas H
2009-01-01
It is well acknowledged that transport-theory-based reconstruction algorithms can provide the most accurate reconstruction results, especially when small tissue volumes or highly absorbing media are considered. However, these codes have a high computational burden and often converge slowly. Therefore, methods that accelerate the computation are highly desirable. To this end, we introduce in this work a partial-differential-equation (PDE) constrained approach to optical tomography that makes use of an all-at-once reduced Hessian sequential quadratic programming (rSQP) scheme. The proposed scheme treats the forward and inverse variables independently, which makes it possible to update the radiation intensities and the optical coefficients simultaneously by solving the forward and inverse problems all at once. We evaluate the performance of the proposed scheme with numerical and experimental data, and find that the rSQP scheme can reduce the computation time by a factor of 10–25 compared to the commonly employed limited-memory BFGS method. At the same time, accuracy and robustness are not compromised even in the presence of noise.
Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders
Rußwurm, Marc; Körner, Marco
2018-03-01
Earth observation (EO) sensors deliver data with daily or weekly temporal resolution. Most land use and land cover (LULC) approaches, however, expect cloud-free and mono-temporal observations. The increasing temporal capabilities of today's sensors enable the use of temporal, along with spectral and spatial, features. Domains such as speech recognition or neural machine translation work with inherently temporal data and, today, achieve impressive results using sequential encoder-decoder structures. Inspired by these sequence-to-sequence models, we adapt an encoder structure with convolutional recurrent layers in order to approximate a phenological model for vegetation classes based on a temporal sequence of Sentinel 2 (S2) images. In our experiments, we visualize internal activations over a sequence of cloudy and non-cloudy images and find several recurrent cells which reduce the input activity for cloudy observations. Hence, we assume that our network has learned cloud-filtering schemes solely from input data, which could alleviate the need for tedious cloud filtering as a preprocessing step in many EO approaches. Moreover, using unfiltered temporal series of top-of-atmosphere (TOA) reflectance data, we achieved in our experiments state-of-the-art classification accuracies on a large number of crop classes with minimal preprocessing compared to other classification approaches.
Campbell and moment measures for finite sequential spatial processes
M.N.M. van Lieshout (Marie-Colette)
2006-01-01
We define moment and Campbell measures for sequential spatial processes, prove a Campbell-Mecke theorem, and relate the results to their counterparts in the theory of point processes. In particular, we show that any finite sequential spatial process model can be derived as the vector
Sequential Dependencies in Driving
Doshi, Anup; Tran, Cuong; Wilder, Matthew H.; Mozer, Michael C.; Trivedi, Mohan M.
2012-01-01
The effect of recent experience on current behavior has been studied extensively in simple laboratory tasks. We explore the nature of sequential effects in the more naturalistic setting of automobile driving. Driving is a safety-critical task in which delayed response times may have severe consequences. Using a realistic driving simulator, we find…
Research on parallel algorithm for sequential pattern mining
Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao
2008-03-01
Sequential pattern mining is the mining of frequent sequences, related to time or other orders, from a sequence database. Its initial motivation was to discover the laws of customer purchasing over a time period by finding frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data in sequential pattern mining have the following characteristics: massive data volume and distributed storage. Most existing sequential pattern mining algorithms have not considered these characteristics comprehensively. Taking the traits mentioned above into account and combining them with parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and utilizes a divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets by applying frequent-set concepts and search space partition theory, and the second task is to build frequent sequences using depth-first search at each processor. The algorithm only needs to access the database twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Based on a random data generation procedure and different information structures, this paper simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has an excellent speedup factor and efficiency.
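The serial baseline that SPP is compared against can be illustrated with a minimal level-wise frequent-sequence miner in the AprioriAll spirit. Note that this sketch deliberately includes the candidate-generation step that SPP itself avoids; it is an illustration of the problem, not of the SPP algorithm, and assumes `min_support >= 1`.

```python
def is_subsequence(pattern, sequence):
    # True if pattern's items appear in sequence in order (gaps allowed);
    # membership tests on the iterator consume it, enforcing the order.
    it = iter(sequence)
    return all(item in it for item in pattern)

def frequent_sequences(db, min_support):
    # Level-wise mining: grow patterns one item at a time, keeping those
    # contained as a subsequence in at least min_support database sequences.
    items = sorted({x for seq in db for x in seq})
    level = [(x,) for x in items]
    result = {}
    while level:
        next_level = []
        for pat in level:
            support = sum(is_subsequence(pat, seq) for seq in db)
            if support >= min_support:
                result[pat] = support
                next_level.extend(pat + (x,) for x in items)
        level = next_level
    return result
```

On a toy database of three purchase sequences, `frequent_sequences([list("abc"), list("ac"), list("bc")], 2)` finds the frequent singletons plus the ordered patterns a→c and b→c.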
Kernel Clustering with a Differential Harmony Search Algorithm for Scheme Classification
Directory of Open Access Journals (Sweden)
Yu Feng
2017-01-01
This paper presents kernel fuzzy clustering with a novel differential harmony search algorithm for diversion scheduling scheme classification. First, we employed a self-adaptive solution generation strategy and a differential evolution-based population update strategy to improve the classical harmony search. Second, we applied the differential harmony search algorithm to kernel fuzzy clustering to help the clustering method obtain better solutions. Finally, the combination of kernel fuzzy clustering and differential harmony search is applied to water diversion scheduling in East Lake. A comparison of the proposed method with other methods has been carried out. The results show that kernel clustering with the differential harmony search algorithm performs well on water diversion scheduling problems.
Yu, Shidi; Liu, Xiao; Liu, Anfeng; Xiong, Naixue; Cai, Zhiping; Wang, Tian
2018-05-10
Due to Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are getting wider application prospects, since sensor nodes can get new functions after updating program codes. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed across the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty-cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, yielding better performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, delay. (2) As the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius in the ABRCD scheme causes more energy consumption at some transmitting nodes, but the radius is enlarged only in areas with an energy surplus, and energy consumption in the hot-spots can instead be reduced, since some nodes transmit data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption can almost reach a balance and network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which doesn't affect the network lifetime, to nodes having different distances to the code source, then provides an algorithm to construct a broadcast backbone. In the end, a comprehensive performance analysis and simulation result shows that the proposed
A Bottom-Up Geospatial Data Update Mechanism for Spatial Data Infrastructure Updating
Tian, W.; Zhu, X.; Liu, Y.
2012-08-01
Currently, the top-down spatial data update mechanism has made great progress and is widely applied in many SDIs (spatial data infrastructures). However, this mechanism still has some issues. For example, the update schedule is tied to the professional department's projects, and its cycle is usually too long for end-users; moving data from collection to publication costs the professional department too much time and energy; and the geospatial information does not provide sufficiently detailed attributes, etc. Thus, finding an effective way to deal with these problems has become important. Emerging Internet technology, 3S techniques and the geographic information knowledge now widespread among the public are promoting the booming development of volunteered geospatial information (VGI) in geoscience. VGI is a current "hotspot" that attracts many researchers to study its data quality and credibility, accuracy, sustainability, social benefit, applications and so on. In addition, a few scholars are also paying attention to the value of VGI in supporting SDI updating. On that basis, this paper presents a bottom-up update mechanism from VGI to SDI, which includes the processes of matching homonymous elements between VGI and SDI vector data, change detection, SDI spatial database updating, and publication of new data products to end-users. The proposed updating cycle is then discussed in depth with respect to its feasibility: it can detect changed elements in time and shorten the update period, provide more accurate geometry and attribute data for the spatial data infrastructure, and support update propagation.
Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme
Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook
1995-01-01
Three different schemes are employed to solve the benchmark problems. The first is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third is an optimized compact finite difference scheme modified by us: 4th-order Runge-Kutta time stepping combined with a 4th-order pentadiagonal compact spatial discretization with maximum resolution characteristics. The problems of category 1 are solved using the second (UNO3-ACM) and third (optimized compact) schemes. The problems of category 2 are solved using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved using the first (TVD3) scheme. It can be concluded from the present calculations that the optimized compact scheme and the UNO3-ACM show good resolution for category 1 and category 2, respectively.
Kessler, Yoav; Oberauer, Klaus
2014-01-01
Updating and maintenance of information are 2 conflicting demands on working memory (WM). We examined the time required to update WM (updating latency) as a function of the sequence of updated and not-updated items within a list. Participants held a list of items in WM and updated a variable subset of them in each trial. Four experiments that vary…
Framework for sequential approximate optimization
Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.
2004-01-01
An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python
A Survey of Multi-Objective Sequential Decision-Making
Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.
2013-01-01
Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-obj...
Asynchronous Operators of Sequential Logic Venjunction & Sequention
Vasyukevich, Vadim
2011-01-01
This book is dedicated to new mathematical instruments for logical modeling of the memory of digital devices. The case in point is the logic-dynamical operation named venjunction and the venjunctive function, as well as sequention and the sequentional function. Venjunction and sequention operate within the framework of sequential logic. In the form of the corresponding equations, they organically fit the analytical expressions of Boolean algebra. Thus, a sort of symbiosis is formed using elements of asynchronous sequential logic on the one hand and combinational logic on the other hand. So, asynchronous
International Nuclear Information System (INIS)
Mueller, P.
1995-01-01
This talk describes updates to the following FRMAC publications concerning radiation emergencies: the Monitoring and Analysis Manual; the Evaluation and Assessment Manual; the Handshake Series (biannual), including exercises participated in; the Environmental Data and Instrument Transmission System (EDITS); Plume in a Box, with all radiological data stored on a hand-held computer; and courses given
Computerized Hammer Sounding Interpretation for Concrete Assessment with Online Machine Learning.
Ye, Jiaxing; Kobayashi, Takumi; Iwata, Masaya; Tsuda, Hiroshi; Murakawa, Masahiro
2018-03-09
Developing efficient Artificial Intelligence (AI)-enabled systems to substitute the human role in non-destructive testing is an emerging topic of considerable interest. In this study, we propose a novel hammering response analysis system using online machine learning, which aims at achieving near-human performance in assessment of concrete structures. Current computerized hammer sounding systems commonly employ lab-scale data to validate the models. In practice, however, the response signal patterns can be far more complicated due to varying geometric shapes and materials of structures. To deal with a large variety of unseen data, we propose a sequential treatment for response characterization. More specifically, the proposed system can adaptively update itself to approach human performance in hammering sounding data interpretation. To this end, a two-stage framework has been introduced, including feature extraction and the model updating scheme. Various state-of-the-art online learning algorithms have been reviewed and evaluated for the task. To conduct experimental validation, we collected 10,940 response instances from multiple inspection sites; each sample was annotated by human experts with healthy/defective condition labels. The results demonstrated that the proposed scheme achieved favorable assessment accuracy with high efficiency and low computation load.
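The sequential treatment described above, a model that adaptively updates itself as new expert-annotated responses arrive, can be sketched with a toy online learner. The spectral features, the perceptron update rule, and the synthetic healthy/defective signals below are all illustrative assumptions; the paper evaluates a range of state-of-the-art online algorithms, not this specific learner.

```python
import numpy as np

rng = np.random.default_rng(0)

class OnlinePerceptron:
    """Minimal online linear classifier updated one response sample at a time."""
    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return 1 if x @ self.w + self.b > 0 else 0  # 1 = defective, 0 = healthy

    def update(self, x, y):
        # Stage 2: adapt the model only when the expert label disagrees
        err = y - self.predict(x)
        if err != 0:
            self.w += self.lr * err * x
            self.b += self.lr * err

def extract_features(signal):
    # Stage 1 (placeholder): crude normalized spectral-energy features
    spec = np.abs(np.fft.rfft(signal))
    return spec[:8] / (spec.sum() + 1e-12)

def make_signal(defective):
    # Synthetic "healthy" (low-frequency) vs "defective" (high-frequency) responses
    t = np.linspace(0, 1, 256)
    f = 40.0 if defective else 5.0
    return np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(256)

model = OnlinePerceptron(n_features=8)
for _ in range(200):                       # stream of annotated inspections
    y = int(rng.random() < 0.5)
    model.update(extract_features(make_signal(y)), y)

correct = sum(model.predict(extract_features(make_signal(y))) == y
              for y in [0, 1] * 25)
print(correct / 50)
```

The same two-stage structure (fixed feature extraction plus an incrementally updated classifier) carries over to any of the online learners the study compares.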
Optimization of reliability centered predictive maintenance scheme for inertial navigation system
International Nuclear Information System (INIS)
Jiang, Xiuhong; Duan, Fuhai; Tian, Heng; Wei, Xuedong
2015-01-01
The goal of this study is to propose a reliability-centered predictive maintenance scheme for a complex-structure Inertial Navigation System (INS) with several redundant components. GO Methodology is applied to build the INS reliability analysis model, the GO chart. Components' Remaining Useful Life (RUL) and system reliability are updated dynamically based on the combination of the components' lifetime distribution functions, stress samples, and the system GO chart. Considering the redundant design of the INS, maintenance time is based not only on components' RUL, but also (and mainly) on when system reliability falls below the set threshold. The definition of component maintenance priority balances three factors: a component's importance to the system, its risk degree, and its detection difficulty. The Maintenance Priority Number (MPN) is introduced, which provides quantitative maintenance priorities for all components. A maintenance unit-time cost model is built based on components' MPN, the components' RUL predictive model and maintenance intervals for the optimization of the maintenance scope. The proposed scheme can serve as a reference for INS maintenance. Finally, three numerical examples prove the proposed predictive maintenance scheme is feasible and effective. - Highlights: • A dynamic PdM with a rolling horizon is proposed for INS with redundant components. • GO Methodology is applied to build the system reliability analysis model. • A concept of MPN is proposed to quantify the maintenance sequence of components. • An optimization model is built to select the optimal group of maintenance components. • The optimization goal is minimizing the cost of maintaining system reliability
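The three-factor maintenance priority described above can be illustrated with a small sketch. Multiplying the factors, as in FMEA's Risk Priority Number, is an assumption here, as are the 1-10 scales and component values; the paper's exact MPN definition may differ.

```python
# Hypothetical component ratings: (importance to system, risk degree,
# detection difficulty), each on an assumed 1-10 scale.
components = {
    "gyro_A":        (9, 7, 4),
    "gyro_B":        (9, 3, 4),   # redundant unit, lower current risk
    "accelerometer": (8, 6, 3),
    "power_supply":  (6, 8, 2),
}

def mpn(factors):
    """Maintenance Priority Number: product of the three factors (RPN-style)."""
    importance, risk, detection = factors
    return importance * risk * detection

# Rank components so the maintenance scope can be selected from the top down.
ranked = sorted(components, key=lambda c: mpn(components[c]), reverse=True)
print(ranked[0])  # component to maintain first
```

Ranking by MPN gives the quantitative maintenance sequence; the paper then feeds this ordering into a unit-time cost model to pick the optimal group of components.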
Human visual system automatically encodes sequential regularities of discrete events.
Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki
2010-06-01
For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential
A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.
Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L
2016-03-01
Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge. Copyright © 2015
Impact of Diagrams on Recalling Sequential Elements in Expository Texts.
Guri-Rozenblit, Sarah
1988-01-01
Examines the instructional effectiveness of abstract diagrams on recall of sequential relations in social science textbooks. Concludes that diagrams assist significantly the recall of sequential relations in a text and decrease significantly the rate of order mistakes. (RS)
Takuma, Takehisa; Masugi, Masao
2009-03-01
This paper presents an approach to the assessment of IP-network traffic in terms of the time variation of self-similarity. To get a comprehensive view in analyzing the degree of long-range dependence (LRD) of IP-network traffic, we use a hierarchical clustering scheme, which provides a way to classify high-dimensional data with a tree-like structure. In the LRD-based analysis, we employ detrended fluctuation analysis (DFA), which is applicable to the analysis of long-range power-law correlations, or LRD, in non-stationary time-series signals. Based on sequential measurements of IP-network traffic at two locations, this paper derives corresponding values of the LRD-related parameter α that reflect the degree of LRD of the measured data. In performing the hierarchical clustering scheme, we use three parameters: the α value, average throughput, and the proportion of network traffic that exceeds 80% of network bandwidth for each measured data set. We visually confirm that the traffic data can be classified in accordance with the network traffic properties, showing that combining the LRD with other factors can give an effective assessment of network conditions at different times.
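A minimal version of the DFA step, estimating the LRD-related parameter α as the log-log slope of the detrended fluctuation F(n) against box size n, can be sketched as follows. The box sizes and the test signals are illustrative choices, not the paper's measurement setup.

```python
import numpy as np

def dfa_alpha(x, scales=(8, 16, 32, 64, 128)):
    """First-order DFA: alpha is the slope of log F(n) versus log n."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for n in scales:
        n_boxes = len(y) // n
        sq = []
        for i in range(n_boxes):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend per box
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))       # RMS fluctuation at scale n
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(42)
white = rng.standard_normal(8192)   # uncorrelated: alpha near 0.5
brown = np.cumsum(white)            # strongly persistent: alpha near 1.5
print(round(dfa_alpha(white), 2), round(dfa_alpha(brown), 2))
```

The α values estimated this way per traffic trace would then be combined with average throughput and the bandwidth-exceedance proportion as the three clustering features.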
Quantum Probability Zero-One Law for Sequential Terminal Events
Rehder, Wulf
1980-07-01
On the basis of the Jauch-Piron quantum probability calculus a zero-one law for sequential terminal events is proven, and the significance of certain crucial axioms in the quantum probability calculus is discussed. The result shows that the Jauch-Piron set of axioms is appropriate for the non-Boolean algebra of sequential events.
A path-level exact parallelization strategy for sequential simulation
Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.
2018-01-01
Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.
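The core path-level idea above, splitting the simulation path into sequential batches of mutually non-conflicting nodes so that each batch can be simulated in parallel while still reproducing the serial realization, can be sketched as follows. The simple radius-based conflict test is an assumption standing in for the paper's exact kriging-neighbourhood bookkeeping.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def batch_path(path, radius):
    """Walk the path in order; close the current batch whenever the next node
    falls inside the search radius of a batch member. Nodes within one batch
    have disjoint neighbourhoods, so simulating them concurrently draws the
    same values as the serial pass."""
    batches, current = [], []
    for node in path:
        if any(dist(node, other) <= radius for other in current):
            batches.append(current)   # conflict: flush batch, start a new one
            current = []
        current.append(node)
    if current:
        batches.append(current)
    return batches

path = [(0, 0), (10, 0), (1, 0), (20, 0), (11, 0)]
print(batch_path(path, radius=2.0))
```

Each batch then maps naturally onto a pool of threads, which is how the modified SISIM/SGSIM codes obtain their speedup without changing the realization.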
Concatenated coding system with iterated sequential inner decoding
DEFF Research Database (Denmark)
Jensen, Ole Riis; Paaske, Erik
1995-01-01
We describe a concatenated coding system with iterated sequential inner decoding. The system uses convolutional codes of very long constraint length and operates on iterations between an inner Fano decoder and an outer Reed-Solomon decoder.
Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen
2013-01-01
When an online runoff model is updated from system measurements, the requirements on the precipitation input change. Using rain gauge data as precipitation input, there will be a displacement between the time when the rain hits the gauge and the time when the rain hits the actual catchment, due to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present for system measurements, the data assimilation scheme might already have updated the model to include the impact of a particular rain cell by the time the rain data is forced upon the model, which will therefore end up including the same rain twice in the model run. This paper compares the forecast accuracy of updated models using time-displaced rain input to that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either displaced in time or subjected to a bias. The results show that for a 10-minute forecast, time displacements of 5 and 10 minutes compare to biases of 60 and 100%, respectively, independent of the catchment's time of concentration.
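The comparison can be reproduced in miniature with a time-area model built as a convolution with a uniform unit hydrograph. The values below (a 30-minute time of concentration on 5-minute steps, a 60% bias) are illustrative assumptions, not the paper's catchment setup.

```python
import numpy as np

def runoff(rain, toc_steps=6):
    """Time-area model: rain convolved with a uniform unit hydrograph whose
    length is the time of concentration (6 steps of 5 min = 30 min)."""
    uh = np.ones(toc_steps) / toc_steps
    return np.convolve(rain, uh)[:len(rain)]

rng = np.random.default_rng(1)
rain = rng.exponential(1.0, 120)        # 10 h of 5-min rain intensities
displaced = np.roll(rain, 1)            # gauge sees each rain cell 5 min off
displaced[0] = 0.0
biased = 1.6 * rain                     # constant +60 % bias

err_disp = np.mean(np.abs(runoff(displaced) - runoff(rain)))
err_bias = np.mean(np.abs(runoff(biased) - runoff(rain)))
print(err_disp, err_bias)
```

Because the model is linear, the bias scales the hydrograph while the displacement shifts it; comparing the two error magnitudes mirrors the paper's equivalence between displacements and bias levels.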
Directory of Open Access Journals (Sweden)
Christelle Garnier
2008-05-01
We address the problem of phase noise (PHN) and carrier frequency offset (CFO) mitigation in multicarrier receivers. In multicarrier systems, phase distortions cause two effects: the common phase error (CPE) and the intercarrier interference (ICI), which severely degrade the accuracy of the symbol detection stage. Here, we propose a non-pilot-aided scheme to jointly estimate PHN, CFO, and the multicarrier signal in the time domain. Unlike existing methods, non-pilot-based estimation is performed without any decision-directed scheme. Our approach to the problem is based on Bayesian estimation using sequential Monte Carlo filtering, commonly referred to as particle filtering. The particle filter is efficiently implemented by combining the principles of the Rao-Blackwellization technique and an approximate optimal importance function for phase distortion sampling. Moreover, in order to fully benefit from time-domain processing, we propose a multicarrier signal model which includes the redundancy information induced by the cyclic prefix, thus leading to a significant performance improvement. Simulation results are provided in terms of bit error rate (BER) and mean square error (MSE) to illustrate the efficiency and the robustness of the proposed algorithm.
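A stripped-down version of the particle filtering idea, here a plain bootstrap (SIR) filter tracking a random-walk phase observed through a noisy complex exponential, is sketched below. The scalar model and noise values are toy assumptions; the paper's Rao-Blackwellized filter with an approximate optimal importance function is considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(7)

T, N = 100, 500                      # time steps, particles
q, r = 0.05, 0.2                     # process / observation noise (toy values)

true_phase = np.cumsum(q * rng.standard_normal(T))        # random-walk PHN
noise = r * (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)
obs = np.exp(1j * true_phase) + noise                     # noisy observations

particles = np.zeros(N)              # phase hypotheses
estimates = np.empty(T)
for t in range(T):
    particles = particles + q * rng.standard_normal(N)    # propagate (prior)
    logw = -np.abs(obs[t] - np.exp(1j * particles)) ** 2 / r ** 2
    w = np.exp(logw - logw.max())                         # numerically stable
    w /= w.sum()                                          # normalized weights
    estimates[t] = np.angle(np.sum(w * np.exp(1j * particles)))  # circular mean
    particles = particles[rng.choice(N, N, p=w)]          # resample (SIR)

rmse = np.sqrt(np.mean((estimates - true_phase) ** 2))
print(rmse)
```

The Rao-Blackwellization in the paper marginalizes the (conditionally linear) signal states analytically and samples only the phase distortions, which the plain filter above samples jointly.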
2012 Ten-year scheme of network development
International Nuclear Information System (INIS)
2012-01-01
RTE, an independent subsidiary of EDF, is the French electricity transmission system operator. It is a public service company responsible for operating, maintaining and developing the high and extra-high voltage network. It guarantees the reliability and proper operation of the power network. RTE transports electricity between electricity suppliers (French and European) and consumers, whether they are electricity distributors or industrial consumers directly connected to the transmission system. The mission of RTE is to balance electricity supply and demand in real time. With the support of the government authorities, RTE prepares a ten-year scheme of network development in France. This document presents the main electricity transport infrastructures foreseen for the coming ten years and lists the network development investments which must be realised and implemented within 3 years. The document is updated each year and complements, at the national level, the European ten-year network development plan (TYNDP) and the European regional plans, as provided for in European directive 2009/72/EC
Ten-year scheme of network development - 2011
International Nuclear Information System (INIS)
2011-01-01
RTE, an independent subsidiary of EDF, is the French electricity transmission system operator. It is a public service company responsible for operating, maintaining and developing the high and extra-high voltage network. It guarantees the reliability and proper operation of the power network. RTE transports electricity between electricity suppliers (French and European) and consumers, whether they are electricity distributors or industrial consumers directly connected to the transmission system. The mission of RTE is to balance electricity supply and demand in real time. With the support of the government authorities, RTE prepares a ten-year scheme of network development in France. This document presents the main electricity transport infrastructures foreseen for the coming ten years and lists the network development investments which must be realised and implemented within 3 years. The document is updated each year and complements, at the national level, the European ten-year network development plan (TYNDP) and the European regional plans, as provided for in European directive 2009/72/EC
Lineup Composition, Suspect Position, and the Sequential Lineup Advantage
Carlson, Curt A.; Gronlund, Scott D.; Clark, Steven E.
2008-01-01
N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate…
Trial Sequential Analysis in systematic reviews with meta-analysis
Directory of Open Access Journals (Sweden)
Jørn Wetterslev
2017-03-01
Background: Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size, accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). Methods: We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. Results: The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentist approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis, and the diversity (D²) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in
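The notion of a diversity-adjusted required information size can be illustrated for a continuous outcome: a conventional two-group sample-size calculation inflated by 1/(1 - D²). This textbook-style sketch omits the Lan-DeMets monitoring boundaries that Trial Sequential Analysis additionally applies, and the α, β and effect values are illustrative.

```python
from math import ceil
from statistics import NormalDist

def required_information_size(delta, sigma, alpha=0.05, beta=0.20, D2=0.0):
    """Diversity-adjusted required information size for a continuous outcome.

    delta: anticipated intervention effect (mean difference)
    sigma: outcome standard deviation
    D2:    diversity (heterogeneity) measure, 0 <= D2 < 1
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(1 - beta)        # power
    n_fixed = 4 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2
    return ceil(n_fixed / (1 - D2))             # inflate for diversity

print(required_information_size(delta=0.5, sigma=1.0))          # no heterogeneity
print(required_information_size(delta=0.5, sigma=1.0, D2=0.5))  # D² = 50 %
```

With D² = 0.5 the required information size doubles, which is why naïve 95% intervals applied before this size is reached are prone to spurious significance.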
Heat accumulation during sequential cortical bone drilling.
Palmisano, Andrew C; Tai, Bruce L; Belmont, Barry; Irwin, Todd A; Shih, Albert; Holmes, James R
2016-03-01
Significant research exists regarding heat production during single-hole bone drilling. No published data exist regarding repetitive sequential drilling. This study elucidates the phenomenon of heat accumulation for sequential drilling with both Kirschner wires (K wires) and standard two-flute twist drills. It was hypothesized that cumulative heat would result in a higher temperature with each subsequent drill pass. Nine holes in a 3 × 3 array were drilled sequentially in moistened cadaveric tibia bone kept at body temperature (about 37 °C). Four thermocouples were placed at the centers of four adjacent holes, 2 mm below the surface. A battery-driven hand drill guided by a servo-controlled motion system was used. Six samples were drilled with each tool (2.0 mm K wire and 2.0 and 2.5 mm standard drills). K wire drilling increased the temperature rise from 5 °C at the first hole to 20 °C at holes 6 through 9. A similar trend was found in standard drills, with less pronounced increments. The maximum temperatures of both tools increased over successive holes, while the difference between drill sizes was found to be insignificant (P > 0.05). In conclusion, heat accumulated during sequential drilling, with the size difference being insignificant. K wires produced more heat than their twist-drill counterparts. This study has demonstrated the heat accumulation phenomenon and its significant effect on temperature. Maximizing the drilling field and reducing the number of drill passes may decrease bone injury. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
Cosson, Steffen; Danial, Maarten; Saint-Amans, Julien Rosselgong; Cooper-White, Justin J
2017-04-01
Advanced polymerization methodologies, such as reversible addition-fragmentation transfer (RAFT), allow unprecedented control over star polymer composition, topology, and functionality. However, using RAFT to produce high throughput (HTP) combinatorial star polymer libraries remains, to date, impracticable due to several technical limitations. Herein, the methodology "rapid one-pot sequential aqueous RAFT" or "rosa-RAFT," in which well-defined homo-, copolymer, and mikto-arm star polymers can be prepared in very low to medium reaction volumes (50 µL to 2 mL) via an "arm-first" approach in air within minutes, is reported. Due to the high conversion of a variety of acrylamide/acrylate monomers achieved during each successive short reaction step (each taking 3 min), the requirement for intermediary purification is avoided, drastically facilitating and accelerating the star synthesis process. The presented methodology enables RAFT to be applied to HTP polymeric bio/nanomaterials discovery pipelines, in which hundreds of complex polymeric formulations can be rapidly produced, screened, and scaled up for assessment in a wide range of applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Cost-effectiveness of simultaneous versus sequential surgery in head and neck reconstruction.
Wong, Kevin K; Enepekides, Danny J; Higgins, Kevin M
2011-02-01
To determine whether simultaneous (ablation and reconstruction overlapping, by two teams) head and neck reconstruction is cost-effective compared to sequential (ablation followed by reconstruction) surgery. Case-controlled study. Tertiary care hospital. Oncology patients undergoing free flap reconstruction of the head and neck. A matched-pair comparison study was performed with a retrospective chart review examining the total time of surgery for sequential and simultaneous surgery. Nine patients were selected for each of the sequential and simultaneous groups. Sequential head and neck reconstruction patients were pair-matched with patients who had undergone similar oncologic ablative or reconstructive procedures performed in a simultaneous fashion. A detailed cost analysis using the microcosting method was then undertaken, looking at the direct costs of the surgeons, anesthesiologist, operating room, and nursing. On average, simultaneous surgery required 3 hours 15 minutes less operating time, leading to a cost savings of approximately $1200/case when compared to sequential surgery. This represents approximately a 15% reduction in the cost of the entire operation. Simultaneous head and neck reconstruction is more cost-effective when compared to sequential surgery.
Dihydroazulene photoswitch operating in sequential tunneling regime
DEFF Research Database (Denmark)
Broman, Søren Lindbæk; Lara-Avila, Samuel; Thisted, Christine Lindbjerg
2012-01-01
to electrodes so that the electron transport goes by sequential tunneling. To assure weak coupling, the DHA switching kernel is modified by incorporating p-MeSC6H4 end-groups. Molecules are prepared by Suzuki cross-couplings on suitable halogenated derivatives of DHA. The synthesis presents an expansion of our …, incorporating a p-MeSC6H4 anchoring group in one end, has been placed in a silver nanogap. Conductance measurements justify that transport through both DHA (high resistivity) and VHF (low resistivity) forms goes by sequential tunneling. The switching is fairly reversible and reenterable; after more than 20 ON…
A Trust-region-based Sequential Quadratic Programming Algorithm
DEFF Research Database (Denmark)
Henriksen, Lars Christian; Poulsen, Niels Kjølstad
This technical note documents the trust-region-based sequential quadratic programming algorithm used in other works by the authors. The algorithm seeks to minimize a convex nonlinear cost function subject to linear inequality constraints and nonlinear equality constraints.
de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M
2018-04-01
Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.
Memory updating and mental arithmetic
Directory of Open Access Journals (Sweden)
Cheng-Ching Han
2016-02-01
Full Text Available Is domain-general memory updating ability predictive of calculation skills, or are such skills better predicted by the capacity for updating specifically numerical information? Here, we used multidigit mental multiplication (MMM) as a measure of calculation skill, as this operation requires the accurate maintenance and updating of information in addition to the skills needed for arithmetic more generally. In Experiment 1, we found that only individual differences in a task updating numerical information following addition (MUcalc) could predict performance on MMM, perhaps owing to common elements between the task and MMM. In Experiment 2, new updating tasks were designed to clarify this: a spatial updating task with no numbers, a numerical task with no calculation, and a word task. The results showed that both MUcalc and the spatial task were able to predict performance on MMM, but only for the more difficult problems, while the other updating tasks did not predict performance. It is concluded that relevant processes involved in updating the contents of working memory support mental arithmetic in adults.
Properties of the DREAM scheme and its optimization for application to proteins
International Nuclear Information System (INIS)
Westfeld, Thomas; Verel, René; Ernst, Matthias; Böckmann, Anja; Meier, Beat H.
2012-01-01
The DREAM scheme is an efficient adiabatic homonuclear polarization-transfer method suitable for multi-dimensional experiments in biomolecular solid-state NMR. The bandwidth and dynamics of the polarization transfer in the DREAM experiment depend on a number of experimental and spin-system parameters. In order to obtain optimal results, the dependence of the cross-peak intensity on these parameters needs to be understood and carefully controlled. We introduce a simplified model to semi-quantitatively describe the polarization-transfer patterns for the relevant spin systems. Numerical simulations for all natural amino acids (except tryptophane) show the dependence of the cross-peak intensities as a function of the radio-frequency-carrier position. This dependency can be used as a guide to select the desired conditions in protein spectroscopy. Practical guidelines are given on how to set up a DREAM experiment for optimized Cα/Cβ transfer, which is important in sequential assignment experiments.
Synthetic Aperture Sequential Beamforming
DEFF Research Database (Denmark)
Kortbek, Jacob; Jensen, Jørgen Arendt; Gammelmark, Kim Løkke
2008-01-01
A synthetic aperture focusing (SAF) technique denoted Synthetic Aperture Sequential Beamforming (SASB) suitable for 2D and 3D imaging is presented. The technique differs from prior art of SAF in the sense that SAF is performed on pre-beamformed data rather than channel data. The objective is to improve and obtain a more range-independent lateral resolution compared to conventional dynamic receive focusing (DRF) without compromising frame rate. SASB is a two-stage procedure using two separate beamformers. First, a set of B-mode image lines using a single focal point in both transmit and receive is stored. The second stage applies the focused image lines from the first stage as input data. The SASB method has been investigated using simulations in Field II and by off-line processing of data acquired with a commercial scanner. The performance of SASB with a static image object is compared with DRF.
Evaluation Using Sequential Trials Methods.
Cohen, Mark E.; Ralls, Stephen A.
1986-01-01
Although dental school faculty as well as practitioners are interested in evaluating products and procedures used in clinical practice, research design and statistical analysis can sometimes pose problems. Sequential trials methods provide an analytical structure that is both easy to use and statistically valid. (Author/MLW)
Attack Trees with Sequential Conjunction
Jhawar, Ravi; Kordy, Barbara; Mauw, Sjouke; Radomirović, Sasa; Trujillo-Rasua, Rolando
2015-01-01
We provide the first formal foundation of SAND attack trees, which are a popular extension of the well-known attack trees. The SAND attack tree formalism increases the expressivity of attack trees by introducing the sequential conjunctive operator SAND. This operator enables the modeling of
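The sequential conjunction described above can be illustrated with a small sketch. This is our simplified trace-based reading of SAND semantics, not the paper's formal one: a leaf records whether a basic attack step succeeded and when; AND requires all children to succeed, while SAND additionally requires them to succeed in left-to-right temporal order.

```python
# Minimal sketch (not the paper's formal semantics) of attack-tree nodes
# with OR, AND, and sequential-AND (SAND) operators.

class Leaf:
    def __init__(self, name, success, time):
        self.name, self.success, self.time = name, success, time
    def evaluate(self):
        # returns (succeeded?, completion time)
        return (self.success, self.time)

class Node:
    def __init__(self, op, children):  # op in {"OR", "AND", "SAND"}
        self.op, self.children = op, children
    def evaluate(self):
        results = [c.evaluate() for c in self.children]
        if self.op == "OR":
            ok = [t for s, t in results if s]
            return (bool(ok), min(ok) if ok else None)
        if self.op == "AND":
            if all(s for s, _ in results):
                return (True, max(t for _, t in results))
            return (False, None)
        if self.op == "SAND":
            # children must all succeed, in left-to-right temporal order
            if all(s for s, _ in results):
                times = [t for _, t in results]
                if times == sorted(times):
                    return (True, times[-1])
            return (False, None)

# "break in, then steal" only counts if the break-in happens first
tree = Node("SAND", [Leaf("break_in", True, 1), Leaf("steal", True, 5)])
print(tree.evaluate())   # (True, 5)
bad = Node("SAND", [Leaf("break_in", True, 5), Leaf("steal", True, 1)])
print(bad.evaluate())    # (False, None)
```

The second tree fails despite both steps succeeding, which is exactly the distinction SAND adds over plain conjunction.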
Directory of Open Access Journals (Sweden)
Guangfeng Liu
2018-04-01
Full Text Available Recently, the well-industry-production scheme (WIPS) has attracted more and more attention as a way to improve tight oil recovery. However, multi-well pressure interference (MWPI) induced by WIPS strongly challenges traditional transient pressure analysis methods, which focus on single multi-fractured horizontal wells (SMFHWs) without MWPI. Therefore, a semi-analytical methodology for the multiwell productivity index (MPI) was proposed to study well performance under the WIPS scheme in tight reservoirs. To facilitate methodology development, the conceptual models of the tight formation and the WIPS scheme were first described. Secondly, seepage models of the tight reservoir and of the hydraulic fractures (HFs) were sequentially established and then dynamically coupled. Numerical simulation was utilized to validate our model. Finally, identification of flow regimes and sensitivity analysis were conducted. Our results showed good agreement between the proposed model and numerical simulation; moreover, our approach also gave promising calculation speed over numerical simulation. Some expected flow regimes were significantly distorted due to WIPS: the slope of the type curves that characterize the linear or bi-linear flow regimes is greater than 0.5 or 0.25, respectively, and the horizontal line that characterizes the radial flow regime also lies above 0.5. The smaller the oil rate, the more severely the flow regimes were distorted. Well rate mainly determines the distortion of the MPI curves, while fracture length, well spacing and fracture spacing mainly determine when the distortion of the MPI curves occurs. The bigger the well rate, the more severely the MPI curves are distorted, and as well spacing decreases, fracture length increases, or fracture spacing increases, the occurrence of MWPI becomes earlier. The stress sensitivity coefficient mainly affects the MPI at the formation pseudo-radial flow stage and has almost no influence on the occurrence of MWPI. This work gains some
The impact of eyewitness identifications from simultaneous and sequential lineups.
Wright, Daniel B
2007-10-01
Recent guidelines in the US allow either simultaneous or sequential lineups to be used for eyewitness identification. This paper investigates how potential jurors weight the probative value of the different outcomes from both of these types of lineups. Participants (n=340) were given a description of a case that included some exonerating and some incriminating evidence. There was either a simultaneous or a sequential lineup. Depending on the condition, an eyewitness chose the suspect, chose a filler, or made no identification. The participant had to judge the guilt of the suspect and decide whether to render a guilty verdict. For both simultaneous and sequential lineups an identification had a large effect, increasing the probability of a guilty verdict. There were no reliable effects detected between making no identification and identifying a filler. The effect sizes were similar for simultaneous and sequential lineups. These findings are important for judges and other legal professionals to know for trials involving lineup identifications.
Properties of simultaneous and sequential two-nucleon transfer
International Nuclear Information System (INIS)
Pinkston, W.T.; Satchler, G.R.
1982-01-01
Approximate forms of the first- and second-order distorted-wave Born amplitudes are used to study the overall structure, particularly the selection rules, of the amplitudes for simultaneous and sequential transfer of two nucleons. The role of the spin-state assumed for the intermediate deuterons in sequential (t, p) reactions is stressed. The similarity of one-step and two-step amplitudes for (α, d) reactions is exhibited, and the consequent absence of any obvious J-dependence in their interference is noted. (orig.)
Sequential contrast-enhanced MR imaging of the penis.
Kaneko, K; De Mouy, E H; Lee, B E
1994-04-01
To determine the enhancement patterns of the penis at magnetic resonance (MR) imaging. Sequential contrast material-enhanced MR images of the penis in a flaccid state were obtained in 16 volunteers (12 with normal penile function and four with erectile dysfunction). Subjects with normal erectile function showed gradual and centrifugal enhancement of the corpora cavernosa, while those with erectile dysfunction showed poor enhancement with abnormal progression. Sequential contrast-enhanced MR imaging provides additional morphologic information for the evaluation of erectile dysfunction.
QoE-Driven D2D Media Services Distribution Scheme in Cellular Networks
Directory of Open Access Journals (Sweden)
Mingkai Chen
2017-01-01
Full Text Available Device-to-device (D2D) communication has been widely studied as a way to improve network performance and is considered a potential technological component of next-generation communication. Given diverse user demands, Quality of Experience (QoE) is recognized as a new measure of user satisfaction with media service transmissions in wireless communication, and we use the Mean Opinion Score (MOS) to quantify and analyze users' QoE in dynamic cellular networks. In this paper, we explore heterogeneous media service distribution in D2D communications underlaying cellular networks to improve the total users' QoE. We propose a novel media service scheme based on different QoE models that jointly solves the massive media content dissemination issue for cellular networks. Moreover, we investigate the so-called Media Service Adaptive Update Scheme (MSAUS) framework to maximize users' QoE satisfaction, and we derive the popularity and priority functions of different media service QoE expressions. We then design a Media Service Resource Allocation (MSRA) algorithm to schedule limited cellular network resources, based on the popularity function, to optimize total QoE satisfaction and avoid D2D interference. In addition, numerical simulation results indicate that the proposed scheme is more effective in cellular network content delivery, which makes it suitable for various forms of media service propagation.
DEFF Research Database (Denmark)
van Leeuwen, Theo
2013-01-01
This chapter presents a framework for analysing colour schemes based on a parametric approach that includes not only hue, value and saturation, but also purity, transparency, luminosity, luminescence, lustre, modulation and differentiation.
Synthesizing genetic sequential logic circuit with clock pulse generator.
Chuang, Chia-Hua; Lin, Chun-Liang
2014-05-28
Rhythmic clocks occur widely in biological systems, controlling several aspects of cell physiology, and different cell types run at different rhythmic frequencies. How to synthesize a specific clock signal is a preliminary but necessary step toward the future development of a biological computer. This paper presents a genetic sequential logic circuit with a clock pulse generator based on a synthesized genetic oscillator, which generates a consecutive clock signal whose frequency is an integer fraction of that of the genetic oscillator. An analogous electronic waveform-shaping circuit is constructed by a series of genetic buffers to shape the logic high/low levels of an oscillation input in a basic sinusoidal cycle and generate a pulse-width-modulated (PWM) output with various duty cycles. By controlling the threshold level of the genetic buffer, a genetic clock pulse signal whose frequency is consistent with the genetic oscillator is synthesized. A synchronous genetic counter circuit based on the topology of digital sequential logic circuits is triggered by the clock pulse to synthesize the clock signal with an integer fraction of the oscillator frequency. The function acts like a frequency divider in electronic circuits, which plays a key role in sequential logic circuits with specific operational frequencies. A cascaded genetic logic circuit generating clock pulse signals is proposed. By analogy with the implementation of digital sequential logic circuits, genetic sequential logic circuits can be constructed with the proposed approach to generate various clock signals from an oscillation signal.
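The frequency-divider behaviour described above has a simple digital analogue. The sketch below is illustrative only (a software counter, not the genetic circuit): a modulo-n counter emits one output pulse for every n input clock pulses, so the output frequency is the input frequency divided by the integer n.

```python
# Illustrative divide-by-n counter (digital analogue of the genetic
# frequency divider; the variable names and pulse encoding are ours).

def frequency_divider(clock_pulses, n):
    """Emit 1 once every n input pulses (divide-by-n counter), else 0."""
    out, count = [], 0
    for pulse in clock_pulses:
        if pulse:                     # rising edge of the input clock
            count = (count + 1) % n
            out.append(1 if count == 0 else 0)
        else:
            out.append(0)
    return out

clock = [1] * 12                      # 12 input pulses
out = frequency_divider(clock, 4)
print(out)                            # a 1 on every 4th pulse: 3 output pulses
```

Chaining such counters, as the cascaded circuit in the abstract does, yields clock signals at successive integer fractions of the oscillator frequency.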
A new approach to develop computer-aided detection schemes of digital mammograms
Tan, Maxine; Qian, Wei; Pu, Jiantao; Liu, Hong; Zheng, Bin
2015-06-01
The purpose of this study is to develop a new global mammographic image feature analysis based computer-aided detection (CAD) scheme and evaluate its performance in detecting positive screening mammography examinations. A dataset that includes images acquired from 1896 full-field digital mammography (FFDM) screening examinations was used in this study. Among them, 812 cases were positive for cancer and 1084 were negative or benign. After segmenting the breast area, a computerized scheme was applied to compute 92 global mammographic tissue density based features on each of four mammograms of the craniocaudal (CC) and mediolateral oblique (MLO) views. After adding three existing popular risk factors (woman’s age, subjectively rated mammographic density, and family breast cancer history) into the initial feature pool, we applied a sequential forward floating selection feature selection algorithm to select relevant features from the bilateral CC and MLO view images separately. The selected CC and MLO view image features were used to train two artificial neural networks (ANNs). The results were then fused by a third ANN to build a two-stage classifier to predict the likelihood of the FFDM screening examination being positive. CAD performance was tested using a ten-fold cross-validation method. The computed area under the receiver operating characteristic curve was AUC = 0.779 ± 0.025 and the odds ratio monotonically increased from 1 to 31.55 as CAD-generated detection scores increased. The study demonstrated that this new global image feature based CAD scheme had a relatively higher discriminatory power to cue the FFDM examinations with high risk of being positive, which may provide a new CAD-cueing method to assist radiologists in reading and interpreting screening mammograms.
Sequential weak continuity of null Lagrangians at the boundary
Czech Academy of Sciences Publication Activity Database
Kalamajska, A.; Kraemer, S.; Kružík, Martin
2014-01-01
Roč. 49, 3/4 (2014), s. 1263-1278 ISSN 0944-2669 R&D Projects: GA ČR GAP201/10/0357 Institutional support: RVO:67985556 Keywords : null Lagrangians * nonhomogeneous nonlinear mappings * sequential weak/in measure continuity Subject RIV: BA - General Mathematics Impact factor: 1.518, year: 2014 http://library.utia.cas.cz/separaty/2013/MTR/kruzik-sequential weak continuity of null lagrangians at the boundary.pdf
International Nuclear Information System (INIS)
Kiefer, B; Bartel, T; Menzel, A
2012-01-01
Several constitutive models for magnetic shape memory alloys (MSMAs) have been proposed in the literature. The implementation of numerical integration schemes, which allow the prediction of constitutive response for general loading cases and ultimately the incorporation of MSMA response into numerical solution algorithms for fully coupled magneto-mechanical boundary value problems, however, has received only very limited attention. In this work, we establish two algorithmic implementations of the internal variable model for MSMAs proposed in (Kiefer and Lagoudas 2005 Phil. Mag. Spec. Issue: Recent Adv. Theor. Mech. 85 4289–329, Kiefer and Lagoudas 2009 J. Intell. Mater. Syst. 20 143–70), where we restrict our attention to pure martensitic variant reorientation to limit complexity. The first updating scheme is based on the numerical integration of the reorientation strain evolution equation and represents a classical predictor–corrector-type general return mapping algorithm. In the second approach, the inequality-constrained optimization problem associated with internal variable evolution is converted into an unconstrained problem via Fischer–Burmeister complementarity functions and then iteratively solved in standard Newton–Raphson format. Simulations are verified by comparison to closed-form solutions for experimentally relevant loading cases. (paper)
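The Fischer-Burmeister approach mentioned above can be shown on a toy problem. The sketch below is ours, not the paper's algorithm: phi(a,b) = sqrt(a^2+b^2) - a - b vanishes exactly when a >= 0, b >= 0 and a*b = 0, so an inequality-constrained (complementarity) condition becomes a root-finding problem solvable by Newton's method. The residual function F(x) = x - 1 is a made-up stand-in.

```python
import math

def fb(a, b):
    # Fischer-Burmeister function: fb(a, b) = 0  <=>  a >= 0, b >= 0, a*b = 0
    return math.sqrt(a * a + b * b) - a - b

# Toy complementarity problem (not from the paper): find x with
# x >= 0, F(x) >= 0, x*F(x) = 0, where F(x) = x - 1. Solution: x = 1.
def solve(x0, tol=1e-10, h=1e-7):
    x = x0
    for _ in range(100):
        g = fb(x, x - 1.0)
        if abs(g) < tol:
            break
        dg = (fb(x + h, x + h - 1.0) - g) / h   # finite-difference Newton step
        x -= g / dg
    return x

print(round(solve(0.5), 6))   # converges to 1.0
```

Reformulating the constrained evolution equations this way is what lets the paper iterate in a standard Newton-Raphson format instead of an active-set return mapping.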
Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping
Leberl, F.
1975-01-01
Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.
Energy Technology Data Exchange (ETDEWEB)
Dahlin, Cheryl L.; Williamson, Connie A.; Collins, W. Keith; Dahlin, David C.
2002-06-01
The applicability of sequential extraction as a means to determine species of heavy metals was examined by a study on soil samples from two Superfund sites: the National Lead Company site in Pedricktown, NJ, and the Roebling Steel, Inc., site in Florence, NJ. Data from a standard sequential extraction procedure were compared to those from a comprehensive study that combined optical and scanning-electron microscopy, X-ray diffraction, and chemical analyses. The study shows that larger particles of contaminants, encapsulated contaminants, and/or man-made materials such as slags, coke, metals, and plastics are subject to encasement, non-selectivity, and redistribution in the sequential extraction process. The results indicate that standard sequential extraction procedures that were developed for characterizing species of contaminants in river sediments may be unsuitable for stand-alone determinative evaluations of contaminant species in industrial-site materials. However, if employed as part of a comprehensive, site-specific characterization study, sequential extraction could be a very useful tool.
LevelScheme: A level scheme drawing and scientific figure preparation system for Mathematica
Caprio, M. A.
2005-09-01
LevelScheme is a scientific figure preparation system for Mathematica. The main emphasis is upon the construction of level schemes, or level energy diagrams, as used in nuclear, atomic, molecular, and hadronic physics. LevelScheme also provides a general infrastructure for the preparation of publication-quality figures, including support for multipanel and inset plotting, customizable tick mark generation, and various drawing and labeling tasks. Coupled with Mathematica's plotting functions and powerful programming language, LevelScheme provides a flexible system for the creation of figures combining diagrams, mathematical plots, and data plots. Program summary: Title of program: LevelScheme. Catalogue identifier: ADVZ. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVZ. Operating systems: any that supports Mathematica; tested under Microsoft Windows XP, Macintosh OS X, and Linux. Programming language used: Mathematica 4. Number of bytes in distributed program, including test and documentation: 3 051 807. Distribution format: tar.gz. Nature of problem: creation of level scheme diagrams; creation of publication-quality multipart figures incorporating diagrams and plots. Method of solution: a set of Mathematica packages has been developed, providing a library of level scheme drawing objects, tools for figure construction and labeling, and control code for producing the graphics.
Updating beliefs and combining evidence in adaptive forest management under climate change
DEFF Research Database (Denmark)
Yousefpour, Rasoul; Temperli, Christian; Bugmann, Harald
2013-01-01
We study climate uncertainty and how managers' beliefs about climate change develop and influence their decisions. We develop an approach for updating knowledge and beliefs based on the observation of forest and climate variables and illustrate its application for the adaptive management of an even… variables may influence a decision maker's beliefs about climate development and thereby management decisions. While forest managers may be inclined to rely on observed forest variables to infer climate change and impacts, we found that observation of climate state, e.g. temperature or precipitation… on when managers switch to forward-looking management schemes. Thus, robust climate adaptation policies may depend crucially on a better understanding of what factors influence managers' belief in climate change.
Imitation of the sequential structure of actions by chimpanzees (Pan troglodytes).
Whiten, A
1998-09-01
Imitation was studied experimentally by allowing chimpanzees (Pan troglodytes) to observe alternative patterns of actions for opening a specially designed "artificial fruit." Like problematic foods primates deal with naturally, with the test fruit several defenses had to be removed to gain access to an edible core, but the sequential order and method of defense removal could be systematically varied. Each subject repeatedly observed 1 of 2 alternative techniques for removing each defense and 1 of 2 alternative sequential patterns of defense removal. Imitation of sequential organization emerged after repeated cycles of demonstration and attempts at opening the fruit. Imitation in chimpanzees may thus have some power to produce cultural convergence, counter to the supposition that individual learning processes corrupt copied actions. Imitation of sequential organization was accompanied by imitation of some aspects of the techniques that made up the sequence.
Christofides, Stelios; Isidoro, Jorge; Pesznyak, Csilla; Cremers, Florian; Figueira, Rita; van Swol, Christiaan; Evans, Stephen; Torresin, Alberto
2016-01-01
Continuing Professional Development (CPD) is vital to the medical physics profession if it is to keep pace with the change occurring in medical practice. As CPD is the planned acquisition of knowledge, experience and skills required for professional practice throughout one's working life, it promotes excellence and protects the profession and the public against incompetence. Furthermore, CPD is a recommended prerequisite of registration schemes (Caruana et al. 2014) and is implied in Council Directive 2013/59/EURATOM (EU BSS) and the International Basic Safety Standards (BSS). It should be noted that currently not all national registration schemes require CPD to maintain the registration status necessary to practise medical physics. Such schemes should consider adopting CPD as a prerequisite for renewing registration after a set period of time. This EFOMP Policy Statement, which amalgamates and updates EFOMP Policy Statements No. 8 and No. 10, presents guidelines for the establishment of national CPD schemes and the activities that should be considered for CPD. Copyright © 2016. Published by Elsevier Ltd.
Sequential determination of important ecotoxic radionuclides in nuclear waste samples
International Nuclear Information System (INIS)
Bilohuscin, J.
2016-01-01
In the dissertation thesis we focused on the development and optimization of a method for the sequential determination of the radionuclides 93Zr, 94Nb, 99Tc and 126Sn, employing the extraction chromatography sorbents TEVA® Resin and Anion Exchange Resin, supplied by Eichrom Industries. Prior to testing the sequential separation of these radionuclides from radioactive waste samples, a unique sequential procedure for separating 90Sr, 239Pu and 241Am from urine matrices was tried, using molecular recognition sorbents of the AnaLig® series and the extraction chromatography sorbent DGA® Resin. In these experiments, four different sorbents were used in series for the separation, including the PreFilter Resin sorbent, which removes interfering organic materials present in raw urine. After positive results were obtained with this sequential procedure, experiments followed on 126Sn separation using the TEVA® Resin and Anion Exchange Resin sorbents. Radiochemical recoveries obtained from samples of radioactive evaporate concentrates and sludge showed high separation efficiency, while the activities of 126Sn were below the minimum detectable activities (MDA). The activity of 126Sn was determined after ingrowth of the daughter nuclide 126mSb on an HPGe gamma detector, with minimal contamination by gamma-interfering radionuclides and decontamination factors (Df) higher than 1400 for 60Co and 47000 for 137Cs. Based on these experiments and the results of the separation procedures, a complete method for the sequential separation of 93Zr, 94Nb, 99Tc and 126Sn was proposed, which included optimization steps similar to those used in previous parts of the dissertation work. Application of the sequential separation method with the TEVA® Resin and Anion Exchange Resin sorbents to real radioactive waste samples provided satisfactory results and an economical, time-saving, efficient method. (author)
A solution for automatic parallelization of sequential assembly code
Directory of Open Access Journals (Sweden)
Kovačević Đorđe
2013-01-01
Full Text Available Since modern multicore processors can execute an existing sequential program only on a single core, there is a strong need for automatic parallelization of program code. Relying on existing algorithms, this paper describes a new software tool for the parallelization of sequential assembly code. The main goal of this paper is to develop a parallelizer which reads sequential assembler code and outputs parallelized code for a MIPS processor with multiple cores. The idea is the following: the parser translates the assembler input file into program objects suitable for further processing, after which static single assignment is performed. Based on the data-flow graph, the parallelization algorithm distributes instructions across the cores. Once the sequential code has been parallelized, registers are allocated with a linear-scan allocation algorithm, and the final output is distributed assembler code for each of the cores. In the paper we evaluate the speedup on a matrix multiplication example processed by the parallelizer. The result is an almost linear speedup of code execution, which increases with the number of cores: the speedup on two cores is 1.99, while on 16 cores it is 13.88.
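The core scheduling step above (distributing instructions across cores while respecting the data-flow graph) can be sketched as a greedy list scheduler. This is a simplified illustration, not the paper's algorithm: every instruction takes one cycle, and an instruction may start only after all its predecessors have finished.

```python
# Sketch of greedy list-scheduling of a data-flow graph onto cores
# (simplified; unit latencies and a deterministic tie-break are ours).

def schedule(deps, n_cores):
    """deps: {instr: set of instrs it depends on}. Returns makespan in cycles."""
    finish = {}                     # instr -> cycle at which it finishes
    core_free = [0] * n_cores       # cycle at which each core becomes free
    remaining = dict(deps)
    while remaining:
        # pick the lowest-numbered instruction whose dependencies are done
        ready = [i for i, d in remaining.items() if d.issubset(finish)]
        instr = min(ready)
        earliest = max([finish[d] for d in remaining[instr]], default=0)
        core = min(range(n_cores), key=lambda c: core_free[c])
        start = max(core_free[core], earliest)
        finish[instr] = start + 1
        core_free[core] = start + 1
        del remaining[instr]
    return max(finish.values())

# diamond dependency graph: 0 -> {1, 2} -> 3
deps = {0: set(), 1: {0}, 2: {0}, 3: {1, 2}}
seq = schedule(deps, 1)             # sequential time: 4 cycles
par = schedule(deps, 2)             # parallel makespan: 3 cycles
print(seq / par)                    # speedup limited by the dependency chain
```

The dependency chain bounds the achievable speedup, which is why the measured 13.88x on 16 cores, rather than 16x, is the expected shape of the result.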
A Weighted Two-Level Bregman Method with Dictionary Updating for Nonconvex MR Image Reconstruction
Directory of Open Access Journals (Sweden)
Qiegen Liu
2014-01-01
Full Text Available Nonconvex optimization has been shown to need substantially fewer measurements than ℓ1 minimization for exact recovery under a fixed transform/overcomplete dictionary. In this work, two efficient numerical algorithms, unified under the name weighted two-level Bregman method with dictionary updating (WTBMDU), are proposed for solving ℓp optimization under the dictionary learning model while subjecting the fidelity to the partial measurements. By incorporating the iteratively reweighted norm into the two-level Bregman iteration method with dictionary updating scheme (TBMDU), the modified alternating direction method (ADM) solves the model of pursuing the approximated ℓp-norm penalty efficiently. Specifically, the algorithms converge after a relatively small number of iterations, under the formulation of iteratively reweighted ℓ1 and ℓ2 minimization. Experimental results on MR image simulations and real MR data, under a variety of sampling trajectories and acceleration factors, consistently demonstrate that the proposed method can efficiently reconstruct MR images from highly undersampled k-space data and offers advantages over current state-of-the-art reconstruction approaches in terms of higher PSNR and lower HFEN values.
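The iteratively reweighted idea the method builds on can be shown in miniature. This is a generic IRLS sketch for ℓp recovery under equality constraints, with a toy problem of our own making; it is not the paper's WTBMDU algorithm, which additionally handles dictionary updating and Bregman iterations.

```python
import numpy as np

# Generic iteratively reweighted least squares (IRLS) for the lp-norm
# (p < 1) recovery idea: solve min ||x||_p s.t. Ax = b by repeatedly
# solving a weighted l2 problem with weights ~ |x|^(2-p).

def irls_lp(A, b, p=0.5, iters=50, eps=1e-8):
    x = np.linalg.pinv(A) @ b                 # least-squares starting point
    for _ in range(iters):
        w = (x**2 + eps) ** (1 - p / 2)       # per-coefficient weights
        W = np.diag(w)
        x = W @ A.T @ np.linalg.solve(A @ W @ A.T, b)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 12))              # underdetermined system
x_true = np.zeros(12); x_true[[2, 7]] = [1.5, -2.0]   # sparse signal
b = A @ x_true
x = irls_lp(A, b)
print(np.allclose(A @ x, b, atol=1e-6))       # every iterate stays feasible
```

Each weighted solve keeps Ax = b exact while the shrinking weights drive small coefficients toward zero, approximating the nonconvex ℓp penalty.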
LPPS: A Distributed Cache Pushing Based K-Anonymity Location Privacy Preserving Scheme
Directory of Open Access Journals (Sweden)
Ming Chen
2016-01-01
Full Text Available Recent years have witnessed the rapid growth of location-based services (LBSs) for mobile social network applications. To enable location-based services, mobile users are required to report their location information to the LBS servers and receive answers to location-based queries. Location privacy leaks happen when such servers are compromised, which has been a primary concern for information security. To address this issue, we propose the Location Privacy Preservation Scheme (LPPS) based on distributed cache pushing. Unlike existing solutions, LPPS deploys distributed cache proxies to cover users' most-visited locations and proactively pushes cache content to mobile users, which can reduce the risk of leaking users' location information. The proposed LPPS includes three major processes. First, we propose an algorithm to find the optimal deployment of proxies to cover popular locations. Second, we present cache strategies for location-based queries based on the Markov chain model and propose update and replacement strategies for cache content maintenance. Third, we introduce a privacy protection scheme which is proved to achieve the k-anonymity guarantee for location-based services. Extensive experiments illustrate that the proposed LPPS achieves a decent service coverage ratio and cache hit ratio with lower communication overhead compared to existing solutions.
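The Markov-chain cache strategy above can be sketched with a toy example. The locations, transition probabilities, and push policy below are illustrative assumptions of ours, not the paper's algorithm: given a transition matrix learned from user traces, content is pushed to the proxies at the most probable next locations.

```python
# Toy sketch of Markov-chain-based proactive cache pushing.

locations = ["mall", "park", "station", "office"]
# transitions[i][j]: probability of a user moving from location i to j
transitions = {
    "mall":    {"mall": 0.1, "park": 0.2, "station": 0.6, "office": 0.1},
    "park":    {"mall": 0.3, "park": 0.1, "station": 0.2, "office": 0.4},
    "station": {"mall": 0.2, "park": 0.1, "station": 0.1, "office": 0.6},
    "office":  {"mall": 0.1, "park": 0.3, "station": 0.5, "office": 0.1},
}

def push_targets(current, k=2):
    """Top-k most probable next locations: their cache proxies get content pushed."""
    probs = transitions[current]
    return sorted(probs, key=probs.get, reverse=True)[:k]

print(push_targets("mall"))   # ['station', 'park']
```

Because the content arrives at the proxy before the user queries for it, the user can be served locally without reporting a fresh location to the central LBS server, which is the privacy win the scheme is after.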
Multi-stage robust scheme for citrus identification from high resolution airborne images
Amorós-López, Julia; Izquierdo Verdiguier, Emma; Gómez-Chova, Luis; Muñoz-Marí, Jordi; Zoilo Rodríguez-Barreiro, Jorge; Camps-Valls, Gustavo; Calpe-Maravilla, Javier
2008-10-01
Identification of land cover types is one of the most critical activities in remote sensing. Nowadays, managing land resources using remote sensing techniques is becoming a common procedure to speed up the process while reducing costs. However, data analysis procedures should satisfy the accuracy figures demanded by institutions and governments for further administrative actions. This paper presents a methodological scheme to update the citrus Geographical Information System (GIS) of the Comunidad Valenciana autonomous region (Spain). The proposed approach introduces a multi-stage automatic scheme to reduce visual photointerpretation and ground validation tasks. First, an object-oriented feature extraction process is carried out for each cadastral parcel from very high spatial resolution (VHR) images (0.5 m) acquired in the visible and near infrared. Next, several automatic classifiers (decision trees, multilayer perceptron, and support vector machines) are trained and combined to improve the final accuracy of the results. The proposed strategy fulfills the high accuracy demanded by policy makers by combining automatic classification methods with the available visual photointerpretation resources. A level of confidence based on the agreement between classifiers enables effective management by fixing the number of parcels to be reviewed. The proposed methodology can be applied to similar problems and applications.
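The agreement-based confidence step above can be sketched as follows. The labels, parcels, and the unanimity threshold are illustrative assumptions of ours: several classifiers vote on each parcel, confidence is the majority fraction, and low-agreement parcels are flagged for visual review.

```python
# Sketch of combining classifiers with an agreement-based confidence level.
from collections import Counter

def combine(predictions):
    """predictions: one label per classifier for a single parcel."""
    label, votes = Counter(predictions).most_common(1)[0]
    confidence = votes / len(predictions)
    return label, confidence

parcels = {
    "p1": ["citrus", "citrus", "citrus"],     # unanimous: accept automatically
    "p2": ["citrus", "other", "citrus"],      # majority, lower confidence
    "p3": ["citrus", "other", "abandoned"],   # no agreement
}
review = [p for p, preds in parcels.items() if combine(preds)[1] < 1.0]
print(review)   # parcels sent to photointerpretation: ['p2', 'p3']
```

Raising or lowering the confidence threshold directly fixes the number of parcels routed to human review, which is the management knob the paper describes.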
Packet reversed packet combining scheme
International Nuclear Information System (INIS)
Bhunia, C.T.
2006-07-01
The packet combining scheme is a well-defined, simple error correction scheme that works on erroneous copies at the receiver. Combined with ARQ protocols, it offers higher throughput in networks than basic ARQ protocols alone. But the packet combining scheme fails to correct errors when the errors occur in the same bit locations of two erroneous copies. In the present work, we propose a scheme that corrects errors even when they occur at the same bit locations of the erroneous copies. The proposed scheme, when combined with an ARQ protocol, offers higher throughput. (author)
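The plain packet combining that the abstract builds on can be sketched as follows. This is a minimal sketch of the baseline scheme, not the author's reversed variant: the receiver XORs two erroneous copies to locate the bit positions where they disagree, then searches the candidate corrections against an integrity check (`is_valid`, e.g. a CRC verifier, is a hypothetical callback here). As the abstract notes, this baseline fails when both copies are corrupted in the same bit position, since XOR cannot see such errors; the proposed scheme addresses exactly that case.

```python
from itertools import product

def combine(copy1: bytes, copy2: bytes, is_valid):
    """Plain packet combining: XOR locates the k bit positions where the
    two erroneous copies disagree, then the 2^k candidate corrections
    over those bits are tried against an integrity check."""
    diff_bits = [i for i in range(len(copy1) * 8)
                 if ((copy1[i // 8] ^ copy2[i // 8]) >> (7 - i % 8)) & 1]
    for choice in product([0, 1], repeat=len(diff_bits)):
        candidate = bytearray(copy1)
        for bit, take_copy2 in zip(diff_bits, choice):
            if take_copy2:                      # take copy2's value here
                candidate[bit // 8] ^= 1 << (7 - bit % 8)
        if is_valid(bytes(candidate)):
            return bytes(candidate)
    return None  # same-position errors in both copies defeat plain combining
```

The brute-force search is exponential in the number of disagreeing bits, which is why the technique suits the few-bit error patterns typical of a single retransmission round.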
Documentscape: Intertextuality, Sequentiality & Autonomy at Work
DEFF Research Database (Denmark)
Christensen, Lars Rune; Bjørn, Pernille
2014-01-01
On the basis of an ethnographic field study, this article introduces the concept of documentscape to the analysis of document-centric work practices. The concept of documentscape refers to the entire ensemble of documents in their mutual intertextual interlocking. Providing empirical data from a global software development case, we show how hierarchical structures and sequentiality across the interlocked documents are critical to how actors make sense of the work of others and what to do next in a geographically distributed setting. Furthermore, we found that while each document is created as part of a quasi-sequential order, this characteristic does not make the document, as a single entity, into a stable object. Instead, we found that the documents were malleable and dynamic while suspended in intertextual structures. Our concept of documentscape points to how the hierarchical structure…
Ten-year scheme of network development - 2013 edition
International Nuclear Information System (INIS)
2013-01-01
RTE, an independent subsidiary of EDF, is the French electricity transmission system operator. It is a public service company responsible for operating, maintaining and developing the high and extra high voltage network. It guarantees the reliability and proper operation of the power network. RTE transports electricity between electricity suppliers (French and European) and consumers, whether they are electricity distributors or industrial consumers directly connected to the transmission system. The mission of RTE is to balance electricity supply and demand in real time. With the support of the government authorities, RTE prepares a ten-year scheme of network development in France. This document presents the main electricity transport infrastructures foreseen within the ten coming years and lists the network development investments which must be realised and implemented within 3 years. The document is updated each year and complements, at the national level, the European Ten-Year Network Development Plan (TYNDP) and the European regional plans as provided for in the 2009/72/EC European directive.
Directory of Open Access Journals (Sweden)
Pak Kin Wong
2014-01-01
Most adaptive neural control schemes are based on stochastic gradient-descent backpropagation (SGBP), which suffers from the local minima problem. Although the recently proposed regularized online sequential extreme learning machine (ReOS-ELM) can overcome this issue, it requires a batch of representative initial training data to construct a base model before online learning. The initial data is usually difficult to collect in adaptive control applications. Therefore, this paper proposes an improved version of ReOS-ELM, entitled fully online sequential extreme learning machine (FOS-ELM). While retaining the advantages of ReOS-ELM, FOS-ELM discards the initial training phase, and hence becomes suitable for adaptive control applications. To demonstrate its effectiveness, FOS-ELM was applied to the adaptive control of engine air-fuel ratio based on a simulated engine model. Besides, controller parameters were also analyzed, and it was found that a large hidden node number with a small regularization parameter leads to the best performance. A comparison between FOS-ELM and SGBP was also conducted. The result indicates that FOS-ELM achieves better tracking and convergence performance than SGBP, since FOS-ELM tends to learn the unknown engine model globally whereas SGBP tends to "forget" what it has learnt. This implies that FOS-ELM is preferable for adaptive control applications.
An Update on Design Tools for Optimization of CMC 3D Fiber Architectures
Lang, J.; DiCarlo, J.
2012-01-01
Objective: Describe and update progress in NASA's efforts to develop 3D architectural design tools for CMC in general and for SiC/SiC composites in particular. Describe past and current sequential work efforts aimed at: understanding key fiber and tow physical characteristics in conventional 2D and 3D woven architectures as revealed by microstructures in the literature; developing an Excel program for down-selecting and predicting key geometric properties and resulting key fiber-controlled properties for various conventional 3D architectures; developing a software tool for accurately visualizing all the key geometric details of conventional 3D architectures; validating the tools by visualizing and predicting the internal geometry and key mechanical properties of a NASA SiC/SiC panel with a 3D orthogonal architecture; and applying the predictive and visualization tools to advanced 3D orthogonal SiC/SiC composites, combining them into a user-friendly software program.
Sequential series for nuclear reactions
International Nuclear Information System (INIS)
Izumo, Ko
1975-01-01
A new time-dependent treatment of nuclear reactions is given, in which the wave function of the compound nucleus is expanded in a sequential series of the reaction processes. The wave functions of the sequential series form another complete set of the compound nucleus in the limit $\Delta t \to 0$. It is pointed out that the wave function is characterized by the quantities: the number of degrees of freedom of motion $n$, the period of the motion (Poincaré cycle) $t_n$, the delay time $t_{n\mu}$ and the relaxation time $\tau_n$ to the equilibrium of the compound nucleus, instead of the usual quantum number $\lambda$, the energy eigenvalue $E_\lambda$ and the total width $\Gamma_\lambda$ of resonance levels, respectively. The transition matrix elements and the yields of nuclear reactions also become functions of time, given by the Fourier transform of the usual ones. The Poincaré cycles of compound nuclei are compared with the observed correlations among resonance levels, which are about $10^{-17}$-$10^{-16}$ s for medium and heavy nuclei and about $10^{-20}$ s for the intermediate resonances. (auth.)
The PMIPv6-Based Group Binding Update for IoT Devices
Directory of Open Access Journals (Sweden)
Jianfeng Guan
2016-01-01
Internet of Things (IoT) has been booming with the rapid increase of various wearable devices, vehicle-embedded devices, and so on, and providing effective mobility management for these IoT devices becomes a challenge due to the different application scenarios as well as the limited energy and bandwidth. Recently, many researchers have focused on this topic and proposed several solutions based on the combination of IoT features and traditional mobility management protocols, in which most of these schemes treat the IoT devices as mobile networks and adopt NEtwork MObility (NEMO) and its variants to provide mobility support. However, these solutions face a heavy signaling cost problem. Since IoT devices are generally combined to realize complex functions, these devices may have similar movement behaviors. Clearly analyzing these characteristics and using them in mobility management will reduce the signaling cost and improve scalability. Motivated by this, we propose a PMIPv6-based group binding update method. In particular, we describe its group creation procedure, analyze its impact on mobility management, and derive its reduction ratio in terms of signaling cost. The final results show that the introduction of group binding update can remarkably reduce the signaling cost.
Update on Heavy-Meson Spectrum Tests of the Oktay--Kronfeld Action
Energy Technology Data Exchange (ETDEWEB)
Bailey, Jon A. [Seoul Natl. U.; Jang, Yong-Chull [Seoul Natl. U.; Lee, Weonjong [Seoul Natl. U.; DeTar, Carleton [Utah U.; Kronfeld, Andreas S. [TUM-IAS, Munich; Oktay, Mehmet B. [Iowa U.
2016-01-18
We present updated results of a numerical improvement test with heavy-meson spectrum for the Oktay--Kronfeld (OK) action. The OK action is an extension of the Fermilab improvement program for massive Wilson fermions including all dimension-six and some dimension-seven bilinear terms. Improvement terms are truncated by HQET power counting at $\mathrm{O}(\Lambda^3/m_Q^3)$ for heavy-light systems, and by NRQCD power counting at $\mathrm{O}(v^6)$ for quarkonium. They suffice for tree-level matching to QCD to the given order in the power-counting schemes. To assess the improvement, we generate new data with the OK and Fermilab actions that cover both charm and bottom quark mass regions on a MILC coarse $(a \approx 0.12~\text{fm})$ $2+1$ flavor, asqtad-staggered ensemble. We update the analyses of the inconsistency quantity and the hyperfine splittings for the rest and kinetic masses. With one exception, the results clearly show that the OK action significantly reduces heavy-quark discretization effects in the meson spectrum. The exception is the hyperfine splitting of the heavy-light system near the $B_s$ meson mass, where statistics are too low to draw a firm conclusion, despite promising results.
Sequential Change-Point Detection via Online Convex Optimization
Directory of Open Access Journals (Sweden)
Yang Cao
2018-02-01
Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization algorithms such as online mirror descent, which provides a more versatile approach to tackling complex situations where recursive maximum likelihood estimators cannot be found. When the underlying distributions belong to an exponential family and the estimators satisfy the logarithmic regret property, we show that this approach is nearly second-order asymptotically optimal. This means that the upper bound for the false alarm rate of the algorithm (measured by the average run length) meets the lower bound asymptotically up to a log-log factor when the threshold tends to infinity. Our proof is achieved by making a connection between sequential change-point detection and online convex optimization and leveraging the logarithmic regret bound property of the online mirror descent algorithm. Numerical and real data examples validate our theory.
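The estimate-then-update structure described above can be sketched for the simplest case. The sketch below assumes a one-dimensional unit-variance Gaussian mean shift and uses a plain running-mean estimator as a stand-in for the paper's online-mirror-descent estimators; the threshold value is illustrative. The key property it shares with the paper's procedures is that the estimator is non-anticipating: the likelihood ratio at time t uses only data seen before t.

```python
def detect_change(xs, threshold=10.0):
    """CUSUM-style sequential likelihood-ratio detection of a shift in the
    mean of unit-variance Gaussian data, with the unknown post-change mean
    replaced by a non-anticipating running estimate."""
    stat = 0.0
    mu_hat, n = 0.0, 0               # running estimate of the post-change mean
    for t, x in enumerate(xs):
        # log-likelihood ratio of N(mu_hat, 1) vs N(0, 1), with mu_hat
        # formed strictly from data observed *before* x
        llr = mu_hat * x - 0.5 * mu_hat ** 2
        stat = max(0.0, stat + llr)
        n += 1
        mu_hat += (x - mu_hat) / n   # update the estimate only after use
        if stat > threshold:
            return t                 # alarm index
    return None
```

Raising the threshold lengthens the average run length to false alarm at the cost of detection delay, which is the trade-off the optimality result quantifies.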
Sequential decoders for large MIMO systems
Ali, Konpal S.; Abediseid, Walid; Alouini, Mohamed-Slim
2014-01-01
The Sequential Decoder using the Fano Algorithm is studied for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity…
Update of CERN exchange network
2003-01-01
An update of the CERN exchange network will be done next April. Disturbances or even interruptions of telephony services may occur from 4th to 24th April during evenings from 18:30 to 00:00 but will not exceed 4 consecutive hours (see tentative planning below). CERN divisions are invited to avoid any change requests (set-ups, moves or removals) of telephones and fax machines from 4th to 25th April. Everything will be done to minimize potential inconveniences which may occur during this update. There will be no loss of telephone functionalities. CERN GSM portable phones won't be affected by this change. Should you need more details, please send us your questions by email to Standard.Telephone@cern.ch.
Date / Change type / Affected areas:
April 11 - Update of switch in LHC 4 - LHC 4 Point
April 14 - Update of switch in LHC 5 - LHC 5 Point
April 15 - Update of switches in LHC 3 and LHC 2 Points - LHC 3 and LHC 2
April 22 - Update of switch N4 - Meyrin Ouest
April 23 - Update of switch N6 - Prévessin Site
Ap...
International Nuclear Information System (INIS)
Ma Hai-Qiang; Wei Ke-Jin; Yang Jian-Hui; Li Rui-Xue; Zhu Wu
2014-01-01
We present a full quantum network scheme using a modified BB84 protocol. Unlike other quantum network schemes, it allows quantum keys to be distributed between two arbitrary users with the help of an intermediary detecting user. Moreover, it has good expansibility and prevents all potential attacks using loopholes in a detector, so it is more practical to apply. Because the fiber birefringence effects are automatically compensated, the scheme is distinctly stable in principle and in experiment. The simple components for every user make our scheme easier for many applications. The experimental results demonstrate the stability and feasibility of this scheme. (general)
Active inference and learning.
Friston, Karl; FitzGerald, Thomas; Rigoli, Francesco; Schwartenbeck, Philipp; O'Doherty, John; Pezzulo, Giovanni
2016-09-01
This paper offers an active inference account of choice behaviour and learning. It focuses on the distinction between goal-directed and habitual behaviour and how they contextualise each other. We show that habits emerge naturally (and autodidactically) from sequential policy optimisation when agents are equipped with state-action policies. In active inference, behaviour has explorative (epistemic) and exploitative (pragmatic) aspects that are sensitive to ambiguity and risk respectively, where epistemic (ambiguity-resolving) behaviour enables pragmatic (reward-seeking) behaviour and the subsequent emergence of habits. Although goal-directed and habitual policies are usually associated with model-based and model-free schemes, we find the more important distinction is between belief-free and belief-based schemes. The underlying (variational) belief updating provides a comprehensive (if metaphorical) process theory for several phenomena, including the transfer of dopamine responses, reversal learning, habit formation and devaluation. Finally, we show that active inference reduces to a classical (Bellman) scheme in the absence of ambiguity. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Transmission usage cost allocation schemes
International Nuclear Information System (INIS)
Abou El Ela, A.A.; El-Sehiemy, R.A.
2009-01-01
This paper presents different suggested transmission usage cost allocation (TCA) schemes for the system individuals. Different independent system operator (ISO) visions are presented using the pro rata and flow-based TCA methods. Two flow-based TCA (FTCA) schemes are proposed. The first FTCA scheme generalizes the equivalent bilateral exchanges (EBE) concept to lossy networks through a two-stage procedure. The second FTCA scheme is based on modified sensitivity factors (MSF). These factors are developed from actual measurements of power flows in transmission lines and power injections at different buses. The proposed schemes exhibit desirable apportioning properties and are easy to implement and understand. Case studies for different loading conditions are carried out to show the capability of the proposed schemes for solving the TCA problem. (author)
Decentralized Consistent Updates in SDN
Nguyen, Thanh Dang
2017-04-10
We present ez-Segway, a decentralized mechanism to consistently and quickly update the network state while preventing forwarding anomalies (loops and blackholes) and avoiding link congestion. In our design, the centralized SDN controller only pre-computes information needed by the switches during the update execution. This information is distributed to the switches, which use partial knowledge and direct message passing to efficiently realize the update. This separation of concerns has the key benefit of improving update performance as the communication and computation bottlenecks at the controller are removed. Our evaluations via network emulations and large-scale simulations demonstrate the efficiency of ez-Segway, which compared to a centralized approach, improves network update times by up to 45% and 57% at the median and the 99th percentile, respectively. A deployment of a system prototype in a real OpenFlow switch and an implementation in P4 demonstrate the feasibility and low overhead of implementing simple network update functionality within switches.
Comment on: "Cell Therapy for Heart Disease: Trial Sequential Analyses of Two Cochrane Reviews"
DEFF Research Database (Denmark)
Castellini, Greta; Nielsen, Emil Eik; Gluud, Christian
2017-01-01
Trial Sequential Analysis is a frequentist method to help researchers control the risks of random errors in meta-analyses (1). Fisher and colleagues used Trial Sequential Analysis on cell therapy for heart diseases (2). The present article discusses the usefulness of Trial Sequential Analysis and...
Li, Jiahao; Klee Barillas, Joaquin; Guenther, Clemens; Danzer, Michael A.
2014-02-01
Battery state monitoring is one of the key techniques in battery management systems e.g. in electric vehicles. An accurate estimation can help to improve the system performance and to prolong the battery remaining useful life. Main challenges for the state estimation for LiFePO4 batteries are the flat characteristic of open-circuit-voltage over battery state of charge (SOC) and the existence of hysteresis phenomena. Classical estimation approaches like Kalman filtering show limitations to handle nonlinear and non-Gaussian error distribution problems. In addition, uncertainties in the battery model parameters must be taken into account to describe the battery degradation. In this paper, a novel model-based method combining a Sequential Monte Carlo filter with adaptive control to determine the cell SOC and its electric impedance is presented. The applicability of this dual estimator is verified using measurement data acquired from a commercial LiFePO4 cell. Due to a better handling of the hysteresis problem, results show the benefits of the proposed method against the estimation with an Extended Kalman filter.
Matroids and quantum-secret-sharing schemes
International Nuclear Information System (INIS)
Sarvepalli, Pradeep; Raussendorf, Robert
2010-01-01
A secret-sharing scheme is a cryptographic protocol to distribute a secret state in an encoded form among a group of players such that only authorized subsets of the players can reconstruct the secret. Classically, efficient secret-sharing schemes have been shown to be induced by matroids. Furthermore, access structures of such schemes can be characterized by an excluded minor relation. No such relations are known for quantum secret-sharing schemes. In this paper we take the first steps toward a matroidal characterization of quantum-secret-sharing schemes. In addition to providing a new perspective on quantum-secret-sharing schemes, this characterization has important benefits. While previous work has shown how to construct quantum-secret-sharing schemes for general access structures, these schemes are not claimed to be efficient. In this context the present results prove to be useful; they enable us to construct efficient quantum-secret-sharing schemes for many general access structures. More precisely, we show that an identically self-dual matroid that is representable over a finite field induces a pure-state quantum-secret-sharing scheme with information rate 1.
Efficient sequential and parallel algorithms for record linkage.
Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar
2014-01-01
Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Our sequential and parallel algorithms have been tested on a real dataset of 1,083,878 records and synthetic datasets ranging in size from 50,000 to 9,000,000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm.
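The two structural ideas in the abstract, deduplicating exact copies before any comparison and linking similar records via connected components of a similarity graph, can be sketched as follows. The blocking key and the `DSU` (disjoint-set union) helper are illustrative assumptions: the paper radix-sorts on selected attributes for deduplication and compares records with edit distance, which a simple shared-prefix block stands in for here.

```python
from collections import defaultdict

class DSU:
    """Disjoint-set union for extracting connected components."""
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]   # path halving
            x = self.p[x]
        return x
    def union(self, a, b):
        self.p[self.find(a)] = self.find(b)

def link_records(records, key_len=3):
    """Cluster (name, attribute) records: drop exact duplicates first,
    then link records sharing a blocking key and return the connected
    components as linked entities."""
    uniq = sorted(set(records))            # stand-in for radix-sort dedup
    dsu = DSU(len(uniq))
    blocks = defaultdict(list)             # key: leading characters of the name
    for i, (name, _) in enumerate(uniq):
        blocks[name[:key_len].lower()].append(i)
    for members in blocks.values():
        for i, j in zip(members, members[1:]):   # chain records in a block
            dsu.union(i, j)
    clusters = defaultdict(list)
    for i, rec in enumerate(uniq):
        clusters[dsu.find(i)].append(rec)
    return list(clusters.values())
```

In a parallel setting, blocks are independent units of work, which is consistent with the near-linear speedups the paper reports.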
Hammerman, Ariel; Feder-Bubis, Paula; Greenberg, Dan
2012-01-01
Risk-sharing is being considered by many health care systems to address the financial risk associated with the adoption of new technologies. We explored major stakeholders' views toward the potential implementation of a financial risk-sharing mechanism regarding budget-impact estimates for adding new technologies to the Israeli National List of Health Services. According to our proposed scheme, health plans will be partially compensated by technology sponsors if the actual use of a technology is substantially higher than what was projected, and health plans will refund the government for budgets that were not fully utilized. Using a semi-structured protocol, we interviewed major stakeholders involved in the process of updating the National List of Health Services (N = 31). We inquired into participants' views toward our proposed risk-sharing mechanism, whether the proposed scheme would achieve its purpose, its feasibility of implementation, and their opinion on the other stakeholders' incentives. Participants' considerations were classified into four main areas: financial, administrative/managerial, impact on patients' health, and influence on public image. Most participants agreed that the conceptual risk-sharing scheme will improve the accuracy of early budget estimates and were in favor of the proposed scheme, although Ministry of Finance officials tended to object to it. The successful implementation of risk-sharing schemes depends mainly on their perception as a win-win situation by all stakeholders. The perception exposed by our participants that risk-sharing can be a tool for improving the accuracy of early budget-impact estimates, and the challenges they pointed out, are also relevant to other health care systems and should be considered when implementing similar schemes. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Fault detection in multiply-redundant measurement systems via sequential testing
International Nuclear Information System (INIS)
Ray, A.
1988-01-01
This paper presents the theory and application of a sequential test procedure for fault detection and isolation. The test procedure is suited for development of intelligent instrumentation in strategic processes like aircraft and nuclear plants where redundant measurements are usually available for individual critical variables. The test procedure consists of: (1) a generic redundancy management procedure which is essentially independent of the fault detection strategy and measurement noise statistics, and (2) a modified version of the sequential probability ratio test algorithm for fault detection and isolation, which functions within the framework of this redundancy management procedure. The sequential test procedure is suitable for real-time applications using commercially available microcomputers and its efficacy has been verified by online fault detection in an operating nuclear reactor. 15 references
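Step (2) above builds on the sequential probability ratio test. A minimal sketch of the textbook Wald SPRT for deciding between a healthy and a biased sensor reading is shown below; the Gaussian hypotheses and error rates are illustrative, not the paper's modified version, which operates inside the redundancy management procedure.

```python
import math

def sprt(samples, mu0=0.0, mu1=1.0, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test for the mean of
    unit-variance Gaussian measurements: H0: mu = mu0 (healthy sensor)
    vs H1: mu = mu1 (faulty bias). Returns (decision, samples used)."""
    a = math.log(beta / (1 - alpha))        # lower boundary: accept H0
    b = math.log((1 - beta) / alpha)        # upper boundary: accept H1
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Gaussian log-likelihood-ratio increment for one observation
        llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1))
        if llr >= b:
            return "H1", n
        if llr <= a:
            return "H0", n
    return "undecided", len(samples)
```

The appeal for real-time instrumentation is that the test stops as soon as the evidence crosses a boundary, so clear faults are flagged after very few samples.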
DEFF Research Database (Denmark)
Hansen, Elo Harald; Miró, Manuel; Long, Xiangbao
2006-01-01
The determination of trace level concentrations of elements, such as metal species, in complex matrices by atomic absorption or emission spectrometric methods often requires appropriate pretreatments comprising separation of the analyte from interfering constituents and analyte preconcentration… are presented as based on the exploitation of micro-sequential injection (μSI-LOV) using hydrophobic as well as hydrophilic bead materials. The examples given comprise the presentation of a universal approach for SPE-assays, front-end speciation of Cr(III) and Cr(VI) in a fully automated and enclosed set…
Sequential designs for sensitivity analysis of functional inputs in computer experiments
International Nuclear Information System (INIS)
Fruth, J.; Roustant, O.; Kuhnt, S.
2015-01-01
Computer experiments are nowadays commonly used to analyze industrial processes aiming at achieving a wanted outcome. Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on the response variable. In this work we focus on sensitivity analysis of a scalar-valued output of a time-consuming computer code depending on scalar and functional input parameters. We investigate a sequential methodology, based on piecewise constant functions and sequential bifurcation, which is both economical and fully interpretable. The new approach is applied to a sheet metal forming problem in three sequential steps, resulting in new insights into the behavior of the forming process over time. - Highlights: • A sensitivity analysis method for functional and scalar inputs is presented. • We focus on the discovery of the most influential parts of the functional domain. • We investigate an economical sequential methodology based on piecewise constant functions. • Normalized sensitivity indices are introduced and investigated theoretically. • Successful application to sheet metal forming on two functional inputs.
Scheme Program Documentation Tools
DEFF Research Database (Denmark)
Nørmark, Kurt
2004-01-01
… are separate and intended for different documentation purposes, they are related to each other in several ways. Both tools are based on XML languages for tool setup and for documentation authoring. In addition, both tools rely on the LAML framework which, in a systematic way, makes an XML language available … as named functions in Scheme. Finally, the Scheme Elucidator is able to integrate SchemeDoc resources as part of an internal documentation resource.
On the origin of reproducible sequential activity in neural circuits
Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.
2004-12-01
Robustness and reproducibility of sequential spatio-temporal responses are an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in the presence of noise in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
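The winnerless-competition mechanism behind such heteroclinic sequences can be illustrated with a tiny Lotka-Volterra rate model. This is a sketch, not the paper's network: the coupling matrix below is an illustrative choice with asymmetric inhibition (each unit is inhibited strongly by its successor and weakly by its predecessor), which makes the activity visit the saddle of each unit in turn.

```python
def simulate(T=30000, dt=0.001):
    """Three-unit generalized Lotka-Volterra rate model with asymmetric
    inhibition; returns the sequence of transiently dominant units."""
    sigma = [1.0, 1.0, 1.0]
    rho = [[1.0, 2.0, 0.5],        # rho[i][j]: inhibition of unit i by unit j
           [0.5, 1.0, 2.0],
           [2.0, 0.5, 1.0]]
    a = [0.6, 0.3, 0.1]
    winners = []
    for _ in range(T):
        da = [a[i] * (sigma[i] - sum(rho[i][j] * a[j] for j in range(3)))
              for i in range(3)]
        # Euler step with a tiny floor so suppressed units can re-grow
        a = [max(ai + dt * dai, 1e-9) for ai, dai in zip(a, da)]
        w = max(range(3), key=lambda i: a[i])
        if not winners or winners[-1] != w:
            winners.append(w)
    return winners
```

Each unit dominates transiently before handing over to the next, so the winner sequence, unlike any single attractor state, is the reproducible object.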
Adaptive Fault Detection for Complex Dynamic Processes Based on JIT Updated Data Set
Directory of Open Access Journals (Sweden)
Jinna Li
2012-01-01
A novel fault detection technique is proposed to explicitly account for the nonlinear, dynamic, and multimodal problems that exist in practical and complex dynamic processes. The just-in-time (JIT) detection method and the k-nearest neighbor (KNN) rule-based statistical process control (SPC) approach are integrated to construct a flexible and adaptive detection scheme for control processes with nonlinear, dynamic, and multimodal cases. The Mahalanobis distance, representing the correlation among samples, is used to simplify and update the raw data set, which is the first merit of this paper. Based on it, the control limit is computed in terms of both the KNN rule and the SPC method, such that we can identify online whether the current data is normal or not. Note that the control limit changes as the database is updated, yielding an adaptive fault detection technique that can effectively eliminate the impact of data drift and shift on the performance of the detection process, which is the second merit of this paper. The efficiency of the developed method is demonstrated by numerical examples and an industrial case.
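The KNN-rule control limit described above can be sketched as follows. This is a simplified sketch: Euclidean distance replaces the paper's Mahalanobis distance, the database is treated as fixed rather than JIT-updated, and the 95% quantile control limit is an assumed choice.

```python
def knn_detector(train, k=3, quantile=0.95):
    """KNN-rule SPC detector: the control limit is the given quantile of
    each training sample's average distance to its k nearest neighbours
    (leave-one-out); a query is flagged as faulty when its own kNN
    distance exceeds that limit."""
    def avg_knn_dist(x, pool):
        d = sorted(sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5
                   for y in pool)
        return sum(d[:k]) / k
    scores = sorted(avg_knn_dist(x, train[:i] + train[i + 1:])
                    for i, x in enumerate(train))
    limit = scores[int(quantile * (len(scores) - 1))]
    return lambda x: avg_knn_dist(x, train) > limit
```

In the adaptive scheme of the paper, `train` and hence `limit` would be recomputed as new normal samples enter the database, which is what lets the limit track drift.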
Updating Recursive XML Views of Relations
DEFF Research Database (Denmark)
Choi, Byron; Cong, Gao; Fan, Wenfei
2009-01-01
This paper investigates the view update problem for XML views published from relational data. We consider XML views defined in terms of mappings directed by possibly recursive DTDs compressed into DAGs and stored in relations. We provide new techniques to efficiently support XML view updates specified in terms of XPath expressions with recursion and complex filters. The interaction between XPath recursion and DAG compression of XML views makes the analysis of the XML view update problem rather intriguing. Furthermore, many issues are still open even for relational view updates, and need to be explored. In response to these, on the XML side, we revise the notion of side effects and update semantics based on the semantics of XML views, and present efficient algorithms to translate XML updates to relational view updates. On the relational side, we propose a mild condition on SPJ views, and show…
A Memory Efficient Network Encryption Scheme
El-Fotouh, Mohamed Abo; Diepold, Klaus
In this paper, we studied the two widely used encryption schemes in network applications. Shortcomings were found in both schemes, as they either consume more memory to gain high throughput or save memory at the cost of low throughput. The need has arisen for a scheme that has low memory requirements and at the same time possesses high speed, as the number of internet users increases each day. We used the SSM model [1] to construct an encryption scheme based on the AES. The proposed scheme possesses high throughput together with low memory requirements.
Sequential spatial processes for image analysis
M.N.M. van Lieshout (Marie-Colette); V. Capasso
2009-01-01
We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high-level vision problems. We focus on the problem of tracking a variable number of moving objects…
Sequential models for coarsening and missingness
Gill, R.D.; Robins, J.M.
1997-01-01
In a companion paper we described what intuitively would seem to be the most general possible way to generate Coarsening at Random mechanisms: a sequential procedure called randomized monotone coarsening. Counterexamples showed that CAR mechanisms exist which cannot be represented in this way. Here we
Modified Aggressive Packet Combining Scheme
International Nuclear Information System (INIS)
Bhunia, C.T.
2010-06-01
In this letter, a few schemes are presented to improve the performance of the aggressive packet combining scheme (APC). To combat errors in computer/data communication networks, ARQ (Automatic Repeat Request) techniques are used. Several modifications to improve the performance of ARQ have been suggested by recent research and are found in the literature. The important modifications are the majority packet combining scheme (MjPC, proposed by Wicker), the packet combining scheme (PC, proposed by Chakraborty), the modified packet combining scheme (MPC, proposed by Bhunia), and the packet reversed packet combining (PRPC, proposed by Bhunia) scheme. These modifications are appropriate for improving the throughput of conventional ARQ protocols. Leung proposed APC for error control in wireless networks, with the basic objective of error control in the uplink of wireless data networks. We suggest a few modifications of APC to improve its performance in terms of higher throughput, lower delay and higher error correction capability. (author)
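The combining step at the heart of MjPC-style schemes can be sketched as a bitwise majority vote over several erroneous received copies of the same packet. This is a minimal illustration of the general idea, not the exact protocol from the letter; the function name and byte-level representation are assumptions.

```python
def majority_combine(copies):
    """Bitwise majority vote over an odd number of received copies
    of the same packet (each a bytes object of equal length)."""
    assert len(copies) % 2 == 1 and len(set(map(len, copies))) == 1
    out = bytearray(len(copies[0]))
    for i in range(len(copies[0])):
        for bit in range(8):
            # count how many copies have this bit set
            ones = sum((c[i] >> bit) & 1 for c in copies)
            if ones > len(copies) // 2:
                out[i] |= 1 << bit
    return bytes(out)
```

With three copies whose bit errors fall in disjoint positions, the vote recovers the original packet, which is what gives combining schemes their error-correction capability on top of plain ARQ retransmission.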
Sequential bayes estimation algorithm with cubic splines on uniform meshes
International Nuclear Information System (INIS)
Hossfeld, F.; Mika, K.; Plesser-Walk, E.
1975-11-01
After outlining the principles of some recent developments in parameter estimation, a sequential numerical algorithm for generalized curve-fitting applications is presented, combining results from statistical estimation concepts and spline analysis. Due to its recursive nature, the algorithm can be used most efficiently in online experimentation. Using computer-simulated and experimental data, the efficiency and the flexibility of this sequential estimation procedure are extensively demonstrated. (orig.)
Directory of Open Access Journals (Sweden)
Huber-Wagner S
2010-05-01
Background: There are several well-established scores for assessing the prognosis of major trauma patients, all of which have in common that they can be calculated at the earliest during the intensive care unit stay. We intended to develop a sequential trauma score (STS) that allows prognosis at several early stages, based on the information available at a particular time. Study design: In a retrospective, multicenter study using data derived from the Trauma Registry of the German Trauma Society (2002-2006), we identified the most relevant prognostic factors from the patients' basic data (P), prehospital phase (A), early (B1), and late (B2) trauma room phase. Univariate and logistic regression models as well as score quality criteria and the explanatory power have been calculated. Results: A total of 2,354 patients with complete data were identified. From the patients' basic data (P), logistic regression showed that age was a significant predictor of survival (AUC model P, area under the curve = 0.63). Logistic regression of the prehospital data (A) showed that blood pressure, pulse rate, Glasgow coma scale (GCS), and anisocoria were significant predictors (AUC model A = 0.76; AUC model P+A = 0.82). Logistic regression of the early trauma room phase (B1) showed peripheral oxygen saturation, GCS, anisocoria, base excess, and thromboplastin time to be significant predictors of survival (AUC model B1 = 0.78; AUC model P+A+B1 = 0.85). Multivariate analysis of the late trauma room phase (B2) detected cardiac massage, abbreviated injury score (AIS) of the head ≥ 3, the maximum AIS, and the need for transfusion or massive blood transfusion to be the most important predictors (AUC model B2 = 0.84; AUC final model P+A+B1+B2 = 0.90). The explanatory power, a tool for assessing the relative impact of each segment on mortality, is 25% for P, 7% for A, 17% for B1 and 51% for B2. A spreadsheet for the easy calculation of the sequential trauma
Song, Jaeyong
2001-01-01
In this paper, we investigate the firm-level mechanisms that underlie the sequential foreign direct investment (FDI) decisions of multinational corporations (MNCs). To understand inter-firm heterogeneity in the sequential FDI behaviors of MNCs, we develop a firm capability-based model of sequential FDI decisions. In the setting of Japanese electronics MNCs in East Asia, we empirically examine how prior investments in firm capabilities affect sequential investments into existing produ...
Retrieval of sea surface velocities using sequential ocean colour monitor (OCM) data
Digital Repository Service at National Institute of Oceanography (India)
Prasad, J.S.; Rajawat, A.S.; Pradhan, Y.; Chauhan, O.S.; Nayak, S.R.
A method for the retrieval of sea surface velocities has been developed. The method is based on matching suspended sediment dispersion patterns in sequential, time-lapsed image pairs. The pattern matching is performed on atmospherically corrected and geo-referenced sequential pairs of images by Maximum...
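Pattern matching of this kind is commonly implemented as a maximum cross-correlation (MCC) search: a template window from the first image is compared against shifted windows in the second image, and the best-matching offset gives the displacement. The sketch below is a brute-force NumPy illustration of the general MCC idea, not the authors' exact procedure; the function name and window convention are assumptions.

```python
import numpy as np

def mcc_displacement(img1, img2, win, search):
    """Estimate the displacement of the window img1[win] by locating the
    offset (dy, dx) in img2, within +/- search pixels, that maximizes the
    normalized cross-correlation with the template.
    win = (y0, y1, x0, x1) in array-index coordinates."""
    y0, y1, x0, x1 = win
    tpl = img1[y0:y1, x0:x1].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-12)   # zero-mean, unit-variance
    best, best_off = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img2[y0 + dy:y1 + dy, x0 + dx:x1 + dx].astype(float)
            if cand.shape != tpl.shape:
                continue  # shifted window fell outside the image
            cand = (cand - cand.mean()) / (cand.std() + 1e-12)
            r = (tpl * cand).mean()   # normalized cross-correlation
            if r > best:
                best, best_off = r, (dy, dx)
    return best_off
```

Dividing the recovered pixel offset by the time lapse between the two images, after converting pixels to ground distance, yields a velocity estimate for that window.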
A fast and accurate online sequential learning algorithm for feedforward networks.
Liang, Nan-Ying; Huang, Guang-Bin; Saratchandran, P; Sundararajan, N
2006-11-01
In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of hidden nodes (the input weights and biases of additive nodes or the centers and impact factors of RBF nodes) are randomly selected and the output weights are analytically determined based on the sequentially arriving data. The algorithm uses the ideas of ELM of Huang et al. developed for batch learning which has been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. Detailed performance comparison of OS-ELM is done with other popular sequential learning algorithms on benchmark problems drawn from the regression, classification and time series prediction areas. The results show that the OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
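For the one-by-one case, the OS-ELM update reduces to recursive least squares on a fixed random hidden layer: hidden-node parameters are drawn once at random, and only the output weights are updated as samples arrive. The following is a minimal sketch under stated assumptions (the class name and initialization constants are illustrative, and the paper's batch initialization phase is replaced here by a large prior covariance):

```python
import numpy as np

class OSELM:
    """Minimal one-by-one OS-ELM sketch with sigmoid additive hidden nodes.
    Hidden-layer weights are random and fixed; output weights are updated
    recursively (recursive least squares) as samples arrive."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.uniform(-1, 1, (n_in, n_hidden))   # fixed input weights
        self.b = rng.uniform(-1, 1, n_hidden)           # fixed biases
        self.beta = np.zeros((n_hidden, n_out))         # output weights
        self.P = np.eye(n_hidden) * 1e4                 # large prior covariance

    def _h(self, x):
        return 1.0 / (1.0 + np.exp(-(x @ self.W + self.b)))

    def partial_fit(self, x, t):
        h = self._h(x).reshape(1, -1)          # 1 x n_hidden feature row
        Ph = self.P @ h.T
        gain = Ph / (1.0 + h @ Ph)             # rank-one RLS gain
        self.P -= gain @ Ph.T
        self.beta += gain @ (t.reshape(1, -1) - h @ self.beta)

    def predict(self, x):
        return self._h(x) @ self.beta
```

Because the hidden layer never changes, each update costs only a rank-one correction to `P` and `beta`, which is what makes the sequential variant fast compared with retraining in batch.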
Ibeas, Asier; de la Sen, Manuel
2006-10-01
The problem of controlling a tandem of robotic manipulators composing a teleoperation system with force reflection is addressed in this paper. The final objective of this paper is twofold: 1) to design a robust control law capable of ensuring closed-loop stability for robots with uncertainties and 2) to use the so-obtained control law to improve the tracking of each robot to its corresponding reference model in comparison with previously existing controllers when the slave is interacting with the obstacle. In this way, a multiestimation-based adaptive controller is proposed. Thus, the master robot is able to follow more accurately the constrained motion defined by the slave when interacting with an obstacle than when a single-estimation-based controller is used, improving the transparency property of the teleoperation scheme. The closed-loop stability is guaranteed if a minimum residence time, which might be updated online when unknown, between different controller parameterizations is respected. Furthermore, the analysis of the teleoperation and stability capabilities of the overall scheme is carried out. Finally, some simulation examples showing the working of the multiestimation scheme complete this paper.
Bonus schemes and trading activity
Pikulina, E.S.; Renneboog, L.D.R.; ter Horst, J.R.; Tobler, P.N.
2014-01-01
Little is known about how different bonus schemes affect traders' propensity to trade and which bonus schemes improve traders' performance. We study the effects of linear versus threshold bonus schemes on traders' behavior. Traders buy and sell shares in an experimental stock market on the basis of
Sequential spatial processes for image analysis
Lieshout, van M.N.M.; Capasso, V.
2009-01-01
We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high level vision problems. We focus on the problem of tracking a variable number of moving objects through a video
Directory of Open Access Journals (Sweden)
Muhammad Asif
2015-01-01
One of the key requirements for mobile devices is to provide high-performance computing at lower power consumption. The processors used in these devices provide specific hardware resources to handle computationally intensive video processing and interactive graphical applications. Moreover, processors designed for low-power applications may introduce limitations on the availability and usage of resources, which present additional challenges to system designers. Owing to the specific design of the JZ47x series of mobile application processors, a hybrid software-hardware implementation scheme for the H.264/AVC encoder is proposed in this work. The proposed scheme distributes the encoding tasks among hardware and software modules. A series of optimization techniques are developed to speed up memory access and data transfer among memories. Moreover, an efficient data-reuse design is proposed for the deblocking-filter video processing unit to reduce memory accesses. Furthermore, fine-grained macroblock (MB) level parallelism is effectively exploited and a pipelined approach is proposed for efficient utilization of the hardware processing cores. Finally, based on the parallelism in the proposed design, encoding tasks are distributed between two processing cores. Experiments show that the hybrid encoder is 12 times faster than a highly optimized sequential encoder owing to the proposed techniques.
Polarization control of direct (non-sequential) two-photon double ionization of He
International Nuclear Information System (INIS)
Pronin, E A; Manakov, N L; Marmo, S I; Starace, Anthony F
2007-01-01
An ab initio parametrization of the doubly-differential cross section (DDCS) for two-photon double ionization (TPDI) from an s² subshell of an atom in a ¹S₀ state is presented. Analysis of the elliptic dichroism (ED) effect in the DDCS for TPDI of He and its comparison with the same effect in the concurrent process of sequential double ionization shows their qualitative and quantitative differences, thus providing a means to control and to distinguish sequential and non-sequential processes by measuring the relative ED parameter.
A Bayesian sequential design using alpha spending function to control type I error.
Zhu, Han; Yu, Qingzhao
2017-10-01
We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design that sets equal critical values for all interim analyses. Compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that at the same sample size the null hypothesis is the least likely to be rejected at an early stage of the trial. Finally, we show that adding a stop-for-futility step to the Bayesian sequential design can reduce the overall type I error and the actual sample sizes.
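For reference, the O'Brien-Fleming-type spending function discussed above can be evaluated directly: the cumulative type I error spent at information fraction t is 2(1 − Φ(z_{α/2}/√t)), which equals α at t = 1. A small stdlib-only sketch (the function name is illustrative):

```python
from statistics import NormalDist

def obrien_fleming_spending(alpha, fractions):
    """Lan-DeMets O'Brien-Fleming-type alpha spending function:
    cumulative type I error spent at information fraction t is
    2 * (1 - Phi(z_{alpha/2} / sqrt(t)))."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)   # two-sided critical value z_{alpha/2}
    return [2 * (1 - nd.cdf(z / t ** 0.5)) for t in fractions]
```

For α = 0.05, only about 0.006 of the error budget is spent at half the information, which is why this spending function is described as the most conservative at early interim analyses.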
Energy Technology Data Exchange (ETDEWEB)
Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
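The EnKF analysis step that the SEOD method builds on can be written compactly with ensemble anomalies. The sketch below is a generic stochastic (perturbed-observations) EnKF update in NumPy, not the authors' implementation; a linear observation operator and uncorrelated observation errors are assumed.

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_err_std, rng):
    """Stochastic EnKF analysis step (perturbed observations).
    ensemble: (n_ens, n_state); obs: (n_obs,); H: (n_obs, n_state)."""
    n_ens = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)       # state anomalies
    Y = X @ H.T                                # predicted-observation anomalies
    R = np.eye(len(obs)) * obs_err_std ** 2    # observation-error covariance
    # Kalman gain from ensemble sample covariances: K = Pxy (Pyy + R)^-1
    Pxy = X.T @ Y / (n_ens - 1)
    Pyy = Y.T @ Y / (n_ens - 1)
    K = Pxy @ np.linalg.inv(Pyy + R)
    # each member assimilates an independently perturbed observation
    perturbed = obs + rng.normal(0, obs_err_std, (n_ens, len(obs)))
    innov = perturbed - ensemble @ H.T
    return ensemble + innov @ K.T
```

In a parameter-estimation setting, the state vector is augmented with the uncertain parameters so that the same update shifts both toward values consistent with the measurements; a sampling design then chooses which measurements to collect next.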
Updating systematic reviews: an international survey.
Directory of Open Access Journals (Sweden)
Chantelle Garritty
BACKGROUND: Systematic reviews (SRs) should be up to date to maintain their importance in informing healthcare policy and practice. However, little guidance is available about when and how to update SRs. Moreover, the updating policies and practices of organizations that commission or produce SRs are unclear. METHODOLOGY/PRINCIPAL FINDINGS: The objective was to describe the updating practices and policies of agencies that sponsor or conduct SRs. An Internet-based survey was administered to a purposive non-random sample of 195 healthcare organizations within the international SR community. Survey results were analyzed using descriptive statistics. The completed response rate was 58% (n = 114) from across 26 countries, with 70% (75/107) of participants identified as producers of SRs. Among responders, 79% (84/107) characterized the importance of updating as high or very high, and 57% (60/106) of organizations reported having a formal policy for updating. However, only 29% (35/106) of organizations made reference to a written policy document. Several groups (62/105; 59%) reported updating practices as irregular, and over half (53/103) of organizational respondents estimated that more than 50% of their respective SRs were likely out of date. Authors of the original SR (42/106; 40%) were most often deemed responsible for ensuring SRs were current. Barriers to updating included resource constraints, reviewer motivation, lack of academic credit, and limited publishing formats. Most respondents (70/100; 70%) indicated that they supported centralization of updating efforts across institutions or agencies. Furthermore, 84% (83/99) of respondents indicated they favoured the development of a central registry of SRs, analogous to efforts within the clinical trials community. CONCLUSIONS/SIGNIFICANCE: Most organizations that sponsor and/or carry out SRs consider updating important. Despite this recognition, updating practices are not regular, and many organizations lack
The pursuit of balance in sequential randomized trials
Directory of Open Access Journals (Sweden)
Raymond P. Guiteras
2016-06-01
In many randomized trials, subjects enter the sample sequentially. Because the covariates for all units are not known in advance, standard methods of stratification do not apply. We describe and assess the method of DA-optimal sequential allocation (Atkinson, 1982) for balancing stratification covariates across treatment arms. We provide simulation evidence that the method can provide substantial improvements in precision over commonly employed alternatives. We also describe our experience implementing the method in a field trial of a clean water and handwashing intervention in Dhaka, Bangladesh, the first time the method has been used. We provide advice and software for future researchers.
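Atkinson-style allocation steers each arriving subject toward the arm that most improves the precision of the treatment-effect estimate, with a randomization bias to avoid deterministic assignment. The sketch below is a simplified determinant-based stand-in for that idea, not Atkinson's exact allocation probabilities; the biasing probability p, the function name, and the data layout are assumptions.

```python
import numpy as np

def da_optimal_assign(X_prev, t_prev, x_new, rng, p=0.8):
    """Assign the next subject's arm (+1 or -1) by trying both arms and
    preferring, with probability p, the one that maximizes det(F'F) of the
    working design matrix, whose rows are [arm indicator, covariates].
    Larger det(F'F) means a better-conditioned treatment-effect estimate."""
    dets = []
    for arm in (+1.0, -1.0):
        row = np.concatenate(([arm], x_new))
        F = np.vstack([np.column_stack([t_prev, X_prev]), row])
        dets.append(np.linalg.det(F.T @ F))
    best = +1.0 if dets[0] >= dets[1] else -1.0
    return best if rng.random() < p else -best
```

When the previous assignments are imbalanced with respect to a covariate, the determinant criterion favors the under-represented arm, which is the balancing behavior the trial design relies on.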
Event-shape analysis: Sequential versus simultaneous multifragment emission
International Nuclear Information System (INIS)
Cebra, D.A.; Howden, S.; Karn, J.; Nadasen, A.; Ogilvie, C.A.; Vander Molen, A.; Westfall, G.D.; Wilson, W.K.; Winfield, J.S.; Norbeck, E.
1990-01-01
The Michigan State University 4π array has been used to select central-impact-parameter events from the reaction ⁴⁰Ar + ⁵¹V at incident energies from 35 to 85 MeV/nucleon. The event shape in momentum space is an observable which is shown to be sensitive to the dynamics of the fragmentation process. A comparison of the experimental event-shape distribution to sequential- and simultaneous-decay predictions suggests that a transition in the breakup process may have occurred. At 35 MeV/nucleon, a sequential-decay simulation reproduces the data. For the higher energies, the experimental distributions fall between the two contrasting predictions.
Sequential approach to Colombeau's theory of generalized functions
International Nuclear Information System (INIS)
Todorov, T.D.
1987-07-01
J.F. Colombeau's generalized functions are constructed as equivalence classes of the elements of a specially chosen ultrapower of the class of C∞-functions. The elements of this ultrapower are considered as sequences of C∞-functions, so in a sense, the sequential construction presented here refers to the original Colombeau theory just as, for example, the Mikusinski sequential approach to distribution theory refers to the original Schwartz theory of distributions. The paper could be used as an elementary introduction to the Colombeau theory, in which a solution was recently found to the problem of multiplication of Schwartz distributions. (author)
Configural and component processing in simultaneous and sequential lineup procedures
Flowe, HD; Smith, HMJ; Karoğlu, N; Onwuegbusi, TO; Rai, L
2015-01-01
Configural processing supports accurate face recognition, yet it has never been examined within the context of criminal identification lineups. We tested, using the inversion paradigm, the role of configural processing in lineups. Recent research has found that face discrimination accuracy in lineups is better in a simultaneous compared to a sequential lineup procedure. Therefore, we compared configural processing in simultaneous and sequential lineups to examine whether there are differences...
Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Qiu, Yuchen; Zheng, Bin
2018-02-01
The objective of this study is to develop and test a new computer-aided detection (CAD) scheme with improved region of interest (ROI) segmentation combined with an image feature extraction framework to improve performance in predicting short-term breast cancer risk. A dataset involving 570 sets of "prior" negative mammography screening cases was retrospectively assembled. In the next sequential "current" screening, 285 cases were positive and 285 cases remained negative. A CAD scheme was applied to all 570 "prior" negative images to stratify cases into high- and low-risk groups for having cancer detected in the "current" screening. First, a new ROI segmentation algorithm was used to automatically remove useless areas of the mammograms. Second, from the matched bilateral craniocaudal view images, a set of 43 image features related to the frequency characteristics of ROIs was initially computed from the discrete cosine transform and the spatial domain of the images. Third, a support vector machine based machine learning classifier was used to optimally classify the selected optimal image features to build a CAD-based risk prediction model. The classifier was trained using a leave-one-case-out cross-validation method. Applying this improved CAD scheme to the testing dataset yielded an area under the ROC curve of AUC = 0.70 ± 0.04, significantly higher than when extracting features directly from the dataset without the improved ROI segmentation step (AUC = 0.63 ± 0.04). This study demonstrated that the proposed approach could improve accuracy in predicting short-term breast cancer risk, which may play an important role in helping eventually establish an optimal personalized breast cancer screening paradigm.
Standardized method for reproducing the sequential X-rays flap
International Nuclear Information System (INIS)
Brenes, Alejandra; Molina, Katherine; Gudino, Sylvia
2009-01-01
A method is validated to standardize the taking, developing, and analysis of bite-wing radiographs acquired sequentially, in order to compare and evaluate detectable changes in the evolution of interproximal lesions through time. A radiographic positioner called XCP® is modified by means of a rigid acrylic guide to achieve proper positioning of the X-ray equipment core relative to the XCP® ring, and its reorientation during the sequential X-ray process. Sixteen subjects aged 4 to 40 years are studied, for a total of 32 registries. Two X-rays of the same block of teeth of each subject have been taken sequentially, with a minimal difference of 30 minutes between each one, before the placement of the radiographic attachment. The images have been digitized with a Super Cam® scanner and imported into software. Measurements on the X and Y axes for both X-rays were performed for comparison. The intraclass correlation index (ICI) has shown that the proposed method is statistically related to the measurements (mm) obtained on the X and Y axes for both sequential series of X-rays (p=0.01). The measures of central tendency and dispersion have shown that the usual occurrence is indifferent between the two measurements (Mode 0.000 and S = 0.083 and 0.109) and that the probability of occurrence of different values is lower than expected. (author)
Sequential Design of Experiments to Maximize Learning from Carbon Capture Pilot Plant Testing
Energy Technology Data Exchange (ETDEWEB)
Soepyan, Frits B.; Morgan, Joshua C.; Omell, Benjamin P.; Zamarripa-Perez, Miguel A.; Matuszewski, Michael S.; Miller, David C.
2018-02-06
Pilot plant test campaigns can be expensive and time-consuming. Therefore, it is of interest to maximize the amount of learning and the efficiency of the test campaign given the limited number of experiments that can be conducted. This work investigates the use of sequential design of experiments (SDOE) to overcome these challenges by demonstrating its usefulness for a recent solvent-based CO2 capture plant test campaign. Unlike traditional design of experiments methods, SDOE regularly uses information from ongoing experiments to determine the optimum locations in the design space for subsequent runs within the same experiment. However, there are challenges that need to be addressed, including reducing the high computational burden to efficiently update the model, and the need to incorporate the methodology into a computational tool. We address these challenges by applying SDOE in combination with a software tool, the Framework for Optimization, Quantification of Uncertainty and Surrogates (FOQUS) (Miller et al., 2014a, 2016, 2017). The results of applying SDOE on a pilot plant test campaign for CO2 capture suggests that relative to traditional design of experiments methods, SDOE can more effectively reduce the uncertainty of the model, thus decreasing technical risk. Future work includes integrating SDOE into FOQUS and using SDOE to support additional large-scale pilot plant test campaigns.
Quantifying Update Effects in Citizen-Oriented Software
Directory of Open Access Journals (Sweden)
Ion Ivan
2009-02-01
Defining citizen-oriented software. Detailing technical issues regarding the update process in this kind of software. Presenting different effects triggered by types of update. Building a model for update cost estimation, including producer-side and consumer-side effects. Analyzing model applicability on INVMAT – large scale matrix inversion software. Proposing a model for update effects estimation. Specifying ways of softening the effects of inaccurate updates.
Alternatives to the sequential lineup: the importance of controlling the pictures.
Lindsay, R C; Bellinger, K
1999-06-01
Because sequential lineups reduce false-positive choices, their use has been recommended (R. C. L. Lindsay, 1999; R. C. L. Lindsay & G. L. Wells, 1985). Blind testing is included in the recommended procedures. Police, concerned about blind testing, devised alternative procedures, including self-administered sequential lineups, to reduce use of relative judgments (G. L. Wells, 1984) while permitting the investigating officer to conduct the procedure. Identification data from undergraduates exposed to a staged crime (N = 165) demonstrated that 4 alternative identification procedures tested were less effective than the original sequential lineup. Allowing witnesses to control the photographs resulted in higher rates of false-positive identification. Self-reports of using relative judgments were shown to be postdictive of decision accuracy.
Managerial adjustment and its limits: sequential fault in comparative perspective
Directory of Open Access Journals (Sweden)
Flávio da Cunha Rezende
2008-01-01
This article focuses on explanations for sequential faults in administrative reform. It deals with the limits of managerial adjustment in an approach that attempts to connect theory and empirical data, articulating three levels of analysis. The first level presents comparative evidence of sequential fault within reforms in national governments through a set of indicators geared toward understanding changes in the role of the state. In light of analyses of a representative set of comparative studies on reform implementation, the second analytical level proceeds to identify four typical mechanisms that are present in explanations of managerial adjustment faults. In this way, we seek to configure an explanatory matrix for theories on sequential fault. Next we discuss the experience of management reform in the Brazilian context, conferring special attention on one of the mechanisms that creates fault: the control dilemma. The major hypotheses that guide our article are that reforms lead to sequential fault and that there are at least four causal mechanisms that produce it: (a) transaction costs involved in producing reforms; (b) performance legacy; (c) predominance of fiscal adjustment; and (d) the control dilemma. These mechanisms act separately or in concert, and decrease the chances for a transformation of State managerial patterns. The major evidence analyzed in this article lends consistency to the general argument that reforms have failed in their attempts to reduce public expenses, alter patterns of resource allocation, reduce the labor force and change the role of the State. Our major conclusion is that reforms fail sequentially and managerial adjustment displays considerable limitations, particularly those of a political nature.
Are Forecast Updates Progressive?
C-L. Chang (Chia-Lin); Ph.H.B.F. Franses (Philip Hans); M.J. McAleer (Michael)
2010-01-01
textabstractMacro-economic forecasts typically involve both a model component, which is replicable, as well as intuition, which is non-replicable. Intuition is expert knowledge possessed by a forecaster. If forecast updates are progressive, forecast updates should become more accurate, on average,
Palmer, Matthew A; Brewer, Neil
2012-06-01
When compared with simultaneous lineup presentation, sequential presentation has been shown to reduce false identifications to a greater extent than it reduces correct identifications. However, there has been much debate about whether this difference in identification performance represents improved discriminability or more conservative responding. In this research, data from 22 experiments that compared sequential and simultaneous lineups were analyzed using a compound signal-detection model, which is specifically designed to describe decision-making performance on tasks such as eyewitness identification tests. Sequential (cf. simultaneous) presentation did not influence discriminability, but produced a conservative shift in response bias that resulted in less-biased choosing for sequential than simultaneous lineups. These results inform understanding of the effects of lineup presentation mode on eyewitness identification decisions.
Estimation After a Group Sequential Trial.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert
2015-10-01
Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why
DEFF Research Database (Denmark)
Pötz, Katharina Anna; Haas, Rainer; Balzarova, Michaela
2013-01-01
Purpose – The rise of CSR followed a demand for CSR standards and guidelines. In a sector already characterized by a large number of standards, the authors seek to ask what CSR schemes apply to agribusiness, and how they can be systematically compared and analysed. Design/methodology/approach – Following a deductive-inductive approach the authors develop a model to compare and analyse CSR schemes based on existing studies and on coding qualitative data on 216 CSR schemes. Findings – The authors confirm that CSR standards and guidelines have entered agribusiness and identify a complex landscape of schemes that can be categorized on focus areas, scales, mechanisms, origins, types and commitment levels. Research limitations/implications – The findings contribute to conceptual and empirical research on existing models to compare and analyse CSR standards. Sampling technique and depth of analysis limit…
Optimal Sequential Resource Sharing and Exchange in Multi-Agent Systems
Xiao, Yuanzhang
2014-01-01
Central to the design of many engineering systems and social networks is to solve the underlying resource sharing and exchange problems, in which multiple decentralized agents make sequential decisions over time to optimize some long-term performance metrics. It is challenging for the decentralized agents to make optimal sequential decisions because of the complicated coupling among the agents and across time. In this dissertation, we mainly focus on three important classes of multi-agent seq...
S.M.P. Sequential Mathematics Program.
Ciciarelli, V.; Leonard, Joseph
A sequential mathematics program beginning with the basic fundamentals on the fourth-grade level is presented. Included are an understanding of our number system, and the basic operations of working with whole numbers: addition, subtraction, multiplication, and division. Common fractions are taught in the fifth, sixth, and seventh grades. A…
Sequential Quantum Secret Sharing Using a Single Qudit
Bai, Chen-Ming; Li, Zhi-Hui; Li, Yong-Ming
2018-05-01
In this paper we propose a novel and efficient quantum secret sharing protocol using a d-level single particle, which can realize a general access structure via the idea of concatenation. In addition, our scheme retains all the advantages of Tavakoli’s scheme [Phys. Rev. A 92 (2015) 030302(R)]. In contrast to Tavakoli’s scheme, the efficiency of our scheme is 1 for the same situation, and the access structure is more general and of greater practical significance. Furthermore, we analyze the security of our scheme against the primary quantum attacks. Sponsored by the National Natural Science Foundation of China under Grant Nos. 61373150 and 61602291, and Industrial Research and Development Project of Science and Technology of Shaanxi Province under Grant No. 2013k0611
Threshold Signature Schemes Application
Directory of Open Access Journals (Sweden)
Anastasiya Victorovna Beresneva
2015-10-01
This work is devoted to an investigation of threshold signature schemes. The threshold signature schemes were systematized, and cryptographic constructions based on Lagrange interpolation polynomials, elliptic curves and bilinear pairings were examined. Different methods of generation and verification of threshold signatures were explored, and the practical usability of threshold schemes in mobile agents, Internet banking and e-currency was shown. Topics for further investigation, which could reduce the level of counterfeit electronic documents signed by a group of users, are given.
A Spatial Domain Quantum Watermarking Scheme
International Nuclear Information System (INIS)
Wei Zhan-Hong; Chen Xiu-Bo; Niu Xin-Xin; Yang Yi-Xian; Xu Shu-Jiang
2016-01-01
This paper presents a spatial domain quantum watermarking scheme. A feasible quantum circuit is key to realizing any quantum watermarking scheme, and this paper gives one for the presented scheme. To construct the circuit, a new quantum multi-control rotation gate, which can be achieved with basic quantum gates, is designed. With this quantum circuit, our scheme can arbitrarily control the embedding position of watermark images on carrier images with the aid of auxiliary qubits. Besides running the given quantum circuit in reverse, the paper gives another watermark-extraction algorithm based on quantum measurements. Moreover, this paper also gives a new quantum image scrambling method and its quantum circuit. Unlike other quantum watermarking schemes, all the quantum circuits given here can be implemented with basic quantum gates. Moreover, the scheme works in the spatial domain and is not based on any transform algorithm on quantum images. Meanwhile, it keeps the watermark secure even if its presence is discovered. With the given quantum circuit, this paper implements simulation experiments for the presented scheme. The experimental results show that the scheme performs well in visual quality and embedding capacity. (paper)
Impact of real-time measurements for data assimilation in reservoir simulation
Energy Technology Data Exchange (ETDEWEB)
Schulze-Riegert, R; Krosche, M [Scandpower Petroleum Technology GmbH, Hamburg (Germany); Pajonk, O [TU Braunschweig (Germany). Inst. fuer Wissenschaftliches Rechnen; Myrland, T [Norges Teknisk-Naturvitenskapelige Univ. (NTNU), Trondheim (Norway)
2008-10-23
This paper gives an overview of the conceptual background of data assimilation techniques. The framework of sequential data assimilation, as described for the ensemble Kalman filter implementation, allows a continuous integration of new measurement data. The initial diversity of ensemble members is critical for the assimilation process and the ability to successfully assimilate measurement data. At the same time, the initial ensemble will impact the propagation of uncertainties, with crucial consequences for production forecasts. Data assimilation techniques have complementary features compared to other optimization techniques built on selection or regression schemes. Specifically, EnKF is applicable to real field cases and defines an important perspective for facilitating continuous reservoir simulation model updates in a reservoir life cycle. (orig.)
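The analysis step at the heart of sequential data assimilation can be sketched for the simplest possible case, a directly observed scalar state. This is an illustrative toy of a stochastic EnKF update, not the reservoir implementation the record describes; all names and numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_update(ensemble, obs, obs_std):
    # Stochastic EnKF analysis for a scalar state observed directly (H = 1):
    # each member is pulled toward a perturbed observation by the Kalman
    # gain estimated from the ensemble spread.
    var_f = np.var(ensemble, ddof=1)          # forecast (prior) variance
    gain = var_f / (var_f + obs_std ** 2)     # scalar Kalman gain
    perturbed = obs + rng.normal(0.0, obs_std, len(ensemble))
    return ensemble + gain * (perturbed - ensemble)

prior = rng.normal(0.0, 2.0, 1000)   # a diverse initial ensemble is critical
posterior = enkf_update(prior, obs=3.0, obs_std=0.5)
```

Repeating this step as each new measurement arrives gives the continuous model updating the abstract refers to; the ensemble spread shrinks as data are assimilated.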
Labeling schemes for bounded degree graphs
DEFF Research Database (Denmark)
Adjiashvili, David; Rotbart, Noy Galil
2014-01-01
We investigate adjacency labeling schemes for graphs of bounded degree Δ = O(1). In particular, we present an optimal (up to an additive constant) log n + O(1) adjacency labeling scheme for bounded degree trees. The latter scheme is derived from a labeling scheme for bounded degree outerplanar graphs. Our results complement a similar bound recently obtained for bounded depth trees [Fraigniaud and Korman, SODA 2010], and may provide new insights for closing the long standing gap for adjacency in trees [Alstrup and Rauhe, FOCS 2002]. We also provide improved labeling schemes for bounded degree…
Implicit unified gas-kinetic scheme for steady state solutions in all flow regimes
Zhu, Yajun; Zhong, Chengwen; Xu, Kun
2016-06-01
This paper presents an implicit unified gas-kinetic scheme (UGKS) for non-equilibrium steady state flow computation. The UGKS is a direct modeling method for flow simulation in all regimes with the updates of both macroscopic flow variables and microscopic gas distribution function. By solving the macroscopic equations implicitly, a predicted equilibrium state can be obtained first through iterations. With the newly predicted equilibrium state, the evolution equation of the gas distribution function and the corresponding collision term can be discretized in a fully implicit way for fast convergence through iterations as well. The lower-upper symmetric Gauss-Seidel (LU-SGS) factorization method is implemented to solve both macroscopic and microscopic equations, which improves the efficiency of the scheme. Since the UGKS is a direct modeling method and its physical solution depends on the mesh resolution and the local time step, a physical time step needs to be fixed before using an implicit iterative technique with a pseudo-time marching step. Therefore, the physical time step in the current implicit scheme is determined in the same way as in the explicit UGKS for capturing the physical solution in all flow regimes, but the convergence to a steady state speeds up through the adoption of a numerical time step with large CFL number. Many numerical test cases in different flow regimes from low speed to hypersonic ones, such as the Couette flow, cavity flow, and the flow passing over a cylinder, are computed to validate the current implicit method. The overall efficiency of the implicit UGKS can be improved by one or two orders of magnitude in comparison with the explicit one.
Directory of Open Access Journals (Sweden)
Chao Luo
A novel algebraic approach is proposed to study the dynamics of asynchronous random Boolean networks, in which a random number of nodes can be updated at each time step (ARBNs). In this article, the logical equations of ARBNs are converted into a discrete-time linear representation and the dynamical behaviors of the systems are investigated. We provide a general formula for the network transition matrices of ARBNs as well as a necessary and sufficient algebraic criterion to determine whether a group of given states composes an attractor of length [Formula: see text] in ARBNs. Consequently, algorithms are derived to find all of the attractors and basins in ARBNs. Examples are shown to demonstrate the feasibility of the proposed scheme.
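For intuition, attractors of a small Boolean network can be found by brute force over the state space. The toy below uses deterministic synchronous updates of an invented 3-node network, which is simpler than the asynchronous random-update case the paper treats via transition matrices; the rules are illustrative only.

```python
from itertools import product

# Toy 3-node Boolean network (hypothetical rules, synchronous update).
rules = [
    lambda s: s[1] and s[2],   # node 0
    lambda s: not s[0],        # node 1
    lambda s: s[0] or s[1],    # node 2
]

def step(state):
    # apply every node's rule simultaneously to the current state
    return tuple(int(f(state)) for f in rules)

def find_attractors():
    attractors = set()
    for start in product((0, 1), repeat=3):
        seen = {}                      # state -> step index at first visit
        s = start
        while s not in seen:
            seen[s] = len(seen)
            s = step(s)
        # states from the first repeated state onward form the attractor cycle
        cycle_start = seen[s]
        cycle = [st for st, i in sorted(seen.items(), key=lambda kv: kv[1])
                 if i >= cycle_start]
        attractors.add(tuple(sorted(cycle)))
    return attractors
```

For this particular rule set every initial state falls into a single length-5 cycle; enumerating all starts, as here, also recovers each attractor's basin if the trajectories are kept.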
Sequential sputtered Co-HfO{sub 2} granular films
Energy Technology Data Exchange (ETDEWEB)
Chadha, M.; Ng, V.
2017-03-15
A systematic study of magnetic, magneto-transport and micro-structural properties of Co-HfO{sub 2} granular films fabricated by sequential sputtering is presented. We demonstrate reduction in ferromagnetic-oxide formation by using HfO{sub 2} as the insulating matrix. Microstructure evaluation showed that the films consisted of discrete hcp-Co grains embedded in a HfO{sub 2} matrix. Films with varying compositions were prepared and their macroscopic properties were studied. We correlate the variation in these properties to the variation in film microstructure. Our study shows that Co-HfO{sub 2} films with reduced cobalt oxide and varying properties can be prepared using the sequential sputtering technique. - Highlights: • Co-HfO{sub 2} granular films were prepared using sequential sputtering. • A reduction in ferromagnetic-oxide formation is observed. • Co-HfO{sub 2} films display superparamagnetism and tunnelling magneto-resistance. • Varying macroscopic properties were achieved by changing film composition. • Applications can be found in moderate MR sensors and high-frequency RF devices.
Involvement of Working Memory in College Students' Sequential Pattern Learning and Performance
Kundey, Shannon M. A.; De Los Reyes, Andres; Rowan, James D.; Lee, Bern; Delise, Justin; Molina, Sabrina; Cogdill, Lindsay
2013-01-01
When learning highly organized sequential patterns of information, humans and nonhuman animals learn rules regarding the hierarchical structures of these sequences. In three experiments, we explored the role of working memory in college students' sequential pattern learning and performance in a computerized task involving a sequential…
Updating of working memory: lingering bindings.
Oberauer, Klaus; Vockenberg, Kerstin
2009-05-01
Three experiments investigated proactive interference and proactive facilitation in a memory-updating paradigm. Participants remembered several letters or spatial patterns, distinguished by their spatial positions, and updated them by new stimuli up to 20 times per trial. Self-paced updating times were shorter when an item previously remembered and then replaced reappeared in the same location than when it reappeared in a different location. This effect demonstrates residual memory for no-longer-relevant bindings of items to locations. The effect increased with the number of items to be remembered. With one exception, updating times did not increase, and recall of final values did not decrease, over successive updating steps, thus providing little evidence for proactive interference building up cumulatively.
Wells, Gary L; Steblay, Nancy K; Dysart, Jennifer E
2015-02-01
Eyewitnesses (494) to actual crimes in 4 police jurisdictions were randomly assigned to view simultaneous or sequential photo lineups using laptop computers and double-blind administration. The sequential procedure used in the field experiment mimicked how it is conducted in actual practice (e.g., using a continuation rule, witness does not know how many photos are to be viewed, witnesses resolve any multiple identifications), which is not how most lab experiments have tested the sequential lineup. No significant differences emerged in rates of identifying lineup suspects (25% overall) but the sequential procedure produced a significantly lower rate (11%) of identifying known-innocent lineup fillers than did the simultaneous procedure (18%). The simultaneous/sequential pattern did not significantly interact with estimator variables and no lineup-position effects were observed for either the simultaneous or sequential procedures. Rates of nonidentification were not significantly different for simultaneous and sequential but nonidentifiers from the sequential procedure were more likely to use the "not sure" response option than were nonidentifiers from the simultaneous procedure. Among witnesses who made an identification, 36% (41% of simultaneous and 32% of sequential) identified a known-innocent filler rather than a suspect, indicating that eyewitness performance overall was very poor. The results suggest that the sequential procedure that is used in the field reduces the identification of known-innocent fillers, but the differences are relatively small.
Multiresolution signal decomposition schemes
J. Goutsias (John); H.J.A.M. Heijmans (Henk)
1998-01-01
[PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis
Barriga-Rivera, Alejandro; Morley, John W; Lovell, Nigel H; Suaning, Gregg J
2016-08-01
Researchers continue to develop visual prostheses towards safer and more efficacious systems. However, limitations still exist in the number of stimulating channels that can be integrated, so there is a need for spatial and time multiplexing techniques to improve the performance of the current technology. In particular, bright and high-contrast visual scenes may require simultaneous activation of several electrodes. In this research, a 24-electrode array was suprachoroidally implanted in three normally-sighted cats. Multi-unit activity was recorded from the primary visual cortex. Four stimulation strategies were contrasted to provide activation of seven electrodes arranged hexagonally: simultaneous monopolar, sequential monopolar, sequential bipolar and hexapolar. Both monopolar configurations showed similar cortical activation maps. Hexapolar and sequential bipolar configurations activated a lower number of cortical channels. Overall, the return configuration played a more relevant role in cortical activation than time multiplexing; thus, rapid sequential stimulation may assist in reducing the number of channels required to activate large retinal areas.
Energy Technology Data Exchange (ETDEWEB)
Willcock, J J; Lumsdaine, A; Quinlan, D J
2008-08-19
Tabled execution is a generalization of memoization developed by the logic programming community. It not only saves results from tabled predicates, but also stores the set of currently active calls to them; tabled execution can thus provide meaningful semantics for programs that seemingly contain infinite recursions with the same arguments. In logic programming, tabled execution is used for many purposes, both for improving the efficiency of programs and for making tasks simpler and more direct to express than with normal logic programs. However, tabled execution is only infrequently applied in mainstream functional languages such as Scheme. We demonstrate an elegant implementation of tabled execution in Scheme, using a mix of continuation-passing style and mutable data. We also show the use of tabled execution in Scheme for a problem in formal language and automata theory, demonstrating that tabled execution can be a valuable tool for Scheme users.
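The active-call idea can be sketched outside Scheme too. The following Python analogue is a simplification of real tabling (which suspends and re-evaluates calls rather than simply cutting them): tracking the set of in-progress calls alongside a memo table lets a recursive reachability query terminate on a cyclic graph where naive recursion would loop forever. The function and graph are invented for illustration.

```python
def reachable(graph, src, dst, _active=None, _memo=None):
    # Tabled-style search: besides memoizing finished results, track the
    # set of currently active calls so a cycle does not recurse forever.
    if _active is None:
        _active, _memo = set(), {}
    key = (src, dst)
    if key in _memo:
        return _memo[key]
    if key in _active:        # this exact call is already in progress:
        return False          # cut the cycle instead of recursing again
    _active.add(key)
    ok = src == dst or any(reachable(graph, n, dst, _active, _memo)
                           for n in graph.get(src, ()))
    _active.discard(key)
    _memo[key] = ok
    return ok

# a -> b -> a is a cycle; b -> c escapes it
cyclic = {"a": ["b"], "b": ["c", "a"], "c": []}
```

For plain reachability this cycle-cutting is sound (it mirrors a depth-first search with a visited set); full tabling as in logic programming handles more general predicates where suspended calls must be resumed.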
Sequential Versus Simultaneous Market Delineation: The Relevant Antitrust Market for Salmon
DEFF Research Database (Denmark)
Haldrup, Niels; Peter, Møllgaard
Delineation of the relevant market forms a pivotal part of most antitrust cases. The standard approach is sequential: first the product market is delineated, then the geographical market is defined. Demand and supply substitution in both the product dimension and the geographical dimension will no… and geographical markets. Using a unique data set for prices of Norwegian and Scottish salmon, we propose a methodology for simultaneous market delineation and we demonstrate that, compared to a sequential approach, conclusions will be reversed.
Optimal Face-Iris Multimodal Fusion Scheme
Directory of Open Access Journals (Sweden)
Omid Sharifi
2016-06-01
Multimodal biometric systems are considered a way to minimize the limitations raised by single traits. This paper proposes new schemes based on score-level, feature-level and decision-level fusion to efficiently fuse face and iris modalities. Log-Gabor transformation is applied as the feature extraction method on the face and iris modalities. At each level of fusion, different schemes are proposed to improve the recognition performance and, finally, a combination of schemes at different fusion levels constructs an optimized and robust scheme. In this study, the CASIA Iris Distance database is used to examine the robustness of all unimodal and multimodal schemes. In addition, the Backtracking Search Algorithm (BSA), a novel population-based iterative evolutionary algorithm, is applied to improve the recognition accuracy of the schemes by reducing the number of features and selecting optimized weights for feature-level and score-level fusion, respectively. Experimental results on verification rates demonstrate a significant improvement of the proposed fusion schemes over unimodal and multimodal fusion methods.
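In its simplest form, score-level fusion of the kind described reduces to normalizing each matcher's scores to a common range and taking a weighted sum. The sketch below uses min-max normalization; the weights and raw scores are illustrative stand-ins for the BSA-optimized weights and real matcher outputs.

```python
def min_max(scores):
    # map raw matcher scores to [0, 1] so face and iris scores are comparable
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(face, iris, w_face=0.4, w_iris=0.6):
    # weighted-sum score-level fusion of already-normalized scores;
    # the weights here are hypothetical, not the paper's optimized values
    return [w_face * f + w_iris * i for f, i in zip(face, iris)]

face_scores = min_max([10.0, 35.0, 60.0])   # e.g. face matcher distances
iris_scores = min_max([0.2, 0.5, 0.8])      # e.g. iris matcher similarities
fused = fuse(face_scores, iris_scores)
```

A verification decision then thresholds the fused score; the point of optimizing the weights (as BSA does in the paper) is to pick the combination that maximizes verification rate on training data.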
Endogenous sequential cortical activity evoked by visual stimuli.
Carrillo-Reid, Luis; Miller, Jae-Eun Kang; Hamm, Jordan P; Jackson, Jesse; Yuste, Rafael
2015-06-10
Although the functional properties of individual neurons in primary visual cortex have been studied intensely, little is known about how neuronal groups could encode changing visual stimuli using temporal activity patterns. To explore this, we used in vivo two-photon calcium imaging to record the activity of neuronal populations in primary visual cortex of awake mice in the presence and absence of visual stimulation. Multidimensional analysis of the network activity allowed us to identify neuronal ensembles defined as groups of cells firing in synchrony. These synchronous groups of neurons were themselves activated in sequential temporal patterns, which repeated at much higher proportions than chance and were triggered by specific visual stimuli such as natural visual scenes. Interestingly, sequential patterns were also present in recordings of spontaneous activity without any sensory stimulation and were accompanied by precise firing sequences at the single-cell level. Moreover, intrinsic dynamics could be used to predict the occurrence of future neuronal ensembles. Our data demonstrate that visual stimuli recruit similar sequential patterns to the ones observed spontaneously, consistent with the hypothesis that already existing Hebbian cell assemblies firing in predefined temporal sequences could be the microcircuit substrate that encodes visual percepts changing in time. Copyright © 2015 Carrillo-Reid et al.
How do we update faces? Effects of gaze direction and facial expressions on working memory updating
Directory of Open Access Journals (Sweden)
Artuso, Caterina; Palladino, Paola; Ricciardelli, Paola
2012-09-01
The aim of the study was to investigate how the biological binding between different facial dimensions, and their social and communicative relevance, may impact updating processes in working memory (WM). We focused on WM updating because it plays a key role in ongoing processing. Gaze direction and facial expression are crucial and changeable components of face processing. Direct gaze enhances the processing of approach-oriented facial emotional expressions (e.g., joy), while averted gaze enhances the processing of avoidance-oriented facial emotional expressions (e.g., fear). Thus, the way in which these two facial dimensions are combined communicates to the observer important behavioral and social information. Updating of these two facial dimensions and their bindings has not been investigated before, despite the fact that they provide a piece of social information essential for building and maintaining an internal ongoing representation of our social environment. In Experiment 1 we created a task in which the binding between gaze direction and facial expression was manipulated: high binding conditions (e.g., joy-direct gaze) were compared to low binding conditions (e.g., joy-averted gaze). Participants had to study and update continuously a number of faces, displaying different bindings between the two dimensions. In Experiment 2 we tested whether updating was affected by the social and communicative value of the facial dimension binding; to this end, we manipulated bindings between eye and hair color, two less communicative facial dimensions. Two new results emerged. First, faster response times were found in updating combinations of facial dimensions highly bound together. Second, our data showed that the ease of the ongoing updating processing varied depending on the communicative meaning of the binding that had to be updated. The results are discussed with reference to the role of WM updating in social cognition and appraisal processes.
Efficacy of premixed versus sequential administration of ...
African Journals Online (AJOL)
sequential administration in separate syringes on block characteristics, haemodynamic parameters, side effect profile and postoperative analgesic requirement. Trial design: This was a prospective, randomised clinical study. Method: Sixty orthopaedic patients scheduled for elective lower limb surgery under spinal ...
[Sequential degradation of p-cresol by photochemical and biological methods].
Karetnikova, E A; Chaĭkovskaia, O N; Sokolova, I V; Nikitina, L I
2008-01-01
Sequential photo- and biodegradation of p-cresol was studied using a mercury lamp, as well as KrCl and XeCl excilamps. Preirradiation of p-cresol at a concentration of 10^-4 M did not affect the rate of its subsequent biodegradation. An increase in the concentration of p-cresol to 10^-3 M and in the duration of preliminary UV irradiation inhibited subsequent biodegradation. Biodegradation of p-cresol was accompanied by the formation of a product with a fluorescence maximum at 365 nm (λex 280 nm), and photodegradation yielded a compound fluorescing at 400 nm (λex 330 nm). Sequential UV and biodegradation led to the appearance of bands in the fluorescence spectra that were ascribed to p-cresol and its photolysis products. It was shown that sequential use of biological and photochemical degradation results in degradation of not only the initial toxicant but also the metabolites formed during its biodegradation.
Lexical decoder for continuous speech recognition: sequential neural network approach
International Nuclear Information System (INIS)
Iooss, Christine
1991-01-01
The work presented in this dissertation concerns the study of a connectionist architecture for treating sequential inputs. In this context, the model proposed by J.L. Elman, a recurrent multilayer network, is used. Its abilities and its limits are evaluated. Modifications are made in order to treat erroneous or noisy sequential inputs and to classify patterns. The application context of this study is the realisation of a lexical decoder for analytical multi-speaker continuous speech recognition. Lexical decoding is performed on lattices of phonemes obtained after an acoustic-phonetic decoding stage relying on a K-nearest-neighbors search technique. Tests are done on sentences formed from a lexicon of 20 words. The results obtained show the ability of the proposed connectionist model to take sequentiality into account at the input level, to memorize the context and to treat noisy or erroneous inputs. (author) [fr
Optimal Energy Management of Multi-Microgrids with Sequentially Coordinated Operations
Directory of Open Access Journals (Sweden)
Nah-Oak Song
2015-08-01
We propose an optimal electric energy management scheme for a cooperative multi-microgrid community with sequentially coordinated operations. The sequentially coordinated operations are suggested to distribute the computational burden and yet make optimal 24-hour energy management of multi-microgrids possible. The sequential operations are mathematically modeled to find the optimal operation conditions and illustrated with a physical interpretation of how to achieve optimal energy management in the cooperative multi-microgrid community. This global electric energy optimization of the cooperative community is realized by ancillary internal trading between the microgrids, which reduces the extra cost of unnecessary external trading by adjusting the electric energy production of combined heat and power (CHP) generators and the amounts of both internal and external electric energy trading of the cooperative community. A simulation study is also conducted to validate the proposed mathematical energy management models.
49 CFR 1002.3 - Updating user fees.
2010-10-01
... updating fees. Each fee shall be updated by updating the cost components comprising the fee. Cost... direct labor costs are direct labor costs determined by the cost study set forth in Revision of Fees For... by total office costs for the Offices directly associated with user fee activity. Actual updating of...
Effect of sequential isoproturon pulse exposure on Scenedesmus vacuolatus.
Vallotton, Nathalie; Eggen, Rik Ilda Lambertus; Chèvre, Nathalie
2009-04-01
Aquatic organisms are typically exposed to fluctuating concentrations of herbicides in streams. To assess the effects on algae of repeated peak exposures to the herbicide isoproturon, we subjected the alga Scenedesmus vacuolatus to two sequential pulse exposure scenarios. Effects on growth and on the inhibition of the effective quantum yield of photosystem II (PSII) were measured. In the first scenario, algae were exposed to short, 5-h pulses at high isoproturon concentrations (400 and 1000 microg/l), each followed by a recovery period of 18 h, while the second scenario consisted of 22.5-h pulses at lower concentrations (60 and 120 microg/l), alternating with short recovery periods (1.5 h). In addition, any changes in the sensitivity of the algae to isoproturon following sequential pulses were examined by determining the growth-rate EC50 prior to and following exposure. In both exposure scenarios, we found that algal growth and effective quantum yield were systematically inhibited during the exposures and that these effects were reversible. Sequential pulses of isoproturon could be considered a sequence of independent events. Nevertheless, a consequence of inhibited growth during the repeated exposures is a cumulative decrease in biomass production. Furthermore, in the second scenario, when the sequence of long pulses began to approach a scenario of continuous exposure, a slight increase in the tolerance of the algae to isoproturon was observed. These findings indicate that sequential pulses do affect algae during each pulse exposure, even if the algae recover between exposures. These observations could support an improved risk assessment of fluctuating exposures to reversibly acting herbicides.
International Nuclear Information System (INIS)
Xia Hai-Jiang; Li Ping-Ping; Ke Jian-Hong; Lin Zhen-Quan
2015-01-01
We propose an evolutionary snowdrift game model for heterogeneous systems with two types of agents, in which the inner-directed agents adopt the memory-based updating rule while the copycat-like ones take the unconditional imitation rule; moreover, each agent can change his type to adopt the other updating rule once the number of games he sequentially loses exceeds his upper limit of tolerance. The cooperative behaviors of such heterogeneous systems are then investigated by Monte Carlo simulations. The numerical results show that the equilibrium cooperation frequency and composition, as functions of the cost-to-benefit ratio r, are both of plateau structures with discontinuous steplike jumps, and the number of plateaux varies non-monotonically with the upper limit of tolerance ν_T as well as the initial composition of agents f_a0. Besides, the cooperation frequency and composition depend crucially on the system parameters, including ν_T, f_a0, and r. One intriguing observation is that when the upper limit of tolerance is small, the cooperation frequency is abnormally enhanced with the increase of the cost-to-benefit ratio in the range 0 < r < 1/4. We then probe the relative cooperation frequencies of each type of agent, which are also of plateau structures dependent on the system parameters. Our results may be helpful in understanding the cooperative behaviors of heterogeneous agent systems. (paper)
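A stripped-down version of one ingredient of such models, unconditional imitation in a spatial snowdrift game on a ring, can be sketched as follows. This toy parameterizes the game by a cooperation cost c with benefit 1 (an illustrative stand-in for the paper's cost-to-benefit ratio r) and omits the memory-based rule and tolerance-driven type switching entirely.

```python
import random

random.seed(1)

def payoff(me, other, c):
    # Snowdrift payoffs with benefit 1 and cost c (0 < c < 1): mutual
    # cooperators share the cost, a lone cooperator pays it all.
    if me and other:
        return 1 - c / 2
    if me:
        return 1 - c        # cooperator exploited by a defector
    if other:
        return 1            # defector free-rides on a cooperator
    return 0

def simulate(n=100, c=0.4, steps=200):
    strat = [random.random() < 0.5 for _ in range(n)]   # True = cooperate
    for _ in range(steps):
        score = [sum(payoff(strat[i], strat[(i + d) % n], c) for d in (-1, 1))
                 for i in range(n)]
        # unconditional imitation: adopt the strategy of the best-scoring
        # agent in the neighborhood (including oneself)
        strat = [strat[max([(i - 1) % n, i, (i + 1) % n],
                           key=lambda j: score[j])]
                 for i in range(n)]
    return sum(strat) / n

coop_frequency = simulate()
```

Sweeping c (or the paper's r) and averaging over many runs is what produces the plateau structures the abstract describes; this sketch only computes a single equilibrium cooperation frequency.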
Sequential-Simultaneous Analysis of Japanese Children's Performance on the Japanese McCarthy.
Ishikuma, Toshinori; And Others
This study explored the hypothesis that Japanese children perform significantly better on simultaneous processing than on sequential processing. The Kaufman Assessment Battery for Children (K-ABC) served as the criterion of the two types of mental processing. Regression equations to predict Sequential and Simultaneous processing from McCarthy…
Lee, Seong-Soo
1982-01-01
Tenth-grade students (n=144) received training on one of three processing methods: coding-mapping (simultaneous), coding only, or decision tree (sequential). The induced simultaneous processing strategy worked optimally under rule learning, while the sequential strategy was difficult to induce and/or not optimal for rule-learning operations.…
Mining of high utility-probability sequential patterns from uncertain databases.
Directory of Open Access Journals (Sweden)
Binbin Zhang
High-utility sequential pattern mining (HUSPM) has become an important issue in the field of data mining. Several HUSPM algorithms have been designed to mine high-utility sequential patterns (HUSPs). They have been applied in several real-life situations, such as consumer behavior analysis and event detection in sensor networks. Nonetheless, most studies on HUSPM have focused on mining HUSPs in precise data. But in real life, uncertainty is an important factor, as data are collected using various types of sensors that are more or less accurate. Hence, data collected in a real-life database can be annotated with existence probabilities. This paper presents a novel pattern mining framework called high utility-probability sequential pattern mining (HUPSPM) for mining high utility-probability sequential patterns (HUPSPs) in uncertain sequence databases. A baseline algorithm with three optional pruning strategies is presented to mine HUPSPs. Moreover, to speed up the mining process, a projection mechanism is designed to create a database projection for each processed sequence, which is smaller than the original database. Thus, the number of unpromising candidates can be greatly reduced, as well as the execution time for mining HUPSPs. Substantial experiments on both real-life and synthetic datasets show that the designed algorithm performs well in terms of runtime, number of candidates, memory usage, and scalability for different minimum utility and minimum probability thresholds.
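The two measures that define a HUPSP can be illustrated with a toy sketch. Everything below (the item utilities, the database, the naive candidate enumeration) is our own illustration, not the paper's algorithm: real HUSPM algorithms use upper bounds and projections to prune instead of enumerating candidates.

```python
# Each record is (sequence of items, existence probability); items carry utilities.
item_utility = {"a": 5, "b": 2, "c": 1}

database = [
    (["a", "b", "c"], 0.9),
    (["a", "c"],      0.7),
    (["b", "a", "b"], 0.5),
]

def contains(seq, pattern):
    """True if pattern occurs in seq as a (not necessarily contiguous) subsequence."""
    it = iter(seq)
    return all(item in it for item in pattern)

def utility_and_probability(pattern):
    # Sum the pattern's utility over supporting sequences, and sum the sequence
    # probabilities as the pattern's expected support (a common uncertain-DB measure).
    util = prob = 0.0
    for seq, p in database:
        if contains(seq, pattern):
            util += sum(item_utility[i] for i in pattern)
            prob += p
    return util, prob

def mine(min_util, min_prob):
    # Naive baseline: test every 2-item candidate; a pattern is kept only if it
    # clears BOTH the minimum utility and the minimum probability thresholds.
    items = sorted(item_utility)
    out = []
    for x in items:
        for y in items:
            u, p = utility_and_probability([x, y])
            if u >= min_util and p >= min_prob:
                out.append(([x, y], u, p))
    return out
```

For example, the pattern ["a", "b"] is supported by the first and third sequences, giving utility 14 and probability 1.4 under these toy definitions.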
Time scale of random sequential adsorption.
Erban, Radek; Chapman, S Jonathan
2007-04-01
A simple multiscale approach to diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) the kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule, provided that the molecule hits the surface, is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface, the RSA simulation time step is related to real physical time. The method is illustrated on a model of chemisorption of reactive polymers onto a virus surface.
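The RSA mechanic itself, one attempt per simulation step with a geometric rejection rule, is easy to sketch. This is a minimal 1-D illustration of ours (the classic "car parking" setting), not the paper's polymer-virus model; all parameters are assumptions.

```python
import random

def rsa_1d(length=100.0, steps=2000, seed=0):
    # One adsorption attempt per simulation step: a unit-length molecule is
    # accepted only if it overlaps no previously adsorbed molecule.
    rng = random.Random(seed)
    placed = []                      # left endpoints of adsorbed unit segments
    coverage = []
    for _ in range(steps):
        x = rng.uniform(0.0, length - 1.0)
        if all(abs(x - y) >= 1.0 for y in placed):   # geometric constraint
            placed.append(x)
        coverage.append(len(placed) / length)        # covered fraction so far
    return coverage

cov = rsa_1d()
# Coverage grows monotonically and saturates; in 1-D the jamming limit is
# Renyi's constant, about 0.7476, approached slowly (~1/t).
```

Mapping the attempt counter to real physical time, via the diffusive flux of molecules onto the surface, is exactly the coupling the paper establishes.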
Directory of Open Access Journals (Sweden)
Chrystiane Maria Veras Pôrto
2010-12-01
Objective: To assess the effect of the aquatic environment as an occupational therapy setting on the development of the body scheme of a child with Down Syndrome, considering the therapeutic properties of water. Description of the case: An interventionist study with a qualitative, descriptive approach, conducted in an adapted pool of the Núcleo de Atenção Médica Integrada (NAMI) of Fortaleza University (UNIFOR), Ceará, from March to May 2005. The subject of the study was a female child, aged 10 years, diagnosed with Down Syndrome. Data were collected using an interview guide for anamnesis and an evaluation form for psychomotor development, together with a field diary to record clinical observations during the sessions. This information was organized and analyzed on the basis of the occupational therapists' clinical reasoning and then described as a case study. We observed an evolution in the development of skills related to the body scheme, such as the perception of fine parts of her own body, as well as of large parts of someone else's body, and the imitation of positions, ending with more active participation in activities of daily living. Final considerations: We verified the effectiveness of occupational therapy activities conducted in an aquatic environment for the development of the body scheme of the child in this study. This may be useful for further research on the subject, whose literature is scarce, and contribute to the ongoing updating of occupational therapy practices.
Sequential voluntary cough and aspiration or aspiration risk in Parkinson's disease.
Hegland, Karen Wheeler; Okun, Michael S; Troche, Michelle S
2014-08-01
Disordered swallowing, or dysphagia, is almost always present to some degree in people with Parkinson's disease (PD), either causing aspiration or greatly increasing the risk for aspiration during swallowing. This likely contributes to aspiration pneumonia, a leading cause of death in this patient population. Effective airway protection is dependent upon multiple behaviors, including cough and swallowing. Single voluntary cough function is disordered in people with PD and dysphagia. However, the appropriate response to aspirate material is more than one cough, that is, sequential cough. The goal of this study was to examine voluntary sequential coughing in people with PD, with and without dysphagia. Forty adults diagnosed with idiopathic PD produced two trials of sequential voluntary cough. The cough airflows were obtained using a pneumotachograph and facemask and were subsequently digitized and recorded. All participants received a modified barium swallow study as part of their clinical care, and the worst penetration-aspiration score observed was used to determine whether the patient had dysphagia. There were significant differences in the compression phase duration, peak expiratory flow rates, and amount of air expired during the sequential coughs produced by participants with and without dysphagia. The presence of dysphagia in people with PD is associated with disordered cough function. Sequential cough, which is important in removing aspirate material from large- and smaller-diameter airways, is also impaired in people with PD and dysphagia compared with those without dysphagia. There may be common neuroanatomical substrates for cough and swallowing impairment in PD leading to the co-occurrence of these dysfunctions.
Random sequential adsorption of cubes
Cieśla, Michał; Kubala, Piotr
2018-01-01
Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
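For the simplest of the orientation models, axis-aligned cubes, the RSA loop and the cube-cube intersection test reduce to interval checks per axis. The sketch below is our own construction under that aligned assumption (rotated cubes would need a separating-axis test instead); edge length and attempt count are arbitrary choices.

```python
import random

def overlap(c1, c2, a):
    # Axis-aligned cubes of edge a, given by their centers, intersect iff the
    # center separation is below a along every axis.
    return all(abs(p - q) < a for p, q in zip(c1, c2))

def rsa_cubes(edge=0.1, attempts=5000, seed=0):
    # Random sequential adsorption: propose a random center inside the unit box,
    # accept only if the new cube intersects no previously placed cube.
    rng = random.Random(seed)
    centers = []
    for _ in range(attempts):
        c = tuple(rng.uniform(edge / 2, 1 - edge / 2) for _ in range(3))
        if all(not overlap(c, other, edge) for other in centers):
            centers.append(c)
    return centers

packing = rsa_cubes()
fraction = len(packing) * 0.1 ** 3   # rough estimate of the packing fraction
```

The saturated packing fraction and the kinetics of its growth (how `fraction` evolves with the attempt counter) are the quantities analyzed in the study.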
International Nuclear Information System (INIS)
Balsara, Dinshaw S.; Amano, Takanobu; Garain, Sudip; Kim, Jinho
2016-01-01
always divergence-free. This collocation also ensures that electromagnetic radiation that is propagating in a vacuum has both electric and magnetic fields that are exactly divergence-free. Coupled relativistic fluid dynamic equations are solved for the positively and negatively charged fluids. The fluids' numerical fluxes also provide a self-consistent current density for the update of the electric field. Our reconstruction strategy ensures that fluid velocities always remain sub-luminal. Our third innovation consists of an efficient design for several popular IMEX schemes so that they provide strong coupling between the finite-volume-based fluid solver and the electromagnetic fields at high order. This innovation makes it possible to efficiently utilize high order IMEX time update methods for stiff source terms in the update of high order finite-volume methods for hyperbolic conservation laws. We also show that this very general innovation should extend seamlessly to Runge–Kutta discontinuous Galerkin methods. The IMEX schemes enable us to use large CFL numbers even in the presence of stiff source terms. Several accuracy analyses are presented showing that our method meets its design accuracy in the MHD limit as well as in the limit of electromagnetic wave propagation. Several stringent test problems are also presented. We also present a relativistic version of the GEM problem, which shows that our algorithm can successfully adapt to challenging problems in high energy astrophysics.
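The role the IMEX schemes play here, keeping the step stable when the source terms are stiff while the flux terms are treated explicitly, can be shown on a scalar model problem. This is a hedged first-order sketch of ours, not the paper's high-order finite-volume IMEX machinery; the model ODE and all parameters are assumptions.

```python
import math

# Model stiff ODE: u' = cos(t) - k*(u - sin(t)), exact solution u = sin(t) for
# u(0) = 0. The stiff relaxation term is treated implicitly, the non-stiff term
# explicitly, so the step remains stable even for k*dt >> 1.
def imex_euler(k=1000.0, dt=0.01, t_end=1.0):
    u, t = 0.0, 0.0
    while t < t_end - 1e-12:
        # Solve u_{n+1} = u_n + dt*cos(t_n) - dt*k*(u_{n+1} - sin(t_{n+1})) for u_{n+1}:
        t_new = t + dt
        u = (u + dt * math.cos(t) + dt * k * math.sin(t_new)) / (1.0 + dt * k)
        t = t_new
    return u

u_end = imex_euler()   # tracks sin(t) closely despite k*dt = 10
```

An explicit Euler step would require dt < 2/k here; the implicit treatment of the stiff source removes that restriction, which is what lets the paper's schemes run at large CFL numbers.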
Wakamiya, Eiji; Okumura, Tomohito; Nakanishi, Makoto; Takeshita, Takashi; Mizuta, Mekumi; Kurimoto, Naoko; Tamai, Hiroshi
2011-06-01
To clarify whether rapid naming ability itself is a main underpinning factor of rapid automatized naming tests (RAN), and how deep an influence the discrete decoding process has on reading, we administered discrete naming tasks and discrete hiragana reading tasks, as well as sequential naming tasks and sequential hiragana reading tasks, to 38 Japanese schoolchildren with reading difficulty. There were high correlations between both discrete and sequential hiragana reading and sentence reading, suggesting that some mechanism which automatizes hiragana reading makes sentence reading fluent. In the object and color tasks, there were moderate correlations between sentence reading and sequential naming, and between sequential naming and discrete naming, but no correlation was found between the reading tasks and the discrete naming tasks. The influence of rapid naming ability for objects and colors upon reading thus seems relatively small, and multi-item processing may be at work in relation to these. In contrast, in the digit naming task there was a moderate correlation between sentence reading and discrete naming, while no correlation was seen between sequential naming and discrete naming; there was a moderate correlation between the reading tasks and the sequential digit naming tasks. Rapid digit naming ability has a more direct effect on reading, while its effect on RAN is relatively limited. The degree to which rapid naming ability influences RAN and reading seems to vary according to the kind of stimuli used. An assumption about the components of RAN that influence reading is discussed in the context of both sequential processing and discrete naming speed. Copyright © 2010 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
STABILIZED SEQUENTIAL QUADRATIC PROGRAMMING: A SURVEY
Directory of Open Access Journals (Sweden)
Damián Fernández
2014-12-01
We review the motivation for, the current state of the art in convergence results for, and some open questions concerning the stabilized version of the sequential quadratic programming algorithm for constrained optimization. We also discuss the tools required for its local convergence analysis, globalization challenges, and extensions of the method to more general variational problems.
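For orientation, the object surveyed is commonly stated as follows (notation ours; see the surveyed literature for the precise assumptions). For the equality-constrained problem $\min_x f(x)$ s.t. $h(x)=0$, given a current iterate $(x_k,\lambda_k)$ and a stabilization parameter $\sigma_k>0$, the stabilized SQP subproblem is the min-max quadratic program

```latex
\min_{d}\;\max_{\lambda}\;\;
  \nabla f(x_k)^{\top} d
  + \tfrac{1}{2}\, d^{\top}\, \nabla^{2}_{xx} L(x_k,\lambda_k)\, d
  + \lambda^{\top}\bigl(h(x_k) + \nabla h(x_k)^{\top} d\bigr)
  - \tfrac{\sigma_k}{2}\,\bigl\|\lambda - \lambda_k\bigr\|^{2}
```

where $L$ is the Lagrangian. Taking $\sigma_k = 0$ recovers the ordinary SQP subproblem; choosing $\sigma_k$ proportional to the KKT residual is what yields fast local convergence without constraint qualifications, the central theme of the survey.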
Terminating Sequential Delphi Survey Data Collection
Kalaian, Sema A.; Kasim, Rafa M.
2012-01-01
The Delphi survey technique is an iterative mail or electronic (e-mail or web-based) survey method used to obtain agreement or consensus among a group of experts in a specific field on a particular issue through multiple well-designed, systematic, sequential rounds of survey administration. Each of the multiple rounds of the Delphi survey…
Concepts of incremental updating and versioning
CSIR Research Space (South Africa)
Cooper, Antony K
2004-07-01
This paper reviews the work undertaken recently by the Working Group (WG). The WG was voted to become a Commission by the General Assembly held at the 21st ICC in Durban, South Africa. The basic problem being addressed by the Commission is that a user compiles their data base... or election). Historically, updates have been provided in bulk, with the new data set replacing the old one. Users could: ignore the update (if it is not significant enough), manually (and selectively) update their data base, or accept the whole update...
Multiuser switched diversity scheduling schemes
Shaqfeh, Mohammad; Alnuweiri, Hussein M.; Alouini, Mohamed-Slim
2012-01-01
Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that a slight reduction in the prospective multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched-diversity systems and compare it with the rate region of full-feedback multiuser diversity systems. We also propose a novel proportional fair multiuser switched-based scheduling scheme and demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz of the ultimate network capacity of full-feedback systems in Rayleigh fading conditions. © 2012 IEEE.
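The threshold-based, ordered mechanism can be illustrated with a toy simulation. This comparison is our own construction under simple assumptions (i.i.d. Rayleigh fading, so exponential SNRs; a single fixed threshold; fall back to the last probed user), not the schemes optimized in the paper.

```python
import math
import random

def simulate(n_users=8, threshold=1.0, slots=20000, seed=0):
    rng = random.Random(seed)
    rate_switched = rate_full = 0.0
    for _ in range(slots):
        snr = [rng.expovariate(1.0) for _ in range(n_users)]
        # Switched diversity: probe users in a fixed order and schedule the first
        # whose SNR exceeds the threshold; fall back to the last probed user.
        pick = next((s for s in snr if s >= threshold), snr[-1])
        rate_switched += math.log2(1 + pick)
        # Full feedback: every user reports its SNR, scheduler picks the best.
        rate_full += math.log2(1 + max(snr))
    return rate_switched / slots, rate_full / slots

r_sw, r_full = simulate()
```

The switched scheme never beats full feedback on a given fading realization, but it needs roughly one threshold-comparison feedback message per slot instead of one report per user, which is the trade-off the abstract quantifies.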
HR Department
2007-01-01
As announced at the meeting of the Standing Concertation Committee (SCC) on 26 June 2007 and in Bulletin No. 28/2007, the existing Saved Leave Scheme will be discontinued as of 31 December 2007. Staff participating in the Scheme will shortly receive a contract amendment stipulating the end of financial contributions compensated by saved leave. Leave already accumulated on saved leave accounts can continue to be taken in accordance with the rules applicable to the current scheme. A new system of saved leave will enter into force on 1 January 2008 and will be the subject of a new implementation procedure entitled "Short-term saved leave scheme" dated 1 January 2008. At its meeting on 4 December 2007, the SCC agreed to recommend the Director-General to approve this procedure, which can be consulted on the HR Department’s website at the following address: https://cern.ch/hr-services/services-Ben/sls_shortterm.asp All staff wishing to participate in the new scheme a...
Sequential function approximation on arbitrarily distributed point sets
Wu, Kailiang; Xiu, Dongbin
2018-02-01
We present a randomized iterative method for approximating an unknown function sequentially on an arbitrary point set. The method is based on a recently developed sequential approximation (SA) method, which approximates a target function using one data point at each step and avoids matrix operations. The focus of this paper is on data sets with a highly irregular distribution of points. We present a nearest neighbor replacement (NNR) algorithm, which allows one to sample irregular data sets in a near-optimal manner. We provide mathematical justification and error estimates for the NNR algorithm. Extensive numerical examples are also presented to demonstrate that the NNR algorithm delivers satisfactory convergence for the SA method on data sets with high irregularity in their point distributions.
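The one-point-at-a-time, matrix-free flavor of such sequential approximation can be sketched with a randomized Kaczmarz-type coefficient update. This is our own illustration of the general idea under a small polynomial basis; the paper's SA and NNR algorithms differ in their details (in particular, in how points are sampled).

```python
import random

def features(x):
    return [1.0, x, x * x]          # small polynomial basis (our choice)

def sa_fit(points, values, sweeps=2000, seed=0):
    # Visit one random data point per step and project the coefficient vector
    # onto that single interpolation equation; no matrix is ever assembled.
    rng = random.Random(seed)
    c = [0.0, 0.0, 0.0]
    for _ in range(sweeps * len(points)):
        i = rng.randrange(len(points))
        phi = features(points[i])
        resid = values[i] - sum(a * b for a, b in zip(c, phi))
        scale = resid / sum(p * p for p in phi)
        c = [a + scale * p for a, p in zip(c, phi)]
    return c

xs = [i / 10 for i in range(11)]
ys = [2 - x + 3 * x * x for x in xs]   # target f(x) = 2 - x + 3x^2
coef = sa_fit(xs, ys)
```

For a consistent system such as this exact-recovery example, the iteration converges to the interpolating coefficients; for irregular point sets, the order in which points are visited matters, which is the gap the NNR algorithm addresses.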
Numerical schemes for explosion hazards
International Nuclear Information System (INIS)
Therme, Nicolas
2015-01-01
In nuclear facilities, internal or external explosions can cause confinement breaches and the release of radioactive materials into the environment. Hence, modeling such phenomena is crucial for safety matters. Blast waves resulting from explosions are modeled by the system of Euler equations for compressible flows, whereas Navier-Stokes equations with reactive source terms and level set techniques are used to simulate the propagation of the flame front during the deflagration phase. The purpose of this thesis is to contribute to the creation of efficient numerical schemes to solve these complex models. The work presented here focuses on two major aspects: first, the development of consistent schemes for the Euler equations, then the buildup of reliable schemes for the front propagation. In both cases, explicit-in-time schemes are used, but we also introduce a pressure correction scheme for the Euler equations. Staggered discretization is used in space. It is based on the internal energy formulation of the Euler system, which ensures its positivity and avoids the tedious discretization of the total energy over staggered grids. A discrete kinetic energy balance is derived from the scheme, and a source term is added in the discrete internal energy balance equation to preserve the exact total energy balance in the limit. High order methods of MUSCL type are used in the discrete convective operators, based solely on the material velocity. They lead to positivity of density and internal energy under CFL conditions. This ensures that the total energy cannot grow, and we can furthermore derive a discrete entropy inequality. Under stability assumptions on the discrete L∞ and BV norms of the scheme's solutions, one can prove that a sequence of converging discrete solutions necessarily converges towards the weak solution of the Euler system; besides, it satisfies a weak entropy inequality in the limit. Concerning the front propagation, we transform the flame front evolution equation (the so called
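The MUSCL ingredient mentioned above can be shown on the simplest possible case. This is a hedged sketch of ours for scalar advection u_t + a u_x = 0 with a > 0, using minmod-limited slopes and upwind fluxes based on the material velocity; the thesis applies the idea to the full staggered Euler discretization, and the grid and parameters here are assumptions.

```python
def minmod(p, q):
    # Slope limiter: zero at extrema, the smaller-magnitude slope otherwise.
    if p > 0 and q > 0:
        return min(p, q)
    if p < 0 and q < 0:
        return max(p, q)
    return 0.0

def muscl_step(u, cfl):
    n = len(u)
    slope = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # Upwind (a > 0): the flux at interface i+1/2 uses the left cell's
    # reconstructed edge value; cfl = a*dt/dx.
    flux = [u[i] + 0.5 * (1 - cfl) * slope[i] for i in range(n)]
    return [u[i] - cfl * (flux[i] - flux[i - 1]) for i in range(n)]

n, cfl = 100, 0.5
u = [1.0 if 40 <= i < 60 else 0.0 for i in range(n)]   # square pulse
total0 = sum(u)
for _ in range(2 * n):          # advect one full period on the periodic grid
    u = muscl_step(u, cfl)
```

Because the update is in conservative flux form, the discrete mass sum(u) is preserved exactly, and the minmod limiting keeps the solution within its initial bounds under the CFL condition, the discrete analogues of the positivity and stability properties discussed above.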
Configural and component processing in simultaneous and sequential lineup procedures.
Flowe, Heather D; Smith, Harriet M J; Karoğlu, Nilda; Onwuegbusi, Tochukwu O; Rai, Lovedeep
2016-01-01
Configural processing supports accurate face recognition, yet it has never been examined within the context of criminal identification lineups. We tested, using the inversion paradigm, the role of configural processing in lineups. Recent research has found that face discrimination accuracy in lineups is better in a simultaneous compared to a sequential lineup procedure. Therefore, we compared configural processing in simultaneous and sequential lineups to examine whether there are differences. We had participants view a crime video, and then they attempted to identify the perpetrator from a simultaneous or sequential lineup. The test faces were presented either upright or inverted, as previous research has shown that inverting test faces disrupts configural processing. The size of the inversion effect for faces was the same across lineup procedures, indicating that configural processing underlies face recognition in both procedures. Discrimination accuracy was comparable across lineup procedures in both the upright and inversion condition. Theoretical implications of the results are discussed.
Energy Technology Data Exchange (ETDEWEB)
Placidi, M.; Jung, J. -Y.; Ratti, A.; Sun, C.
2014-07-25
This paper describes beam distribution schemes adopting a novel implementation based on low amplitude vertical deflections combined with horizontal ones generated by Lambertson-type septum magnets. This scheme offers substantial compactness in the longitudinal layouts of the beam lines and increased flexibility for beam delivery of multiple beam lines on a shot-to-shot basis. Fast kickers (FK) or transverse electric field RF deflectors (RFD) provide the low amplitude deflections. Initially proposed at the Stanford Linear Accelerator Center (SLAC) as tools for beam diagnostics and more recently adopted for multiline beam pattern schemes, RFDs offer repetition capabilities and likely better amplitude reproducibility than FKs, which, in turn, involve more modest costs in both construction and operation. Both solutions represent an ideal approach for the design of compact beam distribution systems, resulting in space and cost savings while preserving flexibility and beam quality.
Neuss, Michael N; Gilmore, Terry R; Belderson, Kristin M; Billett, Amy L; Conti-Kalchik, Tara; Harvey, Brittany E; Hendricks, Carolyn; LeFebvre, Kristine B; Mangu, Pamela B; McNiff, Kristen; Olsen, MiKaela; Schulmeister, Lisa; Von Gehr, Ann; Polovich, Martha
2016-12-01
Purpose To update the ASCO/Oncology Nursing Society (ONS) Chemotherapy Administration Safety Standards and to highlight standards for pediatric oncology. Methods The ASCO/ONS Chemotherapy Administration Safety Standards were first published in 2009 and updated in 2011 to include inpatient settings. A subsequent 2013 revision expanded the standards to include the safe administration and management of oral chemotherapy. A joint ASCO/ONS workshop with stakeholder participation, including that of the Association of Pediatric Hematology Oncology Nurses and American Society of Pediatric Hematology/Oncology, was held on May 12, 2015, to review the 2013 standards. An extensive literature search was subsequently conducted, and public comments on the revised draft standards were solicited. Results The updated 2016 standards presented here include clarification and expansion of existing standards to include pediatric oncology and to introduce new standards: most notably, two-person verification of chemotherapy preparation processes, administration of vinca alkaloids via minibags in facilities in which intrathecal medications are administered, and labeling of medications dispensed from the health care setting to be taken by the patient at home. The standards were reordered and renumbered to align with the sequential processes of chemotherapy prescription, preparation, and administration. Several standards were separated into their respective components for clarity and to facilitate measurement of adherence to a standard. Conclusion As oncology practice has changed, so have chemotherapy administration safety standards. Advances in technology, cancer treatment, and education and training have prompted the need for periodic review and revision of the standards. Additional information is available at http://www.asco.org/chemo-standards .
Real-time projections of cholera outbreaks through data assimilation and rainfall forecasting
Pasetto, Damiano; Finger, Flavio; Rinaldo, Andrea; Bertuzzo, Enrico
2017-10-01
Although treatment for cholera is well-known and cheap, outbreaks in epidemic regions still exact high death tolls mostly due to the unpreparedness of health care infrastructures to face unforeseen emergencies. In this context, mathematical models for the prediction of the evolution of an ongoing outbreak are of paramount importance. Here, we test a real-time forecasting framework that readily integrates new information as soon as available and periodically issues an updated forecast. The spread of cholera is modeled by a spatially-explicit scheme that accounts for the dynamics of susceptible, infected and recovered individuals hosted in different local communities connected through hydrologic and human mobility networks. The framework presents two major innovations for cholera modeling: the use of a data assimilation technique, specifically an ensemble Kalman filter, to update both state variables and parameters based on the observations, and the use of rainfall forecasts to force the model. The exercise of simulating the state of the system and the predictive capabilities of the novel tools, set at the initial phase of the 2010 Haitian cholera outbreak using only information that was available at that time, serves as a benchmark. Our results suggest that the assimilation procedure with the sequential update of the parameters outperforms calibration schemes based on Markov chain Monte Carlo. Moreover, in a forecasting mode the model usefully predicts the spatial incidence of cholera at least one month ahead. The performance decreases for longer time horizons yet allowing sufficient time to plan for deployment of medical supplies and staff, and to evaluate alternative strategies of emergency management.
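The two key ingredients, an ensemble Kalman filter analysis step and the joint update of state variables and parameters, can be sketched in a few lines. This is our own minimal construction for a scalar observed state plus one parameter (state augmentation with perturbed observations), not the paper's spatially-explicit cholera model.

```python
import random

def enkf_update(ensemble, y_obs, obs_var, seed=0):
    # Each member is [state, parameter]; the parameter is appended to the state
    # vector so that observing the state also corrects the parameter through
    # their sample cross-covariance.
    rng = random.Random(seed)
    n = len(ensemble)
    mean = [sum(m[k] for m in ensemble) / n for k in range(2)]
    # Sample covariance of each component with the observed component (index 0).
    cov = [sum((m[k] - mean[k]) * (m[0] - mean[0]) for m in ensemble) / (n - 1)
           for k in range(2)]
    gain = [c / (cov[0] + obs_var) for c in cov]     # Kalman gain per component
    analysis = []
    for m in ensemble:
        y_pert = y_obs + rng.gauss(0.0, obs_var ** 0.5)   # perturbed observation
        innov = y_pert - m[0]
        analysis.append([m[k] + gain[k] * innov for k in range(2)])
    return analysis, gain

random.seed(42)
prior = [[random.gauss(2.0, 1.0), random.gauss(0.5, 0.2)] for _ in range(200)]
posterior, gain = enkf_update(prior, y_obs=3.0, obs_var=0.25)
```

In the paper's framework this analysis step is interleaved with forward runs of the epidemiological model forced by rainfall forecasts, so both the epidemic state and the model parameters are sequentially re-estimated as new case data arrive.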
Directory of Open Access Journals (Sweden)
S. Skachko
2008-12-01
This study focuses on an accurate estimation of ocean circulation via assimilation of satellite measurements of ocean dynamical topography into the global finite-element ocean model (FEOM). The dynamical topography data are derived from a complex analysis of multi-mission altimetry data combined with a referenced earth geoid. The assimilation is split into two parts. First, the mean dynamic topography is adjusted. To this end an adiabatic pressure correction method is used, which reduces model divergence from the real evolution. Second, a sequential assimilation technique is applied to improve the representation of thermodynamical processes by assimilating the time-varying dynamic topography. A method is used whereby the temperature and salinity are updated following the vertical structure of the first baroclinic mode. It is shown that the method leads to a partially successful assimilation approach, reducing the rms difference between the model and data from 16 cm to 2 cm. This improvement of the mean state is accompanied by a significant improvement of temporal variability in our analysis. However, it remains suboptimal, showing a tendency in the forecast phase to return toward a free run without data assimilation. Both the mean difference and the standard deviation of the difference between the forecast and observation data are reduced as a result of assimilation.
Updating Geospatial Data from Large Scale Data Sources
Zhao, R.; Chen, J.; Wang, D.; Shang, Y.; Wang, Z.; Li, X.; Ai, T.
2011-08-01
In the past decades, many geospatial databases have been established at national, regional and municipal levels around the world. It is now widely recognized that keeping these established geospatial databases up to date is critical to their value, so more and more effort has been devoted to their continuous updating. Currently, there are two main types of methods for geospatial database updating: direct updating with remote sensing images or field survey materials, and indirect updating with other updated data, such as newly updated larger-scale data. The former is fundamental, because the update data sources of both methods ultimately derive from field surveying and remote sensing; the latter is often more economical and faster. Therefore, after a larger-scale database is updated, the smaller-scale database should be updated correspondingly in order to keep multi-scale geospatial databases consistent. In this situation, it is very reasonable to apply map generalization technology to the process of geospatial database updating. This is recognized as one of the most promising methods of geospatial database updating, especially in a collaborative updating environment in terms of map scale, i.e., where databases at different scales are produced and maintained separately by organizations at different levels, as in China. This paper focuses on applying digital map generalization to the updating of geospatial databases from larger scales in a collaborative updating environment for SDI. The requirements for applying map generalization to spatial database updating are analyzed first. A brief review of geospatial data updating based on digital map generalization is then given. Based on the requirements analysis and review, we analyze the key factors for implementing the updating of geospatial data from larger scales, including technical
Quantum signature scheme for known quantum messages
International Nuclear Information System (INIS)
Kim, Taewan; Lee, Hyang-Sook
2015-01-01
When we want to sign a quantum message that we create, we can use arbitrated quantum signature schemes, which can sign not only known quantum messages but also unknown ones. However, since arbitrated quantum signature schemes require the help of a trusted arbitrator in each verification of the signature, they are known to be inconvenient in practice. If we consider only known quantum messages, as in the above situation, a quantum signature scheme with a more efficient structure can exist. In this paper, we present a new quantum signature scheme for known quantum messages that works without the help of an arbitrator. Unlike arbitrated quantum signature schemes based on the quantum one-time pad with a symmetric key, our scheme is based on quantum public-key cryptosystems, so the validity of the signature can be verified by the receiver without the help of an arbitrator. Moreover, we show that our scheme provides quantum message integrity, user authentication and non-repudiation of origin, as in digital signature schemes. (paper)
Two-level schemes for the advection equation
Vabishchevich, Petr N.
2018-06-01
The advection equation is the basis for mathematical models of continuum mechanics. In the approximate solution of nonstationary problems it is necessary to inherit the main properties of conservatism and monotonicity of the solution. In this paper, the advection equation is written in symmetric form, where the advection operator is the half-sum of the advection operators in conservative (divergent) and non-conservative (characteristic) forms; the advection operator is then skew-symmetric. Standard finite element approximations in space are used. The standard explicit two-level scheme for the advection equation is absolutely unstable. New conditionally stable regularized schemes are constructed on the basis of the general theory of stability (well-posedness) of operator-difference schemes, and the stability conditions of the explicit Lax-Wendroff scheme are established. Unconditionally stable and conservative schemes are the implicit schemes of second (Crank-Nicolson scheme) and fourth order. A conditionally stable implicit Lax-Wendroff scheme is also constructed. The accuracy of the investigated explicit and implicit two-level schemes for the approximate solution of the advection equation is illustrated by numerical results for a model two-dimensional problem.
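As a concrete illustration of the conservatism and stability properties discussed in this abstract, here is a minimal finite-difference sketch of the explicit Lax-Wendroff scheme for the 1D advection equation u_t + a*u_x = 0 on a periodic grid (the paper itself uses finite element approximations in space; the grid size and Courant number below are illustrative choices):

```python
import math

def lax_wendroff_step(u, c):
    """One explicit Lax-Wendroff step for u_t + a*u_x = 0 on a periodic grid.
    c = a*dt/dx is the Courant number; the scheme is conditionally stable for |c| <= 1."""
    n = len(u)
    return [u[i]
            - 0.5 * c * (u[(i + 1) % n] - u[i - 1])
            + 0.5 * c * c * (u[(i + 1) % n] - 2.0 * u[i] + u[i - 1])
            for i in range(n)]

# Advect a Gaussian profile once around the periodic domain [0, 1).
n = 200
u = [math.exp(-200.0 * (i / n - 0.5) ** 2) for i in range(n)]
u0 = list(u)
c = 0.5                      # Courant number: dt = c*dx/a
for _ in range(2 * n):       # with c = 0.5, 2n steps traverse the domain once
    u = lax_wendroff_step(u, c)
```

With periodic boundaries the spatial differences telescope, so the total mass sum(u) is conserved up to rounding, which mirrors the conservatism property the abstract requires of a good scheme.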
Adrenal vein sampling in primary aldosteronism: concordance of simultaneous vs sequential sampling.
Almarzooqi, Mohamed-Karji; Chagnon, Miguel; Soulez, Gilles; Giroux, Marie-France; Gilbert, Patrick; Oliva, Vincent L; Perreault, Pierre; Bouchard, Louis; Bourdeau, Isabelle; Lacroix, André; Therasse, Eric
2017-02-01
Many investigators believe that basal adrenal venous sampling (AVS) should be done simultaneously, whereas others opt for sequential AVS for simplicity and reduced cost. This study aimed to evaluate the concordance of sequential and simultaneous AVS methods. Between 1989 and 2015, bilateral simultaneous sets of basal AVS were obtained twice within 5 min, in 188 consecutive patients (59 women and 129 men; mean age: 53.4 years). Selectivity was defined by an adrenal-to-peripheral cortisol ratio ≥2, and lateralization was defined as an adrenal aldosterone-to-cortisol ratio ≥2 times that of the contralateral side. Sequential AVS was simulated using right sampling at -5 min (t = -5) and left sampling at 0 min (t = 0). There was no significant difference in mean selectivity ratio (P = 0.12 and P = 0.42 for the right and left sides, respectively) or in mean lateralization ratio (P = 0.93) between t = -5 and t = 0. Kappa for selectivity between the 2 simultaneous AVS sets was 0.71 (95% CI: 0.60-0.82), whereas it was 0.84 (95% CI: 0.76-0.92) and 0.85 (95% CI: 0.77-0.93) between sequential and simultaneous AVS at -5 min and at 0 min, respectively. Kappa for lateralization between the 2 simultaneous AVS sets was 0.84 (95% CI: 0.75-0.93), whereas it was 0.86 (95% CI: 0.78-0.94) and 0.80 (95% CI: 0.71-0.90) between sequential and simultaneous AVS at -5 min and at 0 min, respectively. Concordance between simultaneous and sequential AVS was no different from that between 2 repeated simultaneous AVS in the same patient. Therefore, better diagnostic performance is not a good argument for choosing between AVS methods. © 2017 European Society of Endocrinology.
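The kappa agreement statistics reported above are Cohen's kappa for paired categorical ratings; a minimal sketch of the computation (the ratings below are invented for illustration, not the study's data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two paired ratings."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n              # observed agreement
    labels = set(a) | set(b)
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance agreement
    return (po - pe) / (1.0 - pe)

# Invented example: 1 = lateralized, 0 = not lateralized, for two AVS runs.
run1 = [1, 1, 0, 1, 0, 0, 1, 0]
run2 = [1, 1, 0, 1, 0, 1, 1, 0]
kappa = cohens_kappa(run1, run2)
```

Here observed agreement is 7/8 = 0.875 and chance agreement is 0.5, giving kappa = 0.75, in the same range as the values reported in the abstract.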
Sequential motor skill: cognition, perception and action
Ruitenberg, M.F.L.
2013-01-01
Discrete movement sequences are assumed to be the building blocks of more complex sequential actions that are present in our everyday behavior. The studies presented in this dissertation address the (neuro)cognitive underpinnings of such movement sequences, in particular in relation to the role
Zips : mining compressing sequential patterns in streams
Hoang, T.L.; Calders, T.G.K.; Yang, J.; Mörchen, F.; Fradkin, D.; Chau, D.H.; Vreeken, J.; Leeuwen, van M.; Faloutsos, C.
2013-01-01
We propose a streaming algorithm, based on the minimal description length (MDL) principle, for extracting non-redundant sequential patterns. For static databases, the MDL-based approach, which selects patterns based on their capacity to compress data rather than their frequency, was shown to be
Memon, Amina; Gabbert, Fiona
2003-04-01
Eyewitness research has identified sequential lineup testing as a way of reducing false lineup choices while maintaining accurate identifications. The authors examined the usefulness of this procedure for reducing false choices in older adults. Young and senior witnesses viewed a crime video and were later presented with target-present or target-absent lineups in a simultaneous or sequential format. In addition, some participants received prelineup questions about their memory for a perpetrator's face and about their confidence in their ability to identify the culprit or to correctly reject the lineup. The sequential lineup reduced false choosing rates among young and older adults in target-absent conditions. In target-present conditions, sequential testing significantly reduced the correct identification rate in both age groups.
Yoo, Myung Hoon; Lim, Won Sub; Park, Joo Hyun; Kwon, Joong Keun; Lee, Tae-Hoon; An, Yong-Hwi; Kim, Young-Jin; Kim, Jong Yang; Lim, Hyun Woo; Park, Hong Ju
2016-01-01
Severe-to-profound sudden sensorineural hearing loss (SSNHL) has a poor prognosis. We aimed to compare the efficacy of simultaneous and sequential oral and intratympanic steroids for this condition. Fifty patients with severe-to-profound SSNHL (>70 dB HL) were included from 7 centers. The simultaneous group (27 patients) received oral steroids and intratympanic steroid injections for 2 weeks. The sequential group (23 patients) was treated with oral steroids for 2 weeks and intratympanic steroids for the subsequent 2 weeks. Pure-tone averages (PTA) and word discrimination scores (WDS) were compared before treatment and at 2 weeks and 1 and 2 months after treatment. Treatment outcomes according to the modified American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) criteria were also analyzed. The improvement in PTA and WDS at the 2-week follow-up was 23 ± 21 dB HL and 20 ± 39% in the simultaneous group and 31 ± 29 dB HL and 37 ± 42% in the sequential group; the difference was not statistically significant. Complete or partial recovery at the 2-week follow-up was observed in 26% of the simultaneous group and 30% of the sequential group; this difference was also not significant. The improvement in PTA and WDS at the 2-month follow-up was 40 ± 20 dB HL and 37 ± 35% in the simultaneous group and 41 ± 25 dB HL and 48 ± 41% in the sequential group; the difference was not statistically significant. Complete or partial recovery at the 2-month follow-up was observed in 33% of the simultaneous group and 35% of the sequential group; this difference was also not significant. Seven patients in the sequential group did not need intratympanic steroid injections because of sufficient improvement after oral steroids alone. Simultaneous oral/intratympanic steroid treatment yielded a recovery similar to that produced by sequential treatment. Because the addition of intratympanic steroids can be decided upon based on the improvement after oral steroids, the sequential regimen can be recommended to avoid unnecessary intratympanic steroid injections.
Directory of Open Access Journals (Sweden)
Shigang Zhang
2015-10-01
Sequential fault diagnosis is an approach that realizes fault isolation by executing optimal tests step by step. The strategy used, i.e., the sequential diagnostic strategy, has great influence on diagnostic accuracy and cost. Optimal sequential diagnostic strategy generation is an important step in the construction of a diagnosis system and has been studied extensively in the literature. However, previous algorithms either are designed for single-mode systems or do not consider test placement cost, so they are not suitable for the sequential diagnostic strategy generation problem with test placement cost in multimode systems. Therefore, this problem is studied in this paper. A formulation is presented, and two algorithms are proposed: one realized by system transformation and the other newly designed. Extensive simulations are carried out to test the effectiveness of the algorithms, and a real-world system is also presented. All the results show that both algorithms can solve the diagnostic strategy generation problem and that they have different characteristics.
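A common building block of sequential diagnostic strategy generation is greedy test selection by information gain per unit cost. The sketch below is a generic single-mode illustration of that idea, not the paper's multimode algorithms; the fault priors, test signatures and costs are invented:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

def info_gain(priors, signature):
    """Expected entropy reduction from a test whose outcome for each fault
    is given by signature[fault]."""
    groups = {}
    for fault, p in priors.items():
        groups[signature[fault]] = groups.get(signature[fault], 0.0) + p
    h_after = 0.0
    for outcome, mass in groups.items():
        cond = [priors[f] / mass for f in priors if signature[f] == outcome]
        h_after += mass * entropy(cond)
    return entropy(priors.values()) - h_after

def next_test(priors, tests, costs):
    """Greedy choice: maximize information gain per unit cost (execution and
    placement costs folded into a single cost figure)."""
    return max(tests, key=lambda t: info_gain(priors, tests[t]) / costs[t])

# Invented example: two equally likely faults; tA distinguishes them, tB does not.
priors = {"f1": 0.5, "f2": 0.5}
tests = {"tA": {"f1": 1, "f2": 0}, "tB": {"f1": 1, "f2": 1}}
costs = {"tA": 1.0, "tB": 1.0}
chosen = next_test(priors, tests, costs)
```

Repeating this choice on the posterior after each observed outcome yields a diagnostic tree, which is the kind of strategy the generation algorithms in the abstract optimize globally rather than greedily.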
Zhang, Shigang; Song, Lijun; Zhang, Wei; Hu, Zheng; Yang, Yongmin
2015-01-01
Sequential fault diagnosis is an approach that realizes fault isolation by executing optimal tests step by step. The strategy used, i.e., the sequential diagnostic strategy, has great influence on diagnostic accuracy and cost. Optimal sequential diagnostic strategy generation is an important step in the construction of a diagnosis system and has been studied extensively in the literature. However, previous algorithms either are designed for single-mode systems or do not consider test placement cost, so they are not suitable for the sequential diagnostic strategy generation problem with test placement cost in multimode systems. Therefore, this problem is studied in this paper. A formulation is presented, and two algorithms are proposed: one realized by system transformation and the other newly designed. Extensive simulations are carried out to test the effectiveness of the algorithms, and a real-world system is also presented. All the results show that both algorithms can solve the diagnostic strategy generation problem and that they have different characteristics. PMID:26457709
International Nuclear Information System (INIS)
Okada, Kouji
1983-01-01
Sequential chemotherapy using FT-207, adriamycin and mitomycin C followed by radiotherapy was attempted to achieve effective inhibition of implanted tumors in C57BL/6 black mice bearing YM-12 tumors. The sequential combined chemotherapy was more effective than single-drug chemotherapy or combined chemotherapy with other drugs. The addition of radiotherapy to the sequential combined chemotherapy successfully enhanced the therapeutic effect. (author)
Structural and Functional Impacts of ER Coactivator Sequential Recruitment.
Yi, Ping; Wang, Zhao; Feng, Qin; Chou, Chao-Kai; Pintilie, Grigore D; Shen, Hong; Foulds, Charles E; Fan, Guizhen; Serysheva, Irina; Ludtke, Steven J; Schmid, Michael F; Hung, Mien-Chie; Chiu, Wah; O'Malley, Bert W
2017-09-07
Nuclear receptors recruit multiple coactivators sequentially to activate transcription. This "ordered" recruitment allows different coactivator activities to engage the nuclear receptor complex at different steps of transcription. Estrogen receptor (ER) recruits the steroid receptor coactivator-3 (SRC-3) primary coactivator and the secondary coactivators p300/CBP and CARM1. CARM1 recruitment lags behind the binding of SRC-3 and p300 to ER. Combining cryo-electron microscopy (cryo-EM) structure analysis and biochemical approaches, we demonstrate that there is close crosstalk between early- and late-recruited coactivators. The sequential recruitment of CARM1 not only adds a protein arginine methyltransferase activity to the ER-coactivator complex, it also alters the structural organization of the pre-existing ERE/ERα/SRC-3/p300 complex. It induces a p300 conformational change and significantly increases p300 HAT activity on histone H3K18 residues, which, in turn, promotes CARM1 methylation activity on H3R17 residues to enhance transcriptional activity. This study reveals the structural role of sequential coactivator recruitment and its biochemical consequences in ER-mediated transcription. Copyright © 2017 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Beloshitsky, P.
1992-06-01
A versatile magnet lattice for a tau-charm factory is considered in this report. The main feature of this lattice is that it can be used both for the standard flat-beam scheme and for the beam monochromatization scheme. A detailed description of the lattice is given, and the restrictions that follow from requiring compatibility of the two schemes are discussed.
International Nuclear Information System (INIS)
Yoshida, Toshihiro
1981-01-01
Probabilities of meson production in the sequential decay of Reggeons, which are formed from the projectile and the target in hadron-hadron to Reggeon-Reggeon processes, are investigated. It is assumed that pair creation of heavy quarks and simultaneous creation of two antiquark-quark pairs are negligible. The leading-order terms with respect to the ratio of the creation probability of anti-s s pairs to that of anti-u u (anti-d d) pairs are calculated. The production cross sections in the target fragmentation region are given in terms of the probabilities in the initial decay of the Reggeons and an effect of many-particle production. (author)
[Professor GAO Yuchun's experience on "sequential acupuncture leads to smooth movement of qi"].
Wang, Yanjun; Xing, Xiao; Cui, Linhua
2016-01-01
Professor GAO Yuchun is considered a key successor of the GAO academic school of acupuncture and moxibustion in the Yanzhao region. Professor GAO's clinical experience with "sequential acupuncture" is introduced in detail in this article. In Professor GAO's opinion, an appropriate acupuncture sequence is the key to satisfactory clinical effects during treatment. Based on different acupoints, sequential acupuncture can achieve the aim of qi following the needles and the needles leading qi; based on different symptoms, sequential acupuncture can regulate qi movement; based on different body positions, sequential acupuncture can harmonize qi and blood, reinforcing deficiency and reducing excess. In all, according to differences in disease condition and constitution, and based on accurate acupoint selection and appropriate manipulation, it is essential to capture the nature of the disease and determine the order of acupuncture, which can achieve the aim of regulating qi movement, reinforcing deficiency and reducing excess.
THROUGHPUT ANALYSIS OF EXTENDED ARQ SCHEMES
African Journals Online (AJOL)
PUBLICATIONS1
Various Automatic Repeat Request (ARQ) schemes have been used to combat errors that befall information transmitted in digital communication systems. Such schemes include simple ARQ, mixed-mode ARQ and hybrid ARQ (HARQ). In this study we introduce extended ARQ schemes and derive.
TELEGRAPHS TO INCANDESCENT LAMPS: A SEQUENTIAL PROCESS OF INNOVATION
Directory of Open Access Journals (Sweden)
Laurence J. Malone
2000-01-01
This paper outlines a sequential process of technological innovation in the emergence of the electrical industry in the United States from 1830 to 1880. Successive inventions that realized the commercial possibilities of electricity provided the foundation for an industry in which technical knowledge, invention and diffusion were ultimately consolidated within the managerial structure of new firms. The genesis of the industry is traced, sequentially, through the development of the telegraph, arc light and incandescent lamp. Exploring the origins of the telegraph and incandescent lamp reveals a process in which a series of inventions and firms resulted from successful efforts to use scientific principles to create new commodities and markets.
Results of simultaneous and sequential pediatric liver and kidney transplantation.
Rogers, J; Bueno, J; Shapiro, R; Scantlebury, V; Mazariegos, G; Fung, J; Reyes, J
2001-11-27
The indications for simultaneous and sequential pediatric liver (LTx) and kidney (KTx) transplantation have not been well defined. We herein report the results of our experience with these procedures in children with end-stage liver disease and/or subsequent end-stage renal disease. Between 1984 and 1995, 12 LTx recipients received 15 kidney allografts. Eight simultaneous and seven sequential LTx/KTx were performed. There were six males and six females, with a mean age of 10.9 years (1.5-23.7). One of the eight simultaneous LTx/KTx was part of a multivisceral allograft. Five KTx were performed at varied intervals after successful LTx, one KTx was performed after a previous simultaneous LTx/KTx, and one KTx was performed after previous sequential LTx/KTx. Immunosuppression was with tacrolimus or cyclosporine and steroids. Indications for LTx were oxalosis (four), congenital hepatic fibrosis (two), cystinosis (one), polycystic liver disease (one), A-1-A deficiency (one), Total Parenteral Nutrition (TPN)-related (one), cryptogenic cirrhosis (one), and hepatoblastoma (one). Indications for KTx were oxalosis (four), drug-induced (four), polycystic kidney disease (three), cystinosis (one), and glomerulonephritis (one). With a mean follow-up of 58 months (0.9-130), the overall patient survival rate was 58% (7/12). One-year and 5-year actuarial patient survival rates were 66% and 58%, respectively. Patient survival rates at 1 year after KTx according to United Network of Organ Sharing (liver) status were 100% for status 3, 50% for status 2, and 0% for status 1. The overall renal allograft survival rate was 47%. Actuarial renal allograft survival rates were 53% at 1 and 5 years. The overall hepatic allograft survival rate was equivalent to the overall patient survival rate (58%). Six of seven surviving patients have normal renal allograft function, and one patient has moderate chronic allograft nephropathy. All surviving patients have normal hepatic allograft function. Six
Ponzi scheme diffusion in complex networks
Zhu, Anding; Fu, Peihua; Zhang, Qinghe; Chen, Zhenyue
2017-08-01
Ponzi schemes taking the form of Internet-based financial schemes have been negatively affecting China's economy for the last two years. Because there is currently a lack of modeling research on Ponzi scheme diffusion within social networks, we develop a potential-investor-divestor (PID) model to investigate the diffusion dynamics of Ponzi schemes in both homogeneous and inhomogeneous networks. Our simulation study of artificial and real Facebook social networks shows that the structure of investor networks does indeed affect the characteristics of the dynamics. Both a higher average degree and a power-law degree distribution reduce the critical spreading threshold and speed up the rate of diffusion. A high speed of diffusion is the key to alleviating the interest burden and improving the financial outcomes for the Ponzi scheme operator. The zero-crossing point of the fund-flux function we introduce proves to be a feasible index for reflecting the fast-worsening fiscal instability and predicting the forthcoming collapse. The faster the scheme diffuses, the higher a peak it will reach and the sooner it will collapse. We should keep a vigilant eye on the harm of Ponzi scheme diffusion through modern social networks.
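The peak-then-collapse dynamics described above can be caricatured with an SIR-like compartmental sketch; the equations, rates and initial conditions below are illustrative assumptions, not the paper's exact PID model:

```python
def simulate_pid(beta, gamma, dt=0.01, steps=5000):
    """Euler integration of an SIR-like potential-investor-divestor sketch.
    P: potential investors, I: active investors, D: divestors (fractions of the
    population). beta is an assumed recruitment rate through network contact
    and gamma an assumed divestment rate."""
    P, I, D = 0.999, 0.001, 0.0
    trajectory = [(P, I, D)]
    for _ in range(steps):
        recruit = beta * P * I        # new investors drawn in by contact
        divest = gamma * I            # investors pulling out
        P, I, D = P - dt * recruit, I + dt * (recruit - divest), D + dt * divest
        trajectory.append((P, I, D))
    return trajectory

# Fast-diffusing scheme: high contact rate -> sharp investor peak, then collapse.
traj = simulate_pid(beta=3.0, gamma=1.0)
peak_investors = max(i for _, i, _ in traj)
final_p, final_i, final_d = traj[-1]
```

In this caricature the active-investor compartment rises to a peak and then decays toward zero, reproducing the "higher peak, sooner collapse" qualitative behavior the abstract reports for fast diffusion.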
The Performance-based Funding Scheme of Universities
Directory of Open Access Journals (Sweden)
Juha KETTUNEN
2016-05-01
The purpose of this study is to analyse the effectiveness of the performance-based funding scheme of the Finnish universities that was adopted at the beginning of 2013. The political decision-makers expect the funding scheme to create incentives for the universities to improve performance, but such funding schemes have largely failed in many other countries, primarily because public funding is only a small share of the total funding of universities. This study is interesting because Finnish universities have no tuition fees, unlike in many other countries, and the state allocates funding based on the objectives achieved. The empirical evidence from graduation rates indicates that graduation rates increased when the new scheme was adopted, especially among male students, who have more room for improvement than female students. The new performance-based funding scheme allocates funding according to output-based indicators and limits the scope of strategic planning and the autonomy of the university. The performance-based funding scheme is transformed into the strategy map of the balanced scorecard. The new funding scheme steers universities in many respects but leaves research and teaching skills to the discretion of the universities. The new scheme has also diminished the importance of the performance agreements between the university and the Ministry. The scheme increases the incentives for universities to improve their processes and structures in order to attain as much public funding as possible. It is optimal for the central administration of a university to allocate resources to faculties and other organisational units following the criteria of the performance-based funding scheme. The new funding scheme has made the universities compete with each other, because the total funding is allocated among the universities according to the scheme. There is a tendency that the funding schemes are occasionally
Wang, Jinlong; Feng, Shuo; Wu, Qihui; Zheng, Xueqiang; Xu, Yuhua; Ding, Guoru
2014-12-01
Cognitive radio (CR) is a promising technology that brings about remarkable improvement in spectrum utilization. To tackle the hidden terminal problem, cooperative spectrum sensing (CSS), which benefits from spatial diversity, has been studied extensively. Since CSS is vulnerable to attacks initiated by malicious secondary users (SUs), several secure CSS schemes based on Dempster-Shafer theory have been proposed. However, the existing works only utilize the current difference of SUs, such as the difference in SNR or similarity degree, to evaluate the trustworthiness of each SU. As the current difference is only one-sided and sometimes inaccurate, the statistical information contained in each SU's historical behavior should not be overlooked. In this article, we propose a robust CSS scheme based on Dempster-Shafer theory and trustworthiness degree calculation. It is carried out in four successive steps: basic probability assignment (BPA), trustworthiness degree calculation, selection and adjustment of BPA, and combination by the Dempster-Shafer rule. Our proposed scheme evaluates the trustworthiness degree of SUs from both the current-difference aspect and the historical-behavior aspect and exploits Dempster-Shafer theory's potential to establish a "soft update" approach for reputation value maintenance. It can not only differentiate malicious SUs from honest ones based on their historical behaviors but also reserve the current difference for each SU to achieve better real-time performance. Abundant simulation results have validated that the proposed scheme outperforms the existing ones under the impact of different attack patterns and different numbers of malicious SUs.
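The final combination step of such a scheme uses Dempster's rule; a minimal sketch over the binary frame {H0, H1} follows. The BPAs below are invented for illustration; a real scheme would derive them from sensed energies and weight them by the trustworthiness degrees described above:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments (BPAs)
    whose focal elements are frozensets over a common frame of discernment."""
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb              # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # Normalize away the conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Invented BPAs from two secondary users over the frame {H0, H1}
# (H1: primary user present). Mass on the full frame encodes uncertainty.
H0, H1, FRAME = frozenset({"H0"}), frozenset({"H1"}), frozenset({"H0", "H1"})
m1 = {H1: 0.6, FRAME: 0.4}
m2 = {H1: 0.5, H0: 0.3, FRAME: 0.2}
fused = dempster_combine(m1, m2)
```

The fused BPA concentrates mass on H1 because both users lean that way; the conflicting mass (m1 on H1 times m2 on H0) is discarded and the rest renormalized.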
Energy Technology Data Exchange (ETDEWEB)
Wang, Songlin; Matsuda, Isamu; Long, Fei; Ishii, Yoshitaka, E-mail: yishii@uic.edu [University of Illinois at Chicago, Department of Chemistry (United States)
2016-02-15
This study demonstrates a novel spectral editing technique for protein solid-state NMR (SSNMR) to simplify the spectrum drastically and to reduce the ambiguity of protein main-chain signal assignments under fast magic-angle-spinning (MAS) conditions over a wide frequency range of 40-80 kHz. The approach, termed HIGHLIGHT (Wang et al., Chem Comm 51:15055-15058, 2015), combines a reverse ¹³C, ¹⁵N isotope labeling strategy with selective signal quenching using the frequency-selective REDOR pulse sequence under fast MAS. The scheme allows one to selectively observe the signals of "highlighted" labeled amino-acid residues that precede or follow unlabeled residues, by selectively quenching ¹³CO or ¹⁵N signals for a pair of consecutively labeled residues through recoupling of ¹³CO-¹⁵N dipolar couplings. Our numerical simulation results showed that the scheme yielded only ~15% loss of signal for the highlighted residues while quenching as much as ~90% of the signal for non-highlighted residues. For lysine-reverse-labeled micro-crystalline GB1 protein, the 2D ¹⁵N/¹³Cα correlation and 2D ¹³Cα/¹³CO correlation SSNMR spectra obtained by the HIGHLIGHT approach yielded signals only for the six residues following and preceding the unlabeled lysine residues, respectively. The experimental dephasing curves agreed reasonably well with the corresponding simulation results for highlighted and quenched residues at spinning speeds of 40 and 60 kHz. The compatibility of the HIGHLIGHT approach with fast MAS allows for sensitivity enhancement by paramagnetic assisted data collection (PACC) and ¹H detection. We also discuss how the HIGHLIGHT approach facilitates signal assignment using ¹³C-detected 3D SSNMR, by demonstrating full sequential assignments of lysine-reverse-labeled micro-crystalline GB1 protein (~300 nmol), for which data collection required only 11 h. The HIGHLIGHT approach offers valuable
A Classification Scheme for Literary Characters
Directory of Open Access Journals (Sweden)
Matthew Berry
2017-10-01
There is no established classification scheme for literary characters in narrative theory, short of generic categories like protagonist vs. antagonist or round vs. flat. This is so despite the ubiquity of stock characters that recur across media, cultures, and historical time periods. We present here a proposal for a systematic psychological scheme for classifying characters from the literary and dramatic fields, based on a modification of the Thomas-Kilmann (TK) Conflict Mode Instrument used in applied studies of personality. The TK scheme classifies personality along the two orthogonal dimensions of assertiveness and cooperativeness. To examine the validity of a modified version of this scheme, we had 142 participants provide personality ratings for 40 characters using two of the Big Five personality traits as well as assertiveness and cooperativeness from the TK scheme. The results showed that assertiveness and cooperativeness were orthogonal dimensions, thereby supporting the validity of using a modified version of the TK two-dimensional scheme for classifying characters.
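A classifier along these lines can be sketched by mapping normalized assertiveness and cooperativeness scores to the five Thomas-Kilmann conflict modes; the middle band and split thresholds below are illustrative assumptions, not values from the study:

```python
def tk_mode(assertiveness, cooperativeness):
    """Map normalized [0, 1] assertiveness/cooperativeness scores to the five
    Thomas-Kilmann conflict modes. The 0.4-0.6 middle band for 'compromising'
    and the 0.5 split are illustrative thresholds, not values from the study."""
    if 0.4 <= assertiveness <= 0.6 and 0.4 <= cooperativeness <= 0.6:
        return "compromising"                     # middling on both dimensions
    if assertiveness >= 0.5:
        return "collaborating" if cooperativeness >= 0.5 else "competing"
    return "accommodating" if cooperativeness >= 0.5 else "avoiding"

# Stock-character caricatures (scores invented for illustration).
villain = tk_mode(0.9, 0.1)       # highly assertive, uncooperative
mediator = tk_mode(0.5, 0.5)      # middling on both dimensions
```

Because the two dimensions were found to be orthogonal, such a quadrant-plus-center mapping partitions the rating space without the dimensions confounding each other.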