#### Sample records for sequential unconstrained minimization

1. Sequential unconstrained minimization algorithms for constrained optimization

International Nuclear Information System (INIS)

Byrne, Charles

2008-01-01

The problem of minimizing a function f(x): R^J → R, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the kth step we minimize the function G_k(x) = f(x) + g_k(x) to obtain x^k. The auxiliary functions g_k(x): D ⊆ R^J → R_+ are nonnegative on the set D, each x^k is assumed to lie within D, and the objective is to minimize the continuous function f: R^J → R over x in the set C = cl(D), the closure of D. We assume that such minimizers exist, and denote one such by x̂. We assume that the functions g_k(x) satisfy the inequalities 0 ≤ g_k(x) ≤ G_{k-1}(x) − G_{k-1}(x^{k-1}), for k = 2, 3, .... Using this assumption, we show that the sequence {f(x^k)} is decreasing and converges to f(x̂). If the restriction of f(x) to D has bounded level sets, which happens if x̂ is unique and f(x) is closed, proper and convex, then the sequence {x^k} is bounded and f(x*) = f(x̂) for any cluster point x*. Therefore, if x̂ is unique, x* = x̂ and {x^k} → x̂. When x̂ is not unique, convergence can still be obtained in particular cases. SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton-Raphson method. The proof techniques used for SUMMA can be extended to obtain related results.
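As a concrete illustration of the SUMMA framework, the sketch below instantiates the barrier-method special case mentioned in the abstract: minimizing f(x) = (x + 1)^2 over C = [0, ∞) by solving the unconstrained problems G_k(x) = f(x) + (1/k)(−log x) on D = (0, ∞). The objective and barrier are invented toy choices, not examples from the paper.

```python
import math

def f(x):
    # objective to minimize over C = [0, inf); its unconstrained minimizer
    # x = -1 lies outside the constraint set
    return (x + 1.0) ** 2

def barrier_step(k):
    # Minimize G_k(x) = f(x) + (1/k) * (-log x) over D = (0, inf).
    # Setting G_k'(x) = 2(x + 1) - 1/(k x) = 0 gives 2k x^2 + 2k x - 1 = 0,
    # whose positive root is:
    return (-1.0 + math.sqrt(1.0 + 2.0 / k)) / 2.0

xs = [barrier_step(k) for k in range(1, 200)]
# the iterates x_k drift toward the constrained minimizer x-hat = 0, and
# f(x_k) decreases monotonically toward f(x-hat) = 1
```

Here each subproblem has a closed-form minimizer; in general each G_k would itself be minimized iteratively.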

2. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

KAUST Repository

Abediseid, Walid

2013-04-04

In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low-complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter, the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

3. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

KAUST Repository

Abediseid, Walid

2012-10-01

In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low-complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter, the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

4. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

KAUST Repository

Abediseid, Walid; Alouini, Mohamed-Slim

2012-01-01

In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low-complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter, the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

5. A Modified Limited-Memory BNS Method for Unconstrained Minimization Based on the Conjugate Directions Idea

Czech Academy of Sciences Publication Activity Database

2015-01-01

Vol. 30, No. 3 (2015), pp. 616-633. ISSN 1055-6788. R&D Projects: GA ČR GA13-06684S. Institutional support: RVO:67985807. Keywords: unconstrained minimization * variable metric methods * limited-memory methods * the BFGS update * conjugate directions * numerical results. Subject RIV: BA - General Mathematics. Impact factor: 0.841, year: 2015

6. Performance-complexity tradeoff in sequential decoding for the unconstrained AWGN channel

KAUST Repository

Abediseid, Walid

2013-06-01

In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low-complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter, the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity. © 2013 IEEE.

7. Properties of the block BFGS update and its application to the limited-memory block BNS method for unconstrained minimization

Czech Academy of Sciences Publication Activity Database

Online: 02 April 2018. ISSN 1017-1398. R&D Projects: GA ČR GA13-06684S. Institutional support: RVO:67985807. Keywords: unconstrained minimization * block variable metric methods * limited-memory methods * BFGS update * global convergence * numerical results. Subject RIV: BA - General Mathematics. OECD field: Applied mathematics. Impact factor: 1.241, year: 2016

8. A detailed survey of numerical methods for unconstrained minimization. Pt. 1

International Nuclear Information System (INIS)

Mika, K.; Chaves, T.

1980-01-01

A detailed description of numerical methods for unconstrained minimization is presented. This first part surveys in particular conjugate direction and gradient methods, whereas variable metric methods will be the subject of the second part. Among the results of special interest we quote the following. The conjugate direction methods of Powell, Zangwill and Sutti can be best interpreted if the Smith approach is adopted. The conditions for quadratic termination of Powell's first procedure are analyzed. Numerical results based on nonlinear least-squares problems are presented for the following conjugate direction codes: VA04AD from the Harwell Subroutine Library and ZXPOW from IMSL, both implementations of Powell's second procedure; DFMND from IBM-SILMATH (Zangwill's method); and Brent's algorithm PRAXIS. VA04AD turns out to be superior in all cases; PRAXIS improves for high-dimensional problems. All codes clearly exhibit superlinear convergence. Akaike's result for the method of steepest descent is derived directly from a set of nonlinear recurrence relations. Numerical results obtained with the highly ill-conditioned Hilbert function confirm the theoretical predictions. Several properties of the conjugate gradient method are presented and a new derivation of the equivalence of steepest-descent partan and the CG method is given. A comparison of numerical results from the CG codes VA08AD (Fletcher-Reeves), DFMCG (the SSP version of the Fletcher-Reeves algorithm) and VA14AD (Powell's implementation of the Polak-Ribiere formula) reveals that VA14AD is clearly superior in all cases, but that the convergence rate of these codes is only weakly superlinear, such that high-accuracy solutions require extremely large numbers of function calls. (orig.)
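The CG codes surveyed above are library implementations (VA08AD, DFMCG, VA14AD); as a self-contained illustration of the Fletcher-Reeves recursion they share, here is a minimal sketch on a small quadratic, where CG with exact line search coincides with classical linear CG. The 2x2 test matrix is a made-up example, not one of the survey's test problems.

```python
def cg_fletcher_reeves(A, b, x0, iters=None):
    # Fletcher-Reeves conjugate gradients on f(x) = 0.5 x^T A x - b^T x with an
    # exact line search; on a quadratic this coincides with classical linear CG
    # and (in exact arithmetic) terminates in at most n steps for SPD A.
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = list(x0)
    g = [ai - bi for ai, bi in zip(matvec(x), b)]         # gradient A x - b
    d = [-gi for gi in g]
    for _ in range(iters if iters is not None else n):
        Ad = matvec(d)
        d_ad = sum(di * adi for di, adi in zip(d, Ad))
        if d_ad <= 0.0:
            break                                         # A not SPD along d
        t = -sum(gi * di for gi, di in zip(g, d)) / d_ad  # exact step length
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = [gi + t * adi for gi, adi in zip(g, Ad)]
        beta = sum(gn * gn for gn in g_new) / sum(gi * gi for gi in g)  # FR
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# small ill-conditioned SPD system; the exact solution is x = (1, 1)
A = [[2.0, 0.0], [0.0, 20.0]]
b = [2.0, 20.0]
x = cg_fletcher_reeves(A, b, [0.0, 0.0])
```

For non-quadratic objectives the exact step would be replaced by a line search, which is where the weakly superlinear behavior discussed in the survey arises.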

9. Recent Result from E821 Experiment on Muon g-2 and Unconstrained Minimal Supersymmetric Standard Model

CERN Document Server

Komine, S; Yamaguchi, M; Komine, Shinji; Moroi, Takeo; Yamaguchi, Masahiro

2001-01-01

Recently, the E821 experiment at the Brookhaven National Laboratory announced the latest result of their muon g-2 measurement, which is about 2.6σ away from the standard model prediction. Taking this result seriously, we examine the possibility of explaining this discrepancy by the supersymmetric contribution. Our analysis is performed in the framework of the unconstrained supersymmetric standard model, which has seven free parameters relevant to the muon g-2. We found that, in the case of large tan β, sparticle masses are allowed to be large in the region where the SUSY contribution to the muon g-2 is large enough, and hence the conventional SUSY search may fail even at the LHC. On the contrary, to explain the discrepancy in the case of small tan β, we found that (i) sleptons and SU(2)_L gauginos should be light, and (ii) negative searches for the Higgs boson severely constrain the model in the framework of the mSUGRA and gauge-mediated models.

10. Unconstrained Optimization

DEFF Research Database (Denmark)

Frandsen, P. E.; Jonasson, K.; Nielsen, Hans Bruun

1999-01-01

This lecture note is intended for use in the course 04212 Optimization and Data Fitting at the Technical University of Denmark. It covers about 25% of the curriculum. Hopefully, the note may be useful also to interested persons not participating in that course. The aim of the note is to give an introduction to algorithms for unconstrained optimization. We present Conjugate Gradient, Damped Newton and Quasi-Newton methods together with the relevant theoretical background. The reader is assumed to be familiar with algorithms for solving linear and nonlinear systems of equations, at a level corresponding...

11. Sequential changes from minimal pancreatic inflammation to advanced alcoholic pancreatitis.

Science.gov (United States)

Noronha, M; Dreiling, D A; Bordalo, O

1983-11-01

A correlation of several clinical parameters and the morphological alterations observed in chronic alcoholics with and without pancreatitis is presented. Three groups of patients were studied: asymptomatic chronic alcoholics (24); non-alcoholic controls (10); and cases with advanced chronic pancreatitis (6). Clinical, biochemical and functional studies were performed. Morphological studies were made on surgical biopsy specimens by light and electron microscopy. The results of this study showed: 1) fat accumulates within pancreatic acinar cells in alcoholics drinking more than 80 g of ethanol per day; 2) the ultrastructural changes found in acinar cells of the alcoholics are similar to those described for liver cells; 3) the alterations found in alcoholics without pancreatitis are also observed in those with advanced chronic pancreatitis. An attempt was made to correlate the sequential changes in the histopathology of alcoholic pancreatic disease with the clinical picture and secretory patterns. According to these observations, admitting the ultrastructural similarities between the liver and the pancreas and the recently demonstrated abnormalities of lipid metabolism in pancreatic cells in experimental animal research, the authors postulate a toxic-metabolic mechanism as a likely hypothesis for the pathogenesis of chronic alcoholic inflammation of the pancreas.

12. Performance Analysis of Video Transmission Using Sequential Distortion Minimization Method for Digital Video Broadcasting Terrestrial

Directory of Open Access Journals (Sweden)

Novita Astin

2016-12-01

This paper presents the transmission of a Digital Video Broadcasting system with streaming video at 640x480 resolution under different IQ rates and modulations. In video transmission, distortion often occurs, so the received video has bad quality. Key-frame selection algorithms are flexible to changes in the video, but these methods omit the temporal information of a video sequence. To minimize distortion between the original video and the received video, we added a methodology using a sequential distortion minimization algorithm. Its aim was to sequentially construct a corrected video of higher quality than the received one, without significant loss of content relative to the original. The reliability of video transmission was observed based on a constellation diagram, with the best result at an IQ rate of 2 MHz and 8-QAM modulation. Video transmission was also investigated with and without SEDIM (the Sequential Distortion Minimization method). The experimental results showed that the PSNR (Peak Signal-to-Noise Ratio) average of video transmission using SEDIM increased from 19.855 dB to 48.386 dB, and the SSIM (Structural Similarity) average increased by 10.49%. The experimental results and comparison of the proposed method showed good performance. A USRP board was used as the RF front-end at 2.2 GHz.
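The PSNR figures quoted above follow the standard definition; a minimal sketch of that computation is given below, assuming 8-bit frames flattened to pixel lists (an illustrative helper, not the paper's evaluation code).

```python
import math

def psnr(original, received, peak=255.0):
    # Peak Signal-to-Noise Ratio (dB) between two equal-length pixel lists:
    # PSNR = 10 * log10(peak^2 / MSE)
    mse = sum((o - r) ** 2 for o, r in zip(original, received)) / len(original)
    if mse == 0.0:
        return float("inf")                 # identical frames
    return 10.0 * math.log10(peak * peak / mse)

clean = [100, 120, 130, 140]
noisy = [101, 119, 131, 139]                # every pixel off by 1, so MSE = 1
value = psnr(clean, noisy)                  # about 48.13 dB for 8-bit peak 255
```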

13. Asymptotically optimum multialternative sequential procedures for discernment of processes minimizing average length of observations

Science.gov (United States)

Fishman, M. M.

1985-01-01

The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
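The multialternative procedures above compare likelihood-ratio statistics against thresholds; the classical two-hypothesis special case is Wald's sequential probability ratio test, sketched here for Bernoulli observations. The hypotheses p0 = 0.3 vs p1 = 0.7 and the error levels are illustrative choices, not values from the paper.

```python
import math

def sprt(samples, p0=0.3, p1=0.7, alpha=0.05, beta=0.05):
    # Wald's sequential probability ratio test for Bernoulli(p0) vs Bernoulli(p1):
    # accumulate the log-likelihood ratio and stop as soon as it leaves
    # (log B, log A), which minimizes the expected number of observations.
    log_a = math.log((1 - beta) / alpha)     # accept-H1 threshold
    log_b = math.log(beta / (1 - alpha))     # accept-H0 threshold
    llr, n = 0.0, 0
    for x in samples:
        n += 1
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= log_a:
            return "H1", n
        if llr <= log_b:
            return "H0", n
    return "undecided", n

decision, n_used = sprt([1, 1, 1, 1, 1, 1])  # a strongly H1-like sample
```

With these settings each success contributes log(7/3) ≈ 0.847 to the statistic, so four successive 1s already cross the upper threshold log 19 ≈ 2.944 and the test stops after only four observations.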

14. Hazardous Traffic Event Detection Using Markov Blanket and Sequential Minimal Optimization (MB-SMO)

Directory of Open Access Journals (Sweden)

Lixin Yan

2016-07-01

The ability to identify hazardous traffic events is already considered one of the most effective solutions for reducing the occurrence of crashes. Only certain particular hazardous traffic events have been studied in previous studies, which were mainly based on dedicated video stream data and GPS data. The objective of this study is twofold: (1) the Markov blanket (MB) algorithm is employed to extract the main factors associated with hazardous traffic events; (2) a model is developed to identify hazardous traffic events using driving characteristics, vehicle trajectory, and vehicle position data. Twenty-two licensed drivers were recruited to carry out a natural driving experiment in Wuhan, China, and multi-sensor information data were collected for different types of traffic events. The results indicated that a vehicle's speed, the standard deviation of speed, the standard deviation of skin conductance, the standard deviation of brake pressure, turn signal, the acceleration of steering, the standard deviation of acceleration, and the acceleration in Z (G) have significant influences on hazardous traffic events. The sequential minimal optimization (SMO) algorithm was adopted to build the identification model, and the accuracy of prediction was higher than 86%. Moreover, compared with other detection algorithms, the MB-SMO algorithm was ranked best in terms of prediction accuracy. These conclusions can provide reference evidence for the development of dangerous-situation warning products and the design of intelligent vehicles.
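The SMO algorithm used for the identification model is Platt's pairwise coordinate update for the SVM dual; below is a simplified (Platt-style) sketch on a toy linearly separable dataset. The data and parameters are invented for illustration and are unrelated to the study's driving data.

```python
import random

def smo_train(X, y, C=1.0, tol=1e-3, max_passes=50):
    # Simplified SMO for a linear-kernel soft-margin SVM dual: repeatedly pick
    # a KKT-violating multiplier alpha_i, pair it with a random alpha_j, and
    # solve the resulting two-variable subproblem analytically.
    n = len(X)
    K = [[sum(a * b for a, b in zip(X[i], X[j])) for j in range(n)] for i in range(n)]
    alpha = [0.0] * n
    b = 0.0
    def decide(i):                      # decision value for training point i
        return sum(alpha[k] * y[k] * K[k][i] for k in range(n)) + b
    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            e_i = decide(i) - y[i]
            if (y[i] * e_i < -tol and alpha[i] < C) or (y[i] * e_i > tol and alpha[i] > 0):
                j = random.choice([k for k in range(n) if k != i])
                e_j = decide(j) - y[j]
                ai, aj = alpha[i], alpha[j]
                if y[i] != y[j]:        # box constraints for the pair
                    lo, hi = max(0.0, aj - ai), min(C, C + aj - ai)
                else:
                    lo, hi = max(0.0, ai + aj - C), min(C, ai + aj)
                if lo == hi:
                    continue
                eta = 2.0 * K[i][j] - K[i][i] - K[j][j]
                if eta >= 0.0:
                    continue
                alpha[j] = min(hi, max(lo, aj - y[j] * (e_i - e_j) / eta))
                if abs(alpha[j] - aj) < 1e-5:
                    continue
                alpha[i] = ai + y[i] * y[j] * (aj - alpha[j])
                b1 = b - e_i - y[i] * (alpha[i] - ai) * K[i][i] - y[j] * (alpha[j] - aj) * K[i][j]
                b2 = b - e_j - y[i] * (alpha[i] - ai) * K[i][j] - y[j] * (alpha[j] - aj) * K[j][j]
                b = b1 if 0 < alpha[i] < C else b2 if 0 < alpha[j] < C else (b1 + b2) / 2.0
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    w = [sum(alpha[k] * y[k] * X[k][dim] for k in range(n)) for dim in range(len(X[0]))]
    return w, b

random.seed(0)
X = [[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]]
y = [1, 1, -1, -1]
w, b = smo_train(X, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1 for x in X]
```

Production systems would use a tuned implementation (the study reports >86% accuracy with SMO); this sketch only shows the two-multiplier update at SMO's core.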

15. Optimization Strategies for Bruch's Membrane Opening Minimum Rim Area Calculation: Sequential versus Simultaneous Minimization.

Science.gov (United States)

Enders, Philip; Adler, Werner; Schaub, Friederike; Hermann, Manuel M; Diestelhorst, Michael; Dietlein, Thomas; Cursiefen, Claus; Heindl, Ludwig M

2017-10-24

To compare a simultaneously optimized continuous minimum rim surface parameter between Bruch's membrane opening (BMO) and the internal limiting membrane to the standard sequential minimization used for calculating the BMO minimum rim area in spectral domain optical coherence tomography (SD-OCT). In this case-control, cross-sectional study, 704 eyes of 445 participants underwent SD-OCT of the optic nerve head (ONH), visual field testing, and clinical examination. Globally and clock-hour sector-wise optimized BMO-based minimum rim area was calculated independently. Outcome parameters included BMO-globally optimized minimum rim area (BMO-gMRA) and sector-wise optimized BMO minimum rim area (BMO-MRA). BMO area was 1.89 ± 0.05 mm². Mean global BMO-MRA was 0.97 ± 0.34 mm²; mean global BMO-gMRA was 1.01 ± 0.36 mm². Both parameters correlated with r = 0.995 (P < 0.001); the mean difference was 0.04 mm² (P < 0.001). In all sectors, the parameters differed by 3.0-4.2%. In receiver operating characteristic analysis, the calculated area under the curve (AUC) to differentiate glaucoma was 0.873 for BMO-MRA, compared to 0.866 for BMO-gMRA (P = 0.004). Among ONH sectors, the temporal inferior location showed the highest AUC. The two optimization strategies for calculating BMO-based minimum rim area led to significantly different results. Imposing an additional adjacency constraint within the calculation of BMO-MRA does not improve diagnostic power. Global and temporal inferior BMO-MRA performed best in differentiating glaucoma patients.

16. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming

Energy Technology Data Exchange (ETDEWEB)

Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

2017-03-27

Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions whose deletion disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction-deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, the IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs, by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.

17. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming.

Science.gov (United States)

Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

2017-08-01

Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions whose deletion disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction-deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, the IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs, by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab and is provided as supplementary information. Contact: hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work was written by US Government employees and is in the public domain in the US.

18. Unconstrained multiplet in N=2 conformal supergravity

International Nuclear Information System (INIS)

Hayashi, Masahito; Uehara, Shozo.

1985-02-01

An unconstrained (general) multiplet was studied in N = 2 conformal supergravity. Transformation law, embedding formula and multiplication rule are explicitly presented at the linearized level. (author)

19. A Spectral Conjugate Gradient Method for Unconstrained Optimization

International Nuclear Information System (INIS)

Birgin, E. G.; Martinez, J. M.

2001-01-01

A family of scaled conjugate gradient algorithms for large-scale unconstrained minimization is defined. The Perry, the Polak-Ribiere and the Fletcher-Reeves formulae are compared using a spectral scaling derived from Raydan's spectral gradient optimization method. The best combination of formula, scaling and initial choice of step-length is compared against well known algorithms using a classical set of problems. An additional comparison involving an ill-conditioned estimation problem in Optics is presented
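The spectral scaling referred to above derives from the Barzilai-Borwein step length used in Raydan's spectral gradient method; here is a minimal sketch of plain spectral (BB) gradient descent on a toy quadratic. The test function and starting point are arbitrary illustrative choices, not the paper's benchmark problems.

```python
def bb_gradient(grad, x0, iters=100):
    # Plain gradient descent with the Barzilai-Borwein (spectral) step length
    # alpha_k = (s^T s)/(s^T y), where s = x_k - x_{k-1}, y = g_k - g_{k-1}.
    x = list(x0)
    g = grad(x)
    alpha = 1e-3                                    # conservative first step
    for _ in range(iters):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        s = [a - c for a, c in zip(x_new, x)]
        yv = [a - c for a, c in zip(g_new, g)]
        sy = sum(a * c for a, c in zip(s, yv))
        ss = sum(a * a for a in s)
        alpha = ss / sy if sy > 1e-30 else 1e-3     # spectral step
        x, g = x_new, g_new
    return x

# toy ill-conditioned quadratic f(x, y) = x^2 + 10 y^2, minimizer (0, 0)
f = lambda v: v[0] ** 2 + 10.0 * v[1] ** 2
grad = lambda v: [2.0 * v[0], 20.0 * v[1]]
x = bb_gradient(grad, [3.0, 1.0])
```

The step approximates the inverse of a local curvature estimate, which is the "spectral" scaling that the paper combines with conjugate gradient formulae; BB iterations are nonmonotone but converge on strictly convex quadratics.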

20. Number-unconstrained quantum sensing

Science.gov (United States)

Mitchell, Morgan W.

2017-12-01

Quantum sensing is commonly described as a constrained optimization problem: maximize the information gained about an unknown quantity using a limited number of particles. Important sensors including gravitational wave interferometers and some atomic sensors do not appear to fit this description, because there is no external constraint on particle number. Here, we develop the theory of particle-number-unconstrained quantum sensing, and describe how optimal particle numbers emerge from the competition of particle-environment and particle-particle interactions. We apply the theory to optical probing of an atomic medium modeled as a resonant, saturable absorber, and observe the emergence of well-defined finite optima without external constraints. The results contradict some expectations from number-constrained quantum sensing and show that probing with squeezed beams can give a large sensitivity advantage over classical strategies when each is optimized for particle number.

1. Minimalism

CERN Document Server

Obendorf, Hartmut

2009-01-01

The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.

2. Parallel algorithms for unconstrained optimization by multisplitting with inexact subspace search - the abstract

Energy Technology Data Exchange (ETDEWEB)

Renaut, R.; He, Q. [Arizona State Univ., Tempe, AZ (United States)

1994-12-31

A new parallel iterative algorithm for unconstrained optimization by multisplitting is proposed. In this algorithm the original problem is split into a set of small optimization subproblems which are solved using well-known sequential algorithms. These algorithms are iterative in nature, e.g. the DFP variable metric method. Here the authors use sequential algorithms based on an inexact subspace search, which is an extension of the usual idea of an inexact line search. Essentially, the idea of the inexact line search for nonlinear minimization is that at each iteration one finds only an approximate minimum in the line-search direction. Hence by inexact subspace search they mean that, instead of finding the minimum of the subproblem at each iteration, they do an incomplete downhill search to give an approximate minimum. Some convergence and numerical results for this algorithm will be presented. Further, the original theory will be generalized to the situation with a singular Hessian. Applications to nonlinear least-squares problems will be presented. Experimental results will be presented for implementations on an Intel iPSC/860 Hypercube with 64 nodes as well as on the Intel Paragon.
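The inexact line search the abstract builds on (accepting an approximate minimum along the search direction) is commonly realized by Armijo backtracking; a minimal one-dimensional sketch of that idea, with an invented test function, is:

```python
def armijo_backtrack(f, x, d, g_dot_d, t0=1.0, c=1e-4, shrink=0.5):
    # Inexact line search: rather than locating the exact minimizer along d,
    # accept the first step length t satisfying the sufficient-decrease
    # (Armijo) condition  f(x + t*d) <= f(x) + c * t * g^T d.
    fx = f(x)
    t = t0
    while f(x + t * d) > fx + c * t * g_dot_d:
        t *= shrink
        if t < 1e-12:                 # give up on pathological directions
            break
    return t

# minimize f(u) = u^2 starting from x = 1 along the descent direction d = -f'(x)
f = lambda u: u * u
x = 1.0
d = -2.0                              # -f'(1)
t = armijo_backtrack(f, x, d, (2 * x) * d)
x_new = x + t * d                     # lands exactly at the minimizer 0 here
```

The inexact subspace search of the paper generalizes this from a single direction to a whole subproblem: stop the inner solver once a sufficient decrease is achieved rather than iterating it to convergence.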

3. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

KAUST Repository

Abediseid, Walid; Alouini, Mohamed-Slim

2013-01-01

channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement

4. An Efficient Algorithm for Unconstrained Optimization

Directory of Open Access Journals (Sweden)

Sergio Gerardo de-los-Cobos-Silva

2015-01-01

This paper presents an original and efficient PSO algorithm, which is divided into three phases: (1) stabilization, (2) breadth-first search, and (3) depth-first search. The proposed algorithm, called PSO-3P, was tested on 47 benchmark continuous unconstrained optimization problems, for a total of 82 instances. The numerical results show that the proposed algorithm is able to reach the global optimum. This work mainly focuses on unconstrained optimization problems with 2 to 1,000 variables.

5. The effect of zero-ischaemia laparoscopic minimally invasive partial nephrectomy using the modified sequential preplaced suture renorrhaphy technique on long-term renal functions.

Science.gov (United States)

Sönmez, Mehmet Giray; Kara, Cengiz

2017-09-01

Laparoscopic minimally invasive partial nephrectomy (MIPN) is the preferred technique in renal surgery, especially for T1-stage kidney tumours, and zero-ischaemia methods are recommended for the protection of renal function. To evaluate the long-term renal function of zero-ischaemia laparoscopic MIPN patients who underwent a modified sequential preplaced suture renorrhaphy technique. In a total of 17 renal units in 16 patients with incidentally detected, asymptomatic kidney tumours, the masses were extracted via laparoscopic partial nephrectomy (LPN) using the modified sequential preplaced suture renorrhaphy technique. Creatinine and estimated glomerular filtration rate (eGFR) values of the patients were measured preoperatively, on the first postoperative day, and at 12 months postoperatively, and the results were compared. The differences between the pre- and postoperative values were statistically significant (p = 0.033, p = 0.045), but the changes in postoperative creatinine and eGFR values were clinically insignificant. While the differences between preoperative and first-day postoperative creatinine and eGFR values were statistically significant (p = 0.039 and p = 0.042, respectively), no statistically significant difference was detected between preoperative and 12-month postoperative creatinine and eGFR values (p = 0.09 and p = 0.065, respectively). The global percentage of functional recovery was measured as 92.5% on the first day and 95.9% at the 12th month. The modified sequential preplaced suture renorrhaphy technique is an effective, reliable method for avoiding complications and preserving renal function and nephrons in appropriate patients.

6. Direct Succinic Acid Production from Minimally Pretreated Biomass Using Sequential Solid-State and Slurry Fermentation with Mixed Fungal Cultures

Directory of Open Access Journals (Sweden)

Jerico Alcantara

2017-06-01

Conventional bio-based succinic acid production involves anaerobic bacterial fermentation of pure sugars. This study explored a new route for directly producing succinic acid from minimally pretreated lignocellulosic biomass via a consolidated bioprocessing technology employing a mixed lignocellulolytic and acidogenic fungal co-culture. The process involved a solid-state pre-fermentation stage followed by a two-phase slurry fermentation stage. During the solid-state pre-fermentation stage, Aspergillus niger and Trichoderma reesei were co-cultured on a nitrogen-rich substrate (e.g., soybean hulls) to induce cellulolytic enzyme activity. The ligninolytic fungus Phanerochaete chrysosporium was grown separately on carbon-rich birch wood chips to induce ligninolytic enzymes, rendering the biomass more susceptible to cellulase attack. The solid-state pre-cultures were then combined in a slurry fermentation culture to achieve simultaneous enzymatic cellulolysis and succinic acid production. This approach generated succinic acid at maximum titers of 32.43 g/L after 72 h of batch slurry fermentation (~10 g/L production), and 61.12 g/L after 36 h with the addition of fresh birch wood chips at the onset of the slurry fermentation stage (~26 g/L production). Based on these results, this approach is a promising alternative to current bacterial succinic acid production due to its minimal substrate pretreatment requirements, which could reduce production costs.

7. Single Crystals Grown Under Unconstrained Conditions

Science.gov (United States)

Sunagawa, Ichiro

Based on detailed investigations of morphology (evolution and variation in external forms), surface microtopography of crystal faces (spirals and etch figures), internal morphology (growth sectors, growth banding and associated impurity partitioning) and perfection (dislocations and other lattice defects) in single crystals, we can deduce how and by what mechanism a crystal grew and experienced fluctuations in growth parameters through its growth and post-growth history under unconstrained conditions. The information is useful not only in finding appropriate ways to grow highly perfect and homogeneous single crystals, but also in deciphering the letters sent from the depths of the Earth and space. It is also useful in discriminating synthetic from natural gemstones. In this chapter, available methods to obtain such information are briefly summarized, and actual examples demonstrating the importance of this type of investigation are selected from both natural minerals (diamond, quartz, hematite, corundum, beryl, phlogopite) and synthetic crystals (SiC, diamond, corundum, beryl).

8. Gender Recognition from Unconstrained and Articulated Human Body

Directory of Open Access Journals (Sweden)

Qin Wu

2014-01-01

Full Text Available Gender recognition has many useful applications, ranging from business intelligence to image search and social activity analysis. Traditional research on gender recognition focuses on face images in a constrained environment. This paper proposes a method for gender recognition in articulated human body images acquired from an unconstrained environment in the real world. A systematic study of some critical issues in body-based gender recognition, such as which body parts are informative, how many body parts are needed to combine together, and what representations are good for articulated body-based gender recognition, is also presented. This paper also pursues data fusion schemes and efficient feature dimensionality reduction based on the partial least squares estimation. Extensive experiments are performed on two unconstrained databases which have not been explored before for gender recognition.

9. STOCHASTIC GRADIENT METHODS FOR UNCONSTRAINED OPTIMIZATION

Directory of Open Access Journals (Sweden)

Nataša Krejić

2014-12-01

Full Text Available This paper presents an overview of gradient-based methods for the minimization of noisy functions. It is assumed that the objective function is either given with error terms of a stochastic nature or given as a mathematical expectation. Such problems arise in the context of simulation-based optimization. The focus of this presentation is on the gradient-based Stochastic Approximation and Sample Average Approximation methods. The concept of stochastic gradient approximation of the true gradient can be successfully extended to deterministic problems. Methods of this kind are presented for data fitting and machine learning problems.
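The stochastic approximation idea summarized above can be illustrated with a minimal Robbins–Monro-style gradient iteration. The quadratic objective, noise model, and step-size schedule below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Sketch: minimize f(x) = 0.5 * ||x - target||^2 when only noisy gradient
# samples are available, using diminishing step sizes 1/k (the classical
# Robbins-Monro conditions: steps sum to infinity, squared steps are summable).

rng = np.random.default_rng(0)
target = np.array([1.0, -2.0])

def noisy_grad(x):
    # true gradient (x - target) corrupted by zero-mean Gaussian noise
    return (x - target) + 0.1 * rng.standard_normal(x.shape)

x = np.zeros(2)
for k in range(1, 5001):
    step = 1.0 / k
    x = x - step * noisy_grad(x)

print(x)  # close to [1, -2]: the noise averages out over the iterations
```

Despite each individual gradient sample being wrong, the averaging effect of the diminishing steps drives the iterates to the true minimizer.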

10. UNCONSTRAINED HANDWRITING RECOGNITION : LANGUAGE MODELS, PERPLEXITY, AND SYSTEM PERFORMANCE

NARCIS (Netherlands)

Marti, U-V.; Bunke, H.

2004-01-01

In this paper we present a number of language models and their behavior in the recognition of unconstrained handwritten English sentences. We use the perplexity to compare the different models and their prediction power, and relate it to the performance of a recognition system under different

11. New preconditioned conjugate gradient algorithms for nonlinear unconstrained optimization problems

International Nuclear Information System (INIS)

1997-01-01

This paper presents two new preconditioned conjugate gradient algorithms for nonlinear unconstrained optimization problems and examines their computational performance. Computational experience shows that the new proposed algorithms generally improve the efficiency of Nazareth's [13] preconditioned conjugate gradient algorithm. (authors). 16 refs., 1 tab

12. Gender Recognition from Unconstrained and Articulated Human Body

Science.gov (United States)

Wu, Qin; Guo, Guodong

2014-01-01

Gender recognition has many useful applications, ranging from business intelligence to image search and social activity analysis. Traditional research on gender recognition focuses on face images in a constrained environment. This paper proposes a method for gender recognition in articulated human body images acquired from an unconstrained environment in the real world. A systematic study of some critical issues in body-based gender recognition, such as which body parts are informative, how many body parts are needed to combine together, and what representations are good for articulated body-based gender recognition, is also presented. This paper also pursues data fusion schemes and efficient feature dimensionality reduction based on the partial least squares estimation. Extensive experiments are performed on two unconstrained databases which have not been explored before for gender recognition. PMID:24977203

13. Gender recognition from unconstrained and articulated human body.

Science.gov (United States)

Wu, Qin; Guo, Guodong

2014-01-01

Gender recognition has many useful applications, ranging from business intelligence to image search and social activity analysis. Traditional research on gender recognition focuses on face images in a constrained environment. This paper proposes a method for gender recognition in articulated human body images acquired from an unconstrained environment in the real world. A systematic study of some critical issues in body-based gender recognition, such as which body parts are informative, how many body parts are needed to combine together, and what representations are good for articulated body-based gender recognition, is also presented. This paper also pursues data fusion schemes and efficient feature dimensionality reduction based on the partial least squares estimation. Extensive experiments are performed on two unconstrained databases which have not been explored before for gender recognition.

14. Anytime decision making based on unconstrained influence diagrams

DEFF Research Database (Denmark)

Luque, Manuel; Nielsen, Thomas Dyhre; Jensen, Finn Verner

2016-01-01

Unconstrained influence diagrams extend the language of influence diagrams to cope with decision problems in which the order of the decisions is unspecified. Thus, when solving an unconstrained influence diagram we not only look for an optimal policy for each decision, but also for a so-called step-policy specifying the next decision given the observations made so far. However, due to the complexity of the problem, temporal constraints can force the decision maker to act before the solution algorithm has finished, and, in particular, before an optimal policy for the first decision has been computed. This paper addresses this problem by proposing an anytime algorithm that at any time provides a qualified recommendation for the first decisions of the problem. The algorithm performs a heuristic-based search in a decision tree representation of the problem. We provide a framework for analyzing...

15. Gender Recognition from Unconstrained and Articulated Human Body

OpenAIRE

Wu, Qin; Guo, Guodong

2014-01-01

Gender recognition has many useful applications, ranging from business intelligence to image search and social activity analysis. Traditional research on gender recognition focuses on face images in a constrained environment. This paper proposes a method for gender recognition in articulated human body images acquired from an unconstrained environment in the real world. A systematic study of some critical issues in body-based gender recognition, such as which body parts are informative, ho...

16. Several Guaranteed Descent Conjugate Gradient Methods for Unconstrained Optimization

Directory of Open Access Journals (Sweden)

San-Yang Liu

2014-01-01

Full Text Available This paper investigates a general form of guaranteed descent conjugate gradient methods which satisfy the descent condition g_k^T d_k ≤ -(1 - 1/(4θ_k))‖g_k‖^2, with θ_k > 1/4, and which are strongly convergent whenever the weak Wolfe line search is fulfilled. Moreover, we present several specific guaranteed descent conjugate gradient methods and give their numerical results for large-scale unconstrained optimization.
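A minimal sketch of how a sufficient-descent condition of this kind can be enforced in a conjugate gradient iteration, assuming a PR+ beta formula with a steepest-descent restart (the paper's specific methods and line search are not reproduced; the quadratic test problem is an illustrative assumption):

```python
import numpy as np

# Whenever the trial CG direction violates the sufficient-descent condition
#   g_k^T d_k <= -(1 - 1/(4*theta)) * ||g_k||^2,  theta > 1/4,
# we restart with the steepest-descent direction, which satisfies it trivially.

theta = 2.0
c = 1.0 - 1.0 / (4.0 * theta)            # descent-condition coefficient

A = np.diag([1.0, 10.0, 100.0])          # test problem: f(x) = 0.5 x'Ax - b'x
b = np.ones(3)

def grad(x):
    return A @ x - b

x = np.zeros(3)
g = grad(x)
d = -g
for _ in range(50):
    t = -(g @ d) / (d @ (A @ d))         # exact line search for a quadratic
    x = x + t * d
    g_new = grad(x)
    beta = max(g_new @ (g_new - g) / (g @ g), 0.0)    # PR+ update
    d = -g_new + beta * d
    if g_new @ d > -c * (g_new @ g_new):  # descent condition violated:
        d = -g_new                        # restart with steepest descent
    g = g_new
    if np.linalg.norm(g) < 1e-10:
        break

print(np.allclose(x, np.linalg.solve(A, b)))  # True
```

On a quadratic with exact line search this reduces to linear CG and converges in a few iterations; the safeguard matters in the genuinely nonlinear case.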

17. Performance-complexity tradeoff in sequential decoding for the unconstrained AWGN channel

KAUST Repository

Abediseid, Walid; Alouini, Mohamed-Slim

2013-01-01

The unconstrained additive white Gaussian noise (AWGN) channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement.

18. Unconstrained Iris Acquisition and Recognition Using COTS PTZ Camera

Directory of Open Access Journals (Sweden)

Venugopalan Shreyas

2010-01-01

Full Text Available Uniqueness of iris patterns among individuals has resulted in the ubiquity of iris recognition systems in virtual and physical spaces, at high security facilities around the globe. Traditional methods of acquiring iris patterns in commercial systems scan the iris when an individual is at a predetermined location in front of the scanner. Most state-of-the-art techniques for unconstrained iris acquisition in the literature use expensive custom equipment and are composed of a multicamera setup, which is bulky, expensive, and requires calibration. This paper investigates a method of unconstrained iris acquisition and recognition using a single commercial off-the-shelf (COTS) pan-tilt-zoom (PTZ) camera that is compact and reduces the cost of the final system compared to other proposed hierarchical multicomponent systems. We employ state-of-the-art techniques for face detection and a robust eye detection scheme using active shape models for accurate landmark localization. Additionally, our system alleviates the need for any calibration stage prior to its use. We present results using a database of iris images captured with our system while operating in an unconstrained acquisition mode at 1.5 m standoff, yielding an iris diameter in the 150–200 pixel range.

19. Sequential Targeting of CD52 and TNF Allows Early Minimization Therapy in Kidney Transplantation: From a Biomarker to Targeting in a Proof-Of-Concept Trial.

Directory of Open Access Journals (Sweden)

Ondrej Viklicky

Full Text Available There is a high medical need for safe long-term immunosuppression monotherapy in kidney transplantation. Selective targeting of post-transplant alloantigen-(re)activated effector T cells by anti-TNF antibodies after global T cell depletion may allow safe drug minimization; however, it remains unresolved what might be the best maintenance monotherapy. In this open, prospective observational single-centre trial, 20 primary deceased-donor kidney transplant recipients received 2x20 mg Alemtuzumab (d0/d1) followed by 5 mg/kg Infliximab (d2). For 14 days all patients received only tacrolimus; then they were allocated to receive either tacrolimus (TAC, n = 13) or sirolimus (SIR, n = 7) monotherapy, respectively. Protocol biopsies and extensive immune monitoring were performed, and patients were followed up for 60 months. TAC monotherapy resulted in excellent graft survival (5 yr 92%, 95% CI: 56.6-98.9) and function, normal histology, and no proteinuria. Immune monitoring revealed low intragraft inflammation (urinary IP-10) and hints of the development of an operational tolerance signature in the TAC but not the SIR group. Remarkably, TAC monotherapy was successful in all five presensitized (ELISPOT+) patients. However, recruitment into the SIR arm was stopped (after n = 7) because of a high incidence of proteinuria and acute/chronic rejection in biopsies. No opportunistic infections occurred during follow-up. In conclusion, our novel fast-track TAC monotherapy protocol is likely to be safe, and preliminary results indicated an excellent 5-year outcome; however, a full-scale study will be needed to confirm our findings. EudraCT Number: 2006-003110-18.

20. Unconstrained parameter estimation for assessment of dynamic cerebral autoregulation

International Nuclear Information System (INIS)

Chacón, M; Nuñez, N; Henríquez, C; Panerai, R B

2008-01-01

Measurement of dynamic cerebral autoregulation (CA), the transient response of cerebral blood flow (CBF) to changes in arterial blood pressure (ABP), has been performed with an index of autoregulation (ARI), related to the parameters of a second-order differential equation model, namely gain (K), damping factor (D) and time constant (T). Limitations of the ARI were addressed by increasing its numerical resolution and generalizing the parameter space. In 16 healthy subjects, recordings of ABP (Finapres) and CBF velocity (ultrasound Doppler) were performed at rest, before, during and after 5% CO2 breathing, and for six repeated thigh cuff maneuvers. The unconstrained model produced lower predictive error (p < 0.001) than the original model. Unconstrained parameters (K'–D'–T') were significantly different from K–D–T but were still sensitive to different measurement conditions, such as the under-regulation induced by hypercapnia. The intra-subject variability of K' was significantly lower than that of the ARI and this parameter did not show the unexpected occurrences of zero values as observed with the ARI and the classical value of K. These results suggest that K' could be considered as a more stable and reliable index of dynamic autoregulation than ARI. Further studies are needed to validate this new index under different clinical conditions.

1. Parallel Quasi Newton Algorithms for Large Scale Non Linear Unconstrained Optimization

International Nuclear Information System (INIS)

Rahman, M. A.; Basarudin, T.

1997-01-01

This paper discusses Quasi-Newton (QN) methods for solving non-linear unconstrained minimization problems. An important issue in QN methods is the choice of the matrix Hk, which must be positive definite and satisfy the QN condition. Our interest here is in parallel QN methods suited to the solution of large-scale optimization problems. QN methods become less attractive for large-scale problems because of their storage and computational requirements; however, it is often the case that the Hessian is a sparse matrix. In this paper we include a mechanism for reducing the Hessian update while preserving the Hessian properties. One major motivation of our research is that a QN method may be good at solving certain types of minimization problems, but its efficiency degenerates when it is applied to other categories of problems. For this reason, we use an algorithm containing several direction strategies which are processed in parallel. We attempt to parallelize the algorithm by exploring different search directions generated by various QN updates during the minimization process. Different line search strategies are employed simultaneously in the process of locating the minimum along each direction. The code for the algorithm is written in the Occam 2 language and run on a transputer machine.
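The role of a positive definite Hk can be illustrated with a minimal serial BFGS iteration. This is an illustrative assumption-laden sketch (simple Armijo backtracking, a hand-picked quadratic test function), not the paper's parallel multi-direction strategy:

```python
import numpy as np

# Minimal BFGS sketch: maintain an inverse-Hessian approximation H that stays
# positive definite as long as the curvature condition s'y > 0 holds, so every
# direction d = -H g is a descent direction.

def f(x):
    return (x[0] - 3.0)**2 + 10.0 * (x[1] + 1.0)**2

def grad(x):
    return np.array([2.0 * (x[0] - 3.0), 20.0 * (x[1] + 1.0)])

x = np.zeros(2)
H = np.eye(2)                  # initial inverse-Hessian approximation
g = grad(x)
for _ in range(100):
    d = -H @ g
    t = 1.0
    while f(x + t * d) > f(x) + 1e-4 * t * g @ d:   # Armijo backtracking
        t *= 0.5
    s = t * d
    x = x + s
    g_new = grad(x)
    y = g_new - g
    if s @ y > 1e-12:           # curvature condition: keeps H positive definite
        rho = 1.0 / (s @ y)
        I = np.eye(2)
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)
    g = g_new
    if np.linalg.norm(g) < 1e-8:
        break

print(x)  # near the minimizer [3, -1]
```

Skipping the update when s'y ≤ 0 is the standard safeguard that preserves positive definiteness, which is the property the abstract emphasizes.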

2. Spherical symmetry as a test case for unconstrained hyperboloidal evolution

International Nuclear Information System (INIS)

Vañó-Viñuales, Alex; Husa, Sascha; Hilditch, David

2015-01-01

We consider the hyperboloidal initial value problem for the Einstein equations in numerical relativity, motivated by the goal to evolve radiating compact objects such as black hole binaries with a numerical grid that includes null infinity. Unconstrained evolution schemes promise optimal efficiency, but are difficult to regularize at null infinity, where the compactified Einstein equations are formally singular. In this work we treat the spherically symmetric case, which already poses nontrivial problems and constitutes an important first step. We have carried out stable numerical evolutions with the generalized BSSN and Z4 equations coupled to a scalar field. The crucial ingredients have been to find an appropriate evolution equation for the lapse function and to adapt constraint damping terms to handle null infinity. (paper)

3. Sequential Banking.

OpenAIRE

Bizer, David S; DeMarzo, Peter M

1992-01-01

The authors study environments in which agents may borrow sequentially from more than one lender. Although debt is prioritized, additional lending imposes an externality on prior debt because, with moral hazard, the probability of repayment of prior loans decreases. Equilibrium interest rates are higher than they would be if borrowers could commit to borrow from at most one bank. Even though the loan terms are less favorable than they would be under commitment, the indebtedness of borrowers i...

4. Forced Sequence Sequential Decoding

DEFF Research Database (Denmark)

Jensen, Ole Riis

In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon codes with non-uniform profile. With this scheme decoding with good performance is possible as low as Eb/No=0.6 dB, which is about 1.7 dB below the signal-to-noise ratio that marks the cut-off rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters.

5. Vector Graph Assisted Pedestrian Dead Reckoning Using an Unconstrained Smartphone

Directory of Open Access Journals (Sweden)

Jiuchao Qian

2015-03-01

Full Text Available The paper presents a hybrid indoor positioning solution based on a pedestrian dead reckoning (PDR) approach using built-in sensors on a smartphone. To address the challenges of the flexible and complex contexts of carrying a phone while walking, a robust step detection algorithm based on motion awareness has been proposed. Given the fact that step length is influenced by different motion states, an adaptive step length estimation algorithm based on motion recognition is developed. Heading estimation is carried out by an attitude acquisition algorithm, which contains a two-phase filter to mitigate the distortion of magnetic anomalies. In order to estimate the heading for an unconstrained smartphone, principal component analysis (PCA) of acceleration is applied to determine the offset between the orientation of the smartphone and the actual heading of a pedestrian. Moreover, a particle filter with vector-graph-assisted particle weighting is introduced to correct the deviation in step length and heading estimation. Extensive field tests, including four contexts of carrying a phone, have been conducted in an office building to verify the performance of the proposed algorithm. Test results show that the proposed algorithm can achieve sub-meter mean error in all contexts.
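Two of the building blocks described above, step detection and PCA-based heading estimation, can be sketched on synthetic signals. All signal parameters below (sample rate, step frequency, noise level, the 30° offset) are illustrative assumptions, not the paper's data or algorithms:

```python
import numpy as np

# (1) Step detection: thresholded local maxima in the vertical acceleration.
fs = 50.0                                    # sample rate (Hz)
t = np.arange(0, 10, 1 / fs)
step_freq = 2.0                              # 2 steps/s -> 20 steps in 10 s
acc_v = 1.5 * np.sin(2 * np.pi * step_freq * t)

peaks = [i for i in range(1, len(acc_v) - 1)
         if acc_v[i] > 1.0 and acc_v[i] >= acc_v[i - 1] and acc_v[i] > acc_v[i + 1]]
print(len(peaks))  # 20

# (2) Heading offset: the walking direction in the horizontal plane appears as
# the first principal component of horizontal acceleration, since the dominant
# oscillation of gait is along the forward axis.
heading = np.deg2rad(30.0)                   # true offset from the phone's x-axis
forward = np.array([np.cos(heading), np.sin(heading)])
acc_h = np.outer(np.sin(2 * np.pi * step_freq * t), forward)
acc_h += 0.01 * np.random.default_rng(1).standard_normal(acc_h.shape)

cov = np.cov(acc_h.T)
eigvals, eigvecs = np.linalg.eigh(cov)
principal = eigvecs[:, np.argmax(eigvals)]   # first principal component
est = np.degrees(np.arctan2(principal[1], principal[0])) % 180
print(round(est))  # ~30 degrees
```

The PCA direction is recovered up to sign, which is why the estimate is taken modulo 180°; resolving the forward/backward ambiguity needs extra cues (e.g., the step-phase of the acceleration), as the paper's full pipeline provides.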

6. Unconstrained Finite Element for Geometrical Nonlinear Dynamics of Shells

Directory of Open Access Journals (Sweden)

Humberto Breves Coda

2009-01-01

Full Text Available This paper presents a positional FEM formulation to deal with the geometrically nonlinear dynamics of shells. The main objective is to develop a new FEM methodology based on the minimum potential energy theorem written in terms of nodal positions and generalized unconstrained vectors, not displacements and rotations. These characteristics are the novelty of the present work and avoid the use of large rotation approximations. A nondimensional auxiliary coordinate system is created, and the change of configuration function is written following two independent mappings from which the strain energy function is derived. This methodology is called positional and, as far as the authors' knowledge goes, is a new procedure to approximate geometrically nonlinear structures. In this paper a proof of the linear and angular momentum conservation property of the Newmark algorithm is provided for the total Lagrangian description. The proposed shell element is locking free for elastic stress-strain relations due to the presence of linear strain variation along the shell thickness. The curved, high-order element together with an implicit procedure for solving the nonlinear equations guarantees precision in the calculations. The momentum conservation, the locking-free behavior, and the frame invariance of the adopted mapping are numerically confirmed by examples.

7. Solving Unconstrained Global Optimization Problems via Hybrid Swarm Intelligence Approaches

Directory of Open Access Journals (Sweden)

Jui-Yu Wu

2013-01-01

Full Text Available Stochastic global optimization (SGO) algorithms such as the particle swarm optimization (PSO) approach have become popular for solving unconstrained global optimization (UGO) problems. The PSO approach, which belongs to the swarm intelligence domain, does not require gradient information, enabling it to overcome this limitation of traditional nonlinear programming methods. Unfortunately, PSO algorithm implementation and performance depend on several parameters, such as the cognitive parameter, social parameter, and constriction coefficient. These parameters are tuned by trial and error. To reduce the parametrization of a PSO method, this work presents two efficient hybrid SGO approaches, namely, a real-coded genetic algorithm-based PSO (RGA-PSO) method and an artificial immune algorithm-based PSO (AIA-PSO) method. The specific parameters of the internal PSO algorithm are optimized using the external RGA and AIA approaches, and then the internal PSO algorithm is applied to solve UGO problems. The performances of the proposed RGA-PSO and AIA-PSO algorithms are then evaluated using a set of benchmark UGO problems. Numerical results indicate that, besides their ability to converge to a global minimum for each test UGO problem, the proposed RGA-PSO and AIA-PSO algorithms outperform many hybrid SGO algorithms. Thus, the RGA-PSO and AIA-PSO approaches can be considered alternative SGO approaches for solving standard-dimensional UGO problems.
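A bare-bones inner PSO loop of the kind whose parameters the external RGA/AIA would tune might look as follows. The fixed values of the inertia weight w and the cognitive/social parameters c1, c2, as well as the sphere test function, are illustrative assumptions:

```python
import numpy as np

# Minimal PSO sketch for an unconstrained problem (the sphere function).
rng = np.random.default_rng(42)
dim, n_particles, iters = 5, 30, 300
w, c1, c2 = 0.7, 1.5, 1.5          # the parameters an outer RGA/AIA would tune

def sphere(x):
    return np.sum(x * x, axis=-1)

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()                 # personal best positions
pbest_val = sphere(pos)
gbest = pbest[np.argmin(pbest_val)].copy()   # global best position

for _ in range(iters):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = sphere(pos)
    improved = val < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(sphere(gbest))  # close to 0
```

The point of the hybrid RGA-PSO/AIA-PSO schemes is precisely that the quality of this loop is sensitive to (w, c1, c2), so those values are optimized by an outer evolutionary search instead of by hand.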

8. Recognition of Faces in Unconstrained Environments: A Comparative Study

Directory of Open Access Journals (Sweden)

Javier Ruiz-del-Solar

2009-01-01

Full Text Available The aim of this work is to carry out a comparative study of face recognition methods that are suitable for working in unconstrained environments. The analyzed methods are selected by considering their performance in former comparative studies, in addition to being real-time, requiring just one image per person, and being fully online. In the study two local-matching methods, histograms of LBP features and Gabor Jet descriptors, one holistic method, generalized PCA, and two image-matching methods, SIFT-based and ERCF-based, are analyzed. The methods are compared using the FERET, LFW, UCHFaceHRI, and FRGC databases, which allows evaluating them in real-world conditions that include variations in scale, pose, lighting, focus, resolution, facial expression, accessories, makeup, occlusions, background and photographic quality. The main conclusions of this study are that the methods depend strongly on the amount of face and background information included in the face images, and that the performance of all methods decreases considerably under outdoor illumination. The analyzed methods are robust, to a large degree, to inaccurate alignment, face occlusions, and variations in expression. LBP-based methods are an excellent choice if we need real-time operation as well as high recognition rates.

9. Flow and fracture in water-saturated, unconstrained granular beds

Directory of Open Access Journals (Sweden)

Germán Varas

2015-06-01

Full Text Available The injection of gas into a liquid-saturated granular bed gives rise to a wide variety of invasion patterns. Many studies have focused on constrained porous media, in which the grains are fixed in the bed and only the interstitial fluid flows when the gas invades the system. With a free upper boundary, however, the grains can be entrained by the ascending gas or fluid motion, and the competition between the upward motion of grains and sedimentation leads to new patterns. We propose a brief review of the experimental investigation of the dynamics of air rising through a water-saturated, unconstrained granular bed, in both two and three dimensions. After describing the invasion pattern at short and long times, a tentative regime diagram is proposed. We report original results showing a dependence of the fluidized-zone shape, at long times, on the injection flow rate and grain size. A method based on image analysis makes it possible not only to detect the fluidized-zone profile in the stationary regime, but also to follow the transient dynamics of its formation. Finally, we describe the degassing dynamics inside the fluidized zone in the stationary regime. Depending on the experimental conditions, regular bubbling, continuous degassing, an intermittent regime or even a spontaneous flow-to-fracture transition is observed.

10. Forced Sequence Sequential Decoding

DEFF Research Database (Denmark)

1998-01-01

We describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon (RS) codes with nonuniform profile. With this scheme decoding with good performance is possible as low as Eb/N0=0.6 dB, which is about 1.25 dB below the signal-to-noise ratio (SNR) that marks the cutoff rate for the full system. Accounting for about 0.45 dB due to the outer codes, sequential decoding takes place at about 1.7 dB below the SNR cutoff rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported...

11. Unconstrained snoring detection using a smartphone during ordinary sleep.

Science.gov (United States)

Shin, Hangsik; Cho, Jaegeol

2014-08-15

Snoring can be a representative symptom of a sleep disorder, and thus snoring detection is quite important to improving the quality of an individual's daily life. The purpose of this research is to develop an unconstrained snoring detection technique that can be integrated into a smartphone application. In contrast with previous studies, we developed a practical technique for snoring detection during ordinary sleep by using the built-in sound recording system of a smartphone, and the recording was carried out in a standard private bedroom. The experimental protocol was designed to include a variety of actions that frequently produce noise (including coughing, playing music, talking, ringing an alarm, opening/closing doors, running a fan, playing the radio, and walking) in order to accurately recreate the actual circumstances during sleep. The sound data were recorded for 10 individuals during actual sleep. In total, 44 snoring datasets and 75 noise datasets were acquired. The algorithm uses formant analysis to examine sound features according to frequency and magnitude. Then, a quadratic classifier is used to distinguish snoring from non-snoring noises. Ten-fold cross validation was used to evaluate the developed snoring detection method, and validation was repeated 100 times randomly to improve statistical effectiveness. The overall results showed that the proposed method is competitive with those from previous research. The proposed method presented 95.07% accuracy, 98.58% sensitivity, 94.62% specificity, and 70.38% positive predictivity. Though there was a relatively high false positive rate, the results show the possibility of ubiquitous personal snoring detection through a smartphone application that takes into account data from normally occurring noises without training using preexisting data.
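The final classification stage, a quadratic classifier over sound features, can be sketched with Gaussian class-conditional densities whose log-likelihoods give a quadratic decision boundary. The formant-analysis front end is omitted, and the two-dimensional feature values below are invented stand-ins, not the paper's data:

```python
import numpy as np

# Quadratic (Gaussian) classifier sketch: fit a mean and covariance per class,
# then assign a sample to the class with the higher Gaussian log-likelihood.
# Because the two covariances differ, the decision boundary is quadratic.

rng = np.random.default_rng(7)
snore = rng.multivariate_normal([150.0, 60.0], [[400, 0], [0, 25]], size=200)
noise = rng.multivariate_normal([800.0, 40.0], [[10000, 0], [0, 100]], size=200)

def fit_gaussian(X):
    mu = X.mean(axis=0)
    cov = np.cov(X.T)
    return mu, np.linalg.inv(cov), np.log(np.linalg.det(cov))

def quad_score(x, mu, prec, logdet):
    # Gaussian log-likelihood up to an additive constant
    diff = x - mu
    return -0.5 * (diff @ prec @ diff + logdet)

params = {label: fit_gaussian(X) for label, X in [("snore", snore), ("noise", noise)]}

def classify(x):
    return max(params, key=lambda label: quad_score(x, *params[label]))

correct = sum(classify(x) == "snore" for x in snore) + \
          sum(classify(x) == "noise" for x in noise)
print(correct / 400)  # high training accuracy on this well-separated synthetic data
```

Equal class priors are assumed here; with the paper's imbalanced 44/75 split, a prior term log(p_class) would be added to each score.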

12. Global Convergence of the EM Algorithm for Unconstrained Latent Variable Models with Categorical Indicators

Science.gov (United States)

Weissman, Alexander

2013-01-01

Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…

13. A Note on the Dual of an Unconstrained (Generalized) Geometric Programming Problem

NARCIS (Netherlands)

J.B.G. Frenk (Hans); G.J. Still

2005-01-01

textabstractIn this note we show that the strong duality theorem of an unconstrained (generalized) geometric programming problem as defined by Peterson (cf.[1]) is actually a special case of a Lagrangian duality result. Contrary to [1] we also consider the case that the set C is compact and

14. Learning-based encoding with soft assignment for age estimation under unconstrained imaging conditions

NARCIS (Netherlands)

Alnajar, F.; Shan, C.; Gevers, T.; Geusebroek, J.M.

2012-01-01

In this paper we propose to adopt a learning-based encoding method for age estimation under unconstrained imaging conditions. A similar approach [Cao et al., 2010] is applied to face recognition in real-life face images. However, the feature vectors are encoded in a hard manner, i.e., each feature

15. 4D Unconstrained Real-time Face Recognition Using a Commodity Depth Camera

NARCIS (Netherlands)

Schimbinschi, Florin; Wiering, Marco; Mohan, R.E.; Sheba, J.K.

2012-01-01

Robust unconstrained real-time face recognition still remains a challenge today. The recent addition to the market of lightweight commodity depth sensors brings new possibilities for human-machine interaction and therefore face recognition. This article accompanies the reader through a succinct

16. Single-Camera-Based Method for Step Length Symmetry Measurement in Unconstrained Elderly Home Monitoring.

Science.gov (United States)

Cai, Xi; Han, Guang; Song, Xin; Wang, Jinkuan

2017-11-01

Single-camera-based gait monitoring is unobtrusive, inexpensive, and easy to use for monitoring the daily gait of seniors in their homes. However, most studies require subjects to walk perpendicularly to the camera's optical axis or along specified routes, which limits its application in elderly home monitoring. To build unconstrained monitoring environments, we propose a method to measure the step length symmetry ratio (a useful gait parameter representing gait symmetry without a significant relationship with age) from unconstrained straight walking using a single camera, without strict restrictions on walking directions or routes. According to projective geometry theory, we first develop a calculation formula of the step length ratio for the case of unconstrained straight-line walking. Then, to adapt to general cases, we propose to modify noncollinear footprints, and accordingly provide a general procedure for step length ratio extraction from unconstrained straight walking. Our method achieves a mean absolute percentage error (MAPE) of 1.9547% for 15 subjects' normal and abnormal side-view gaits, and also obtains satisfactory MAPEs for non-side-view gaits (2.4026% for 45°-view gaits and 3.9721% for 30°-view gaits). The performance is much better than a well-established monocular gait measurement system suitable only for side-view gaits, which has a MAPE of 3.5538%. Independently of walking directions, our method can accurately estimate step length ratios from unconstrained straight walking. This demonstrates that our method is applicable for elders' daily gait monitoring to provide valuable information for elderly health care, such as abnormal gait recognition, fall risk assessment, etc.

Energy Technology Data Exchange (ETDEWEB)

El-Sharkawi, Mohamed A. (Renton, WA); Xing, Jian (Seattle, WA); Butler, Nicholas G. (Newberg, OR); Rodriguez, Alonso (Pasadena, CA)

1994-01-01

An adaptive sequential controller (50/50') for controlling a circuit breaker (52) or other switching device to substantially eliminate transients on a distribution line caused by closing and opening the circuit breaker. The device adaptively compensates for changes in the response time of the circuit breaker due to aging and environmental effects. A potential transformer (70) provides a reference signal corresponding to the zero crossing of the voltage waveform, and a phase shift comparator circuit (96) compares the reference signal to the time at which any transient was produced when the circuit breaker closed, producing a signal indicative of the adaptive adjustment that should be made. Similarly, in controlling the opening of the circuit breaker, a current transformer (88) provides a reference signal that is compared against the time at which any transient is detected when the circuit breaker last opened. An adaptive adjustment circuit (102) produces a compensation time that is appropriately modified to account for changes in the circuit breaker response, including the effect of ambient conditions and aging. When next opened or closed, the circuit breaker is activated at an appropriately compensated time, so that it closes when the voltage crosses zero and opens when the current crosses zero, minimizing any transients on the distribution line. Phase angle can be used to control the opening of the circuit breaker relative to the reference signal provided by the potential transformer.

19. Velocity-dependent changes of rotational axes in the non-visual control of unconstrained 3D arm motions.

Science.gov (United States)

Isableu, B; Rezzoug, N; Mallet, G; Bernardin, D; Gorce, P; Pagano, C C

2009-12-29

We examined the roles of the inertial (e(3)), shoulder-centre of mass (SH-CM) and shoulder-elbow articular (SH-EL) rotation axes in the non-visual control of unconstrained 3D arm rotations. Subjects rotated the arm in elbow configurations that yielded either a constant or a variable separation between these axes. We hypothesized that increasing the motion frequency and the task complexity would cause the limb's rotational axis to align with e(3), in order to minimize rotational resistances. Results showed two velocity-dependent profiles wherein the rotation axis coincided with the SH-EL axis at the slow (S) and intermediate (I) velocities and then, at the fast (F) velocity, shifted either to a SH-CM/e(3) trade-off axis for one profile, or to no preferential axis for the other. A third profile was velocity-independent, with the SH-CM/e(3) trade-off axis being adopted. Our results are the first to provide evidence that the rotational axis of a multi-articulated limb may change from a geometrical axis of rotation to a mass- or inertia-based axis as motion frequency increases. These findings are discussed within the framework of the minimum inertia tensor (MIT) model, which shows that rotations about e(3) reduce the amount of joint muscle torque that must be produced by employing the interaction torque to assist movement.

20. Comparative performance of an elitist teaching-learning-based optimization algorithm for solving unconstrained optimization problems

Directory of Open Access Journals (Sweden)

R. Venkata Rao

2013-01-01

Full Text Available Teaching-Learning-Based Optimization (TLBO) is a recently proposed population-based algorithm that simulates the teaching-learning process of the classroom. This algorithm requires only the common control parameters and does not require any algorithm-specific control parameters. In this paper, the effect of elitism on the performance of the TLBO algorithm is investigated while solving unconstrained benchmark problems. The effects of common control parameters such as the population size and the number of generations on the performance of the algorithm are also investigated. The proposed algorithm is tested on 76 unconstrained benchmark functions with different characteristics, and its performance is compared with that of other well-known optimization algorithms. A statistical test is also performed to investigate the results obtained using different algorithms. The results demonstrate the effectiveness of the proposed elitist TLBO algorithm.
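The teacher phase (move learners toward the best solution and away from the class mean) and the learner phase (learn from a random classmate) can be sketched as follows. This is a generic, non-elitist TLBO sketch on a toy sphere function; the population size, generation count and test function are illustrative assumptions, not the paper's setup.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def tlbo(f, dim=5, pop_size=20, gens=100, lo=-10.0, hi=10.0, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        teacher = min(pop, key=f)                     # best learner acts as teacher
        mean = [sum(x[j] for x in pop) / pop_size for j in range(dim)]
        for i, x in enumerate(pop):
            tf = rng.choice((1, 2))                   # teaching factor
            # Teacher phase: move toward the teacher, away from the class mean.
            cand = [x[j] + rng.random() * (teacher[j] - tf * mean[j])
                    for j in range(dim)]
            if f(cand) < f(x):
                pop[i] = x = cand
            # Learner phase: move toward (or away from) a random classmate.
            other = pop[rng.randrange(pop_size)]
            if f(other) < f(x):
                cand = [x[j] + rng.random() * (other[j] - x[j]) for j in range(dim)]
            else:
                cand = [x[j] + rng.random() * (x[j] - other[j]) for j in range(dim)]
            if f(cand) < f(x):
                pop[i] = cand
    return min(pop, key=f)

best = tlbo(sphere)
print(sphere(best))  # close to the global minimum at the origin
```

Greedy acceptance in both phases makes the best solution monotonically improve, which is the property the elitist variant strengthens further.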

1. TabletGaze: Unconstrained Appearance-based Gaze Estimation in Mobile Tablets

OpenAIRE

Huang, Qiong; Veeraraghavan, Ashok; Sabharwal, Ashutosh

2015-01-01

We study gaze estimation on tablets; our key design goal is uncalibrated gaze estimation using the front-facing camera during natural use of tablets, where the posture and the method of holding the tablet are not constrained. We collected the first large unconstrained gaze dataset of tablet users, labeled the Rice TabletGaze dataset. The dataset consists of 51 subjects, each with 4 different postures and 35 gaze locations. Subjects vary in race, gender and in their need for prescription glasses, all o...

2. Locating Critical Circular and Unconstrained Failure Surface in Slope Stability Analysis with Tailored Genetic Algorithm

Science.gov (United States)

Pasik, Tomasz; van der Meij, Raymond

2017-12-01

This article presents an efficient search method for representative circular and unconstrained slip surfaces using a tailored genetic algorithm. Searches for unconstrained slip planes with rigid equilibrium methods are still uncommon in engineering practice, and few publications regarding truly free slip planes exist. The proposed method is an effective procedure resulting from the right combination of initial population type, selection, crossover and mutation method. The procedure needs little computational effort to find the optimal, unconstrained slip plane. The methodology described in this paper is implemented using Mathematica. The implementation, along with further explanations, is fully presented so the results can be reproduced. Sample slope stability calculations are performed for four cases, along with a detailed interpretation of the results. Two cases are compared with analyses described in earlier publications. The remaining two are practical cases of slope stability analyses of dikes in the Netherlands. These four cases show the benefits of analyzing slope stability with a rigid equilibrium method combined with a genetic algorithm. The paper concludes by describing the possibilities and limitations of using the genetic algorithm in the context of the slope stability problem.
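The tailored combination of initial population, selection, crossover and mutation can be illustrated with a bare-bones real-coded genetic algorithm. The objective below is a toy stand-in for the factor-of-safety evaluation, which in the paper requires the full slope geometry and a rigid equilibrium method:

```python
import random

def objective(x):
    # Toy stand-in for evaluating a candidate slip surface's factor of safety.
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def genetic_search(f, dim=2, pop_size=30, gens=80, lo=-10.0, hi=10.0, seed=7):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        elite = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()                          # arithmetic crossover
            child = [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]
            if rng.random() < 0.2:                    # gaussian mutation
                j = rng.randrange(dim)
                child[j] += rng.gauss(0.0, 0.5)
            children.append(child)
        pop = elite + children
    return min(pop, key=f)

best = genetic_search(objective)
print(best)  # near the minimizer (3, -1)
```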

3. Zips: mining compressing sequential patterns in streams

NARCIS (Netherlands)

Hoang, T.L.; Calders, T.G.K.; Yang, J.; Mörchen, F.; Fradkin, D.; Chau, D.H.; Vreeken, J.; Leeuwen, van M.; Faloutsos, C.

2013-01-01

We propose a streaming algorithm, based on the minimal description length (MDL) principle, for extracting non-redundant sequential patterns. For static databases, the MDL-based approach that selects patterns based on their capacity to compress data rather than their frequency, was shown to be

4. Minimal surfaces

CERN Document Server

Dierkes, Ulrich; Sauvigny, Friedrich; Jakob, Ruben; Kuster, Albrecht

2010-01-01

Minimal Surfaces is the first volume of a three volume treatise on minimal surfaces (Grundlehren Nr. 339-341). Each volume can be read and studied independently of the others. The central theme is boundary value problems for minimal surfaces. The treatise is a substantially revised and extended version of the monograph Minimal Surfaces I, II (Grundlehren Nr. 295 & 296). The first volume begins with an exposition of basic ideas of the theory of surfaces in three-dimensional Euclidean space, followed by an introduction of minimal surfaces as stationary points of area, or equivalently

5. A Trust-region-based Sequential Quadratic Programming Algorithm

DEFF Research Database (Denmark)

Henriksen, Lars Christian; Poulsen, Niels Kjølstad

This technical note documents the trust-region-based sequential quadratic programming algorithm used in other works by the authors. The algorithm seeks to minimize a convex nonlinear cost function subject to linear inequality constraints and nonlinear equality constraints.
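Each SQP iteration minimizes a quadratic model subject to linearized constraints, which for an equality-constrained quadratic reduces to a single linear KKT solve. A minimal sketch on a toy problem (not the authors' trust-region algorithm):

```python
def solve_linear(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            r = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= r * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# minimize 0.5*(x1^2 + x2^2)  subject to  x1 + x2 = 2.
# Stationarity x + lam*a = 0 and feasibility a^T x = b give the KKT system:
x1, x2, lam = solve_linear([[1.0, 0.0, 1.0],
                            [0.0, 1.0, 1.0],
                            [1.0, 1.0, 0.0]], [0.0, 0.0, 2.0])
print(round(x1, 6), round(x2, 6))  # → 1.0 1.0
```

An SQP iteration for a general nonlinear problem repeats this solve with the Hessian of the Lagrangian and the constraint Jacobian evaluated at the current iterate, with the trust region limiting the step length.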

6. Sequential charged particle reaction

International Nuclear Information System (INIS)

Hori, Jun-ichi; Ochiai, Kentaro; Sato, Satoshi; Yamauchi, Michinori; Nishitani, Takeo

2004-01-01

The effective cross sections for producing the sequential reaction products in F82H, pure vanadium and LiF with respect to 14.9-MeV neutrons were obtained and compared with estimated values. Since the sequential reactions depend on the behavior of the secondary charged particles, the effective cross sections depend on the target nuclei and the material composition. The effective cross sections were also estimated using the EAF libraries and compared with the experimental ones. There were large discrepancies between the estimated and experimental values. Additionally, we show the contribution of the sequential reactions to the induced activity and dose rate in the boundary region with water. The present study clarifies that the sequential reactions are of great importance in evaluating the dose rates around the surface of cooling pipes and the activated corrosion products. (author)

7. Design of Unconstrained DMC to Improve the Distillate Product Purity of the Distillation Column

Directory of Open Access Journals (Sweden)

Bhat Vinayambika S.

2016-01-01

Full Text Available This paper demonstrates the use of unconstrained Dynamic Matrix Control (DMC) to control a process transfer function with time delay. The selection of tuning parameters is a challenging task in predictive control algorithms. The DMC scheme is designed and used here to control a First Order Plus Dead Time (FOPDT) process model. One manipulated variable (reflux) and one controlled variable (distillate) are considered in the implementation. The algorithm significantly adjusts the reflux to improve the distillate product purity. The simulation is done using a MATLAB m-file. Both servo and regulatory responses were obtained. The simulation results validate the effectiveness of the proposed algorithm.
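With a control horizon of a single move, the unconstrained DMC law reduces to a scalar least-squares solution, Δu = gᵀe / (gᵀg + λ), where g holds the step-response coefficients and λ is the move-suppression tuning parameter. A sketch with an illustrative FOPDT step response (gains and tuning are assumptions, not the paper's values):

```python
import math

def step_response(n, K=2.0, tau=5.0, delay=3):
    # Discrete step-response coefficients of a first-order-plus-dead-time model.
    return [0.0 if k < delay else K * (1.0 - math.exp(-(k - delay + 1) / tau))
            for k in range(n)]

def dmc_move(g, error, lam=0.1):
    # Unconstrained DMC with control horizon 1: delta_u = g^T e / (g^T g + lambda)
    num = sum(gi * ei for gi, ei in zip(g, error))
    den = sum(gi * gi for gi in g) + lam
    return num / den

g = step_response(20)
error = [1.0] * 20   # predicted deviation of distillate purity from setpoint
du = dmc_move(g, error)
print(du > 0)  # → True: the controller pushes the reflux toward the setpoint
```

With a longer control horizon the same law becomes a matrix equation, Δu = (GᵀG + λI)⁻¹Gᵀe, where G is the dynamic matrix built from the same step-response coefficients.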

8. Unconstrained SU(2) and SU(3) Yang-Mills classical mechanics

International Nuclear Information System (INIS)

Dahmen, B.; Raabe, B.

1992-01-01

A systematic study of constraints in SU(2) and SU(3) Yang-Mills classical mechanics is performed. Except for the SU(2) case with vanishing spatial angular momenta, they turn out to be non-holonomic. Using Dirac's constraint formalism, we achieve a complete elimination of the unphysical gauge and rotational degrees of freedom. This leads to an effective unconstrained formulation both for the full SU(2) Yang-Mills classical mechanics and for the SU(3) case in the subspace of vanishing spatial angular momenta. We believe that our results are well suited for further explicit dynamical investigations. (orig.)

9. Unconstrained SU(2) and SU(3) Yang-Mills classical mechanics

International Nuclear Information System (INIS)

Dahmen, B.; Raabe, B.

1992-01-01

A systematic study of constraints in SU(2) and SU(3) Yang-Mills classical mechanics is performed. Except for the SU(2) case with vanishing spatial angular momenta, they turn out to be nonholonomic. The complete elimination of the unphysical gauge and rotational degrees of freedom is achieved using Dirac's constraint formalism. We present an effective unconstrained formulation of the general SU(2) Yang-Mills classical mechanics, as well as for SU(3) in the subspace of vanishing spatial angular momenta, that is well suited for further explicit dynamical investigations. (orig.)

10. An Improved Marriage in Honey Bees Optimization Algorithm for Single Objective Unconstrained Optimization

Directory of Open Access Journals (Sweden)

Yuksel Celik

2013-01-01

Full Text Available Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees, and it is a kind of swarm intelligence optimization. In this study we propose an improved marriage in honey bees optimization (IMBO) by adding the Lévy flight algorithm for the queen's mating flight and a neighboring search for improving the worker drones. The IMBO algorithm's performance is tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms.

11. Global Convergence of a Spectral Conjugate Gradient Method for Unconstrained Optimization

Directory of Open Access Journals (Sweden)

Jinkui Liu

2012-01-01

Full Text Available A new nonlinear spectral conjugate descent method for solving unconstrained optimization problems is proposed on the basis of the CD method and the spectral conjugate gradient method. For any line search, the new method satisfies the sufficient descent condition g_k^T d_k < −‖g_k‖². Moreover, we prove that the new method is globally convergent under the strong Wolfe line search. The numerical results show that the new method is more effective on the given test problems from the CUTE test problem library (Bongartz et al., 1995) than the famous CD method, FR method, and PRP method.
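The sufficient descent condition can be verified numerically for any candidate direction, with a steepest-descent restart when it fails. The sketch below uses a generic conjugate-gradient update, not the paper's specific spectral CD formula:

```python
def descent_direction(grad, prev_dir, beta):
    """Conjugate-gradient direction d = -g + beta * d_prev, with a
    steepest-descent restart when sufficient descent fails."""
    d = [-g + beta * dp for g, dp in zip(grad, prev_dir)]
    gTd = sum(g * di for g, di in zip(grad, d))
    gnorm2 = sum(g * g for g in grad)
    if gTd > -gnorm2:            # sufficient descent condition violated
        d = [-g for g in grad]   # restart: d = -g gives g^T d = -||g||^2
    return d

g = [1.0, -2.0]
d = descent_direction(g, [5.0, 1.0], beta=2.0)  # CG direction fails, restarts
print(sum(gi * di for gi, di in zip(g, d)) <= -sum(gi * gi for gi in g))  # → True
```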

12. Exploring the Metabolic and Perceptual Correlates of Self-Selected Walking Speed under Constrained and Un-Constrained Conditions

Directory of Open Access Journals (Sweden)

David T Godsiff, Shelly Coe, Charlotte Elsworth-Edelsten, Johnny Collett, Ken Howells, Martyn Morris, Helen Dawes

2018-03-01

Full Text Available Mechanisms underpinning self-selected walking speed (SSWS) are poorly understood. The present study investigated the extent to which SSWS is related to metabolism, energy cost, and/or perceptual parameters during both normal and artificially constrained walking. Fourteen participants with no pathology affecting gait were tested under standard conditions. Subjects walked on a motorized treadmill at speeds derived from their SSWS as a continuous protocol. RPE scores (CR10) and expired air to calculate energy cost (J.kg-1.m-1) and carbohydrate (CHO) oxidation rate (J.kg-1.min-1) were collected during minutes 3-4 at each speed. Eight individuals were re-tested under the same conditions within one week with a hip and knee brace to immobilize their right leg. Deflections in RPE scores (CR10) and CHO oxidation rate (J.kg-1.min-1) were not related to SSWS (five and three people had deflections in the defined range of SSWS in the constrained and unconstrained conditions, respectively) (p > 0.05). Constrained walking elicited a higher energy cost (J.kg-1.m-1) and slower SSWS (p < 0.05). SSWS did not occur at a minimum energy cost (J.kg-1.m-1) in either condition; however, the size of the disparity between the minimum energy cost and SSWS was the same (Froude number Fr = 0.09) in both conditions (p = 0.36). Perceptions of exertion can modify walking patterns, and therefore SSWS and metabolism/energy cost are not directly related. Strategies which minimize perceived exertion may enable faster walking in people with altered gait, as our findings indicate they should self-optimize to the same extent under different conditions.

13. Sequential dependencies in magnitude scaling of loudness

DEFF Research Database (Denmark)

2013-01-01

Ten normally hearing listeners used a programmable sone-potentiometer knob to adjust the level of a 1000-Hz sinusoid to match the loudness of numbers presented to them in a magnitude production task. Three different power-law exponents (0.15, 0.30, and 0.60) and a log-law with equal steps in dB were used to program the sone-potentiometer. The knob settings systematically influenced the form of the loudness function. Time series analysis was used to assess the sequential dependencies in the data, which increased with increasing exponent and were greatest for the log-law. It would be possible, therefore, to choose knob properties that minimized these dependencies. When the sequential dependencies were removed from the data, the slope of the loudness functions did not change, but the variability decreased. Sequential dependencies were only present when the level of the tone on the previous trial...

14. Long-term Outcome of Unconstrained Primary Total Hip Arthroplasty in Ipsilateral Residual Poliomyelitis.

Science.gov (United States)

Buttaro, Martín A; Slullitel, Pablo A; García Mansilla, Agustín M; Carlucci, Sofía; Comba, Fernando M; Zanotti, Gerardo; Piccaluga, Francisco

2017-03-01

Incapacitating articular sequelae in the hip joint have been described for patients with late effects of poliomyelitis. In these patients, total hip arthroplasty (THA) has been associated with a substantial rate of dislocation. This study was conducted to evaluate the long-term clinical and radiologic outcomes of unconstrained THA in this specific group of patients. The study included 6 patients with ipsilateral polio who underwent primary THA between 1985 and 2006. Patients with polio who underwent THA on the nonparalytic limb were excluded. Mean follow-up was 119.5 months (minimum, 84 months). Clinical outcomes were evaluated with the modified Harris Hip Score (mHHS) and the visual analog scale (VAS) pain score. Radiographs were examined to identify the cause of complications and determine the need for revision surgery. All patients showed significantly better functional results when preoperative and postoperative mHHS (67.58 vs 87.33, respectively; P=.002) and VAS pain score (7.66 vs 2, respectively; P=.0003) were compared. Although 2 cases of instability were diagnosed, only 1 patient needed acetabular revision as a result of component malpositioning. None of the patients had component loosening, osteolysis, or infection. Unconstrained THA in the affected limb of patients with poliomyelitis showed favorable long-term clinical results, with improved function and pain relief. Nevertheless, instability may be a more frequent complication in this group of patients compared with the general population. [Orthopedics. 2017; 40(2):e255-e261.]. Copyright 2016, SLACK Incorporated.

15. Unconstrained monitoring of long-term heart and breath rates during sleep

International Nuclear Information System (INIS)

Chen, Wenxi; Zhu, Xin; Wei, Daming; Nemoto, Tetsu; Sugitani, Kayo; Kitamura, Kei-ichiro

2008-01-01

An unconstrained method for the long-term monitoring of heart and breath rates during sleep is proposed. The system includes a sensor unit and a web-based network module. The sensor unit is set beneath a pillow to pick up the pressure variations from the head induced by inhalation/exhalation movements and heart pulsation during sleep. The measured pressure signal was digitized and transferred to a remote database server via the network module. A wavelet-based algorithm was employed to detect the heart and breath rates, as well as body movement, during sleep. The overall system was evaluated in a six-month trial with a female subject. Profiles of the heart and breath rates on a beat-by-beat and daily basis were obtained. Movements during sleep were also estimated. The results show that the daily average percentage of undetectable periods (UPs) during 881.6 sleep hours over a 180-day period was 17.2%. A total of 89.2% of sleep hours had a UP of not more than 25%. The profile of the heart rate revealed a periodic property that corresponded to the subject's monthly menstrual cycle. Our system shows promise as a long-term unconstrained monitor for heart and breath rates, and for other physiological parameters related to the quality of sleep and the regularity of the menstrual cycle. (note)

16. Does Cash Contribute to Value? A Comparison of Constrained and Unconstrained Firms in China and Germany

Directory of Open Access Journals (Sweden)

Wei Zhang

2015-11-01

Full Text Available A fundamental characteristic of emerging markets is the underdevelopment of legal institutions and financial markets. Therefore, the marginal value of a firm's cash holdings in emerging countries can be lower than 1, due to high agency costs resulting from poor external corporate governance. However, the marginal value of cash may also be high in emerging markets because the information asymmetry between current and new providers of funds is high, which means that it is difficult to access the (low-quality) capital markets. For the industrialized countries of China and Germany, we study whether corporate cash holdings contribute to shareholder value in both constrained and unconstrained firms. In contradiction to previous literature on emerging markets, we find that the marginal value of cash is not smaller than 1 in China, so that agency costs do not dominate. We do, however, find marginal values of cash lower than 1 for unconstrained firms in both countries, implying that in these firms agency costs of cash holdings exist. For constrained firms we find marginal values significantly larger than 1 in both countries. This indicates difficulties in accessing the financial markets for these firms. These difficulties prove to be larger in China than in Germany for small and service firms, but not for high-growth firms.

17. Unconstrained N=2 matter, Yang-Mills and supergravity theories in harmonic superspace

International Nuclear Information System (INIS)

Galperin, A.; Kalitzin, S.; Sokatchev, E.

1984-04-01

A new approach to N=2 supersymmetry based on the concept of harmonic superspace is proposed and is used to give an unconstrained superfield geometric description of N=2 super Yang-Mills and supergravity theories as well as of matter N=2 hypermultiplets. The harmonic N=2 superspace has as independent coordinates, in addition to the usual ones, the isospinor harmonics u_i^(±) on the sphere SU(2)/U(1). The role of u_i^(±) is to relate the SU(2) group realized on the component fields to a U(1) group acting on the relevant superfields. Their introduction makes it possible to SU(2)-covariantize the notion of Grassmann analyticity. Crucial for our construction is the existence of an analytic subspace of the general harmonic N=2 superspace. The hypermultiplet superfields and the true prepotentials (pre-prepotentials) of N=2 super Yang-Mills and supergravity are unconstrained superfunctions over this analytic subspace. The pre-prepotentials have a clear geometric interpretation as gauge connections with respect to the internal SU(2)/U(1)-directions. A radically new feature arises: the number of gauge and auxiliary degrees of freedom becomes infinite while the number of physical degrees of freedom remains finite. Other new results are the massive N=2 Yang-Mills theory and various off-shell self-interactions of hypermultiplets. The propagators for matter and Yang-Mills superfields are given. (author)

18. Assessment of patient functional performance in different knee arthroplasty designs during unconstrained squat.

Science.gov (United States)

Verdini, Federica; Zara, Claudio; Leo, Tommaso; Mengarelli, Alessandro; Cardarelli, Stefano; Innocenti, Bernardo

2017-01-01

In this paper, the squat, termed unconstrained by the authors because it is performed without constraints on foot position, speed, or the maximum knee angle to be reached, was tested as a motor task revealing differences in functional performance after knee arthroplasty. It involves large joint ranges of motion, does not compromise joint safety and requires accurate control strategies to maintain balance. Motion capture techniques were used to study the squat in a healthy control group (CTR) and in three groups, each characterised by a specific knee arthroplasty design: a Total Knee Arthroplasty (TKA), a Mobile Bearing and a Fixed Bearing Unicompartmental Knee Arthroplasty (respectively MBUA and FBUA). The squat was analysed during the descent, maintenance and ascent phases and described by speed, angular kinematics of the lower and upper body, the Center of Pressure (CoP) trajectory and the muscle activation timing of the quadriceps and biceps femoris. Compared to CTR, for TKA and MBUA the knee maximum flexion was lower, the vertical speed during descent and ascent was reduced and the duration of the whole movement was longer. CoP mean distance was higher for all arthroplasty groups during descent, as was CoP mean velocity for MBUA and TKA during ascent and descent. The unconstrained squat is able to reveal differences in functional performance among control and arthroplasty groups and between different arthroplasty designs. Considering the similarity index calculated for the variables showing statistical significance, FBUA performance appears to be closest to that of the CTR group. Level of evidence: III a.

19. Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments

Directory of Open Access Journals (Sweden)

Annette Mossel

2015-12-01

Full Text Available In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments: (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system's capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m.

20. Ethanol self-administration in serotonin transporter knockout mice: unconstrained demand and elasticity.

Science.gov (United States)

Lamb, R J; Daws, L C

2013-10-01

Low serotonin function is associated with alcoholism, leading to speculation that increasing serotonin function could decrease ethanol consumption. Mice with one or two deletions of the serotonin transporter (SERT) gene have increased extracellular serotonin. To examine the relationship between SERT genotype and motivation for alcohol, we compared ethanol self-administration in mice with zero (knockout, KO), one (HET) or two copies (WT) of the SERT gene. All three genotypes learned to self-administer ethanol. The SSRI, fluvoxamine, decreased responding for ethanol in the HET and WT, but not the KO mice. When tested under a progressive ratio schedule, KO mice had lower breakpoints than HET or WT. As work requirements were increased across sessions, behavioral economic analysis of ethanol self-administration indicated that the decreased breakpoint in KO as compared to HET or WT mice was a result of lower levels of unconstrained demand, rather than differences in elasticity, i.e. the proportional decreases in ethanol earned with increasing work requirements were similar across genotypes. The difference in unconstrained demand was unlikely to result from motor or general motivational factors, as both WT and KO mice responded at high levels for a 50% condensed milk solution. As elasticity is hypothesized to measure essential value, these results indicate that KO mice value ethanol similarly to WT or HET mice despite having lower breakpoints for ethanol. © 2013 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.

2. A one-sided sequential test

Energy Technology Data Exchange (ETDEWEB)

Racz, A.; Lux, I. [Hungarian Academy of Sciences, Budapest (Hungary). Atomic Energy Research Inst.

1996-04-16

The applicability of the classical sequential probability ratio testing (SPRT) for early failure detection problems is limited by the fact that there is an extra time delay between the occurrence of the failure and its first recognition. Chien and Adams developed a method to minimize this time for the case when the problem can be formulated as testing the mean value of a Gaussian signal. In our paper we propose a procedure that can be applied for both mean and variance testing and that minimizes the time delay. The method is based on a special parametrization of the classical SPRT. The one-sided sequential tests (OSST) can reproduce the results of the Chien-Adams test when applied for mean values. (author).
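
As background for the abstract above, the classical SPRT it parametrizes can be sketched for the Gaussian-mean case. The one-sided (OSST) parametrization of Racz and Lux is specific to the paper and not reproduced here; the following is only the standard two-threshold Wald test, with error rates and the helper name `sprt_gaussian_mean` as illustrative assumptions:

```python
import math

def sprt_gaussian_mean(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Classical SPRT for H0: mean = mu0 vs H1: mean = mu1, known sigma.

    Returns (decision, samples_used), where decision is "accept H0",
    "accept H1", or "continue" if neither threshold was crossed.
    """
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood ratio increment for one Gaussian observation
        llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma ** 2
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue", len(samples)
```

With mu0 = 0, mu1 = 1, sigma = 1 and observations near 1, the log-likelihood ratio drifts upward and crosses the upper threshold after a handful of samples; the time-to-decision is exactly the delay the one-sided variant seeks to minimize.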

3. Sequential stochastic optimization

CERN Document Server

Cairoli, Renzo

1996-01-01

Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. This book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-parameter martingales.

4. Sequential memory: Binding dynamics

Science.gov (United States)

Afraimovich, Valentin; Gong, Xue; Rabinovich, Mikhail

2015-10-01

Temporal order memories are critical for everyday animal and human functioning. Experiments and our own experience show that the binding or association of various features of an event together and the maintenance of multimodal events in sequential order are the key components of any sequential memory—episodic, semantic, working, etc. We study the robustness of binding sequential dynamics based on our previously introduced model in the form of generalized Lotka-Volterra equations. In the phase space of the model there exists a multi-dimensional binding heteroclinic network consisting of saddle equilibrium points and heteroclinic trajectories joining them. We prove here the robustness of the binding sequential dynamics, i.e., the feasibility phenomenon for coupled heteroclinic networks: for each collection of successive heteroclinic trajectories inside the unified networks, there is an open set of initial points such that the trajectory going through each of them follows the prescribed collection, staying in a small neighborhood of it. We show also that the symbolic complexity function of the system restricted to this neighborhood is a polynomial of degree L - 1, where L is the number of modalities.
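
The sequential-switching behavior of generalized Lotka-Volterra dynamics described above can be illustrated with a minimal sketch. The three-mode asymmetric competition matrix below is a standard winnerless-competition example, not the authors' fitted model; the dominant mode visits each saddle in turn along the heteroclinic network:

```python
def glv_step(x, rho, dt=0.01):
    """One Euler step of generalized Lotka-Volterra:
       dx_i/dt = x_i * (1 - sum_j rho[i][j] * x_j).
       A small floor keeps modes from collapsing to exactly zero."""
    n = len(x)
    return [
        max(xi + dt * xi * (1.0 - sum(rho[i][j] * x[j] for j in range(n))), 1e-12)
        for i, xi in enumerate(x)
    ]

# Asymmetric competition matrix producing sequential (winnerless) switching
rho = [[1.0, 0.5, 2.0],
       [2.0, 1.0, 0.5],
       [0.5, 2.0, 1.0]]

x = [0.9, 0.05, 0.05]
trajectory = []
for t in range(12000):                  # 120 time units at dt = 0.01
    x = glv_step(x, rho)
    trajectory.append(max(range(3), key=lambda i: x[i]))  # dominant mode index
```

Each mode dominates for a stretch before the trajectory slides along a heteroclinic connection to the next saddle, so the dominant-index sequence cycles through all three modalities.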

5. Sequential Dependencies in Driving

Science.gov (United States)

Doshi, Anup; Tran, Cuong; Wilder, Matthew H.; Mozer, Michael C.; Trivedi, Mohan M.

2012-01-01

The effect of recent experience on current behavior has been studied extensively in simple laboratory tasks. We explore the nature of sequential effects in the more naturalistic setting of automobile driving. Driving is a safety-critical task in which delayed response times may have severe consequences. Using a realistic driving simulator, we find…

6. Mining compressing sequential problems

NARCIS (Netherlands)

Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

2012-01-01

Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

7. Unconstrained tripolar implants for primary total hip arthroplasty in patients at risk for dislocation.

Science.gov (United States)

Guyen, Olivier; Pibarot, Vincent; Vaz, Gualter; Chevillotte, Christophe; Carret, Jean-Paul; Bejui-Hugues, Jacques

2007-09-01

We performed a retrospective study of 167 primary total hip arthroplasty (THA) procedures in 163 patients at high risk for instability to assess the reliability of unconstrained tripolar implants (a press-fit outer metal shell articulating with a bipolar polyethylene component) in preventing dislocations. Eighty-four percent of the patients had at least 2 risk factors for dislocation. The mean follow-up length was 40.2 months. No dislocation was observed. Harris hip scores improved significantly. Six hips were revised, and no aseptic loosening of the cup was observed. The tripolar implant was extremely successful in achieving stability. However, because of the current lack of data documenting polyethylene wear at the additional bearing, the routine use of tripolar implants in primary THA is discouraged and should be considered at the present time only for selected patients at high risk for dislocation and with limited activities.

8. Steepest descent method implementation on unconstrained optimization problem using C++ program

Science.gov (United States)

Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.

2018-03-01

Steepest Descent is known as the simplest gradient method. Recently, much research has been done on obtaining an appropriate step size that reduces the objective function value progressively. In this paper, the properties of the steepest descent method from the literature are reviewed together with the advantages and disadvantages of each step-size procedure. The development of the steepest descent method with respect to its step-size procedure is discussed. In order to test the performance of each step size, we run a steepest descent procedure in a C++ program. We implemented it on an unconstrained optimization test problem with two variables, then compared the numerical results of each step-size procedure. Based on the numerical experiments, we conclude the general computational features and weaknesses of each procedure in each case of problem.
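
The abstract's C++ program is not reproduced here, but the method it benchmarks can be sketched in Python. The step-size rule below (Armijo backtracking) is one common procedure of the kind the paper compares, and the two-variable test problem and all parameter values are assumptions for illustration:

```python
def steepest_descent(f, grad, x0, tol=1e-8, max_iter=10000):
    """Steepest descent with an Armijo backtracking line search."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < tol:   # stop on small gradient
            break
        t, c, shrink = 1.0, 1e-4, 0.5
        fx = f(x)
        gg = sum(gi * gi for gi in g)
        # Backtrack until the sufficient-decrease (Armijo) condition holds:
        # f(x - t*g) <= f(x) - c * t * ||g||^2
        while f([xi - t * gi for xi, gi in zip(x, g)]) > fx - c * t * gg:
            t *= shrink
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

# A two-variable test problem in the spirit of the abstract (assumed):
# f(x, y) = (x - 1)^2 + 10 * (y + 2)^2, minimizer (1, -2)
f = lambda v: (v[0] - 1) ** 2 + 10 * (v[1] + 2) ** 2
grad = lambda v: [2 * (v[0] - 1), 20 * (v[1] + 2)]
xmin = steepest_descent(f, grad, [0.0, 0.0])
```

On this mildly ill-conditioned quadratic the iterates converge linearly to (1, -2), which is the typical zig-zag behavior the steepest descent literature discusses.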

9. Unconstrained Capacities of Quantum Key Distribution and Entanglement Distillation for Pure-Loss Bosonic Broadcast Channels.

Science.gov (United States)

Takeoka, Masahiro; Seshadreesan, Kaushik P; Wilde, Mark M

2017-10-13

We consider quantum key distribution (QKD) and entanglement distribution using a single-sender multiple-receiver pure-loss bosonic broadcast channel. We determine the unconstrained capacity region for the distillation of bipartite entanglement and secret key between the sender and each receiver, whenever they are allowed arbitrary public classical communication. A practical implication of our result is that the capacity region demonstrated drastically improves upon rates achievable using a naive time-sharing strategy, which has been employed in previously demonstrated network QKD systems. We show a simple example of a broadcast QKD protocol overcoming the limit of the point-to-point strategy. Our result is thus an important step toward opening a new framework of network channel-based quantum communication technology.

10. Unconstrained off-shell N=3 supersymmetric Yang-Mills theory

International Nuclear Information System (INIS)

Galperin, A.; Ivanov, E.; Kalitzin, S.; Ogievetsky, V.; Sokatchev, E.

1984-01-01

The harmonic superspace is used to build up an unconstrained off-shell formulation of N=3 supersymmetric Yang-Mills theory. The theory is defined in an analytic N=3 superspace having M 4 x SU(3)/(U(1)xU(1)) as an even part. The basic objects are the analytic potentials which serve as gauge connections entering the harmonic derivatives. The action is an integral over analytic superspace. The Lagrange density is surprisingly simple and it is gauge invariant up to a total harmonic derivative. The equations of motion are integrability conditions on the internal space SU(3)/(U(1)xU(1)). Jumping over the ''N=3 barrier'' became possible due to the infinite number of auxiliary fields

11. Two Modified Three-Term Type Conjugate Gradient Methods and Their Global Convergence for Unconstrained Optimization

Directory of Open Access Journals (Sweden)

Zhongbo Sun

2014-01-01

Full Text Available Two modified three-term type conjugate gradient algorithms which satisfy both the descent condition and the Dai-Liao type conjugacy condition are presented for unconstrained optimization. The first algorithm is a modification of the Hager and Zhang type algorithm in such a way that the search direction is descent and satisfies the Dai-Liao type conjugacy condition. The second, simple three-term type conjugate gradient method can generate sufficient descent directions at every iteration; moreover, this property is independent of the steplength line search. Also, the algorithms could be considered as a modification of the MBFGS method, but with a different zk. Under some mild conditions, the given methods are globally convergent for general functions, independently of the Wolfe line search. The numerical experiments show that the proposed methods are very robust and efficient.
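
One well-known way to obtain the sufficient-descent property mentioned above is a three-term direction of the Zhang-Zhou-Li type, which satisfies g'd = -||g||^2 exactly by construction. This is a generic sketch of that construction, not necessarily the paper's exact update:

```python
def three_term_direction(g_new, g_old, d_old):
    """Three-term CG direction d = -g_new + beta*d_old - theta*y,
    with y = g_new - g_old, beta = g_new'y / ||g_old||^2 and
    theta = g_new'd_old / ||g_old||^2 (Zhang-Zhou-Li type).

    The beta and theta terms cancel in g_new'd, so the direction
    satisfies g_new'd = -||g_new||^2 regardless of the line search.
    """
    y = [a - b for a, b in zip(g_new, g_old)]
    gg = sum(b * b for b in g_old)
    beta = sum(a * yi for a, yi in zip(g_new, y)) / gg
    theta = sum(a * di for a, di in zip(g_new, d_old)) / gg
    return [-a + beta * di - theta * yi
            for a, di, yi in zip(g_new, d_old, y)]
```

Because the descent identity holds for any gradients and any previous direction, the property is independent of the step-length line search, exactly as the abstract claims for its second method.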

12. New generalized conjugate gradient methods for the non-quadratic model in unconstrained optimization

International Nuclear Information System (INIS)

Al-Bayati, A.

2001-01-01

This paper presents two new conjugate gradient algorithms which use the non-quadratic model in unconstrained optimization. The first is a new generalized self-scaling variable metric algorithm based on the Sloboda generalized conjugate gradient method, which is invariant to a nonlinear scaling of a strictly convex quadratic function; the second is an interleaving between the generalized Sloboda method and the first algorithm; all these algorithms use exact line searches. Numerical comparisons over twenty test functions show that the interleaving algorithm is best overall and requires only about half the function evaluations of the Sloboda method: interleaving algorithms are likely to be preferred when the dimensionality of the problem is increased. (author). 29 refs., 1 tab

13. Unconstrained Capacities of Quantum Key Distribution and Entanglement Distillation for Pure-Loss Bosonic Broadcast Channels

Science.gov (United States)

Takeoka, Masahiro; Seshadreesan, Kaushik P.; Wilde, Mark M.

2017-10-01

We consider quantum key distribution (QKD) and entanglement distribution using a single-sender multiple-receiver pure-loss bosonic broadcast channel. We determine the unconstrained capacity region for the distillation of bipartite entanglement and secret key between the sender and each receiver, whenever they are allowed arbitrary public classical communication. A practical implication of our result is that the capacity region demonstrated drastically improves upon rates achievable using a naive time-sharing strategy, which has been employed in previously demonstrated network QKD systems. We show a simple example of a broadcast QKD protocol overcoming the limit of the point-to-point strategy. Our result is thus an important step toward opening a new framework of network channel-based quantum communication technology.

14. Taxonomic minimalism.

Science.gov (United States)

Beattle, A J; Oliver, I

1994-12-01

Biological surveys are in increasing demand while taxonomic resources continue to decline. How much formal taxonomy is required to get the job done? The answer depends on the kind of job, but it is possible that taxonomic minimalism, especially (1) the use of higher taxonomic ranks, (2) the use of morphospecies rather than species (as identified by Latin binomials), and (3) the involvement of taxonomic specialists only for training and verification, may offer advantages for biodiversity assessment, environmental monitoring and ecological research. As such, formal taxonomy remains central to the process of biological inventory and survey but resources may be allocated more efficiently. For example, if formal identification is not required, resources may be concentrated on replication and increasing sample sizes. Taxonomic minimalism may also facilitate the inclusion in these activities of important but neglected groups, especially among the invertebrates, and perhaps even microorganisms. Copyright © 1994. Published by Elsevier Ltd.

15. Sequential Power-Dependence Theory

NARCIS (Netherlands)

Buskens, Vincent; Rijt, Arnout van de

2008-01-01

Existing methods for predicting resource divisions in laboratory exchange networks do not take into account the sequential nature of the experimental setting. We extend network exchange theory by considering sequential exchange. We prove that Sequential Power-Dependence Theory—unlike

16. Modelling sequentially scored item responses

NARCIS (Netherlands)

Akkermans, W.

2000-01-01

The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is

17. A convergent overlapping domain decomposition method for total variation minimization

KAUST Repository

Fornasier, Massimo; Langer, Andreas; Schö nlieb, Carola-Bibiane

2010-01-01

In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation

18. Sequential decay of Reggeons

International Nuclear Information System (INIS)

Yoshida, Toshihiro

1981-01-01

Probabilities of meson production in the sequential decay of Reggeons, which are formed from the projectile and the target in hadron-hadron to Reggeon-Reggeon processes, are investigated. It is assumed that pair creation of heavy quarks and simultaneous creation of two antiquark-quark pairs are negligible. The leading-order terms with respect to the ratio of creation probabilities of anti s s to anti u u (anti d d) are calculated. The production cross sections in the target fragmentation region are given in terms of probabilities in the initial decay of the Reggeons and an effect of many-particle production. (author)

19. Unconstrained and Noninvasive Measurement of Swimming Behavior of Small Fish Based on Ventilatory Signals

Science.gov (United States)

Kitayama, Shigehisa; Soh, Zu; Hirano, Akira; Tsuji, Toshio; Takiguchi, Noboru; Ohtake, Hisao

Ventilatory signals are bioelectric signals reflecting the ventilatory conditions of fish, and have received recent attention as indicators for the assessment of water quality, since breathing is adjusted by the respiratory center according to changes in the underwater environment surrounding the fish. The signals are thus beginning to be used in bioassay systems for water examination. Besides ventilatory conditions, swimming behavior also contains important information for water examination. Conventional bioassay systems, however, measure only either ventilatory signals or swimming behavior. This paper proposes a new unconstrained and noninvasive measurement method that is capable of conducting ventilatory signal measurement and behavioral analysis of fish at the same time. The proposed method estimates the position and the velocity of a fish in free-swimming conditions using the power spectrum distribution of ventilatory signals measured from multiple electrodes. This allows the system to avoid using a camera system, which requires light sources. In order to validate the estimation accuracy, the position and the velocity estimated by the proposed method were compared to those obtained from video analysis. The results confirmed that the estimation error of the fish positions was within the size of the fish, and the correlation coefficient between the velocities was 0.906. The proposed method thus not only measures the ventilatory signals, but also performs behavioral analysis as accurately as a video camera.

20. Real-time construction and visualisation of drift-free video mosaics from unconstrained camera motion

Directory of Open Access Journals (Sweden)

Mateusz Brzeszcz

2015-08-01

Full Text Available This work proposes a novel approach for real-time video mosaicking facilitating drift-free mosaic construction and visualisation, with integrated frame blending and redundancy management, that is shown to be flexible to a range of varying mosaic scenarios. The approach supports unconstrained camera motion with in-sequence loop closing, variation in camera focal distance (zoom) and recovery from video sequence breaks. Real-time performance, over extended duration sequences, is realised via novel aspects of frame management within the mosaic representation, thus avoiding the high data redundancy associated with temporally dense, spatially overlapping video frame inputs. This managed set of image frames is visualised in real time using a dynamic mosaic representation of overlapping textured graphics primitives in place of the traditional globally constructed, and hence frequently reconstructed, mosaic image. Within this formulation, subsequent optimisation occurring during online construction can thus efficiently adjust relative frame positions via simple primitive position transforms. Effective visualisation is similarly facilitated by online inter-frame blending to overcome the illumination and colour variance associated with modern camera hardware. The evaluation illustrates overall robustness in video mosaic construction under a diverse range of conditions including indoor and outdoor environments, varying illumination and the presence of in-scene motion, on varying computational platforms.

1. An Improved Real-Coded Population-Based Extremal Optimization Method for Continuous Unconstrained Optimization Problems

Directory of Open Access Journals (Sweden)

Guo-Qiang Zeng

2014-01-01

Full Text Available As a novel evolutionary optimization method, extremal optimization (EO has been successfully applied to a variety of combinatorial optimization problems. However, the applications of EO in continuous optimization problems are relatively rare. This paper proposes an improved real-coded population-based EO method (IRPEO for continuous unconstrained optimization problems. The key operations of IRPEO include generation of real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to power-law probability distribution, generation of new population based on uniform random mutation, and updating the population by accepting the new population unconditionally. The experimental results on 10 benchmark test functions with the dimension N=30 have shown that IRPEO is competitive or even better than the recently reported various genetic algorithm (GA versions with different mutation operations in terms of simplicity, effectiveness, and efficiency. Furthermore, the superiority of IRPEO to other evolutionary algorithms such as original population-based EO, particle swarm optimization (PSO, and the hybrid PSO-EO is also demonstrated by the experimental results on some benchmark functions.
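
The power-law selection of bad elements described in the abstract above is the distinctive operation of extremal optimization and can be sketched as rank selection with probability proportional to k^(-tau). The value of tau and the helper name are illustrative assumptions:

```python
import random

def powerlaw_rank_select(n, tau=1.4, rng=random):
    """Pick a rank k in 1..n (1 = worst element) with probability
    proportional to k**(-tau), as in extremal optimization's selection
    of 'bad' elements for replacement. tau is an assumed parameter."""
    weights = [k ** (-tau) for k in range(1, n + 1)]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for k, w in enumerate(weights, start=1):
        acc += w
        if r <= acc:
            return k
    return n
```

The worst-ranked element is chosen most often, but any rank has nonzero probability, which is what lets EO escape local optima without an explicit temperature schedule.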

2. An Improved Quantum-Behaved Particle Swarm Optimization Algorithm with Elitist Breeding for Unconstrained Optimization.

Science.gov (United States)

Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing

2015-01-01

An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from likely local optima and guide the swarm to perform a more efficient search. During the iterative optimization process of EB-QPSO, when the criteria are met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected to be the new personal best particles and global best particle to guide the swarm for further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions in terms of better global search capability and faster convergence rate.

3. Sequencing of real-world samples using a microfabricated hybrid device having unconstrained straight separation channels.

Science.gov (United States)

Liu, Shaorong; Elkin, Christopher; Kapur, Hitesh

2003-11-01

We describe a microfabricated hybrid device that consists of a microfabricated chip containing multiple twin-T injectors attached to an array of capillaries that serve as the separation channels. A new fabrication process was employed to create two differently sized round channels in a chip. Twin-T injectors were formed by the smaller round channels that match the bore of the separation capillaries, and separation capillaries were incorporated into the injectors through the larger round channels that match the outer diameter of the capillaries. This allows for a minimum dead volume and provides a robust chip/capillary interface. This hybrid design takes full advantage of the unique chip injection scheme for DNA sequencing, such as sample stacking and purification and a uniform signal intensity profile, while employing long straight capillaries for the separations. In essence, the separation channel length is optimized for both speed and resolution since it is unconstrained by chip size. To demonstrate the reliability and practicality of this hybrid device, we sequenced over 1000 real-world samples from Human Chromosome 5 and Ciona intestinalis, prepared at the Joint Genome Institute. We achieved an average Phred20 read of 675 bases in about 70 min with a success rate of 91%. For similar types of samples on a MegaBACE 1000, the average Phred20 read is about 550-600 bases in 120 min separation time with a success rate of about 80-90%.

4. Unconstrained Enhanced Sampling for Free Energy Calculations of Biomolecules: A Review

Science.gov (United States)

Miao, Yinglong; McCammon, J. Andrew

2016-01-01

Free energy calculations are central to understanding the structure, dynamics and function of biomolecules. Yet insufficient sampling of biomolecular configurations is often regarded as one of the main sources of error. Many enhanced sampling techniques have been developed to address this issue. Notably, enhanced sampling methods based on biasing collective variables (CVs), including the widely used umbrella sampling, adaptive biasing force and metadynamics, have been discussed in a recent excellent review (Abrams and Bussi, Entropy, 2014). Here, we aim to review enhanced sampling methods that do not require predefined system-dependent CVs for biomolecular simulations and as such do not suffer from the hidden energy barrier problem as encountered in the CV-biasing methods. These methods include, but are not limited to, replica exchange/parallel tempering, self-guided molecular/Langevin dynamics, essential energy space random walk and accelerated molecular dynamics. While it is overwhelming to describe all details of each method, we provide a summary of the methods along with the applications and offer our perspectives. We conclude with challenges and prospects of the unconstrained enhanced sampling methods for accurate biomolecular free energy calculations. PMID:27453631
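
Among the CV-free methods the review above covers, replica exchange (parallel tempering) has a particularly compact core: a Metropolis criterion for swapping configurations between replicas held at different temperatures. A minimal sketch, with the function name and interface as assumptions:

```python
import math
import random

def replica_exchange_accept(beta_i, beta_j, E_i, E_j, rng=random):
    """Metropolis acceptance for swapping configurations between two
    replicas at inverse temperatures beta_i, beta_j whose current
    configurations have potential energies E_i, E_j.

    Acceptance probability is min(1, exp(delta)) with
    delta = (beta_i - beta_j) * (E_i - E_j), obtained from the ratio
    of Boltzmann weights before and after the swap.
    """
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0 or rng.random() < math.exp(delta)
```

Swaps that move the lower-energy configuration to the colder replica are always accepted; unfavorable swaps are accepted with Boltzmann probability, which preserves detailed balance in the extended ensemble.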

5. A Novel Walking Detection and Step Counting Algorithm Using Unconstrained Smartphones.

Science.gov (United States)

Kang, Xiaomin; Huang, Baoqi; Qi, Guodong

2018-01-19

Recently, with the development of artificial intelligence technologies and the popularity of mobile devices, walking detection and step counting have gained much attention since they play an important role in the fields of equipment positioning, saving energy, behavior recognition, etc. In this paper, a novel algorithm is proposed to simultaneously detect walking motion and count steps through unconstrained smartphones, in the sense that the smartphone placement is not only arbitrary but also alterable. On account of the periodicity of the walking motion and the sensitivity of gyroscopes, the proposed algorithm extracts frequency-domain features from the three-dimensional (3D) angular velocities of a smartphone through the FFT (fast Fourier transform) and identifies whether its holder is walking or not, irrespective of its placement. Furthermore, the corresponding step frequency is recursively updated to evaluate the step count in real time. Extensive experiments are conducted involving eight subjects and different walking scenarios in a realistic environment. It is shown that the proposed method achieves a precision of 93.76% and a recall of 93.65% for walking detection, and its overall performance is significantly better than other well-known methods. Moreover, the accuracy of step counting by the proposed method is 95.74%, better than that of several well-known counterparts and commercial products.
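
The frequency-domain feature at the heart of the abstract above, the dominant (step) frequency of a periodic signal, can be illustrated with a naive DFT. The paper applies an FFT to real 3D gyroscope data; the synthetic sine below and the sampling parameters are only a sketch:

```python
import math

def dominant_frequency(signal, fs):
    """Return the dominant frequency (Hz) of a real signal via a naive
    O(n^2) DFT (an FFT computes the same spectrum faster)."""
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):          # skip the DC bin
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

# Synthetic 2 Hz "step" oscillation sampled at 50 Hz for 4 s (assumed values)
fs = 50
sig = [math.sin(2 * math.pi * 2.0 * t / fs) for t in range(200)]
f_step = dominant_frequency(sig, fs)
```

A real pipeline would window the gyroscope stream and track this dominant frequency over time, which is essentially the recursive step-frequency update the abstract describes.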

6. Imaging Brain Function with Functional Near-Infrared Spectroscopy in Unconstrained Environments

Directory of Open Access Journals (Sweden)

Joana B. Balardin

2017-05-01

Full Text Available Assessing the neural correlates of motor and cognitive processes under naturalistic experimentation is challenging due to the movement constraints of traditional brain imaging technologies. The recent advent of portable technologies that are less sensitive to motion artifacts, such as functional near-infrared spectroscopy (fNIRS), has made possible the study of brain function in freely moving participants. In this paper, we describe a series of proof-of-concept experiments examining the potential of fNIRS in assessing the neural correlates of cognitive and motor processes in unconstrained environments. We show illustrative applications for practicing a sport (table tennis), playing a musical instrument (piano and violin, alone or in duo) and performing daily activities for many hours (continuous monitoring). Our results expand upon previous research on the feasibility and robustness of fNIRS to monitor brain hemodynamic changes in different real-life settings. We believe that these preliminary results showing the flexibility and robustness of fNIRS measurements may contribute by inspiring future work in the field of applied neuroscience.

7. Unconstrained steps of myosin VI appear longest among known molecular motors.

Science.gov (United States)

Ali, M Yusuf; Homma, Kazuaki; Iwane, Atsuko Hikikoshi; Adachi, Kengo; Itoh, Hiroyasu; Kinosita, Kazuhiko; Yanagida, Toshio; Ikebe, Mitsuo

2004-06-01

Myosin VI is a two-headed molecular motor that moves along an actin filament in the direction opposite to most other myosins. Previously, a single myosin VI molecule has been shown to proceed with steps that are large compared to its neck size: either it walks by somehow extending its neck, or one head slides along actin for a long distance before the other head lands. To inquire into these and other possible mechanisms of motility, we suspended an actin filament between two plastic beads and let a single myosin VI molecule carrying a bead duplex move along the actin. This configuration, unlike previous studies, allows unconstrained rotation of myosin VI around the right-handed double helix of actin. Myosin VI moved almost straight or as a right-handed spiral with a pitch of several micrometers, indicating that the molecule walks with strides slightly longer than the actin helical repeat of 36 nm. The large steps without much rotation suggest kinesin-type walking with extended and flexible necks, but how the motor moves forward with flexible necks, even under a backward load, is not clear. As an answer, we propose that a conformational change in the lifted head would facilitate landing on a forward, rather than backward, site. This mechanism may underlie the stepping of all two-headed molecular motors including kinesin and myosin V.

8. A Novel Walking Detection and Step Counting Algorithm Using Unconstrained Smartphones

Directory of Open Access Journals (Sweden)

Xiaomin Kang

2018-01-01

Full Text Available Recently, with the development of artificial intelligence technologies and the popularity of mobile devices, walking detection and step counting have gained much attention since they play an important role in the fields of equipment positioning, saving energy, behavior recognition, etc. In this paper, a novel algorithm is proposed to simultaneously detect walking motion and count steps through unconstrained smartphones, in the sense that the smartphone placement is not only arbitrary but also alterable. On account of the periodicity of the walking motion and the sensitivity of gyroscopes, the proposed algorithm extracts frequency-domain features from the three-dimensional (3D) angular velocities of a smartphone through the FFT (fast Fourier transform) and identifies whether its holder is walking or not, irrespective of its placement. Furthermore, the corresponding step frequency is recursively updated to evaluate the step count in real time. Extensive experiments are conducted involving eight subjects and different walking scenarios in a realistic environment. It is shown that the proposed method achieves a precision of 93.76% and a recall of 93.65% for walking detection, and its overall performance is significantly better than other well-known methods. Moreover, the accuracy of step counting by the proposed method is 95.74%, better than that of several well-known counterparts and commercial products.

9. Synthetic Aperture Sequential Beamforming

DEFF Research Database (Denmark)

Kortbek, Jacob; Jensen, Jørgen Arendt; Gammelmark, Kim Løkke

2008-01-01

A synthetic aperture focusing (SAF) technique denoted Synthetic Aperture Sequential Beamforming (SASB), suitable for 2D and 3D imaging, is presented. The technique differs from prior art of SAF in the sense that SAF is performed on pre-beamformed data rather than channel data. The objective is to improve and obtain a more range-independent lateral resolution compared to conventional dynamic receive focusing (DRF) without compromising frame rate. SASB is a two-stage procedure using two separate beamformers. First, a set of B-mode image lines using a single focal point in both transmit and receive is stored. The second stage applies the focused image lines from the first stage as input data. The SASB method has been investigated using simulations in Field II and by off-line processing of data acquired with a commercial scanner. The performance of SASB with a static image object is compared with DRF.

10. A multi-objective improved teaching-learning based optimization algorithm for unconstrained and constrained optimization problems

Directory of Open Access Journals (Sweden)

R. Venkata Rao

2014-01-01

Full Text Available The present work proposes a multi-objective improved teaching-learning based optimization (MO-ITLBO) algorithm for unconstrained and constrained multi-objective function optimization. The MO-ITLBO algorithm is an improved version of the basic teaching-learning based optimization (TLBO) algorithm adapted for multi-objective problems. The basic TLBO algorithm is improved to enhance its exploration and exploitation capacities by introducing the concepts of number of teachers, adaptive teaching factor, tutorial training and self-motivated learning. The MO-ITLBO algorithm uses a grid-based approach to adaptively assess the non-dominated solutions (i.e., the Pareto front) maintained in an external archive. The performance of the MO-ITLBO algorithm is assessed by implementing it on the unconstrained and constrained test problems proposed for the Congress on Evolutionary Computation 2009 (CEC 2009) competition. The performance assessment is done using the inverted generational distance (IGD) measure. The IGD measures obtained by the MO-ITLBO algorithm are compared with the IGD measures of other state-of-the-art algorithms available in the literature. Finally, lexicographic ordering is used to assess the overall performance of the competitive algorithms. Results show that the proposed MO-ITLBO algorithm obtained the 1st rank in the optimization of unconstrained test functions and the 3rd rank in the optimization of constrained test functions.
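
The inverted generational distance (IGD) measure used in the assessment above has a simple definition: the average Euclidean distance from each point of a reference Pareto front to its nearest obtained solution (lower is better). A minimal sketch:

```python
def igd(reference_front, obtained_front):
    """Inverted generational distance: for each reference point, take the
    distance to the nearest obtained solution, then average over the
    reference front. Zero means the reference front is fully covered."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return sum(min(dist(r, s) for s in obtained_front)
               for r in reference_front) / len(reference_front)
```

Because it averages over the reference front rather than the obtained set, IGD penalizes both poor convergence and poor spread, which is why it is a common single-number score in the CEC 2009 comparisons.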

11. Quantum Inequalities and Sequential Measurements

International Nuclear Information System (INIS)

Candelpergher, B.; Grandouz, T.; Rubinx, J.L.

2011-01-01

In this article, the peculiar context of sequential measurements is chosen in order to analyze the quantum specificity in the two most famous examples of Heisenberg and Bell inequalities: Results are found at some interesting variance with customary textbook materials, where the context of initial state re-initialization is described. A key-point of the analysis is the possibility of defining Joint Probability Distributions for sequential random variables associated to quantum operators. Within the sequential context, it is shown that Joint Probability Distributions can be defined in situations where not all of the quantum operators (corresponding to random variables) do commute two by two. (authors)

12. Review of applications of TLBO algorithm and a tutorial for beginners to solve the unconstrained and constrained optimization problems

Directory of Open Access Journals (Sweden)

R. Venkata Rao

2016-01-01

Full Text Available The teaching-learning-based optimization (TLBO) algorithm has found a large number of applications in different fields of engineering and science since its introduction in 2011. The major applications are found in electrical engineering, mechanical design, thermal engineering, manufacturing engineering, civil engineering, structural engineering, computer engineering, electronics engineering, physics, chemistry, biotechnology and economics. This paper presents a review of the applications of the TLBO algorithm and a tutorial for solving unconstrained and constrained optimization problems. The tutorial is expected to be useful to beginners.
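For readers unfamiliar with TLBO, its two phases (teacher phase, learner phase) can be sketched in a few lines. The following toy implementation on a sphere test function follows the commonly published form of the algorithm; the population size, iteration count and test function are our own choices, not from the review.

```python
import random

# Toy sketch of the two TLBO phases minimizing a 2-D sphere function.
# Parameters and the test function are illustrative choices.
random.seed(0)

def sphere(x):
    return sum(v * v for v in x)

def tlbo(pop_size=20, dim=2, iters=60, lo=-5.0, hi=5.0):
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        # Teacher phase: shift learners toward the best solution, away from the mean.
        teacher = min(pop, key=sphere)
        mean = [sum(p[d] for p in pop) / pop_size for d in range(dim)]
        tf = random.choice([1, 2])  # teaching factor
        for i, x in enumerate(pop):
            cand = [x[d] + random.random() * (teacher[d] - tf * mean[d])
                    for d in range(dim)]
            if sphere(cand) < sphere(x):  # greedy acceptance
                pop[i] = cand
        # Learner phase: each learner interacts with a random partner.
        for i, x in enumerate(pop):
            j = random.randrange(pop_size)
            if j == i:
                continue
            sign = 1.0 if sphere(x) < sphere(pop[j]) else -1.0
            cand = [x[d] + sign * random.random() * (x[d] - pop[j][d])
                    for d in range(dim)]
            if sphere(cand) < sphere(x):
                pop[i] = cand
    return min(sphere(p) for p in pop)

best = tlbo()
print(best)  # should approach the optimum 0.0
```

Note TLBO's often-cited selling point: apart from population size and iteration count, it has no algorithm-specific tuning parameters.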

13. Framework for sequential approximate optimization

NARCIS (Netherlands)

Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.

2004-01-01

An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python...

14. Sequentially pulsed traveling wave accelerator

Science.gov (United States)

Caporaso, George J [Livermore, CA; Nelson, Scott D [Patterson, CA; Poole, Brian R [Tracy, CA

2009-08-18

A sequentially pulsed traveling wave compact accelerator having two or more pulse forming lines each with a switch for producing a short acceleration pulse along a short length of a beam tube, and a trigger mechanism for sequentially triggering the switches so that a traveling axial electric field is produced along the beam tube in synchronism with an axially traversing pulsed beam of charged particles to serially impart energy to the particle beam.

15. The minimally tuned minimal supersymmetric standard model

International Nuclear Information System (INIS)

Essig, Rouven; Fortin, Jean-Francois

2008-01-01

The regions in the Minimal Supersymmetric Standard Model with the minimal amount of fine-tuning of electroweak symmetry breaking are presented for general messenger scale. No a priori relations among the soft supersymmetry breaking parameters are assumed and fine-tuning is minimized with respect to all the important parameters which affect electroweak symmetry breaking. The superpartner spectra in the minimally tuned region of parameter space are quite distinctive with large stop mixing at the low scale and negative squark soft masses at the high scale. The minimal amount of tuning increases enormously for a Higgs mass beyond roughly 120 GeV

16. Comparison of Sequential and Variational Data Assimilation

Science.gov (United States)

Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht

2017-04-01

Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential for using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter, which require no additional features within the modeling process, i.e. they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. We believe this is mainly attributable to the required computation of first-order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to a lack of comparison between the two techniques. We contribute to filling this gap and present results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise to precipitation and temperature to produce better initial estimates of an HBV model. The results are computed for a hindcast period and assessed using lead-time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages and disadvantages in hydrological applications.
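The essence of the sequential approach is that each incoming observation is blended with the model forecast as it arrives. A minimal scalar Kalman-style update illustrates this; it is a toy sketch, not the Ensemble Kalman Filter / HBV setup of the study, and all values are invented.

```python
# Minimal scalar sketch of sequential assimilation: each observation is
# blended with the forecast in proportion to their uncertainties.
# This is illustrative only, not the EnKF/HBV configuration of the study.

def assimilate(forecast, var_f, obs, var_o):
    """Return analysis state and variance after one sequential update."""
    gain = var_f / (var_f + var_o)            # Kalman gain in [0, 1]
    analysis = forecast + gain * (obs - forecast)
    var_a = (1.0 - gain) * var_f              # uncertainty shrinks
    return analysis, var_a

state, var = 10.0, 4.0                        # model forecast and its variance
for obs in [12.0, 11.0, 11.5]:                # streamflow-like observations
    state, var = assimilate(state, var, obs, var_o=1.0)
print(round(state, 3), round(var, 3))         # state pulled toward the data
```

Each update both moves the state toward the observation and reduces the analysis variance, which is why sequential filters improve initial conditions for the next forecast.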

17. Sequential and parallel image restoration: neural network implementations.

Science.gov (United States)

Figueiredo, M T; Leitao, J N

1994-01-01

Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high-dimensional convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem into the energy of a predefined network, taking advantage of its energy minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm supported on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.
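A sequential updating schedule on a graded network amounts to minimizing the energy one unit at a time. The tiny sketch below shows the idea on a convex quadratic energy E(x) = 0.5·xᵀAx − bᵀx using exact coordinate updates (Gauss-Seidel style); the matrix, vector and sizes are invented for illustration, and this is not the authors' restoration network.

```python
# Toy sketch of a sequential (unit-by-unit) update schedule minimizing a
# convex quadratic energy E(x) = 0.5*x'Ax - b'x; A and b are illustrative.

A = [[4.0, 1.0], [1.0, 3.0]]     # symmetric positive definite
b = [1.0, 2.0]

def energy(x):
    quad = sum(x[i] * A[i][j] * x[j] for i in range(2) for j in range(2))
    return 0.5 * quad - sum(b[i] * x[i] for i in range(2))

x = [0.0, 0.0]
for _ in range(50):              # sequential sweeps over the units
    for i in range(2):
        # exact coordinate minimizer: set dE/dx_i = 0 with other units fixed
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(2) if j != i)) / A[i][i]

print([round(v, 4) for v in x])  # converges to the solution of A x = b
```

Because each update can only lower the energy and the energy is bounded below, the sequential schedule converges for this convex case, which is the property the neural implementations exploit at much larger scale.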

18. Remarks on sequential designs in risk assessment

International Nuclear Information System (INIS)

Seidenfeld, T.

1982-01-01

The special merits of sequential designs are reviewed in light of particular challenges that attend risk assessment for human populations. The kinds of ''statistical inference'' are distinguished, and the design problem pursued is the clash between the Neyman-Pearson and Bayesian programs of sequential design. The value of sequential designs is discussed, and Neyman-Pearson versus Bayesian sequential designs are probed in particular. Finally, cautions regarding sequential designs are considered, especially in relation to utilitarianism

19. Automatic synthesis of sequential control schemes

International Nuclear Information System (INIS)

Klein, I.

1993-01-01

Of all hard- and software developed for industrial control purposes, the majority is devoted to sequential, or binary-valued, control and only a minor part to classical linear control. Typically, the sequential parts of the controller are invoked during startup and shutdown to bring the system into its normal operating region and into some safe standby region, respectively. Despite its importance, fairly little theoretical research has been devoted to this area, and sequential control programs are therefore still created manually without much theoretical support for a systematic approach. We propose a method to create sequential control programs automatically. The main idea is to spend some effort off-line modelling the plant, and from this model generate the control strategy, that is, the plan. The plant is modelled using action structures, thereby concentrating on the actions instead of the states of the plant. In general the planning problem shows exponential complexity in the number of state variables. However, by focusing on the actions, we can identify problem classes as well as algorithms such that the planning complexity is reduced to polynomial complexity. We prove that these algorithms are sound, i.e., the generated solution will solve the stated problem, and complete, i.e., if the algorithms fail, then no solution exists. The algorithms generate a plan as a set of actions and a partial order on this set specifying the execution order. The generated plan is proven to be minimal and maximally parallel. For a larger class of problems we propose a method to split the original problem into a number of simple problems that can each be solved using one of the presented algorithms. It is also shown how a plan can be translated into a GRAFCET chart, and to illustrate these ideas we have implemented a planning tool, i.e., a system that is able to automatically create control schemes. Such a tool can of course also be used on-line if it is fast enough. This...

20. Using Unconstrained Tongue Motion as an Alternative Control Mechanism for Wheeled Mobility

OpenAIRE

Huo, Xueliang; Ghovanloo, Maysam

2009-01-01

Tongue drive system (TDS) is a tongue-operated, minimally invasive, unobtrusive, noncontact, and wireless assistive technology that infers users’ intentions by detecting and classifying their voluntary tongue motions, and translating them to user-defined commands. We have developed customized interface circuitry between an external TDS (eTDS) prototype and a commercial powered wheelchair (PWC) as well as three control strategies to evaluate the tongue motion as an alternative control input fo...

1. The minimal non-minimal standard model

International Nuclear Information System (INIS)

Bij, J.J. van der

2006-01-01

In this Letter I discuss a class of extensions of the standard model that have a minimal number of possible parameters, but can in principle explain dark matter and inflation. It is pointed out that the so-called new minimal standard model contains a large number of parameters that can be put to zero, without affecting the renormalizability of the model. With the extra restrictions one might call it the minimal (new) non-minimal standard model (MNMSM). A few hidden discrete variables are present. It is argued that the inflaton should be higher-dimensional. Experimental consequences for the LHC and the ILC are discussed

2. Sequential versus simultaneous market delineation

DEFF Research Database (Denmark)

Haldrup, Niels; Møllgaard, Peter; Kastberg Nielsen, Claus

2005-01-01

Delineation of the relevant market forms a pivotal part of most antitrust cases. The standard approach is sequential. First the product market is delineated, then the geographical market is defined. Demand and supply substitution in both the product dimension and the geographical dimension... and geographical markets. Using a unique data set for prices of Norwegian and Scottish salmon, we propose a methodology for simultaneous market delineation and we demonstrate that compared to a sequential approach conclusions will be reversed. JEL: C3, K21, L41, Q22. Keywords: relevant market, econometric delineation...

3. Sequential logic analysis and synthesis

CERN Document Server

Cavanagh, Joseph

2007-01-01

Until now, there was no single resource for actual digital system design. Using both basic and advanced concepts, Sequential Logic: Analysis and Synthesis offers a thorough exposition of the analysis and synthesis of both synchronous and asynchronous sequential machines. With 25 years of experience in designing computing equipment, the author stresses the practical design of state machines. He clearly delineates each step of the structured and rigorous design principles that can be applied to practical applications. The book begins by reviewing the analysis of combinatorial logic and Boolean algebra...

4. A new entropy function for feature extraction with the refined scores as a classifier for the unconstrained ear verification

Directory of Open Access Journals (Sweden)

Mamta Bansal

2017-05-01

Full Text Available For high-end security applications such as surveillance, there is a need for a robust system capable of verifying a person under unconstrained conditions. This paper presents an ear-based verification system using a new entropy function that changes not only the information gain function but also the information source values. This entropy function displays peculiar characteristics, such as splitting into two modes. Two types of entropy features, the Effective Gaussian Information source value and the Effective Exponential Information source value functions, are derived using the entropy function. To classify the entropy features we have devised the refined scores (RS) method, which refines the scores generated using the Euclidean distance. The experimental results vindicate the superiority of the proposed method over the literature.

5. An Algorithm for the Weighted Earliness-Tardiness Unconstrained Project Scheduling Problem

Science.gov (United States)

This research considers a project scheduling problem with the objective of minimizing weighted earliness-tardiness penalty costs, taking into account a deadline for the project and precedence relations among the activities. An exact recursive method has been proposed for solving the basic form of this problem. We present a new depth-first branch-and-bound algorithm for an extended form of the problem, in which the time value of money is taken into account by discounting the cash flows. The algorithm is extended with two bounding rules in order to reduce the size of the branch-and-bound tree. Finally, some test problems are solved and computational results are reported.
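The weighted earliness-tardiness objective itself is straightforward to evaluate for a candidate schedule; the sketch below shows the penalty computation on made-up activity data (the branch-and-bound search and discounting of the paper are not reproduced).

```python
# Illustrative evaluation of the weighted earliness-tardiness objective for
# a given schedule; activity finish times, due dates and weights are made up.

def et_cost(finish_times, due_dates, w_early, w_tardy):
    """Sum of weighted earliness and tardiness penalties over activities."""
    total = 0.0
    for f, d, we, wt in zip(finish_times, due_dates, w_early, w_tardy):
        total += we * max(0, d - f) + wt * max(0, f - d)
    return total

# three activities: one finishing early, one on time, one tardy
cost = et_cost([3, 5, 9], [4, 5, 7], [2.0, 1.0, 1.0], [5.0, 5.0, 3.0])
print(cost)  # 2*1 + 0 + 3*2 = 8.0
```

A branch-and-bound solver would evaluate this objective (plus a lower bound on the cost of unscheduled activities) at every node of the search tree.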

6. Evaluation Using Sequential Trials Methods.

Science.gov (United States)

Cohen, Mark E.; Ralls, Stephen A.

1986-01-01

Although dental school faculty as well as practitioners are interested in evaluating products and procedures used in clinical practice, research design and statistical analysis can sometimes pose problems. Sequential trials methods provide an analytical structure that is both easy to use and statistically valid. (Author/MLW)

7. Attack Trees with Sequential Conjunction

NARCIS (Netherlands)

Jhawar, Ravi; Kordy, Barbara; Mauw, Sjouke; Radomirović, Sasa; Trujillo-Rasua, Rolando

2015-01-01

We provide the first formal foundation of SAND attack trees, which are a popular extension of the well-known attack trees. The SAND attack tree formalism increases the expressivity of attack trees by introducing the sequential conjunctive operator SAND. This operator enables the modeling of...

8. Sequential determination of important ecotoxic radionuclides in nuclear waste samples

International Nuclear Information System (INIS)

Bilohuscin, J.

2016-01-01

In the dissertation thesis we focused on the development and optimization of a method for the sequential determination of the radionuclides 93Zr, 94Nb, 99Tc and 126Sn, employing the extraction chromatography sorbents TEVA (R) Resin and Anion Exchange Resin, supplied by Eichrom Industries. Prior to the attestation of the sequential separation of these radionuclides from radioactive waste samples, a unique sequential procedure for the separation of 90Sr, 239Pu and 241Am from urine matrices was tried, using molecular recognition sorbents of the AnaLig (R) series and the extraction chromatography sorbent DGA (R) Resin. In these experiments, four different sorbents were used in series for the separation, including the PreFilter Resin sorbent, which removes interfering organic materials present in raw urine. After positive results were obtained with this sequential procedure, experiments followed on 126Sn separation using the TEVA (R) Resin and Anion Exchange Resin sorbents. Radiochemical recoveries obtained from samples of radioactive evaporator concentrates and sludge showed high separation efficiency, while the values for 126Sn were under the minimum detectable activities (MDA). The activity of 126Sn was determined after ingrowth of the daughter nuclide 126mSb on an HPGe gamma detector, with minimal contamination by gamma-interfering radionuclides and decontamination factors (D_f) higher than 1400 for 60Co and 47000 for 137Cs. Based on the experience and results of these separation procedures, a complete method for the sequential separation of 93Zr, 94Nb, 99Tc and 126Sn was proposed, which included optimization steps similar to those used in previous parts of the dissertation work. Application of the sequential separation method with the TEVA (R) Resin and Anion Exchange Resin sorbents to real samples of radioactive waste provided satisfactory results and an economical, time-sparing, efficient method. (author)

9. Multi-agent sequential hypothesis testing

KAUST Repository

Kim, Kwang-Ki K.; Shamma, Jeff S.

2014-01-01

incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well

10. Standardized method for reproducing the sequential X-rays flap

International Nuclear Information System (INIS)

Brenes, Alejandra; Molina, Katherine; Gudino, Sylvia

2009-01-01

A method is validated to standardize the taking, developing and analysis of bite-wing radiographs taken sequentially, in order to compare and evaluate detectable changes in the evolution of interproximal lesions over time. A radiographic positioner called XCP® is modified by means of a rigid acrylic guide, to achieve proper positioning of the X-ray equipment cone relative to the XCP® ring and its reorientation during the sequential X-ray process. 16 subjects aged 4 to 40 years are studied, for a total of 32 registries. Two X-rays of the same block of teeth of each subject have been taken sequentially, with a minimal difference of 30 minutes between each one, before the placement of the radiographic attachment. The images have been digitized with a Super Cam® scanner and imported into software. The measurements on the X and Y axes for both X-rays were performed in order to compare them. The intraclass correlation index (ICI) has shown that the proposed method is statistically related to the measurements (mm) obtained on the X and Y axes for both sequential series of X-rays (p=0.01). The measures of central tendency and dispersion have shown that the usual occurrence is indifferent between the two measurements (mode 0.000 and S = 0.083 and 0.109) and that the probability of occurrence of different values is lower than expected. (author) [es

11. Robustness of the Sequential Lineup Advantage

Science.gov (United States)

Gronlund, Scott D.; Carlson, Curt A.; Dailey, Sarah B.; Goodsell, Charles A.

2009-01-01

A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup…

12. Sequential Probability Ratio Tests: Conservative and Robust

NARCIS (Netherlands)

Kleijnen, J.P.C.; Shi, Wen

2017-01-01

In practice, most computers generate simulation outputs sequentially, so it is attractive to analyze these outputs through sequential statistical methods such as sequential probability ratio tests (SPRTs). We investigate several SPRTs for choosing between two hypothesized values for the mean output
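Wald's classic SPRT, the basic form of the tests investigated here, accumulates a log-likelihood ratio observation by observation and stops as soon as it crosses one of two thresholds. The sketch below applies it to a Bernoulli mean; the hypothesized values, error rates and data are illustrative, not those of the paper.

```python
import math

# Sketch of Wald's sequential probability ratio test for a Bernoulli mean,
# choosing between H0: p = 0.5 and H1: p = 0.8. Thresholds use the classic
# approximations A = (1 - beta)/alpha and B = beta/(1 - alpha).

def sprt(samples, p0=0.5, p1=0.8, alpha=0.05, beta=0.05):
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0
    llr = 0.0
    for n, x in enumerate(samples, 1):
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue", len(samples)        # no decision yet: keep sampling

decision, n = sprt([1, 1, 1, 1, 1, 1, 1, 1])
print(decision, n)  # a run of successes triggers "accept H1" at n = 7
```

The appeal for simulation practice is exactly what the abstract notes: the test consumes outputs one at a time and typically needs far fewer observations than a fixed-sample test at the same error rates.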

13. Random sequential adsorption of cubes

Science.gov (United States)

Cieśla, Michał; Kubala, Piotr

2018-01-01

Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
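The random sequential adsorption rule itself is simple: propose a random placement, accept it only if it overlaps nothing already placed, and repeat until attempts are almost always rejected. The one-dimensional analogue below (unit segments on a line, the classic "car parking" setting) keeps that kinetics; the paper's cubes in 3D and its orientation sampling are not reproduced.

```python
import random

# One-dimensional analogue of random sequential adsorption: unit segments
# are dropped at random and accepted only if they overlap no earlier segment.
# (The paper studies cubes in 3D; this toy keeps only the RSA kinetics.)
random.seed(1)

def rsa_1d(line_length=100.0, attempts=20000):
    placed = []                                  # left endpoints of accepted segments
    for _ in range(attempts):
        x = random.uniform(0.0, line_length - 1.0)
        # two unit segments with left endpoints x, y overlap iff |x - y| < 1
        if all(abs(x - y) >= 1.0 for y in placed):
            placed.append(x)
    return len(placed) / line_length             # packing fraction

phi = rsa_1d()
print(round(phi, 3))
```

At saturation the 1D packing fraction tends to Rényi's jamming density of about 0.7476; a finite run like this one lands somewhat below that, and the slow approach to saturation is the "kinetics of packing growth" the abstract refers to.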

14. Using unconstrained tongue motion as an alternative control mechanism for wheeled mobility.

Science.gov (United States)

Huo, Xueliang; Ghovanloo, Maysam

2009-06-01

Tongue drive system (TDS) is a tongue-operated, minimally invasive, unobtrusive, noncontact, and wireless assistive technology that infers users' intentions by detecting and classifying their voluntary tongue motions, and translating them to user-defined commands. We have developed customized interface circuitry between an external TDS (eTDS) prototype and a commercial powered wheelchair (PWC), as well as three control strategies, to evaluate tongue motion as an alternative control input for wheeled mobility. We tested the eTDS performance in driving PWCs on 12 able-bodied human subjects, of whom 11 were novices. The results showed that all subjects could complete the navigation tasks by operating the PWC using their tongue motions. Despite little prior experience, the average time using the eTDS and the tongue was only approximately three times longer than using a joystick and the fingers. Navigation time was strongly dependent on the number of issued commands, which decreased with experience. In particular, unintended issued commands (the Midas touch problem) were rare, demonstrating the effectiveness of the tongue tracking and external magnetic field cancellation algorithms as well as the safety of the TDS for wheeled mobility.

15. Regularity of Minimal Surfaces

CERN Document Server

Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht

2010-01-01

"Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t

16. A second-order unconstrained optimization method for canonical-ensemble density-functional methods

Science.gov (United States)

Nygaard, Cecilie R.; Olsen, Jeppe

2013-03-01

A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.
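The occupation-angle trick mentioned above can be seen in a toy form: writing each occupation number as a trigonometric function of an unconstrained angle keeps it in [0, 2] automatically, with no explicit bound constraints in the Newton-Raphson step. The parametrization n = 2·sin²(θ) used below is one natural choice and an assumption on our part; the paper's exact functional form may differ.

```python
import math

# Sketch of the occupation-angle idea: an unconstrained angle theta maps to
# an occupation number confined to [0, 2]. The form n = 2*sin(theta)**2 is
# our illustrative assumption, not necessarily the paper's exact choice.

def occupation(theta):
    return 2.0 * math.sin(theta) ** 2

angles = [-1.3, 0.0, 0.4, math.pi / 2, 2.9]
numbers = [occupation(t) for t in angles]
print([round(num, 3) for num in numbers])
assert all(0.0 <= num <= 2.0 for num in numbers)  # bounds hold for any angle
```

Optimizing over the angles is then an unconstrained problem, which is what makes a plain second-order Newton-Raphson scheme applicable to the occupation numbers.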

17. Strain development in a filled epoxy resin curing under constrained and unconstrained conditions as assessed by Fibre Bragg Grating sensors

Directory of Open Access Journals (Sweden)

2007-04-01

Full Text Available The influence of adhesion to the mould wall on the released strain of a highly filled anhydride-cured epoxy resin (EP), which was hardened in an aluminium mould under constrained and unconstrained conditions, was investigated. The shrinkage-induced strain was measured by a fibre-optic sensing technique. Fibre Bragg Grating (FBG) sensors were embedded into the curing EP placed in a cylindrical mould cavity. The cure-induced strain signals were detected in both vertical and horizontal directions during isothermal curing at 75 °C for 1000 minutes. A large difference in the strain signal between the two directions could be detected for the different adhesion conditions. Under the non-adhering condition the horizontal and vertical strain-time traces were practically identical, resulting in a final compressive strain of about 3200 ppm, which is a proof of free, isotropic shrinkage. Under the constrained condition, however, horizontal shrinkage of the EP was prevented by its adhesion to the mould wall, so the curing material shrank preferentially in the vertical direction. This resulted in much higher released compressive strain signals in the vertical (10430 ppm) than in the horizontal (2230 ppm) direction. The constrained-cured EP resins are under internal stresses. Qualitative information on the residual stress state in the moulding was deduced by exploiting the birefringence of the EP.

18. On Applicability of Tunable Filter Bank Based Feature for Ear Biometrics: A Study from Constrained to Unconstrained.

Science.gov (United States)

Chowdhury, Debbrota Paul; Bakshi, Sambit; Guo, Guodong; Sa, Pankaj Kumar

2017-11-27

In this paper, an overall framework is presented for person verification using ear biometrics, with a tunable filter bank as the local feature extractor. The tunable filter bank, based on a half-band polynomial of 14th order, extracts distinct features from ear images while maintaining its frequency selectivity property. To demonstrate the applicability of the tunable filter bank to ear biometrics, recognition tests have been performed on the available constrained databases AMI, WPUT and IITD and on the unconstrained database UERC. Experiments have been conducted applying the tunable-filter-based feature extractor to subparts of the ear, with four and six subdivisions of the ear image. Analyzing the experimental results, it has been found that the tunable filter moderately succeeds in distinguishing ear features on par with the state-of-the-art features used for ear recognition. Accuracies of 70.58%, 67.01%, 81.98%, and 57.75% have been achieved on the AMI, WPUT, IITD, and UERC databases, with the Canberra distance as the underlying measure of separation. These performances indicate that the tunable filter is a candidate for recognizing humans from ear images.
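The Canberra distance used as the separation measure is a weighted variant of the Manhattan distance in which each coordinate difference is normalized by the coordinate magnitudes. A small illustrative implementation (on toy vectors, not the actual ear features):

```python
# The Canberra distance: sum over coordinates of |u_i - v_i| / (|u_i| + |v_i|),
# with 0/0 terms conventionally skipped. Toy vectors for illustration.

def canberra(u, v):
    total = 0.0
    for a, b in zip(u, v):
        denom = abs(a) + abs(b)
        if denom > 0:                 # skip coordinates where both are zero
            total += abs(a - b) / denom
    return total

d = canberra([1.0, 2.0, 0.0], [3.0, 2.0, 1.0])
print(d)  # |1-3|/4 + 0 + |0-1|/1 = 1.5
```

The per-coordinate normalization makes the measure sensitive to relative rather than absolute differences, which is often desirable for heterogeneous feature vectors.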

19. Large-scale sequential quadratic programming algorithms

Energy Technology Data Exchange (ETDEWEB)

Eldersveld, S.K.

1992-09-01

The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.

20. Minimally invasive orthognathic surgery.

Science.gov (United States)

Resnick, Cory M; Kaban, Leonard B; Troulis, Maria J

2009-02-01

Minimally invasive surgery is defined as the discipline in which operative procedures are performed in novel ways to diminish the sequelae of standard surgical dissections. The goals of minimally invasive surgery are to reduce tissue trauma and to minimize bleeding, edema, and injury, thereby improving the rate and quality of healing. In orthognathic surgery, there are two minimally invasive techniques that can be used separately or in combination: (1) endoscopic exposure and (2) distraction osteogenesis. This article describes the historical developments of the fields of orthognathic surgery and minimally invasive surgery, as well as the integration of the two disciplines. Indications, techniques, and the most current outcome data for specific minimally invasive orthognathic surgical procedures are presented.

1. Correlates of minimal dating.

Science.gov (United States)

Leck, Kira

2006-10-01

Researchers have associated minimal dating with numerous factors. The present author tested shyness, introversion, physical attractiveness, performance evaluation, anxiety, social skill, social self-esteem, and loneliness to determine the nature of their relationships with 2 measures of self-reported minimal dating in a sample of 175 college students. For women, shyness, introversion, physical attractiveness, self-rated anxiety, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. For men, physical attractiveness, observer-rated social skill, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. The patterns of relationships were not identical for the 2 indicators of minimal dating, indicating the possibility that minimal dating is not a single construct as researchers previously believed. The present author discussed implications and suggestions for future researchers.

2. Hexavalent Chromium Minimization Strategy

Science.gov (United States)

2011-05-01

Logistics Initiative - DoD Hexavalent Chromium Minimization, Non-Chrome Primer. Office of the Secretary of Defense, Hexavalent Chromium Minimization Strategy report, May 2011 (Report Documentation Page, Form Approved OMB No. 0704-0188).

3. Minimal Super Technicolor

DEFF Research Database (Denmark)

Antola, M.; Di Chiara, S.; Sannino, F.

2011-01-01

We introduce novel extensions of the Standard Model featuring a supersymmetric technicolor sector (supertechnicolor). As the first minimal conformal supertechnicolor model we consider N=4 Super Yang-Mills which breaks to N=1 via the electroweak interactions. This is a well defined, economical... between unparticle physics and Minimal Walking Technicolor. We consider also other N=1 extensions of the Minimal Walking Technicolor model. The new models allow all the standard model matter fields to acquire a mass...

4. Sequential series for nuclear reactions

International Nuclear Information System (INIS)

Izumo, Ko

1975-01-01

A new time-dependent treatment of nuclear reactions is given, in which the wave function of the compound nucleus is expanded in a sequential series of the reaction processes. The wave functions of the sequential series form another complete set of the compound nucleus in the limit Δt → 0. It is pointed out that the wave function is characterized by the quantities: the number of degrees of freedom of motion n, the period of the motion (Poincaré cycle) t_n, the delay time t_{nμ}, and the relaxation time τ_n to the equilibrium of the compound nucleus, instead of the usual quantum number λ, the energy eigenvalue E_λ, and the total width Γ_λ of resonance levels, respectively. The transition matrix elements and the yields of nuclear reactions also become functions of time, given by the Fourier transform of the usual ones. The Poincaré cycles of compound nuclei are compared with the observed correlations among resonance levels, which are about 10⁻¹⁷-10⁻¹⁶ sec for medium and heavy nuclei and about 10⁻²⁰ sec for the intermediate resonances. (auth.)

5. Minimizing Mutual Coupling

DEFF Research Database (Denmark)

2010-01-01

Disclosed herein are techniques, systems, and methods relating to minimizing mutual coupling between a first antenna and a second antenna.

6. Ruled Laguerre minimal surfaces

KAUST Repository

Skopenkov, Mikhail; Pottmann, Helmut; Grohs, Philipp

2011-01-01

A Laguerre minimal surface is an immersed surface in ℝ³ being an extremal of the functional ∫(H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces r(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed.

7. Exploring the sequential lineup advantage using WITNESS.

Science.gov (United States)

Goodsell, Charles A; Gronlund, Scott D; Carlson, Curt A

2010-12-01

Advocates claim that the sequential lineup is an improvement over simultaneous lineup procedures, but no formal (quantitatively specified) explanation exists for why it is better. The computational model WITNESS (Clark, Appl Cogn Psychol 17:629-654, 2003) was used to develop theoretical explanations for the sequential lineup advantage. In its current form, WITNESS produced a sequential advantage only by pairing conservative sequential choosing with liberal simultaneous choosing. However, this combination failed to approximate four extant experiments that exhibited large sequential advantages. Two of these experiments became the focus of our efforts because the data were uncontaminated by likely suspect position effects. Decision-based and memory-based modifications to WITNESS approximated the data and produced a sequential advantage. The next step is to evaluate the proposed explanations and modify public policy recommendations accordingly.
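The criterion-pairing mechanism described above (conservative sequential choosing paired with liberal simultaneous choosing) can be illustrated with a toy signal-detection sketch. This is not the WITNESS model itself; the normal match distributions, criterion values, and lineup size below are illustrative assumptions.

```python
import random

def false_id_rate(procedure, criterion, n_trials, rng, lineup_size=6):
    """Target-absent lineups: any identification is a false ID.
    Each lineup member's memory-match value is a standard normal draw."""
    false_ids = 0
    for _ in range(n_trials):
        matches = [rng.gauss(0.0, 1.0) for _ in range(lineup_size)]
        if procedure == "simultaneous":
            # relative judgment: best member is picked if it clears the criterion
            chose = max(matches) > criterion
        else:
            # absolute judgment: stop at the first member clearing the criterion
            chose = any(m > criterion for m in matches)
        false_ids += chose
    return false_ids / n_trials

rng = random.Random(1)
sim = false_id_rate("simultaneous", 0.5, 20_000, rng)  # liberal criterion
seq = false_id_rate("sequential", 1.5, 20_000, rng)    # conservative criterion
print(f"false ID rate: simultaneous = {sim:.3f}, sequential = {seq:.3f}")
```

With these settings the conservative sequential criterion yields far fewer false identifications on target-absent lineups, reproducing the qualitative pattern the model explanation relies on.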

8. Sequential lineup presentation: Patterns and policy

OpenAIRE

Lindsay, R C L; Mansour, Jamal K; Beaudry, J L; Leach, A-M; Bertrand, M I

2009-01-01

Sequential lineups were offered as an alternative to the traditional simultaneous lineup. Sequential lineups reduce incorrect lineup selections; however, the accompanying loss of correct identifications has resulted in controversy regarding adoption of the technique. We discuss the procedure and research relevant to (1) the pattern of results found using sequential versus simultaneous lineups; (2) reasons (theory) for differences in witness responses; (3) two methodological issues; and (4) im...

9. The Bacterial Sequential Markov Coalescent.

Science.gov (United States)

De Maio, Nicola; Wilson, Daniel J

2017-05-01

Bacteria can exchange and acquire new genetic material from other organisms directly and via the environment. This process, known as bacterial recombination, has a strong impact on the evolution of bacteria, for example, leading to the spread of antibiotic resistance across clades and species, and to the avoidance of clonal interference. Recombination hinders phylogenetic and transmission inference because it creates patterns of substitutions (homoplasies) inconsistent with the hypothesis of a single evolutionary tree. Bacterial recombination is typically modeled as statistically akin to gene conversion in eukaryotes, i.e., using the coalescent with gene conversion (CGC). However, this model can be very computationally demanding as it needs to account for the correlations of evolutionary histories of even distant loci. So, with the increasing popularity of whole genome sequencing, the need has emerged for a faster approach to model and simulate bacterial genome evolution. We present a new model that approximates the coalescent with gene conversion: the bacterial sequential Markov coalescent (BSMC). Our approach is based on a similar idea to the sequential Markov coalescent (SMC)-an approximation of the coalescent with crossover recombination. However, bacterial recombination poses hurdles to a sequential Markov approximation, as it leads to strong correlations and linkage disequilibrium across very distant sites in the genome. Our BSMC overcomes these difficulties, and shows a considerable reduction in computational demand compared to the exact CGC, and very similar patterns in simulated data. We implemented our BSMC model within new simulation software FastSimBac. In addition to the decreased computational demand compared to previous bacterial genome evolution simulators, FastSimBac provides more general options for evolutionary scenarios, allowing population structure with migration, speciation, population size changes, and recombination hotspots. FastSimBac is

10. Biased lineups: sequential presentation reduces the problem.

Science.gov (United States)

Lindsay, R C; Lea, J A; Nosworthy, G J; Fulford, J A; Hector, J; LeVan, V; Seabrook, C

1991-12-01

Biased lineups have been shown to increase significantly false, but not correct, identification rates (Lindsay, Wallbridge, & Drennan, 1987; Lindsay & Wells, 1980; Malpass & Devine, 1981). Lindsay and Wells (1985) found that sequential lineup presentation reduced false identification rates, presumably by reducing reliance on relative judgment processes. Five staged-crime experiments were conducted to examine the effect of lineup biases and sequential presentation on eyewitness recognition accuracy. Sequential lineup presentation significantly reduced false identification rates from fair lineups as well as from lineups biased with regard to foil similarity, instructions, or witness attire, and from lineups biased in all of these ways. The results support recommendations that police present lineups sequentially.

11. Immediate Sequential Bilateral Cataract Surgery

DEFF Research Database (Denmark)

Kessel, Line; Andresen, Jens; Erngaard, Ditte

2015-01-01

The aim of the present systematic review was to examine the benefits and harms associated with immediate sequential bilateral cataract surgery (ISBCS) with specific emphasis on the rate of complications, postoperative anisometropia, and subjective visual function in order to formulate evidence-based national Danish guidelines for cataract surgery. A systematic literature review in PubMed, Embase, and Cochrane central databases identified three randomized controlled trials that compared outcome in patients randomized to ISBCS or bilateral cataract surgery on two different dates. Meta-analyses were performed using the Cochrane Review Manager software. The quality of the evidence was assessed using the GRADE method (Grading of Recommendation, Assessment, Development, and Evaluation). We did not find any difference in the risk of complications or visual outcome in patients randomized to ISBCS or surgery...

12. Random and cooperative sequential adsorption

Science.gov (United States)

Evans, J. W.

1993-10-01

Irreversible random sequential adsorption (RSA) on lattices, and continuum "car parking" analogues, have long received attention as models for reactions on polymer chains, chemisorption on single-crystal surfaces, adsorption in colloidal systems, and solid state transformations. Cooperative generalizations of these models (CSA) are sometimes more appropriate, and can exhibit richer kinetics and spatial structure, e.g., autocatalysis and clustering. The distribution of filled or transformed sites in RSA and CSA is not described by an equilibrium Gibbs measure. This is the case even for the saturation "jammed" state of models where the lattice or space cannot fill completely. However exact analysis is often possible in one dimension, and a variety of powerful analytic methods have been developed for higher dimensional models. Here we review the detailed understanding of asymptotic kinetics, spatial correlations, percolative structure, etc., which is emerging for these far-from-equilibrium processes.
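For the lattice case, irreversible RSA is straightforward to simulate: attempting each deposition position exactly once in uniformly random order produces the same jammed state as repeated random attempts, since a blocked attempt can never later succeed. A sketch for dimer deposition on a 1D lattice (lattice size and seed are arbitrary), whose jamming coverage approaches the exact 1 − e⁻² ≈ 0.8647 limit:

```python
import random

def rsa_dimers(n_sites, rng):
    """Irreversibly deposit dimers (pairs of adjacent sites) until jammed.
    Visiting each candidate left-end once in random order is equivalent
    to random sequential attempts run to jamming."""
    occupied = [False] * n_sites
    order = list(range(n_sites - 1))
    rng.shuffle(order)
    for i in order:
        if not occupied[i] and not occupied[i + 1]:
            occupied[i] = occupied[i + 1] = True
    return sum(occupied) / n_sites

rng = random.Random(0)
coverage = rsa_dimers(100_000, rng)
print(f"jamming coverage ~ {coverage:.4f}  (exact limit: 1 - e^-2 = 0.8647)")
```

Note the jammed state is not a Gibbs measure: which sites remain empty depends on the deposition history, not on an energy function.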

13. Minimizing Exposure at Work

Science.gov (United States)

Pesticide Health and Safety Information, Safe Use Practices: Minimizing Exposure at Work. Pesticides - Pennsylvania State University Cooperative Extension. Personal Protective Equipment for Working

14. Minimalism. Clip and Save.

Science.gov (United States)

Hubbard, Guy

2002-01-01

Provides background information on the art movement called "Minimalism" discussing why it started and its characteristics. Includes learning activities and information on the artist, Donald Judd. Includes a reproduction of one of his art works and discusses its content. (CMK)

15. Ruled Laguerre minimal surfaces

KAUST Repository

Skopenkov, Mikhail

2011-10-30

A Laguerre minimal surface is an immersed surface in ℝ³ being an extremal of the functional ∫(H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces r(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed. To achieve invariance under Laguerre transformations, we also derive all Laguerre minimal surfaces that are enveloped by a family of cones. The methodology is based on the isotropic model of Laguerre geometry. In this model a Laguerre minimal surface enveloped by a family of cones corresponds to a graph of a biharmonic function carrying a family of isotropic circles. We classify such functions by showing that the top view of the family of circles is a pencil. © 2011 Springer-Verlag.

16. Minimal and careful processing

OpenAIRE

Nielsen, Thorkild

2004-01-01

In several standards, guidelines and publications, organic food processing is strongly associated with "minimal processing" and "careful processing". The term "minimal processing" is nowadays often used in the general food processing industry and described in literature. The term "careful processing" is used more specifically within organic food processing but is not yet clearly defined. The concept of carefulness seems to fit very well with the processing of organic foods, especially if it i...

17. Trial Sequential Methods for Meta-Analysis

Science.gov (United States)

Kulinskaya, Elena; Wood, John

2014-01-01

Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…
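The required information size mentioned above is computed like a single-trial sample size. A minimal sketch for a continuous outcome under a fixed-effect setting with a two-sided test (the function name and default error rates are illustrative):

```python
from math import ceil
from statistics import NormalDist

def required_information_size(delta, sigma, alpha=0.05, power=0.80):
    """Total participants (both arms combined) needed to detect a mean
    difference `delta` given outcome standard deviation `sigma`."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_b = z.inv_cdf(power)          # desired power
    return ceil(4 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

print(required_information_size(delta=0.5, sigma=1.0))  # 126
```

Group sequential monitoring boundaries are then applied as the accumulated meta-analytic information approaches this target, in analogy with interim analyses of a single trial.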

18. Sequential lineup laps and eyewitness accuracy.

Science.gov (United States)

Steblay, Nancy K; Dietrich, Hannah L; Ryan, Shannon L; Raczynski, Jeanette L; James, Kali A

2011-08-01

Police practice of double-blind sequential lineups prompts a question about the efficacy of repeated viewings (laps) of the sequential lineup. Two laboratory experiments confirmed the presence of a sequential lap effect: an increase in witness lineup picks from first to second lap, when the culprit was a stranger. The second lap produced more errors than correct identifications. In Experiment 2, lineup diagnosticity was significantly higher for sequential lineup procedures that employed a single versus double laps. Witnesses who elected to view a second lap made significantly more errors than witnesses who chose to stop after one lap or those who were required to view two laps. Witnesses with prior exposure to the culprit did not exhibit a sequential lap effect.

19. Multi-agent sequential hypothesis testing

KAUST Repository

Kim, Kwang-Ki K.

2014-12-15

This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.
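The canonical single-agent building block that the Bayes-risk framework above generalizes is Wald's sequential probability ratio test (SPRT). A Bernoulli sketch (the thresholds use Wald's standard approximations; the parameter choices are illustrative):

```python
import math

def sprt_bernoulli(p0, p1, alpha, beta, samples):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on a Bernoulli stream.
    alpha/beta are the target type I / type II error rates."""
    upper = math.log((1 - beta) / alpha)   # cross -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross -> accept H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

print(sprt_bernoulli(0.5, 0.8, 0.05, 0.05, [True] * 50))   # ('H1', 7)
print(sprt_bernoulli(0.5, 0.8, 0.05, 0.05, [False] * 50))  # ('H0', 4)
```

The multi-agent setting adds costs for measurements, delay, and disagreement on top of this stopping structure, which is what makes the value functions depend on belief or information states.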

20. Sequential Product of Quantum Effects: An Overview

Science.gov (United States)

Gudder, Stan

2010-12-01

This article presents an overview for the theory of sequential products of quantum effects. We first summarize some of the highlights of this relatively recent field of investigation and then provide some new results. We begin by discussing sequential effect algebras which are effect algebras endowed with a sequential product satisfying certain basic conditions. We then consider sequential products of (discrete) quantum measurements. We next treat transition effect matrices (TEMs) and their associated sequential product. A TEM is a matrix whose entries are effects and whose rows form quantum measurements. We show that TEMs can be employed for the study of quantum Markov chains. Finally, we prove some new results concerning TEMs and vector densities.
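For quantum effects represented as matrices (Hermitian with eigenvalues in [0, 1]), the standard sequential product is A ∘ B = √A B √A. A minimal numerical sketch, assuming NumPy is available; the example effects are arbitrary:

```python
import numpy as np

def psd_sqrt(a):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def sequential_product(a, b):
    """Sequential product of effects: A ∘ B = sqrt(A) B sqrt(A)."""
    ra = psd_sqrt(a)
    return ra @ b @ ra

# Two effects: Hermitian matrices with eigenvalues in [0, 1] (chosen arbitrarily).
A = np.array([[0.5, 0.2], [0.2, 0.5]])
B = np.array([[0.9, 0.0], [0.0, 0.1]])
C = sequential_product(A, B)
print(np.linalg.eigvalsh(C))  # eigenvalues again lie in [0, 1]
```

Closure follows from the operator inequality 0 ≤ √A B √A ≤ √A I √A = A ≤ I, so the sequential product of two effects is again an effect.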

1. Waste minimization assessment procedure

International Nuclear Information System (INIS)

Kellythorne, L.L.

1993-01-01

Perry Nuclear Power Plant began developing a waste minimization plan early in 1991. In March of 1991 the plan was documented following a similar format to that described in the EPA Waste Minimization Opportunity Assessment Manual. Initial implementation involved obtaining management's commitment to support a waste minimization effort. The primary assessment goal was to identify all hazardous waste streams and to evaluate those streams for minimization opportunities. As implementation of the plan proceeded, non-hazardous waste streams routinely generated in large volumes were also evaluated for minimization opportunities. The next step included collection of process and facility data which would be useful in helping the facility accomplish its assessment goals. This paper describes the resources that were used and which were most valuable in identifying both the hazardous and non-hazardous waste streams that existed on site. For each material identified as a waste stream, additional information regarding the material's use, manufacturer, EPA hazardous waste number and DOT hazard class was also gathered. Each waste stream was then evaluated for potential source reduction, recycling, re-use, re-sale, or burning for heat recovery, with disposal treated as the last viable alternative.

2. Multilevel sequential Monte Carlo samplers

KAUST Repository

Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

2016-01-01

In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

3. Multilevel sequential Monte Carlo samplers

KAUST Repository

Beskos, Alexandros

2016-08-29

In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
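The MLMC telescoping identity E[P_L] = E[P_0] + Σ_{l=1}^{L} E[P_l − P_{l−1}] can be sketched with a toy example. The geometric Brownian motion test problem, Euler discretization, and equal per-level sample counts below are illustrative assumptions; a production MLMC code would allocate samples per level to balance variance against cost.

```python
import math
import random

def euler_pair(rng, mu, sigma, level):
    """One coupled (fine, coarse) pair of Euler paths for dX = mu*X dt + sigma*X dW,
    X(0) = 1, on [0, 1]. Fine path: 2**level steps; coarse path: half as many,
    driven by the summed Brownian increments of the fine path."""
    nf = 2 ** level
    hf = 1.0 / nf
    xf, xc, dw_sum = 1.0, 1.0, 0.0
    for step in range(nf):
        dw = rng.gauss(0.0, math.sqrt(hf))
        xf += mu * xf * hf + sigma * xf * dw
        dw_sum += dw
        if step % 2 == 1:          # two fine steps = one coarse step
            xc += mu * xc * 2 * hf + sigma * xc * dw_sum
            dw_sum = 0.0
    return xf, xc

def mlmc_estimate(rng, mu=0.05, sigma=0.2, L=4, n_samples=2000):
    """Telescoping estimator: E[P_L] = E[P_0] + sum over levels of E[P_l - P_{l-1}]."""
    est = 0.0
    for level in range(L + 1):
        total = 0.0
        for _ in range(n_samples):
            xf, xc = euler_pair(rng, mu, sigma, level)
            total += xf if level == 0 else xf - xc
        est += total / n_samples
    return est

rng = random.Random(3)
est = mlmc_estimate(rng)
print(f"MLMC estimate: {est:.3f}; exact E[X(1)] = e^0.05 = {math.exp(0.05):.3f}")
```

The coupling (shared Brownian increments) makes the level differences P_l − P_{l−1} low-variance, so most samples can be spent on the cheap coarse levels; the SMC variant replaces the i.i.d. sampling at each level when such sampling is unavailable.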

4. Sequential Scintigraphy in Renal Transplantation

Energy Technology Data Exchange (ETDEWEB)

Winkel, K. zum; Harbst, H.; Schenck, P.; Franz, H. E.; Ritz, E.; Roehl, L.; Ziegler, M.; Ammann, W.; Maier-Borst, W. [Institut Fuer Nuklearmedizin, Deutsches Krebsforschungszentrum, Heidelberg, Federal Republic of Germany (Germany)

1969-05-15

Based on experience gained from more than 1600 patients with proved or suspected kidney diseases and on results of extended studies with dogs, sequential scintigraphy was performed after renal transplantation in dogs. After intravenous injection of 500 μCi of ¹³¹I-Hippuran, scintiphotos were taken during the first minute with an exposure time of 15 sec each, and thereafter with an exposure of 2 min up to at least 16 min. Several examinations were evaluated digitally. 26 examinations were performed on 11 dogs with homotransplanted kidneys. Immediately after transplantation the renal function was almost normal and the bladder was filled in due time. At the beginning of rejection the initial uptake of radioactive Hippuran was reduced. The intrarenal transport became delayed; probably the renal extraction rate decreased. Corresponding to the development of an oedema in the transplant, the uptake area increased in size. In cases of thrombosis of the main artery there was no evidence of any uptake of radioactivity in the transplant. Similar results were obtained in 41 examinations on 15 persons. Patients with postoperative anuria due to acute tubular necrosis still showed some uptake of radioactivity, contrary to those with thrombosis of the renal artery, where no uptake was found. In cases of rejection the most frequent signs were a reduced initial uptake and a delayed intrarenal transport of radioactive Hippuran. Infarction could be detected by a reduced uptake in distinct areas of the transplant. (author)

5. Sequential provisional implant prosthodontics therapy.

Science.gov (United States)

Zinner, Ira D; Markovits, Stanley; Jansen, Curtis E; Reid, Patrick E; Schnader, Yale E; Shapiro, Herbert J

2012-01-01

The fabrication and long-term use of first- and second-stage provisional implant prostheses is critical to create a favorable prognosis for function and esthetics of a fixed-implant supported prosthesis. The fixed metal and acrylic resin cemented first-stage prosthesis, as reviewed in Part I, is needed for prevention of adjacent and opposing tooth movement, pressure on the implant site as well as protection to avoid micromovement of the freshly placed implant body. The second-stage prosthesis, reviewed in Part II, should be used following implant uncovering and abutment installation. The patient wears this provisional prosthesis until maturation of the bone and healing of soft tissues. The second-stage provisional prosthesis is also a fail-safe mechanism for possible early implant failures and also can be used with late failures and/or for the necessity to repair the definitive prosthesis. In addition, the screw-retained provisional prosthesis is used if and when an implant requires removal or other implants are to be placed as in a sequential approach. The creation and use of both first- and second-stage provisional prostheses involve a restorative dentist, dental technician, surgeon, and patient to work as a team. If the dentist alone cannot do diagnosis and treatment planning, surgery, and laboratory techniques, he or she needs help by employing the expertise of a surgeon and a laboratory technician. This team approach is essential for optimum results.

6. Comparing kinematic changes between a finger-tapping task and unconstrained finger flexion-extension task in patients with Parkinson's disease.

Science.gov (United States)

Teo, W P; Rodrigues, J P; Mastaglia, F L; Thickbroom, G W

2013-06-01

7. Minimal quantization and confinement

International Nuclear Information System (INIS)

Ilieva, N.P.; Kalinowskij, Yu.L.; Nguyen Suan Han; Pervushin, V.N.

1987-01-01

A ''minimal'' version of the Hamiltonian quantization based on the explicit solution of the Gauss equation and on the gauge-invariance principle is considered. By the example of the one-particle Green function we show that the requirement for gauge invariance leads to relativistic covariance of the theory and to a more proper definition of the Faddeev-Popov integral that does not depend on the gauge choice. The ''minimal'' quantization is applied to consider the gauge-ambiguity problem and a new topological mechanism of confinement

8. Minimal Composite Inflation

DEFF Research Database (Denmark)

Channuie, Phongpichit; Jark Joergensen, Jakob; Sannino, Francesco

2011-01-01

We investigate models in which the inflaton emerges as a composite field of a four dimensional, strongly interacting and nonsupersymmetric gauge theory featuring purely fermionic matter. We show that it is possible to obtain successful inflation via non-minimal coupling to gravity, and that the u...

9. Minimalism and Speakers’ Intuitions

Directory of Open Access Journals (Sweden)

Matías Gariazzo

2011-08-01

Full Text Available Minimalism proposes a semantics that does not account for speakers’ intuitions about the truth conditions of a range of sentences or utterances. Thus, a challenge for this view is to offer an explanation of how its assignment of semantic contents to these sentences is grounded in their use. Such an account was mainly offered by Soames, but also suggested by Cappelen and Lepore. The article criticizes this explanation by presenting four kinds of counterexamples to it, and arrives at the conclusion that minimalism has not successfully answered the above-mentioned challenge.

10. Minimal open strings

International Nuclear Information System (INIS)

Hosomichi, Kazuo

2008-01-01

We study FZZT-branes and open string amplitudes in (p, q) minimal string theory. We focus on the simplest boundary changing operators in two-matrix models, and identify the corresponding operators in worldsheet theory through the comparison of amplitudes. Along the way, we find a novel linear relation among FZZT boundary states in minimal string theory. We also show that the boundary ground ring is realized on physical open string operators in a very simple manner, and discuss its use for perturbative computation of higher open string amplitudes.

11. Tradable permit allocations and sequential choice

Energy Technology Data Exchange (ETDEWEB)

MacKenzie, Ian A. [Centre for Economic Research, ETH Zuerich, Zurichbergstrasse 18, 8092 Zuerich (Switzerland)

2011-01-15

This paper investigates initial allocation choices in an international tradable pollution permit market. For two sovereign governments, we compare allocation choices that are either simultaneously or sequentially announced. We show sequential allocation announcements result in higher (lower) aggregate emissions when announcements are strategic substitutes (complements). Whether allocation announcements are strategic substitutes or complements depends on the relationship between the follower's damage function and governments' abatement costs. When the marginal damage function is relatively steep (flat), allocation announcements are strategic substitutes (complements). For quadratic abatement costs and damages, sequential announcements provide a higher level of aggregate emissions. (author)

12. Sequential Generalized Transforms on Function Space

Directory of Open Access Journals (Sweden)

Jae Gil Choi

2013-01-01

Full Text Available We define two sequential transforms on a function space C_{a,b}[0,T] induced by generalized Brownian motion process. We then establish the existence of the sequential transforms for functionals in a Banach algebra of functionals on C_{a,b}[0,T]. We also establish that any one of these transforms acts like an inverse transform of the other transform. Finally, we give some remarks about certain relations between our sequential transforms and other well-known transforms on C_{a,b}[0,T].

13. Minimal model holography

International Nuclear Information System (INIS)

Gaberdiel, Matthias R; Gopakumar, Rajesh

2013-01-01

We review the duality relating 2D W_N minimal model conformal field theories, in a large-N 't Hooft like limit, to higher spin gravitational theories on AdS_3. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Higher spin theories and holography’. (review)

14. Minimal constrained supergravity

Energy Technology Data Exchange (ETDEWEB)

2017-01-10

We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

15. Hazardous waste minimization

International Nuclear Information System (INIS)

Freeman, H.

1990-01-01

This book presents an overview of waste minimization. Covers applications of technology to waste reduction, techniques for implementing programs, incorporation of programs into R and D, strategies for private industry and the public sector, and case studies of programs already in effect

16. Minimally invasive distal pancreatectomy

NARCIS (Netherlands)

Røsok, Bård I.; de Rooij, Thijs; van Hilst, Jony; Diener, Markus K.; Allen, Peter J.; Vollmer, Charles M.; Kooby, David A.; Shrikhande, Shailesh V.; Asbun, Horacio J.; Barkun, Jeffrey; Besselink, Marc G.; Boggi, Ugo; Conlon, Kevin; Han, Ho Seong; Hansen, Paul; Kendrick, Michael L.; Kooby, David; Montagnini, Andre L.; Palanivelu, Chinnasamy; Wakabayashi, Go; Zeh, Herbert J.

2017-01-01

The first International conference on Minimally Invasive Pancreas Resection was arranged in conjunction with the annual meeting of the International Hepato-Pancreato-Biliary Association (IHPBA), in Sao Paulo, Brazil on April 19th 2016. The presented evidence and outcomes resulting from the session

17. Minimal DBM Substraction

DEFF Research Database (Denmark)

David, Alexandre; Håkansson, John; G. Larsen, Kim

In this paper we present an algorithm to compute DBM subtractions with a guaranteed minimal number of splits and disjoint DBMs to avoid any redundancy. The subtraction is one of the few operations that result in a non-convex zone, and thus, requires splitting. It is of prime importance to reduce...

18. Minimal constrained supergravity

Directory of Open Access Journals (Sweden)

N. Cribiori

2017-01-01

Full Text Available We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

19. Minimal constrained supergravity

International Nuclear Information System (INIS)

Cribiori, N.; Dall'Agata, G.; Farakos, F.; Porrati, M.

2017-01-01

We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

20. Efficacy of premixed versus sequential administration of ...

African Journals Online (AJOL)

sequential administration in separate syringes on block characteristics, haemodynamic parameters, side effect profile and postoperative analgesic requirement. Trial design: This was a prospective, randomised clinical study. Method: Sixty orthopaedic patients scheduled for elective lower limb surgery under spinal ...

1. Structural Consistency, Consistency, and Sequential Rationality.

OpenAIRE

Kreps, David M; Ramey, Garey

1987-01-01

Sequential equilibria comprise consistent beliefs and a sequentially rational strategy profile. Consistent beliefs are limits of Bayes rational beliefs for sequences of strategies that approach the equilibrium strategy. Beliefs are structurally consistent if they are rationalized by some single conjecture concerning opponents' strategies. Consistent beliefs are not necessarily structurally consistent, notwithstanding a claim by Kreps and Robert Wilson (1982). Moreover, the spirit of stru...

2. Minimal abdominal incisions

Directory of Open Access Journals (Sweden)

João Carlos Magi

2017-04-01

Full Text Available Minimally invasive procedures aim to resolve disease with minimal trauma to the body, resulting in a rapid return to activities and in reductions of infection, complications, costs and pain. Minimally incised laparotomy, sometimes referred to as minilaparotomy, is an example of such minimally invasive procedures. The aim of this study is to demonstrate the feasibility and utility of laparotomy with minimal incision based on the literature, exemplified with a case describing reconstruction of the intestinal transit through such an incision. The patient was a young, HIV-positive male in the late postoperative period of an ileotyphlectomy, terminal ileostomy and closure of the ascending colon for an acute perforated abdomen due to ileocolonic tuberculosis. The barium enema showed a proximal stump of the right colon near the ileostomy. Access to the cavity was gained through the orifice resulting from the release of the stoma, with a side-to-side ileocolonic anastomosis using a 25 mm circular stapler and manual closure of the ileal stump. These surgeries require their own tactics, such as rigor in the lysis of adhesions, tissue traction and hemostasis, and demand surgeon dexterity, but without the need for investments in technology; moreover, the learning curve is reported as being shorter than that for videolaparoscopy. Laparotomy with minimal incision should be considered a valid and viable option in the treatment of surgical conditions.

3. Minimal Walking Technicolor

DEFF Research Database (Denmark)

2007-01-01

Different theoretical and phenomenological aspects of the Minimal and Nonminimal Walking Technicolor theories have recently been studied. The goal here is to make the models ready for collider phenomenology. We do this by constructing the low energy effective theory containing scalars, pseudoscalars, vector mesons and other fields predicted by the minimal walking theory. We construct their self-interactions and interactions with standard model fields. Using the Weinberg sum rules, opportunely modified to take into account the walking behavior of the underlying gauge theory, we find interesting relations for the spin-one spectrum. We derive the electroweak parameters using the newly constructed effective theory and compare the results with the underlying gauge theory. Our analysis is sufficiently general such that the resulting model can be used to represent a generic walking technicolor...

4. Reliability assessment of restructured power systems using reliability network equivalent and pseudo-sequential simulation techniques

International Nuclear Information System (INIS)

Ding, Yi; Wang, Peng; Goel, Lalit; Billinton, Roy; Karki, Rajesh

2007-01-01

This paper presents a technique to evaluate the reliability of a restructured power system with a bilateral market. The proposed technique is based on a combination of the reliability network equivalent and pseudo-sequential simulation approaches. The reliability network equivalent techniques have been implemented in the Monte Carlo simulation procedure to reduce the computational burden of the analysis. Pseudo-sequential simulation has been used to increase the computational efficiency of the non-sequential simulation method and to model the chronological aspects of market trading and system operation. Multi-state Markov models for generation and transmission systems are proposed and implemented in the simulation. A new load shedding scheme is proposed to minimize load curtailment during generation inadequacy and network congestion. The IEEE reliability test system (RTS) is used to illustrate the technique. (author)
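
The non-sequential sampling step underlying pseudo-sequential simulation can be illustrated with a minimal generation-adequacy sketch: sample each generator's up/down state independently and estimate the loss-of-load probability. The unit data below are invented for illustration; this is not the authors' full reliability-network-equivalent procedure.

```python
import random

def lolp_estimate(units, load, n_samples=20000, seed=1):
    """Estimate loss-of-load probability by non-sequential Monte Carlo:
    sample each generator's up/down state independently and check
    whether the available capacity covers the load."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        available = sum(cap for cap, forced_outage_rate in units
                        if rng.random() >= forced_outage_rate)
        if available < load:
            failures += 1
    return failures / n_samples

# Hypothetical 3-unit system: (capacity in MW, forced outage rate).
units = [(100, 0.05), (80, 0.08), (60, 0.10)]
print(lolp_estimate(units, load=150))
```

For this toy system the exact answer is P(100 MW unit down) + P(it is up but both smaller units are down) = 0.05 + 0.95 · 0.08 · 0.10 ≈ 0.058, which the estimate approaches as the sample count grows.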

5. Legal incentives for minimizing waste

International Nuclear Information System (INIS)

Clearwater, S.W.; Scanlon, J.M.

1991-01-01

Waste minimization, or pollution prevention, has become an integral component of federal and state environmental regulation. Minimizing waste offers many economic and public relations benefits. In addition, waste minimization efforts can also dramatically reduce potential criminal requirements. This paper addresses the legal incentives for minimizing waste under current and proposed environmental laws and regulations

6. Sequential Path Model for Grain Yield in Soybean

Directory of Open Access Journals (Sweden)

2010-09-01

Full Text Available This study was performed to determine some physiological traits that affect soybean grain yield via sequential path analysis. In a factorial experiment, two cultivars (Harcor and Williams) were sown under four levels of nitrogen and two levels of weed management at the research station of Tabriz University, Iran, during 2004 and 2005. Grain yield, some yield components and physiological traits were measured. Correlation coefficient analysis showed that grain yield had significant positive and negative associations with the measured traits. A sequential path analysis was done to evaluate associations among grain yield and related traits by ordering the variables in first-, second- and third-order paths on the basis of their maximum direct effects and minimal collinearity. Two first-order variables, namely number of pods per plant and pre-flowering net photosynthesis, revealed the highest direct effects on total grain yield and explained 49, 44 and 47% of the variation in grain yield based on the 2004, 2005 and combined datasets, respectively. Four traits, i.e. post-flowering net photosynthesis, plant height, leaf area index and intercepted radiation at the bottom layer of the canopy, were found to fit as second-order variables. Pre- and post-flowering chlorophyll content, main root length and intercepted radiation at the middle layer of the canopy were placed in the third-order path. From the results it was concluded that number of pods per plant and pre-flowering net photosynthesis are the best selection criteria for grain yield in soybean.
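
The ordering logic behind a sequential path analysis can be sketched with standardized regression: direct effects are the standardized partial regression (path) coefficients, and traits with the largest direct effects become first-order variables. The sketch below uses synthetic, hypothetical trait data, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical trait data: pods per plant and net photosynthesis drive
# yield directly; plant height is correlated with pods but adds little.
pods = rng.normal(size=n)
photo = rng.normal(size=n)
height = 0.5 * pods + rng.normal(scale=0.8, size=n)
grain_yield = (0.6 * pods + 0.5 * photo + 0.1 * height
               + rng.normal(scale=0.3, size=n))

def std(v):
    return (v - v.mean()) / v.std()

X = np.column_stack([std(pods), std(photo), std(height)])
y = std(grain_yield)

# Direct effects = standardized partial regression (path) coefficients;
# the largest ones identify the first-order variables.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, c in zip(["pods/plant", "net photosynthesis", "height"], coef):
    print(f"{name}: {c:+.2f}")
```

Here pods per plant and net photosynthesis recover large direct effects while height's direct effect is small, mirroring how the study promotes the first two to the first-order path.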

7. Sound Power Minimization of Circular Plates Through Damping Layer Placement

Science.gov (United States)

Wodtke, H.-W.; Lamancusa, J. S.

1998-09-01

Damping layers, widely used for noise and vibration control of thin-walled structures, can be designed to provide an optimal trade-off between performance and weight which is of particular importance in the automotive and aircraft industry. The goal of the presented work is the minimization of sound power radiated from plates under broadband excitation by redistribution of unconstrained damping layers. The total radiated sound power is assumed to be represented by the sound power radiated at the structural resonances. Resonance tracking is performed by means of single-degree-of-freedom (SDOF)-approximations based on near-resonance responses and their frequency derivatives. Axisymmetric vibrations of circular plates under several boundary and forcing conditions are considered. Frequency dependent Young's modulus and loss factor of the damping material are taken into account. Vibration analysis is based on the finite element method (FEM) while acoustic radiation is treated by means of Rayleigh's integral formula. It is shown that, starting from a uniform damping layer distribution, substantial reduction in radiated sound power can be achieved through redistribution of the damping layers. Depending on the given situation, these reductions are not only due to amplitude reductions but also to changes in vibration shapes and frequencies.

8. Electroweak precision observables in the minimal supersymmetric standard model

International Nuclear Information System (INIS)

Heinemeyer, S.; Hollik, W.; Weiglein, G.

2006-01-01

The current status of electroweak precision observables in the Minimal Supersymmetric Standard Model (MSSM) is reviewed. We focus in particular on the W boson mass, M_W, the effective leptonic weak mixing angle, sin^2 θ_eff, the anomalous magnetic moment of the muon, (g-2)_μ, and the lightest CP-even MSSM Higgs boson mass, m_h. We summarize the current experimental situation and the status of the theoretical evaluations. An estimate of the current theoretical uncertainties from unknown higher-order corrections and from the experimental errors of the input parameters is given. We discuss future prospects for both the experimental accuracies and the precision of the theoretical predictions. Confronting the precision data with the theory predictions within the unconstrained MSSM and within specific SUSY-breaking scenarios, we analyse how well the data are described by the theory. The mSUGRA scenario with cosmological constraints yields a very good fit to the data, showing a clear preference for a relatively light mass scale of the SUSY particles. The constraints on the parameter space from the precision data are discussed, and it is shown that the prospective accuracy at the next generation of colliders will enhance the sensitivity of the precision tests very significantly.

9. The ZOOM minimization package

International Nuclear Information System (INIS)

Fischler, Mark S.; Sachs, D.

2004-01-01

A new object-oriented Minimization package is available for distribution in the same manner as CLHEP. This package, designed for use in HEP applications, has all the capabilities of Minuit, but is a re-write from scratch, adhering to modern C++ design principles. A primary goal of this package is extensibility in several directions, so that its capabilities can be kept fresh with as little maintenance effort as possible. This package is distinguished by the priority that was assigned to C++ design issues, and the focus on producing an extensible system that will resist becoming obsolete

10. Minimizing the Pacman effect

International Nuclear Information System (INIS)

Ritson, D.; Chou, W.

1997-10-01

The Pacman bunches will experience two deleterious effects: tune shift and orbit displacement. It is known that the tune shift can be compensated by arranging crossing planes 90° relative to each other at successive interaction points (IPs). This paper gives an analytical estimate of the Pacman orbit displacement for a single crossing as well as for two crossings. For the latter, it can be minimized by using equal phase advances from one IP to another. In the LHC, this displacement is in any event small and can be neglected.

11. Minimally Invasive Parathyroidectomy

Directory of Open Access Journals (Sweden)

Lee F. Starker

2011-01-01

Full Text Available Minimally invasive parathyroidectomy (MIP) is an operative approach for the treatment of primary hyperparathyroidism (pHPT). Currently, routine use of improved preoperative localization studies, cervical block anesthesia in the conscious patient, and intraoperative parathyroid hormone analyses aid in guiding surgical therapy. MIP requires less surgical dissection, causing decreased trauma to tissues, can be performed safely in the ambulatory setting, and is at least as effective as standard cervical exploration. This paper reviews advances in preoperative localization, anesthetic techniques, and intraoperative management of patients undergoing MIP for the treatment of pHPT.

12. Electroweak symmetry breaking and collider signatures in the next-to-minimal composite Higgs model

Science.gov (United States)

Niehoff, Christoph; Stangl, Peter; Straub, David M.

2017-04-01

We conduct a detailed numerical analysis of the composite pseudo-Nambu-Goldstone Higgs model based on the next-to-minimal coset SO(6)/SO(5) ≅ SU(4)/Sp(4), featuring an additional SM singlet scalar in the spectrum, which we allow to mix with the Higgs boson. We identify regions in parameter space compatible with all current experimental constraints, including radiative electroweak symmetry breaking, flavour physics, and direct searches at colliders. We find the additional scalar, with a mass predicted to be below a TeV, to be virtually unconstrained by current LHC data, but potentially in reach of run 2 searches. Promising indirect searches include rare semi-leptonic B decays, CP violation in B_s mixing, and the electric dipole moment of the neutron.

13. Electroweak symmetry breaking and collider signatures in the next-to-minimal composite Higgs model

Energy Technology Data Exchange (ETDEWEB)

Niehoff, Christoph; Stangl, Peter; Straub, David M. [Excellence Cluster Universe, TUM,Boltzmannstr. 2, 85748 Garching (Germany)

2017-04-20

We conduct a detailed numerical analysis of the composite pseudo-Nambu-Goldstone Higgs model based on the next-to-minimal coset SO(6)/SO(5) ≅ SU(4)/Sp(4), featuring an additional SM singlet scalar in the spectrum, which we allow to mix with the Higgs boson. We identify regions in parameter space compatible with all current experimental constraints, including radiative electroweak symmetry breaking, flavour physics, and direct searches at colliders. We find the additional scalar, with a mass predicted to be below a TeV, to be virtually unconstrained by current LHC data, but potentially in reach of run 2 searches. Promising indirect searches include rare semi-leptonic B decays, CP violation in B_s mixing, and the electric dipole moment of the neutron.

14. Minimal conformal model

Energy Technology Data Exchange (ETDEWEB)

Helmboldt, Alexander; Humbert, Pascal; Lindner, Manfred; Smirnov, Juri [Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany)

2016-07-01

The gauge hierarchy problem is one of the crucial drawbacks of the standard model of particle physics (SM) and thus has triggered model building over the last decades. Its most famous solution is the introduction of low-scale supersymmetry. However, without any significant signs of supersymmetric particles at the LHC to date, it makes sense to devise alternative mechanisms to remedy the hierarchy problem. One such mechanism is based on classically scale-invariant extensions of the SM, in which both the electroweak symmetry and the (anomalous) scale symmetry are broken radiatively via the Coleman-Weinberg mechanism. Apart from giving an introduction to classically scale-invariant models, the talk presents our results on obtaining a theoretically consistent minimal extension of the SM, which reproduces the correct low-scale phenomenology.

15. Minimal Reducts with Grasp

Directory of Open Access Journals (Sweden)

Iris Iddaly Mendez Gurrola

2011-03-01

Full Text Available The proper detection of a patient's level of dementia is important in order to offer suitable treatment. The diagnosis is based on certain criteria reflected in clinical examinations, from which emerge the limitations and the degree each patient is in. In order to reduce the total number of limitations to be evaluated, we used rough set theory; this theory has been applied in areas of artificial intelligence such as decision analysis, expert systems, knowledge discovery, and classification with multiple attributes. In our case the theory is applied to find the minimal limitation set, or reduct, that generates the same classification as considering all the limitations. To fulfill this purpose we developed a GRASP (Greedy Randomized Adaptive Search Procedure) algorithm.
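
As a rough illustration of the GRASP idea, finding a small reduct can be cast as a minimum set cover over attributes. The sketch below uses an invented toy instance and a deliberately simplified local search; it is not the authors' implementation.

```python
import random

def grasp_cover(universe, subsets, iters=200, alpha=0.3, seed=4):
    """GRASP sketch for minimum set cover (a stand-in for finding a
    small attribute reduct): greedy randomized construction followed
    by a simple redundancy-removal local search."""
    rng = random.Random(seed)
    best = None
    for _ in range(iters):
        uncovered, chosen = set(universe), []
        while uncovered:
            gains = {k: len(s & uncovered) for k, s in subsets.items()
                     if s & uncovered}
            gmax, gmin = max(gains.values()), min(gains.values())
            # Restricted candidate list: keep only near-greedy choices.
            rcl = [k for k, g in gains.items()
                   if g >= gmax - alpha * (gmax - gmin)]
            pick = rng.choice(rcl)
            chosen.append(pick)
            uncovered -= subsets[pick]
        # Local search: drop any subset that became redundant.
        for k in list(chosen):
            rest = set().union(*(subsets[j] for j in chosen if j != k))
            if rest >= set(universe):
                chosen.remove(k)
        if best is None or len(chosen) < len(best):
            best = chosen
    return best

subsets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}, "d": {1, 2, 3, 4, 5}}
print(grasp_cover({1, 2, 3, 4, 5}, subsets))  # → ['d']
```

The randomized candidate list (controlled by alpha) is what distinguishes GRASP from plain greedy construction: it diversifies the starting covers across restarts while the local search trims each one to a minimal set.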

16. Minimally extended SILH

International Nuclear Information System (INIS)

Chala, Mikael; Grojean, Christophe; Humboldt-Univ. Berlin; Lima, Leonardo de; Univ. Estadual Paulista, Sao Paulo

2017-03-01

Higgs boson compositeness is a phenomenologically viable scenario addressing the hierarchy problem. In minimal models, the Higgs boson is the only degree of freedom of the strong sector below the strong interaction scale. We present here the simplest extension of such a framework with an additional composite spin-zero singlet. To this end, we adopt an effective field theory approach and develop a set of rules to estimate the size of the various operator coefficients, relating them to the parameters of the strong sector and its structural features. As a result, we obtain the patterns of new interactions affecting both the new singlet and the Higgs boson's physics. We identify the characteristics of the singlet field which cause its effects on Higgs physics to dominate over the ones inherited from the composite nature of the Higgs boson. Our effective field theory construction is supported by comparisons with explicit UV models.

17. Minimally extended SILH

Energy Technology Data Exchange (ETDEWEB)

Chala, Mikael [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Valencia Univ. (Spain). Dept. de Fisica Teorica y IFIC; Durieux, Gauthier; Matsedonskyi, Oleksii [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Grojean, Christophe [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Humboldt-Univ. Berlin (Germany). Inst. fuer Physik; Lima, Leonardo de [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Univ. Estadual Paulista, Sao Paulo (Brazil). Inst. de Fisica Teorica

2017-03-15

Higgs boson compositeness is a phenomenologically viable scenario addressing the hierarchy problem. In minimal models, the Higgs boson is the only degree of freedom of the strong sector below the strong interaction scale. We present here the simplest extension of such a framework with an additional composite spin-zero singlet. To this end, we adopt an effective field theory approach and develop a set of rules to estimate the size of the various operator coefficients, relating them to the parameters of the strong sector and its structural features. As a result, we obtain the patterns of new interactions affecting both the new singlet and the Higgs boson's physics. We identify the characteristics of the singlet field which cause its effects on Higgs physics to dominate over the ones inherited from the composite nature of the Higgs boson. Our effective field theory construction is supported by comparisons with explicit UV models.

18. Dihydroazulene photoswitch operating in sequential tunneling regime

DEFF Research Database (Denmark)

Broman, Søren Lindbæk; Lara-Avila, Samuel; Thisted, Christine Lindbjerg

2012-01-01

to electrodes so that the electron transport goes by sequential tunneling. To assure weak coupling, the DHA switching kernel is modified by incorporating p-MeSC6H4 end-groups. Molecules are prepared by Suzuki cross-couplings on suitable halogenated derivatives of DHA. The synthesis presents an expansion of our..., incorporating a p-MeSC6H4 anchoring group in one end, has been placed in a silver nanogap. Conductance measurements justify that transport through both DHA (high resistivity) and VHF (low resistivity) forms goes by sequential tunneling. The switching is fairly reversible and reenterable; after more than 20 ON...

19. Asynchronous Operators of Sequential Logic Venjunction & Sequention

CERN Document Server

2011-01-01

This book is dedicated to new mathematical instruments intended for the logical modeling of the memory of digital devices. The case in point is the logic-dynamical operation named venjunction and the venjunctive function, as well as the sequention and the sequentional function. Venjunction and sequention operate within the framework of sequential logic. In the form of the corresponding equations, they organically fit the analytical expressions of Boolean algebra. Thus, a sort of symbiosis is formed, using elements of asynchronous sequential logic on the one hand and combinational logic on the other hand. So, asynchronous

20. Speciation fingerprints of binary mixtures by the optimized sequential two-phase separation

International Nuclear Information System (INIS)

Macasek, F.

1995-01-01

The analysis of separation methods suitable for chemical speciation of radionuclides and metals, and the advantages of a sequential (double) distribution technique, are discussed. The equilibria are relatively easy to control, and the method minimizes the adjustment of the matrix composition, thereby also minimizing disturbance of the original (native) state of the elements. The technique may consist of repeated solvent extraction of the sample, or of replicate equilibration with a sorbent. The common condition of applicability is a linear separation isotherm for the species, which is mostly a reasonable assumption at trace concentrations. The equations used for simultaneous fitting are written in general form. 1 tab., 1 fig., 2 refs
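
Under the linear-isotherm assumption, each contact leaves a fixed fraction q = 1/(1 + D) of a species in the original phase, so two sequential measurements give a linear system in the unknown species amounts. A minimal sketch with hypothetical distribution ratios:

```python
import numpy as np

# Hypothetical distribution ratios for two chemical species of the
# same element (linear isotherms assumed for both).
D_A, D_B = 4.0, 0.25
q_A, q_B = 1 / (1 + D_A), 1 / (1 + D_B)  # fraction left in original phase

# "True" speciation, used here only to simulate the two measurements.
a_true, b_true = 0.7, 0.3
R1 = a_true * q_A + b_true * q_B          # remaining after 1st contact
R2 = a_true * q_A**2 + b_true * q_B**2    # remaining after 2nd contact

# Recover the speciation from the two measurements: a 2x2 linear system.
M = np.array([[q_A, q_B], [q_A**2, q_B**2]])
a, b = np.linalg.solve(M, np.array([R1, R2]))
print(round(float(a), 3), round(float(b), 3))  # → 0.7 0.3
```

The same scheme generalizes to n species from n sequential contacts, provided the distribution ratios are distinct; near-equal ratios make the system ill-conditioned, which is the practical limit of the fingerprint.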

1. Radial basis function neural networks with sequential learning MRAN and its applications

CERN Document Server

Sundararajan, N; Wei Lu Ying

1999-01-01

This book presents in detail the newly developed sequential learning algorithm for radial basis function neural networks, which realizes a minimal network. This algorithm, created by the authors, is referred to as Minimal Resource Allocation Networks (MRAN). The book describes the application of MRAN in different areas, including pattern recognition, time series prediction, system identification, control, communication and signal processing. Benchmark problems from these areas have been studied, and MRAN is compared with other algorithms. In order to make the book self-contained, a review of t

2. Eyewitness confidence in simultaneous and sequential lineups: a criterion shift account for sequential mistaken identification overconfidence.

Science.gov (United States)

Dobolyi, David G; Dodson, Chad S

2013-12-01

Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PsycINFO Database Record (c) 2013 APA, all rights reserved.
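
The criterion-shift account rests on standard equal-variance signal detection indices, which are easy to compute from hit and false-alarm rates. The rates below are invented for illustration and are not the study's data:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit transform

def dprime_and_criterion(hit_rate, fa_rate):
    """Equal-variance signal-detection indices: sensitivity d' and
    decision criterion c (c > 0 indicates conservative responding)."""
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c

# Hypothetical rates: the sequential procedure trades hits for fewer
# false alarms, consistent with a more conservative criterion at
# similar sensitivity.
d_sim, c_sim = dprime_and_criterion(0.72, 0.30)   # simultaneous lineup
d_seq, c_seq = dprime_and_criterion(0.58, 0.18)   # sequential lineup
print(f"simultaneous: d'={d_sim:.2f}, c={c_sim:.2f}")
print(f"sequential:   d'={d_seq:.2f}, c={c_seq:.2f}")
```

With these made-up rates the two formats have nearly identical d' but the sequential lineup shows a clearly larger (more conservative) criterion c, which is the signature the criterion-shift account attributes to sequential presentation.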

3. An Equivalent Emission Minimization Strategy for Causal Optimal Control of Diesel Engines

Directory of Open Access Journals (Sweden)

Stephan Zentner

2014-02-01

Full Text Available One of the main challenges during the development of operating strategies for modern diesel engines is the reduction of the CO2 emissions, while complying with ever more stringent limits for the pollutant emissions. The inherent trade-off between the emissions of CO2 and pollutants renders a simultaneous reduction difficult. Therefore, an optimal operating strategy is sought that yields minimal CO2 emissions, while holding the cumulative pollutant emissions at the allowed level. Such an operating strategy can be obtained offline by solving a constrained optimal control problem. However, the final-value constraint on the cumulated pollutant emissions prevents this approach from being adopted for causal control. This paper proposes a framework for causal optimal control of diesel engines. The optimization problem can be solved online when the constrained minimization of the CO2 emissions is reformulated as an unconstrained minimization of the CO2 emissions and the weighted pollutant emissions (i.e., equivalent emissions). However, the weighting factors are not known a priori. A method for the online calculation of these weighting factors is proposed. It is based on the Hamilton–Jacobi–Bellman (HJB) equation and a physically motivated approximation of the optimal cost-to-go. A case study shows that the causal control strategy defined by the online calculation of the equivalence factor and the minimization of the equivalent emissions is only slightly inferior to the non-causal offline optimization, while being applicable to online control.
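
The reformulation can be sketched with a toy static trade-off model: for a fixed weighting factor w, the constrained problem becomes the unconstrained minimization of co2 + w · nox, and sweeping w traces out the CO2-pollutant trade-off. The engine model below is invented for illustration and is not the paper's model:

```python
import numpy as np

# Hypothetical static engine map: the control u trades CO2 against NOx.
u = np.linspace(0.0, 1.0, 201)
co2 = (1.0 - u) ** 2 + 0.2   # CO2 falls as u increases...
nox = u ** 2                  # ...while NOx rises.

def minimize_equivalent(w):
    """Unconstrained surrogate: minimize equivalent emissions
    co2 + w * nox for a fixed pollutant weighting factor w."""
    cost = co2 + w * nox
    i = int(np.argmin(cost))
    return float(u[i]), float(co2[i]), float(nox[i])

# Sweeping w: larger weights push the optimum toward lower NOx,
# mimicking a tighter cumulative pollutant constraint.
for w in (0.5, 1.0, 2.0):
    print(w, minimize_equivalent(w))
```

In the paper's setting the weight is not swept by hand but computed online from the HJB-based approximation of the cost-to-go; the sketch only shows why a single scalar weight suffices to steer the constrained optimum.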

4. Comparative efficacy of simultaneous versus sequential multiple health behavior change interventions among adults: A systematic review of randomised trials.

Science.gov (United States)

James, Erica; Freund, Megan; Booth, Angela; Duncan, Mitch J; Johnson, Natalie; Short, Camille E; Wolfenden, Luke; Stacey, Fiona G; Kay-Lambkin, Frances; Vandelanotte, Corneel

2016-08-01

Growing evidence points to the benefits of addressing multiple health behaviors rather than single behaviors. This review evaluates the relative effectiveness of simultaneously and sequentially delivered multiple health behavior change (MHBC) interventions. Secondary aims were to identify: a) the most effective spacing of sequentially delivered components; b) differences in efficacy of MHBC interventions for adoption/cessation behaviors and lifestyle/addictive behaviors; and c) differences in trial retention between simultaneously and sequentially delivered interventions. MHBC intervention trials published up to October 2015 were identified through a systematic search. Eligible trials were randomised controlled trials that directly compared simultaneous and sequential delivery of a MHBC intervention. A narrative synthesis was undertaken. Six trials met the inclusion criteria, and across these trials the behaviors targeted were smoking, diet, physical activity, and alcohol consumption. Three trials reported a difference in intervention effect between a sequential and simultaneous approach in at least one behavioral outcome. Of these, two trials favoured a sequential approach on smoking. One trial favoured a simultaneous approach on fat intake. There was no difference in retention between sequential and simultaneous approaches. There is limited evidence regarding the relative effectiveness of sequential and simultaneous approaches. Given that only three of the six trials observed a difference in intervention effectiveness for one health behavior outcome, and the relatively consistent finding that the sequential and simultaneous approaches were more effective than a usual/minimal care control condition, it appears that both approaches should be considered equally efficacious. PROSPERO registration number: CRD42015027876. Copyright © 2016 Elsevier Inc. All rights reserved.

5. Interpretability degrees of finitely axiomatized sequential theories

NARCIS (Netherlands)

Visser, Albert

In this paper we show that the degrees of interpretability of finitely axiomatized extensions-in-the-same-language of a finitely axiomatized sequential theory-like Elementary Arithmetic EA, IΣ1, or the Gödel-Bernays theory of sets and classes GB-have suprema. This partially answers a question posed

6. Interpretability Degrees of Finitely Axiomatized Sequential Theories

NARCIS (Netherlands)

Visser, Albert

2012-01-01

In this paper we show that the degrees of interpretability of finitely axiomatized extensions-in-the-same-language of a finitely axiomatized sequential theory —like Elementary Arithmetic EA, IΣ1, or the Gödel-Bernays theory of sets and classes GB— have suprema. This partially answers a question

7. S.M.P. SEQUENTIAL MATHEMATICS PROGRAM.

Science.gov (United States)

CICIARELLI, V; LEONARD, JOSEPH

A sequential mathematics program beginning with the basic fundamentals on the fourth grade level is presented. Included are an understanding of our number system, and the basic operations of working with whole numbers: addition, subtraction, multiplication, and division. Common fractions are taught in the fifth, sixth, and seventh grades. A…

8. Sequential and Simultaneous Logit: A Nested Model.

NARCIS (Netherlands)

van Ophem, J.C.M.; Schram, A.J.H.C.

1997-01-01

A nested model is presented which has both the sequential and the multinomial logit model as special cases. This model provides a simple test to investigate the validity of these specifications. Some theoretical properties of the model are discussed. In the analysis a distribution function is

9. Sensitivity Analysis in Sequential Decision Models.

Science.gov (United States)

Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

2017-02-01

Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
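
The idea of a policy acceptability analysis can be sketched on a toy two-state treatment MDP: solve the model by value iteration, sample the uncertain parameter, and report how often the base-case action remains optimal. All model numbers below are hypothetical and the analysis is deliberately simplified relative to the paper's multivariate approach:

```python
import random

def best_action(p_success, horizon=30, gamma=0.97):
    """Finite-horizon value iteration for a toy two-state treatment MDP.
    State "sick" yields reward 0; state "healthy" is absorbing and yields
    reward 1 per step.  In "sick", action 0 waits (free) and action 1
    treats at a one-time cost of 10, curing with probability p_success."""
    v_sick = v_healthy = 0.0
    a_star = 0
    for _ in range(horizon):
        q_wait = gamma * v_sick
        q_treat = -10.0 + gamma * (p_success * v_healthy
                                   + (1 - p_success) * v_sick)
        a_star = 0 if q_wait >= q_treat else 1
        v_sick = max(q_wait, q_treat)
        v_healthy = 1.0 + gamma * v_healthy
    return a_star

def acceptability(base_action, n=2000, seed=0):
    """Probabilistic sensitivity analysis: sample the uncertain cure
    probability and report how often the base-case action stays optimal."""
    rng = random.Random(seed)
    hits = sum(best_action(rng.uniform(0.3, 0.9)) == base_action
               for _ in range(n))
    return hits / n

base = best_action(0.6)   # base-case estimate of the cure probability
print(base, round(acceptability(base), 2))
```

Repeating the acceptability computation at several willingness-to-accept thresholds (or cost levels) would trace out the policy acceptability curve described in the abstract; here a single uncertain parameter keeps the sketch short.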

10. Sequential models for coarsening and missingness

NARCIS (Netherlands)

Gill, R.D.; Robins, J.M.

1997-01-01

In a companion paper we described what intuitively would seem to be the most general possible way to generate Coarsening at Random mechanisms: a sequential procedure called randomized monotone coarsening. Counterexamples showed that CAR mechanisms exist which cannot be represented in this way. Here we

11. Sequential motor skill: cognition, perception and action

NARCIS (Netherlands)

Ruitenberg, M.F.L.

2013-01-01

Discrete movement sequences are assumed to be the building blocks of more complex sequential actions that are present in our everyday behavior. The studies presented in this dissertation address the (neuro)cognitive underpinnings of such movement sequences, in particular in relationship to the role

12. Sequential decoders for large MIMO systems

KAUST Repository

Ali, Konpal S.; Abediseid, Walid; Alouini, Mohamed-Slim

2014-01-01

the Sequential Decoder using the Fano Algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity

13. A framework for sequential multiblock component methods

NARCIS (Netherlands)

Smilde, A.K.; Westerhuis, J.A.; Jong, S.de

2003-01-01

Multiblock or multiset methods are starting to be used in chemistry and biology to study complex data sets. In chemometrics, sequential multiblock methods are popular; that is, methods that calculate one component at a time and use deflation for finding the next component. In this paper a framework

14. Classical and sequential limit analysis revisited

Science.gov (United States)

Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi

2018-04-01

Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity - in the absence of hardening and within a linearized geometrical framework -, sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity - although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.

15. Sequential spatial processes for image analysis

NARCIS (Netherlands)

M.N.M. van Lieshout (Marie-Colette); V. Capasso

2009-01-01

We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high level vision problems. We focus on the problem of tracking a variable number of moving objects

16. Sequential spatial processes for image analysis

NARCIS (Netherlands)

Lieshout, van M.N.M.; Capasso, V.

2009-01-01

We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high level vision problems. We focus on the problem of tracking a variable number of moving objects through a video

17. Sequential Analysis: Hypothesis Testing and Changepoint Detection

Science.gov (United States)

2014-07-11

maintains the flexibility of deciding sooner than the fixed sample size procedure at the price of some lower power [13, 514]. The sequential probability... markets, detection of signals with unknown arrival time in seismology, navigation, radar and sonar signal processing, speech segmentation, and the... skimming cruise missile can yield a significant increase in the probability of raid annihilation. Furthermore, usually detection systems are

18. STABILIZED SEQUENTIAL QUADRATIC PROGRAMMING: A SURVEY

Directory of Open Access Journals (Sweden)

Damián Fernández

2014-12-01

Full Text Available We review the motivation for, the current state of the art in convergence results for, and some open questions concerning the stabilized version of the sequential quadratic programming algorithm for constrained optimization. We also discuss the tools required for its local convergence analysis, globalization challenges, and extensions of the method to more general variational problems.

19. Truly costly sequential search and oligopolistic pricing

NARCIS (Netherlands)

Janssen, Maarten C W; Moraga-González, José Luis; Wildenbeest, Matthijs R.

We modify the paper of Stahl (1989) [Stahl, D.O., 1989. Oligopolistic pricing with sequential consumer search. American Economic Review 79, 700-12] by relaxing the assumption that consumers obtain the first price quotation for free. When all price quotations are costly to obtain, the unique

20. How to Read the Tractatus Sequentially

Directory of Open Access Journals (Sweden)

Tim Kraft

2016-11-01

Full Text Available One of the unconventional features of Wittgenstein's Tractatus Logico-Philosophicus is its use of an elaborate and detailed numbering system. Recently, Bazzocchi, Hacker and Kuusela have argued that the numbering system means that the Tractatus must be read and interpreted not as a sequentially ordered book, but as a text with a two-dimensional, tree-like structure. Apart from being able to explain how the Tractatus was composed, the tree reading allegedly solves exegetical issues both on the local level (e.g. how 4.02 fits into the series of remarks surrounding it) and on the global level (e.g. the relation between ontology and picture theory, solipsism and the eye analogy, resolute and irresolute readings). This paper defends the sequential reading against the tree reading. After presenting the challenges generated by the numbering system and the two accounts as attempts to solve them, it is argued that Wittgenstein's own explanation of the numbering system, anaphoric references within the Tractatus, and the exegetical issues mentioned above do not favour the tree reading, but a version of the sequential reading. This reading maintains that the remarks of the Tractatus form a sequential chain: the role of the numbers is to indicate how remarks on different levels are interconnected to form a concise, surveyable and unified whole.

1. Adult Word Recognition and Visual Sequential Memory

Science.gov (United States)

Holmes, V. M.

2012-01-01

Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…

2. Terminating Sequential Delphi Survey Data Collection

Science.gov (United States)

Kalaian, Sema A.; Kasim, Rafa M.

2012-01-01

The Delphi survey technique is an iterative mail or electronic (e-mail or web-based) survey method used to obtain agreement or consensus among a group of experts in a specific field on a particular issue through a well-designed and systematic multiple sequential rounds of survey administrations. Each of the multiple rounds of the Delphi survey…

3. Minimal Marking: A Success Story

Science.gov (United States)

McNeilly, Anne

2014-01-01

The minimal-marking project conducted in Ryerson's School of Journalism throughout 2012 and early 2013 resulted in significantly higher grammar scores in two first-year classes of minimally marked university students when compared to two traditionally marked classes. The "minimal-marking" concept (Haswell, 1983), which requires…

4. Swarm robotics and minimalism

Science.gov (United States)

Sharkey, Amanda J. C.

2007-09-01

Swarm Robotics (SR) is closely related to Swarm Intelligence, and both were initially inspired by studies of social insects. Their guiding principles are based on their biological inspiration and take the form of an emphasis on decentralized local control and communication. Earlier studies went a step further in emphasizing the use of simple reactive robots that only communicate indirectly through the environment. More recently SR studies have moved beyond these constraints to explore the use of non-reactive robots that communicate directly, and that can learn and represent their environment. There is no clear agreement in the literature about how far such extensions of the original principles could go. Should there be any limitations on the individual abilities of the robots used in SR studies? Should knowledge of the capabilities of social insects lead to constraints on the capabilities of individual robots in SR studies? There is a lack of explicit discussion of such questions, and researchers have adopted a variety of constraints for a variety of reasons. A simple taxonomy of swarm robotics is presented here with the aim of addressing and clarifying these questions. The taxonomy distinguishes subareas of SR based on the emphases and justifications for minimalism and individual simplicity.

5. Minimal dilaton model

Directory of Open Access Journals (Sweden)

Oda Kin-ya

2013-05-01

Full Text Available Both the ATLAS and CMS experiments at the LHC have reported the observation of a particle of mass around 125 GeV which is consistent with the Standard Model (SM) Higgs boson, but with an excess of events beyond the SM expectation in the diphoton decay channel at each of them. There still remains room for the logical possibility that we are not seeing the SM Higgs but something else. Here we introduce the minimal dilaton model, in which the LHC signals are explained by an extra singlet scalar of mass around 125 GeV that slightly mixes with an SM Higgs heavier than 600 GeV. When this scalar has a vacuum expectation value well beyond the electroweak scale, it can be identified as a linearly realized version of a dilaton field. Though the current experimental constraints from the Higgs search disfavor such a region, the singlet scalar model itself still provides a viable alternative to the SM Higgs in interpreting its search results.

6. Minimal mirror twin Higgs

Energy Technology Data Exchange (ETDEWEB)

Barbieri, Riccardo [Institute of Theoretical Studies, ETH Zurich,CH-8092 Zurich (Switzerland); Scuola Normale Superiore,Piazza dei Cavalieri 7, 56126 Pisa (Italy); Hall, Lawrence J.; Harigaya, Keisuke [Department of Physics, University of California,Berkeley, California 94720 (United States); Theoretical Physics Group, Lawrence Berkeley National Laboratory,Berkeley, California 94720 (United States)

2016-11-29

In a Mirror Twin World with a maximally symmetric Higgs sector the little hierarchy of the Standard Model can be significantly mitigated, perhaps displacing the cutoff scale above the LHC reach. We show that consistency with observations requires that the Z{sub 2} parity exchanging the Standard Model with its mirror be broken in the Yukawa couplings. A minimal such effective field theory, with this sole Z{sub 2} breaking, can generate the Z{sub 2} breaking in the Higgs sector necessary for the Twin Higgs mechanism. The theory has constrained and correlated signals in Higgs decays, direct Dark Matter Detection and Dark Radiation, all within reach of foreseen experiments, over a region of parameter space where the fine-tuning for the electroweak scale is 10-50%. For dark matter, both mirror neutrons and a variety of self-interacting mirror atoms are considered. Neutrino mass signals and the effects of a possible additional Z{sub 2} breaking from the vacuum expectation values of B−L breaking fields are also discussed.

7. Implementation of generalized measurements with minimal disturbance on a quantum computer

International Nuclear Information System (INIS)

Decker, T.; Grassl, M.

2006-01-01

We consider the problem of efficiently implementing a generalized measurement on a quantum computer. Using methods from representation theory, we exploit symmetries of the states we want to identify and, correspondingly, symmetries of the measurement operators. In order to allow the information to be extracted sequentially, the disturbance of the quantum state due to the measurement should be minimal. (Abstract Copyright [2006], Wiley Periodicals, Inc.)

8. Selective condensation drives partitioning and sequential secretion of cyst wall proteins in differentiating Giardia lamblia.

Directory of Open Access Journals (Sweden)

2010-04-01

Full Text Available Controlled secretion of a protective extracellular matrix is required for transmission of the infective stage of a large number of protozoan and metazoan parasites. Differentiating trophozoites of the highly minimized protozoan parasite Giardia lamblia secrete the proteinaceous portion of the cyst wall material (CWM consisting of three paralogous cyst wall proteins (CWP1-3 via organelles termed encystation-specific vesicles (ESVs. Phylogenetic and molecular data indicate that Diplomonads have lost a classical Golgi during reductive evolution. However, neogenesis of ESVs in encysting Giardia trophozoites transiently provides basic Golgi functions by accumulating presorted CWM exported from the ER for maturation. Based on this "minimal Golgi" hypothesis we predicted maturation of ESVs to a trans Golgi-like stage, which would manifest as a sorting event before regulated secretion of the CWM. Here we show that proteolytic processing of pro-CWP2 in maturing ESVs coincides with partitioning of CWM into two fractions, which are sorted and secreted sequentially with different kinetics. This novel sorting function leads to rapid assembly of a structurally defined outer cyst wall, followed by slow secretion of the remaining components. Using live cell microscopy we find direct evidence for condensed core formation in maturing ESVs. Core formation suggests that a mechanism controlled by phase transitions of the CWM from fluid to condensed and back likely drives CWM partitioning and makes sorting and sequential secretion possible. Blocking of CWP2 processing by a protease inhibitor leads to mis-sorting of a CWP2 reporter. Nevertheless, partitioning and sequential secretion of two portions of the CWM are unaffected in these cells. Although these cysts have a normal appearance they are not water resistant and therefore not infective. Our findings suggest that sequential assembly is a basic architectural principle of protective wall formation and requires

9. Impact of Diagrams on Recalling Sequential Elements in Expository Texts.

Science.gov (United States)

Guri-Rozenblit, Sarah

1988-01-01

Examines the instructional effectiveness of abstract diagrams on recall of sequential relations in social science textbooks. Concludes that diagrams assist significantly the recall of sequential relations in a text and decrease significantly the rate of order mistakes. (RS)

10. Documentscape: Intertextuality, Sequentiality & Autonomy at Work

DEFF Research Database (Denmark)

Christensen, Lars Rune; Bjørn, Pernille

2014-01-01

On the basis of an ethnographic field study, this article introduces the concept of documentscape to the analysis of document-centric work practices. The concept of documentscape refers to the entire ensemble of documents in their mutual intertextual interlocking. Providing empirical data from...... a global software development case, we show how hierarchical structures and sequentiality across the interlocked documents are critical to how actors make sense of the work of others and what to do next in a geographically distributed setting. Furthermore, we found that while each document is created...... as part of a quasi-sequential order, this characteristic does not make the document, as a single entity, into a stable object. Instead, we found that the documents were malleable and dynamic while suspended in intertextual structures. Our concept of documentscape points to how the hierarchical structure...

11. A minimax procedure in the context of sequential mastery testing

NARCIS (Netherlands)

Vos, Hendrik J.

1999-01-01

The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master or a nonmaster, or to continue sampling and administering another random test item. The framework of minimax sequential decision theory

12. Applying the minimax principle to sequential mastery testing

NARCIS (Netherlands)

Vos, Hendrik J.

2002-01-01

The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master, a nonmaster, or to continue sampling and administering another random item. The framework of minimax sequential decision theory (minimum

13. Optimal Sequential Rules for Computer-Based Instruction.

Science.gov (United States)

Vos, Hans J.

1998-01-01

Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

14. On Locally Most Powerful Sequential Rank Tests

Czech Academy of Sciences Publication Activity Database

Kalina, Jan

2017-01-01

Roč. 36, č. 1 (2017), s. 111-125 ISSN 0747-4946 R&D Projects: GA ČR GA17-07384S Grant - others:Nadační fond na podporu vědy(CZ) Neuron Institutional support: RVO:67985807 Keywords : nonparametric tests * sequential ranks * stopping variable Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.339, year: 2016

15. Sequential pattern recognition by maximum conditional informativity

Czech Academy of Sciences Publication Activity Database

Grim, Jiří

2014-01-01

Roč. 45, č. 1 (2014), s. 39-45 ISSN 0167-8655 R&D Projects: GA ČR(CZ) GA14-02652S; GA ČR(CZ) GA14-10911S Keywords : Multivariate statistics * Statistical pattern recognition * Sequential decision making * Product mixtures * EM algorithm * Shannon information Subject RIV: IN - Informatics, Computer Science Impact factor: 1.551, year: 2014 http://library.utia.cas.cz/separaty/2014/RO/grim-0428565.pdf

16. Comparing two Poisson populations sequentially: an application

International Nuclear Information System (INIS)

Halteman, E.J.

1986-01-01

Rocky Flats Plant in Golden, Colorado monitors each of its employees for radiation exposure. Excess exposure is detected by comparing the means of two Poisson populations. A sequential probability ratio test (SPRT) is proposed as a replacement for the fixed-sample normal approximation test. A uniformly most efficient SPRT exists; however, logistics suggest using a truncated SPRT. The truncated SPRT is evaluated in detail and shown to offer large potential savings in the average time employees spend in the monitoring process.
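A truncated SPRT of the kind described can be sketched as follows, here cast as a one-sided test of a background exposure rate against an elevated rate; the rates, error probabilities, and truncation point are illustrative, not the plant's actual values.

```python
import math
import random

random.seed(7)

def sprt_poisson(xs, lam0, lam1, alpha=0.05, beta=0.05, max_n=50):
    """Truncated Wald SPRT for H0: rate = lam0 vs H1: rate = lam1 (> lam0),
    applied to a stream of Poisson counts. Illustrative sketch only."""
    upper = math.log((1 - beta) / alpha)   # cross -> accept H1 (excess exposure)
    lower = math.log(beta / (1 - alpha))   # cross -> accept H0 (background)
    llr = 0.0
    for n, x in enumerate(xs, start=1):
        # per-observation Poisson log-likelihood ratio
        llr += x * math.log(lam1 / lam0) - (lam1 - lam0)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
        if n >= max_n:  # truncation: fall back to the nearer hypothesis
            return ("H1" if llr > 0 else "H0"), n
    return "undecided", max_n

def poisson(lam):
    # Knuth's method, adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Counts from a worker whose true rate matches background (lam = 1.0 per period)
decision, n = sprt_poisson((poisson(1.0) for _ in range(50)), lam0=1.0, lam1=2.0)
print(decision, n)
```

Because the test stops at whichever boundary is crossed first, the expected sample size under either hypothesis is well below the fixed-sample equivalent, which is the source of the time savings the abstract reports.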

17. Heat accumulation during sequential cortical bone drilling.

Science.gov (United States)

Palmisano, Andrew C; Tai, Bruce L; Belmont, Barry; Irwin, Todd A; Shih, Albert; Holmes, James R

2016-03-01

Significant research exists regarding heat production during single-hole bone drilling. No published data exist regarding repetitive sequential drilling. This study elucidates the phenomenon of heat accumulation for sequential drilling with both Kirschner wires (K wires) and standard two-flute twist drills. It was hypothesized that cumulative heat would result in a higher temperature with each subsequent drill pass. Nine holes in a 3 × 3 array were drilled sequentially on moistened cadaveric tibia bone kept at body temperature (about 37 °C). Four thermocouples were placed at the center of four adjacent holes and 2 mm below the surface. A battery-driven hand drill guided by a servo-controlled motion system was used. Six samples were drilled with each tool (2.0 mm K wire and 2.0 and 2.5 mm standard drills). K wire drilling increased temperature from 5 °C at the first hole to 20 °C at holes 6 through 9. A similar trend was found in standard drills with less significant increments. The maximum temperatures of both tools increased with sequential drilling; the difference between drill sizes was found to be insignificant (P > 0.05). In conclusion, heat accumulated during sequential drilling, with size difference being insignificant. K wire produced more heat than its twist-drill counterparts. This study has demonstrated the heat accumulation phenomenon and its significant effect on temperature. Maximizing the drilling field and reducing the number of drill passes may decrease bone injury. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

18. Sequential Monte Carlo with Highly Informative Observations

OpenAIRE

Del Moral, Pierre; Murray, Lawrence M.

2014-01-01

We propose sequential Monte Carlo (SMC) methods for sampling the posterior distribution of state-space models under highly informative observation regimes, a situation in which standard SMC methods can perform poorly. A special case is simulating bridges between given initial and final values. The basic idea is to introduce a schedule of intermediate weighting and resampling times between observation times, which guide particles towards the final state. This can always be done for continuous-...
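The idea of intermediate weighting and resampling can be illustrated on the simplest bridge case the abstract mentions: a Gaussian random walk conditioned on a noisy observation of its final value. For this model the predictive density of the observation given the current state is available in closed form and serves as the guiding potential; in general it would be replaced by a heuristic schedule. All model choices below are assumptions for illustration.

```python
import math
import random

random.seed(11)

def normpdf(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def bridge_smc(y, T=10, n=500, obs_var=0.25):
    """SMC for a standard Gaussian random walk x_t observed only at time T
    (y = x_T + noise). Instead of weighting once at time T, particles are
    reweighted and resampled at every intermediate step by the ratio of
    predictive densities p(y | x_t) / p(y | x_{t-1}), which steers them
    toward the final observation."""
    xs = [0.0] * n
    prev = [normpdf(y, x, T + obs_var) for x in xs]      # p(y | x_0)
    for t in range(1, T + 1):
        xs = [x + random.gauss(0, 1) for x in xs]        # propagate one step
        rem = (T - t) + obs_var                          # variance of y given x_t
        cur = [normpdf(y, x, rem) for x in xs]           # p(y | x_t)
        ws = [c / p for c, p in zip(cur, prev)]          # incremental weights
        idx = random.choices(range(n), weights=ws, k=n)  # multinomial resampling
        xs = [xs[i] for i in idx]
        prev = [cur[i] for i in idx]
    return xs

final = bridge_smc(8.0)
mean_end = sum(final) / len(final)
print(round(mean_end, 2))
```

The incremental weights telescope to the correct final potential p(y | x_T), so the surviving particles approximate the bridge posterior; here its mean is y·T/(T + obs_var) ≈ 7.8, which the particle mean should be close to.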

19. Sequential test procedures for inventory differences

International Nuclear Information System (INIS)

Goldman, A.S.; Kern, E.A.; Emeigh, C.W.

1985-01-01

By means of a simulation study, we investigated the appropriateness of Page's and power-one sequential tests on sequences of inventory differences obtained from an example materials control unit, a sub-area of a hypothetical UF 6 -to-U 3 O 8 conversion process. The study examined detection probability and run length curves obtained from different loss scenarios. 12 refs., 10 figs., 2 tabs
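Page's test referred to above is the one-sided CUSUM; a minimal sketch on standardized inventory differences is given below. The reference value, threshold, and loss scenario are illustrative, not those of the study.

```python
import random

random.seed(3)

def page_cusum(xs, k=0.5, h=5.0):
    """One-sided Page CUSUM on standardized inventory differences.
    k is the reference value (half the shift to detect, in sigma units),
    h the decision threshold. Returns the index of the first alarm, or None."""
    s = 0.0
    for i, x in enumerate(xs):
        s = max(0.0, s + x - k)   # accumulate evidence of a positive shift
        if s > h:
            return i
    return None

# Hypothetical scenario: in-control for 30 balance periods, then a sustained
# protracted loss of 1 sigma per period.
data = [random.gauss(0, 1) for _ in range(30)] + [random.gauss(1, 1) for _ in range(30)]
alarm = page_cusum(data)
print(alarm)
```

The run length to detection (alarm index minus the true change point) is the quantity summarized by the run length curves mentioned in the abstract.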

20. Sequential neural models with stochastic layers

DEFF Research Database (Denmark)

Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

2016-01-01

How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural...... generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...

1. CLAss-Specific Subspace Kernel Representations and Adaptive Margin Slack Minimization for Large Scale Classification.

Science.gov (United States)

Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan

2018-02-01

In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm, called adaptive margin slack minimization, to iteratively improve the classification accuracy by adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large scale data. We propose two such frameworks: memory-efficient sequential processing for sequential data processing, and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify the validity of the proposed techniques.

2. Global Analysis of Minimal Surfaces

CERN Document Server

Dierkes, Ulrich; Tromba, Anthony J

2010-01-01

Many properties of minimal surfaces are of a global nature, and this is already true for the results treated in the first two volumes of the treatise. Part I of the present book can be viewed as an extension of these results. For instance, the first two chapters deal with existence, regularity and uniqueness theorems for minimal surfaces with partially free boundaries. Here one of the main features is the possibility of 'edge-crawling' along free parts of the boundary. The third chapter deals with a priori estimates for minimal surfaces in higher dimensions and for minimizers of singular integ

3. Minimal Surfaces for Hitchin Representations

DEFF Research Database (Denmark)

Li, Qiongling; Dai, Song

2018-01-01

In this paper, we investigate the properties of immersed minimal surfaces inside symmetric space associated to a subloci of Hitchin component: $q_n$ and $q_{n-1}$ case. First, we show that the pullback metric of the minimal surface dominates a constant multiple of the hyperbolic metric in the same conformal...... class and has a strong rigidity property. Secondly, we show that the immersed minimal surface is never tangential to any flat inside the symmetric space. As a direct corollary, the pullback metric of the minimal surface is always strictly negatively curved. In the end, we find a fully decoupled system...

4. A convergent overlapping domain decomposition method for total variation minimization

KAUST Repository

Fornasier, Massimo

2010-06-22

In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems, respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.

5. Guidelines for mixed waste minimization

International Nuclear Information System (INIS)

Owens, C.

1992-02-01

Currently, there is no commercial mixed waste disposal available in the United States, and storage and treatment capacity for commercial mixed waste is limited. Host state and compact region officials are therefore encouraging their mixed waste generators to minimize their mixed wastes because of these management limitations. This document provides a guide to mixed waste minimization.

6. Minimal changes in health status questionnaires: distinction between minimally detectable change and minimally important change

Directory of Open Access Journals (Sweden)

Knol Dirk L

2006-08-01

Full Text Available Changes in scores on health status questionnaires are difficult to interpret. Several methods to determine minimally important changes (MICs) have been proposed, which can broadly be divided into distribution-based and anchor-based methods. Comparisons of these methods have led to insight into essential differences between these approaches. Some authors have tried to arrive at a uniform measure for the MIC, such as 0.5 standard deviation or the value of one standard error of measurement (SEM). Others have emphasized the diversity of MIC values, depending on the type of anchor, the definition of minimal importance on the anchor, and characteristics of the disease under study. A closer look makes clear that some distribution-based methods have been focused merely on minimally detectable changes. For assessing minimally important changes, anchor-based methods are preferred, as they include a definition of what is minimally important. Acknowledging the distinction between minimally detectable and minimally important changes is useful, not only to avoid confusion among MIC methods, but also to gain information on two important benchmarks on the scale of a health status measurement instrument. Appreciating the distinction, it becomes possible to judge whether the minimally detectable change of a measurement instrument is sufficiently small to detect minimally important changes.
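The distribution-based benchmarks mentioned (the SEM and the minimally detectable change derived from it) follow standard formulas; the questionnaire SD and reliability below are hypothetical values chosen only for the worked example.

```python
import math

def sem(sd_baseline, reliability):
    """Standard error of measurement from the baseline SD and a test-retest
    reliability coefficient such as the ICC."""
    return sd_baseline * math.sqrt(1.0 - reliability)

def mdc95(sem_value):
    """Minimal detectable change at 95% confidence for a change score;
    the sqrt(2) reflects measurement error at both time points."""
    return 1.96 * math.sqrt(2.0) * sem_value

# Hypothetical questionnaire: baseline SD = 10 points, ICC = 0.91
s = sem(10.0, 0.91)   # 3.0 points
m = mdc95(s)          # about 8.3 points
print(round(s, 2), round(m, 2))
```

The abstract's point can then be restated numerically: an anchor-based MIC of, say, 5 points would be smaller than this MDC95 of about 8.3 points, so on this instrument a minimally important change could not be distinguished from measurement error in an individual patient.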

7. An anomaly detection and isolation scheme with instance-based learning and sequential analysis

International Nuclear Information System (INIS)

Yoo, T. S.; Garcia, H. E.

2006-01-01

This paper presents an online anomaly detection and isolation (FDI) technique using an instance-based learning method combined with a sequential change detection and isolation algorithm. The proposed method uses kernel density estimation techniques to build statistical models of the given empirical data (the null hypothesis). The null hypothesis is associated with a set of alternative hypotheses modeling the abnormalities of the system. The decision procedure involves a sequential change detection and isolation algorithm. Notably, the proposed method enjoys asymptotic optimality, as the applied change detection and isolation algorithm is optimal in minimizing the worst mean detection/isolation delay for a given mean time before a false alarm or a false isolation. Applicability of this methodology and its performance are illustrated with a redundant sensor data set. (authors)
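A stripped-down sketch of the two ingredients, a kernel density estimate of the null model and a CUSUM-type sequential change detection on the log-likelihood ratio, is shown below; the single alternative hypothesis, the fault model, and all constants are assumptions for illustration, not the authors' implementation.

```python
import math
import random

random.seed(5)

def make_kde(samples, bw=0.3):
    """Gaussian kernel density estimate of a 1-D data model."""
    n = len(samples)
    c = 1.0 / (n * bw * math.sqrt(2 * math.pi))
    def pdf(x):
        return c * sum(math.exp(-0.5 * ((x - s) / bw) ** 2) for s in samples)
    return pdf

# Null model: normal sensor behavior around 0. One alternative (fault) model:
# the same behavior shifted by +1.5 (purely illustrative).
train = [random.gauss(0, 1) for _ in range(300)]
p0 = make_kde(train)
p1 = make_kde([x + 1.5 for x in train])

def cusum_llr(stream, h=8.0):
    """Sequential change detection: CUSUM of the log-likelihood ratio p1/p0.
    Returns the index of the first alarm, or None."""
    s = 0.0
    for i, x in enumerate(stream):
        s = max(0.0, s + math.log(p1(x) + 1e-300) - math.log(p0(x) + 1e-300))
        if s > h:
            return i
    return None

# Sensor readings: normal for 40 samples, then a +1.5 drift.
stream = [random.gauss(0, 1) for _ in range(40)] + [random.gauss(1.5, 1) for _ in range(40)]
alarm = cusum_llr(stream)
print(alarm)
```

Isolation in the paper's sense would run one such statistic per alternative hypothesis and attribute the change to whichever alternative's statistic crosses its threshold first.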

8. Multistrain models predict sequential multidrug treatment strategies to result in less antimicrobial resistance than combination treatment

DEFF Research Database (Denmark)

Ahmad, Amais; Zachariasen, Camilla; Christiansen, Lasse Engbo

2016-01-01

Background: Combination treatment is increasingly used to fight infections caused by bacteria resistant to two or more antimicrobials. While multiple studies have evaluated treatment strategies to minimize the emergence of resistant strains for single antimicrobial treatment, fewer studies have...... the sensitive fraction of the commensal flora.Growth parameters for competing bacterial strains were estimated from the combined in vitro pharmacodynamic effect of two antimicrobials using the relationship between concentration and net bacterial growth rate. Predictions of in vivo bacterial growth were...... (how frequently antibiotics are alternated in a sequential treatment) of the two drugs was dependent upon the order in which the two drugs were used.Conclusion: Sequential treatment was more effective in preventing the growth of resistant strains when compared to the combination treatment. The cycling...
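The qualitative comparison can be illustrated with a deliberately crude two-strain exponential growth model; every rate below is hypothetical and chosen only to show how cycling full doses can suppress both resistant subpopulations while a sub-inhibitory half-dose combination lets them grow, echoing the abstract's conclusion.

```python
import math

# Illustrative net growth rates (per hour) for subpopulations resistant to drug A
# or drug B. All values are hypothetical and exist only to sketch the comparison.
RATES = {
    "A":   {"resA": 0.3,  "resB": -2.0},   # full-dose drug A
    "B":   {"resA": -2.0, "resB": 0.3},    # full-dose drug B
    "A+B": {"resA": 0.1,  "resB": 0.1},    # half-dose combination (sub-inhibitory here)
}

def simulate(schedule, hours=96, cycle=12):
    """Exponential growth/decline of the two resistant subpopulations.
    'cycling' alternates full-dose A and B every `cycle` hours; 'combination'
    applies the half-dose A+B throughout. Populations are floored at 1 CFU."""
    pop = {"resA": 1e4, "resB": 1e4}
    for t in range(hours):
        if schedule == "combination":
            drug = "A+B"
        else:
            drug = "A" if (t // cycle) % 2 == 0 else "B"
        for strain in pop:
            pop[strain] = max(1.0, pop[strain] * math.exp(RATES[drug][strain]))
    return pop

cyc = simulate("cycling")
comb = simulate("combination")
print(cyc, comb)
```

With these rates, each resistant strain is knocked back to the floor during the half-cycle in which it is sensitive, whereas the sub-inhibitory combination allows both to expand steadily, which is the dependence on cycling frequency and drug order the abstract alludes to.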

9. Facilitated assignment of large protein NMR signals with covariance sequential spectra using spectral derivatives.

Science.gov (United States)

Harden, Bradley J; Nichols, Scott R; Frueh, Dominique P

2014-09-24

Nuclear magnetic resonance (NMR) studies of larger proteins are hampered by difficulties in assigning NMR resonances. Human intervention is typically required to identify NMR signals in 3D spectra, and subsequent procedures depend on the accuracy of this so-called peak picking. We present a method that provides sequential connectivities through correlation maps constructed with covariance NMR, bypassing the need for preliminary peak picking. We introduce two novel techniques to minimize false correlations and merge the information from all original 3D spectra. First, we take spectral derivatives prior to performing covariance to emphasize coincident peak maxima. Second, we multiply covariance maps calculated with different 3D spectra to destroy erroneous sequential correlations. The maps are easy to use and can readily be generated from conventional triple-resonance experiments. Advantages of the method are demonstrated on a 37 kDa nonribosomal peptide synthetase domain subject to spectral overlap.

10. Waste minimization handbook, Volume 1

International Nuclear Information System (INIS)

Boing, L.E.; Coffey, M.J.

1995-12-01

This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996

11. Waste minimization handbook, Volume 1

Energy Technology Data Exchange (ETDEWEB)

Boing, L.E.; Coffey, M.J.

1995-12-01

This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996.

12. Minimal Webs in Riemannian Manifolds

DEFF Research Database (Denmark)

Markvorsen, Steen

2008-01-01

For a given combinatorial graph $G$ a {\it geometrization} $(G, g)$ of the graph is obtained by considering each edge of the graph as a $1-$dimensional manifold with an associated metric $g$. In this paper we are concerned with {\it minimal isometric immersions} of geometrized graphs $(G, g)$ into Riemannian manifolds $(N^{n}, h)$. Such immersions we call {\em minimal webs}. They admit a natural 'geometric' extension of the intrinsic combinatorial discrete Laplacian. The geometric Laplacian on minimal webs enjoys standard properties such as the maximum principle and the divergence theorems, which are of instrumental importance for the applications. We apply these properties to show that minimal webs in ambient Riemannian spaces share several analytic and geometric properties with their smooth (minimal submanifold) counterparts in such spaces. In particular we use appropriate versions of the divergence...

13. Moving Synergistically Acting Drug Combinations to the Clinic by Comparing Sequential versus Simultaneous Drug Administrations.

Science.gov (United States)

Dinavahi, Saketh S; Noory, Mohammad A; Gowda, Raghavendra; Drabick, Joseph J; Berg, Arthur; Neves, Rogerio I; Robertson, Gavin P

2018-03-01

Drug combinations acting synergistically to kill cancer cells have become increasingly important in melanoma as an approach to manage the recurrent resistant disease. Protein kinase B (AKT) is a major target in this disease but its inhibitors are not effective clinically, which is a major concern. Targeting AKT in combination with WEE1 (mitotic inhibitor kinase) seems to have potential to make AKT-based therapeutics effective clinically. Since agents targeting AKT and WEE1 have been tested individually in the clinic, the quickest way to move the drug combination to patients would be to combine these agents sequentially, enabling the use of existing phase I clinical trial toxicity data. Therefore, a rapid preclinical approach is needed to evaluate whether simultaneous or sequential drug treatment has maximal therapeutic efficacy, which is based on a mechanistic rationale. To develop this approach, melanoma cell lines were treated with AKT inhibitor AZD5363 [4-amino- N -[(1 S )-1-(4-chlorophenyl)-3-hydroxypropyl]-1-(7 H -pyrrolo[2,3- d ]pyrimidin-4-yl)piperidine-4-carboxamide] and WEE1 inhibitor AZD1775 [2-allyl-1-(6-(2-hydroxypropan-2-yl)pyridin-2-yl)-6-((4-(4-methylpiperazin-1-yl)phenyl)amino)-1 H -pyrazolo[3,4- d ]pyrimidin-3(2 H )-one] using simultaneous and sequential dosing schedules. Simultaneous treatment synergistically reduced melanoma cell survival and tumor growth. In contrast, sequential treatment was antagonistic and had a minimal tumor inhibitory effect compared with individual agents. Mechanistically, simultaneous targeting of AKT and WEE1 enhanced deregulation of the cell cycle and DNA damage repair pathways by modulating transcription factors p53 and forkhead box M1, which was not observed with sequential treatment. Thus, this study identifies a rapid approach to assess the drug combinations with a mechanistic basis for selection, which suggests that combining AKT and WEE1 inhibitors is needed for maximal efficacy. Copyright © 2018 by The American

14. Minimal Poems Written in 1979 Minimal Poems Written in 1979

Directory of Open Access Journals (Sweden)

Sandra Sirangelo Maggio

2008-04-01

The reading of M. van der Slice's Minimal Poems Written in 1979 (the work, actually, has no title) reminded me of a book I saw a long time ago, called Truth, which had not even a single word printed inside. In either case we have a sample of how often eccentricities can prove efficient means of artistic creativity in this new literary trend known as Minimalism.

15. On Locally Most Powerful Sequential Rank Tests

Czech Academy of Sciences Publication Activity Database

Kalina, Jan

2017-01-01

Roč. 36, č. 1 (2017), s. 111-125 ISSN 0747-4946 R&D Projects: GA ČR GA17-07384S Grant - others: Nadační fond na podporu vědy (CZ) Neuron Institutional support: RVO:67985556 Keywords: nonparametric tests * sequential ranks * stopping variable Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.339, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/kalina-0474065.pdf

16. Decoding restricted participation in sequential electricity markets

Energy Technology Data Exchange (ETDEWEB)

Knaut, Andreas; Paschmann, Martin

2017-06-15

Restricted participation in sequential markets may cause high price volatility and welfare losses. In this paper we therefore analyze the drivers of restricted participation in the German intraday auction, which is a short-term electricity market with quarter-hourly products. Applying a fundamental electricity market model with 15-minute temporal resolution, we identify the lack of sub-hourly market coupling as the most relevant driver of restricted participation. We derive a proxy for price volatility and find that full market coupling may cause quarter-hourly price volatility to decrease by a factor close to four.

17. THE DEVELOPMENT OF SPECIAL SEQUENTIALLY-TIMED

Directory of Open Access Journals (Sweden)

Stanislav LICHOROBIEC

2016-06-01

This article documents the development of the noninvasive use of explosives during the destruction of ice mass in river flows. The system of special sequentially-timed charges utilizes the increase in efficiency of cutting charges by covering them with bags filled with water, while simultaneously increasing the effect of the entire system of timed charges. Timing, spatial combinations during placement, and the linking of these charges results in the loosening of ice barriers on a frozen waterway, while at the same time regulating the size of the ice fragments. The developed charges will increase the operability and safety of IRS units.

18. Pass-transistor asynchronous sequential circuits

Science.gov (United States)

Whitaker, Sterling R.; Maki, Gary K.

1989-01-01

Design methods for asynchronous sequential pass-transistor circuits, which result in circuits that are hazard- and critical-race-free and which have added degrees of freedom for the input signals, are discussed. The design procedures are straightforward and easy to implement. Two single-transition-time state assignment methods are presented, and hardware bounds for each are established. A surprising result is that the hardware realizations for each next-state variable and output variable are identical for a given flow table. Thus, a state machine with N states and M outputs can be constructed using a single layout replicated N + M times.

19. Estimation After a Group Sequential Trial.

Science.gov (United States)

Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

2015-10-01

Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n_1, n_2, …, n_L}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why

20. A sequential/parallel track selector

CERN Document Server

Bertolino, F; Bressani, Tullio; Chiavassa, E; Costa, S; Dellacasa, G; Gallio, M; Musso, A

1980-01-01

A medium speed (approximately 1 µs) hardware pre-analyzer for the selection of events detected in four planes of drift chambers in the magnetic field of the Omicron Spectrometer at the CERN SC is described. Specific geometrical criteria determine patterns of hits in the four planes of vertical wires that have to be recognized and that are stored as patterns of '1's in random access memories. Pairs of good hits are found sequentially, then the RAMs are used as look-up tables. (6 refs).

1. Boundary conditions in random sequential adsorption

Science.gov (United States)

Cieśla, Michał; Ziff, Robert M.

2018-04-01

The influence of different boundary conditions on the density of random packings of disks is studied. Packings are generated using the random sequential adsorption algorithm with three different types of boundary conditions: periodic, open, and wall. It is found that the finite size effects are smallest for periodic boundary conditions, as expected. On the other hand, in the case of open and wall boundaries it is possible to introduce an effective packing size and a constant correction term to significantly improve the packing densities.
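The random sequential adsorption algorithm the abstract relies on is simple to state: propose uniformly random disk centers and accept a disk only if it overlaps none already placed. The sketch below implements the periodic-boundary variant via the minimum-image convention; box size, radius, and attempt count are illustrative, not taken from the study.

```python
import math
import random

def rsa_disks(box=10.0, radius=0.5, attempts=20000, periodic=True, seed=1):
    # Random sequential adsorption of equal disks in a square box:
    # one placement attempt per time step, accepted only if the new
    # disk overlaps no previously accepted disk.
    rng = random.Random(seed)
    placed = []
    min_d2 = (2.0 * radius) ** 2
    for _ in range(attempts):
        x, y = rng.uniform(0.0, box), rng.uniform(0.0, box)
        ok = True
        for px, py in placed:
            dx, dy = x - px, y - py
            if periodic:
                # minimum-image convention implements periodic boundaries
                dx -= box * round(dx / box)
                dy -= box * round(dy / box)
            if dx * dx + dy * dy < min_d2:
                ok = False
                break
        if ok:
            placed.append((x, y))
    coverage = len(placed) * math.pi * radius ** 2 / box ** 2
    return placed, coverage

placed, coverage = rsa_disks()
```

With long runs the coverage creeps toward the jamming density of disks (about 0.547); for open or wall boundaries one would instead reject or constrain proposals near the edges, which is the finite-size effect the paper corrects for.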

2. From sequential to parallel programming with patterns

CERN Document Server

CERN. Geneva

2018-01-01

To increase both performance and efficiency, our programming models need to adapt to better exploit modern processors. The classic idioms and patterns for programming, such as loops, branches or recursion, are the pillars of almost every code and are well known among all programmers. These patterns all have in common that they are sequential in nature. Embracing parallel programming patterns, which allow us to program for multi- and many-core hardware in a natural way, greatly simplifies the task of designing a program that scales and performs on modern hardware, independently of the used programming language, and in a generic way.

3. Sequential extraction of uranium metal contamination

International Nuclear Information System (INIS)

Murry, M.M.; Spitz, H.B.; Connick, W.B.

2016-01-01

Samples of uranium-contaminated dirt collected from the dirt floor of an abandoned metal rolling mill were analyzed for uranium using a sequential extraction protocol involving a series of five increasingly aggressive solvents. The quantity of uranium extracted from the contaminated dirt by each reagent can aid in predicting the fate and transport of the uranium contamination in the environment. Uranium was separated from each fraction using anion exchange and electrodeposition, and analyzed by alpha spectroscopy. Results demonstrate that approximately 77% of the uranium was extracted using NH4Ac in 25% acetic acid. (author)

4. Simultaneous optimization of sequential IMRT plans

International Nuclear Information System (INIS)

Popple, Richard A.; Prellop, Perri B.; Spencer, Sharon A.; Santos, Jennifer F. de los; Duan, Jun; Fiveash, John B.; Brezovich, Ivan A.

2005-01-01

Radiotherapy often comprises two phases, in which irradiation of a volume at risk for microscopic disease is followed by a sequential dose escalation to a smaller volume either at a higher risk for microscopic disease or containing only gross disease. This technique is difficult to implement with intensity modulated radiotherapy, as the tolerance doses of critical structures must be respected over the sum of the two plans. Techniques that include an integrated boost have been proposed to address this problem. However, clinical experience with such techniques is limited, and many clinicians are uncomfortable prescribing nonconventional fractionation schemes. To solve this problem, we developed an optimization technique that simultaneously generates sequential initial and boost IMRT plans. We have developed an optimization tool that uses a commercial treatment planning system (TPS) and a high level programming language for technical computing. The tool uses the TPS to calculate the dose deposition coefficients (DDCs) for optimization. The DDCs were imported into external software and the treatment ports duplicated to create the boost plan. The initial, boost, and tolerance doses were specified and used to construct cost functions. The initial and boost plans were optimized simultaneously using a gradient search technique. Following optimization, the fluence maps were exported to the TPS for dose calculation. Seven patients treated using sequential techniques were selected from our clinical database. The initial and boost plans used to treat these patients were developed independently of each other by dividing the tolerance doses proportionally between the initial and boost plans and then iteratively optimizing the plans until a summation that met the treatment goals was obtained. We used the simultaneous optimization technique to generate plans that met the original planning goals. The coverage of the initial and boost target volumes in the simultaneously optimized
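The core idea of the simultaneous optimization can be sketched numerically: optimize the initial and boost fluences jointly, with separate target-dose terms per plan but a shared one-sided penalty on the dose that the *sum* of the two plans delivers to a critical structure. This is a toy projected-gradient sketch, not the authors' TPS-based tool; the dose-deposition matrices, prescriptions, and tolerance are all hypothetical numbers.

```python
import numpy as np

rng = np.random.default_rng(3)
n_beamlets = 6
# Hypothetical dose-deposition coefficients (all numbers illustrative)
D_target = rng.uniform(0.5, 1.5, size=(4, n_beamlets))   # target voxels
D_oar = rng.uniform(0.1, 0.4, size=(1, n_beamlets))      # one OAR voxel
t_initial, t_boost, oar_tol, w = 50.0, 20.0, 25.0, 50.0

def cost(x1, x2):
    # Quadratic target terms for each plan plus a one-sided penalty on
    # the OAR dose of the summed plans, so the pair is optimized jointly.
    r1 = D_target @ x1 - t_initial
    r2 = D_target @ x2 - t_boost
    excess = np.maximum(0.0, D_oar @ (x1 + x2) - oar_tol)
    return r1 @ r1 + r2 @ r2 + w * (excess @ excess)

def grads(x1, x2):
    r1 = D_target @ x1 - t_initial
    r2 = D_target @ x2 - t_boost
    excess = np.maximum(0.0, D_oar @ (x1 + x2) - oar_tol)
    g_pen = 2.0 * w * (D_oar.T @ excess)   # shared penalty gradient
    return 2.0 * (D_target.T @ r1) + g_pen, 2.0 * (D_target.T @ r2) + g_pen

x1 = np.zeros(n_beamlets)
x2 = np.zeros(n_beamlets)
cost0 = cost(x1, x2)
lr = 2e-3
for _ in range(20000):
    g1, g2 = grads(x1, x2)
    x1 = np.maximum(0.0, x1 - lr * g1)   # projection keeps fluences nonnegative
    x2 = np.maximum(0.0, x2 - lr * g2)
```

Because the penalty couples x1 and x2, the tolerance dose is respected over the sum of the two plans, which is exactly what independent sequential optimization struggles to guarantee.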

5. Combined search for the quarks of a sequential fourth generation

CERN Document Server

2012-01-01

Results are presented from a search for a fourth generation of quarks produced singly or in pairs in a data set corresponding to an integrated luminosity of 5 inverse femtobarns recorded by the CMS experiment at the LHC in 2011. A novel strategy has been developed for a combined search for quarks of the up- and down-type in decay channels with at least one isolated muon or electron. Limits on the mass of the fourth-generation quarks and the relevant CKM matrix elements are derived in the context of a simple extension of the standard model with a sequential fourth generation of fermions. The existence of mass-degenerate fourth-generation quarks with masses below 685 GeV is excluded at 95% confidence level for minimal off-diagonal mixing between the third- and the fourth-generation quarks. With a mass difference of 25 GeV between the quark masses, the obtained limit on the masses of the fourth-generation quarks shifts by about +/- 20 GeV. This result significantly reduces the allowed parameter space for a fourt...

6. Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders

Science.gov (United States)

Rußwurm, Marc; Körner, Marco

2018-03-01

Earth observation (EO) sensors deliver data with daily or weekly temporal resolution. Most land use and land cover (LULC) approaches, however, expect cloud-free and mono-temporal observations. The increasing temporal capabilities of today's sensors enable the use of temporal, along with spectral and spatial, features. Domains such as speech recognition or neural machine translation work with inherently temporal data and, today, achieve impressive results using sequential encoder-decoder structures. Inspired by these sequence-to-sequence models, we adapt an encoder structure with convolutional recurrent layers in order to approximate a phenological model for vegetation classes based on a temporal sequence of Sentinel 2 (S2) images. In our experiments, we visualize internal activations over a sequence of cloudy and non-cloudy images and find several recurrent cells which reduce the input activity for cloudy observations. Hence, we assume that our network has learned cloud-filtering schemes solely from input data, which could alleviate the need for tedious cloud-filtering as a preprocessing step for many EO approaches. Moreover, using unfiltered temporal series of top-of-atmosphere (TOA) reflectance data, we achieved in our experiments state-of-the-art classification accuracies on a large number of crop classes with minimal preprocessing compared to other classification approaches.

7. Minimal Flavour Violation and Beyond

CERN Document Server

Isidori, Gino

2012-01-01

We review the formulation of the Minimal Flavour Violation (MFV) hypothesis in the quark sector, as well as some "variations on a theme" based on smaller flavour symmetry groups and/or less minimal breaking terms. We also review how these hypotheses can be tested in B decays and by means of other flavour-physics observables. The phenomenological consequences of MFV are discussed both in general terms, employing a general effective theory approach, and in the specific context of the Minimal Supersymmetric extension of the SM.

8. Minimizing waste in environmental restoration

International Nuclear Information System (INIS)

Thuot, J.R.; Moos, L.

1996-01-01

Environmental restoration, decontamination and decommissioning, and facility dismantlement projects are not typically known for their waste minimization and pollution prevention efforts. Typical projects are driven by schedules and milestones with little attention given to cost or waste minimization. Conventional wisdom in these projects is that the waste already exists and cannot be reduced or minimized; however, there are significant areas where waste and cost can be reduced by careful planning and execution. Waste reduction can occur in three ways: beneficial reuse or recycling, segregation of waste types, and reducing generation of secondary waste

9. Minimizing waste in environmental restoration

International Nuclear Information System (INIS)

Moos, L.; Thuot, J.R.

1996-01-01

Environmental restoration, decontamination and decommissioning, and facility dismantlement projects are not typically known for their waste minimization and pollution prevention efforts. Typical projects are driven by schedules and milestones with little attention given to cost or waste minimization. Conventional wisdom in these projects is that the waste already exists and cannot be reduced or minimized. In fact, however, there are three significant areas where waste and cost can be reduced. Waste reduction can occur in three ways: beneficial reuse or recycling; segregation of waste types; and reducing generation of secondary waste. This paper will discuss several examples of reuse, recycle, segregation, and secondary waste reduction at ANL restoration programs

10. A new smoothing modified three-term conjugate gradient method for [Formula: see text]-norm minimization problem.

Science.gov (United States)

Du, Shouqiang; Chen, Miao

2018-01-01

We consider a kind of nonsmooth optimization problems with [Formula: see text]-norm minimization, which has many applications in compressed sensing, signal reconstruction, and related engineering problems. Using smoothing approximation techniques, this kind of nonsmooth optimization problem can be transformed into a general unconstrained optimization problem, which can be solved by the proposed smoothing modified three-term conjugate gradient method. The smoothing modified three-term conjugate gradient method is based on the Polak-Ribière-Polyak conjugate gradient method. Because the Polak-Ribière-Polyak conjugate gradient method has good numerical properties, the proposed method possesses the sufficient descent property without any line searches and is also proved to be globally convergent. Finally, numerical experiments show the efficiency of the proposed method.
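The smoothing idea can be made concrete: replace the nonsmooth |x_i| terms with the differentiable surrogate sqrt(x_i² + μ²) and minimize the resulting smooth unconstrained objective with a conjugate gradient scheme. The sketch below uses a plain Polak-Ribière-Polyak (PRP+) direction with Armijo backtracking, not the paper's modified three-term scheme; problem sizes, λ, and μ are illustrative.

```python
import numpy as np

def smooth_l1(x, mu):
    # Smooth surrogate for the l1 norm: sqrt(x^2 + mu^2) -> |x| as mu -> 0
    return np.sqrt(x * x + mu * mu).sum()

def obj(x, A, b, lam, mu):
    r = A @ x - b
    return 0.5 * (r @ r) + lam * smooth_l1(x, mu)

def grad(x, A, b, lam, mu):
    return A.T @ (A @ x - b) + lam * x / np.sqrt(x * x + mu * mu)

def prp_cg(A, b, lam=0.1, mu=1e-3, iters=300):
    # PRP+ conjugate gradient with Armijo backtracking on the smoothed
    # objective (a sketch of the general approach, not the paper's method).
    x = np.zeros(A.shape[1])
    g = grad(x, A, b, lam, mu)
    d = -g
    for _ in range(iters):
        t, f0, slope = 1.0, obj(x, A, b, lam, mu), g @ d
        if slope >= 0.0:                 # restart on a non-descent direction
            d, slope = -g, -(g @ g)
        while obj(x + t * d, A, b, lam, mu) > f0 + 1e-4 * t * slope:
            t *= 0.5
            if t < 1e-14:
                break
        x_new = x + t * d
        g_new = grad(x_new, A, b, lam, mu)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+ coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Recover a sparse vector from noiseless overdetermined measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.0, -1.0
x_hat = prp_cg(A, A @ x_true)
```

With a small λ the smoothed penalty introduces only a slight shrinkage bias, so the recovered vector stays close to the sparse truth.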

11. Sludge minimization technologies - an overview

Energy Technology Data Exchange (ETDEWEB)

Oedegaard, Hallvard

2003-07-01

The management of wastewater sludge from wastewater treatment plants represents one of the major challenges in wastewater treatment today. In many cases the cost of sludge treatment exceeds the cost of treating the liquid. Therefore the focus on and interest in sludge minimization is steadily increasing. In the paper an overview is given of sludge minimization (sludge mass reduction) options. It is demonstrated that sludge minimization may be a result of reduced production of sludge and/or disintegration processes that may take place both in the wastewater treatment stage and in the sludge stage. Various sludge disintegration technologies for sludge minimization are discussed, including mechanical methods (focusing on the stirred ball mill, high-pressure homogenizer, and ultrasonic disintegrator), chemical methods (focusing on the use of ozone), physical methods (focusing on thermal and thermal/chemical hydrolysis) and biological methods (focusing on enzymatic processes). (author)

12. Wilson loops in minimal surfaces

International Nuclear Information System (INIS)

Drukker, Nadav; Gross, David J.; Ooguri, Hirosi

1999-01-01

The AdS/CFT correspondence suggests that the Wilson loop of the large N gauge theory with N = 4 supersymmetry in 4 dimensions is described by a minimal surface in AdS5 × S5. The authors examine various aspects of this proposal, comparing gauge theory expectations with computations of minimal surfaces. There is a distinguished class of loops, which the authors call BPS loops, whose expectation values are free from ultraviolet divergence. They formulate the loop equation for such loops. To the extent that they have checked, the minimal surface in AdS5 × S5 gives a solution of the equation. The authors also discuss the zig-zag symmetry of the loop operator. In the N = 4 gauge theory, they expect the zig-zag symmetry to hold when the loop does not couple the scalar fields in the supermultiplet. They will show how this is realized for the minimal surface

13. Classical strings and minimal surfaces

International Nuclear Information System (INIS)

Urbantke, H.

1986-01-01

Real Lorentzian forms of some complex or complexified Euclidean minimal surfaces are obtained as an application of H.A. Schwarz' solution to the initial value problem or a search for surfaces admitting a group of Poincare transformations. (Author)

14. Minimal Gromov-Witten rings

International Nuclear Information System (INIS)

Przyjalkowski, V V

2008-01-01

We construct an abstract theory of Gromov-Witten invariants of genus 0 for quantum minimal Fano varieties (a minimal class of varieties which is natural from the quantum cohomological viewpoint). Namely, we consider the minimal Gromov-Witten ring: a commutative algebra whose generators and relations are of the form used in the Gromov-Witten theory of Fano varieties (of unspecified dimension). The Gromov-Witten theory of any quantum minimal variety is a homomorphism from this ring to C. We prove an abstract reconstruction theorem which says that this ring is isomorphic to the free commutative ring generated by 'prime two-pointed invariants'. We also find solutions of the differential equation of type DN for a Fano variety of dimension N in terms of the generating series of one-pointed Gromov-Witten invariants

15. Wilson loops and minimal surfaces

International Nuclear Information System (INIS)

Drukker, Nadav; Gross, David J.; Ooguri, Hirosi

1999-01-01

The AdS-CFT correspondence suggests that the Wilson loop of the large N gauge theory with N=4 supersymmetry in four dimensions is described by a minimal surface in AdS5 × S5. We examine various aspects of this proposal, comparing gauge theory expectations with computations of minimal surfaces. There is a distinguished class of loops, which we call BPS loops, whose expectation values are free from ultraviolet divergence. We formulate the loop equation for such loops. To the extent that we have checked, the minimal surface in AdS5 × S5 gives a solution of the equation. We also discuss the zigzag symmetry of the loop operator. In the N=4 gauge theory, we expect the zigzag symmetry to hold when the loop does not couple the scalar fields in the supermultiplet. We will show how this is realized for the minimal surface. (c) 1999 The American Physical Society

16. Time scale of random sequential adsorption.

Science.gov (United States)

2007-04-01

A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) The kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per one RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.

17. Hybrid Computerized Adaptive Testing: From Group Sequential Design to Fully Sequential Design

Science.gov (United States)

Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff

2016-01-01

Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Though most designs of CAT and MST exhibit strength and weakness in recent large-scale implementations, there is no simple answer to the question of which design is better because different…

18. Sequential and simultaneous choices: testing the diet selection and sequential choice models.

Science.gov (United States)

Freidin, Esteban; Aw, Justine; Kacelnik, Alex

2009-03-01

We investigate simultaneous and sequential choices in starlings, using Charnov's Diet Choice Model (DCM) and Shapiro, Siller and Kacelnik's Sequential Choice Model (SCM) to integrate function and mechanism. During a training phase, starlings encountered one food-related option per trial (A, B or R) in random sequence and with equal probability. A and B delivered food rewards after programmed delays (shorter for A), while R ('rejection') moved directly to the next trial without reward. In this phase we measured latencies to respond. In a later, choice, phase, birds encountered the pairs A-B, A-R and B-R, the first implementing a simultaneous choice and the second and third sequential choices. The DCM predicts when R should be chosen to maximize intake rate, and SCM uses latencies of the training phase to predict choices between any pair of options in the choice phase. The predictions of both models coincided, and both successfully predicted the birds' preferences. The DCM does not deal with partial preferences, while the SCM does, and experimental results were strongly correlated to this model's predictions. We believe that the SCM may expose a very general mechanism of animal choice, and that its wider domain of success reflects the greater ecological significance of sequential over simultaneous choices.

19. Minimal string theory is logarithmic

International Nuclear Information System (INIS)

Ishimoto, Yukitaka; Yamaguchi, Shun-ichi

2005-01-01

We study the simplest examples of minimal string theory whose worldsheet description is the unitary (p,q) minimal model coupled to two-dimensional gravity (Liouville field theory). In the Liouville sector, we show that four-point correlation functions of 'tachyons' exhibit logarithmic singularities, and that the theory turns out to be logarithmic. The relation with Zamolodchikov's logarithmic degenerate fields is also discussed. Our result holds for generic values of (p,q)

20. Annual Waste Minimization Summary Report

International Nuclear Information System (INIS)

Haworth, D.M.

2011-01-01

This report summarizes the waste minimization efforts undertaken by National Security Technologies, LLC, for the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office (NNSA/NSO), during calendar year 2010. The NNSA/NSO Pollution Prevention Program establishes a process to reduce the volume and toxicity of waste generated by NNSA/NSO activities and ensures that proposed methods of treatment, storage, and/or disposal of waste minimize potential threats to human health and the environment.

1. Impact of controlling the sum of error probability in the sequential probability ratio test

Directory of Open Access Journals (Sweden)

2013-05-01

A generalized modified method is proposed to control the sum of error probabilities in the sequential probability ratio test. The aim is to minimize the weighted average of the two average sample numbers under a simple null hypothesis and a simple alternative hypothesis, with the restriction that the sum of error probabilities is a pre-assigned constant, in order to find the optimal sample size; finally, a comparison is made with the optimal sample size obtained from the fixed-sample-size procedure. The results are applied to the cases when the random variate follows a normal law as well as a Bernoulli law.
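For context, the classical sequential probability ratio test that the paper generalizes can be sketched for Bernoulli data: accumulate the log-likelihood ratio and stop when it crosses thresholds derived from Wald's approximations. This sketch uses the standard (α, β) thresholds, not the paper's constraint on the sum α + β; all numerical values are illustrative.

```python
import math
import random

def sprt_bernoulli(p0, p1, alpha, beta, draw, max_n=10_000):
    # Wald's SPRT for H0: p = p0 vs H1: p = p1 with error targets
    # alpha, beta; thresholds from Wald's approximations
    # A = (1-beta)/alpha and B = beta/(1-alpha).
    upper = math.log((1.0 - beta) / alpha)    # cross above -> accept H1
    lower = math.log(beta / (1.0 - alpha))    # cross below -> accept H0
    inc1 = math.log(p1 / p0)                  # log-LR step for an observed 1
    inc0 = math.log((1.0 - p1) / (1.0 - p0))  # log-LR step for an observed 0
    llr, n = 0.0, 0
    while lower < llr < upper and n < max_n:
        llr += inc1 if draw() else inc0
        n += 1
    # an undecided walk at max_n counts as H0 in this sketch
    return ("H1" if llr >= upper else "H0"), n

# Average sample number when the true p equals p1 (illustrative numbers)
rng = random.Random(42)
results = [sprt_bernoulli(0.2, 0.8, 0.05, 0.05, lambda: rng.random() < 0.8)
           for _ in range(200)]
decisions = [d for d, _ in results]
avg_n = sum(n for _, n in results) / len(results)
```

For well-separated hypotheses the test decides after only a handful of observations on average, which is the sample-size saving that the fixed-sample comparison in the abstract quantifies.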

2. A sequential quadratic programming algorithm using an incomplete solution of the subproblem

Energy Technology Data Exchange (ETDEWEB)

Murray, W. [Stanford Univ., CA (United States). Systems Optimization Lab.; Prieto, F.J. [Universidad Carlos III de Madrid (Spain). Dept. de Estadistica y Econometria

1993-05-01

We analyze sequential quadratic programming (SQP) methods to solve nonlinear constrained optimization problems that are more flexible in their definition than standard SQP methods. The type of flexibility introduced is motivated by the necessity to deviate from the standard approach when solving large problems. Specifically we no longer require a minimizer of the QP subproblem to be determined or particular Lagrange multiplier estimates to be used. Our main focus is on an SQP algorithm that uses a particular augmented Lagrangian merit function. New results are derived for this algorithm under weaker conditions than previously assumed; in particular, it is not assumed that the iterates lie on a compact set.

3. Sequential probability ratio controllers for safeguards radiation monitors

International Nuclear Information System (INIS)

Fehlau, P.E.; Coop, K.L.; Nixon, K.V.

1984-01-01

Sequential hypothesis tests applied to nuclear safeguards accounting methods make the methods more sensitive to detecting diversion. The sequential tests also improve transient signal detection in safeguards radiation monitors. This paper describes three microprocessor control units with sequential probability-ratio tests for detecting transient increases in radiation intensity. The control units are designed for three specific applications: low-intensity monitoring with Poisson probability ratios, higher intensity gamma-ray monitoring where fixed counting intervals are shortened by sequential testing, and monitoring moving traffic where the sequential technique responds to variable-duration signals. The fixed-interval controller shortens a customary 50-s monitoring time to an average of 18 s, making the monitoring delay less bothersome. The controller for monitoring moving vehicles benefits from the sequential technique by maintaining more than half its sensitivity when the normal passage speed doubles
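The low-intensity Poisson monitor described above can be sketched in the same Wald SPRT framework, assuming counts are accumulated over fixed intervals; the rates, error levels, and function name below are illustrative assumptions, not the parameters of the actual control units.

```python
from math import log

def poisson_sprt(counts, lam0, lam1, alpha, beta):
    """Wald SPRT on per-interval Poisson counts: H0 rate lam0 vs H1 rate lam1.

    LLR increment for an interval with count k: k*log(lam1/lam0) - (lam1 - lam0).
    Returns (decision, intervals_used).
    """
    upper = log((1 - beta) / alpha)
    lower = log(beta / (1 - alpha))
    llr = 0.0
    for n, k in enumerate(counts, start=1):
        llr += k * log(lam1 / lam0) - (lam1 - lam0)
        if llr >= upper:
            return "alarm", n       # elevated intensity detected
        if llr <= lower:
            return "background", n  # consistent with background rate
    return "undecided", len(counts)
```

A clearly elevated count stream triggers the alarm after only a couple of intervals, while background-level counts are cleared almost as quickly, which is the mechanism behind the shortened average monitoring times reported above.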

4. Equivalence between quantum simultaneous games and quantum sequential games

OpenAIRE

Kobayashi, Naoki

2007-01-01

A framework for discussing relationships between different types of games is proposed. Within the framework, quantum simultaneous games, finite quantum simultaneous games, quantum sequential games, and finite quantum sequential games are defined. In addition, a notion of equivalence between two games is defined. Finally, the following three theorems are shown: (1) For any quantum simultaneous game G, there exists a quantum sequential game equivalent to G. (2) For any finite quantum simultaneo...

5. Discrimination between sequential and simultaneous virtual channels with electrical hearing

OpenAIRE

Landsberger, David; Galvin, John J.

2011-01-01

In cochlear implants (CIs), simultaneous or sequential stimulation of adjacent electrodes can produce intermediate pitch percepts between those of the component electrodes. However, it is unclear whether simultaneous and sequential virtual channels (VCs) can be discriminated. In this study, CI users were asked to discriminate simultaneous and sequential VCs; discrimination was measured for monopolar (MP) and bipolar + 1 stimulation (BP + 1), i.e., relatively broad and focused stimulation mode...

6. C-quence: a tool for analyzing qualitative sequential data.

Science.gov (United States)

Duncan, Starkey; Collier, Nicholson T

2002-02-01

C-quence is a software application that matches sequential patterns of qualitative data specified by the user and calculates the rate of occurrence of these patterns in a data set. Although it was designed to facilitate analyses of face-to-face interaction, it is applicable to any data set involving categorical data and sequential information. C-quence queries are constructed using a graphical user interface. The program does not limit the complexity of the sequential patterns specified by the user.

7. Sequential rhodium/palladium catalysis: enantioselective formation of dihydroquinolinones in the presence of achiral and chiral ligands.

Science.gov (United States)

Zhang, Lei; Qureshi, Zafar; Sonaglia, Lorenzo; Lautens, Mark

2014-12-08

Compatible combinations of achiral and chiral ligands can be used in rhodium/palladium catalysis to achieve highly enantioselective domino reactions. The difference in rates of catalysis and minimal effects of ligand interference confer control in the domino sequence. The "all-in-one" 1,4-conjugate arylation and C-N cross-coupling through sequential Rh/Pd catalysis provides access to enantioenriched dihydroquinolinone building blocks. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

8. Minimal but non-minimal inflation and electroweak symmetry breaking

Energy Technology Data Exchange (ETDEWEB)

Marzola, Luca [National Institute of Chemical Physics and Biophysics,Rävala 10, 10143 Tallinn (Estonia); Institute of Physics, University of Tartu,Ravila 14c, 50411 Tartu (Estonia); Racioppi, Antonio [National Institute of Chemical Physics and Biophysics,Rävala 10, 10143 Tallinn (Estonia)

2016-10-07

We consider the most minimal scale invariant extension of the standard model that allows for successful radiative electroweak symmetry breaking and inflation. The framework involves an extra scalar singlet that plays the rôle of the inflaton, and is compatible with current experimental bounds owing to the non-minimal coupling of the latter to gravity. This inflationary scenario predicts a very low tensor-to-scalar ratio r ≈ 10⁻³, typical of Higgs-inflation models, but in contrast yields a scalar spectral index n_s ≃ 0.97 which departs from the Starobinsky limit. We briefly discuss the collider phenomenology of the framework.

9. Discrimination between sequential and simultaneous virtual channels with electrical hearing.

Science.gov (United States)

Landsberger, David; Galvin, John J

2011-09-01

In cochlear implants (CIs), simultaneous or sequential stimulation of adjacent electrodes can produce intermediate pitch percepts between those of the component electrodes. However, it is unclear whether simultaneous and sequential virtual channels (VCs) can be discriminated. In this study, CI users were asked to discriminate simultaneous and sequential VCs; discrimination was measured for monopolar (MP) and bipolar + 1 stimulation (BP + 1), i.e., relatively broad and focused stimulation modes. For sequential VCs, the interpulse interval (IPI) varied between 0.0 and 1.8 ms. All stimuli were presented at comfortably loud, loudness-balanced levels at a 250 pulse per second per electrode (ppse) stimulation rate. On average, CI subjects were able to reliably discriminate between sequential and simultaneous VCs. While there was no significant effect of IPI or stimulation mode on VC discrimination, some subjects exhibited better VC discrimination with BP + 1 stimulation. Subjects' discrimination between sequential and simultaneous VCs was correlated with electrode discrimination, suggesting that spatial selectivity may influence perception of sequential VCs. To maintain equal loudness, sequential VC amplitudes were nearly double those of simultaneous VCs, presumably resulting in a broader spread of excitation. These results suggest that perceptual differences between simultaneous and sequential VCs might be explained by differences in the spread of excitation. © 2011 Acoustical Society of America

10. Lineup composition, suspect position, and the sequential lineup advantage.

Science.gov (United States)

Carlson, Curt A; Gronlund, Scott D; Clark, Steven E

2008-06-01

N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate in the simultaneous lineup, and no sequential lineup advantage was found. This led the authors to hypothesize that protection from a sequential lineup might emerge only when an innocent suspect stands out from the other lineup members. In Experiment 2, participants viewed a simultaneous or sequential lineup with either the guilty suspect or 1 of 3 innocent suspects. Lineup fairness was varied to influence the degree to which a suspect stood out. A sequential lineup advantage was found only for the unfair lineups. Additional analyses of suspect position in the sequential lineups showed an increase in the diagnosticity of suspect identifications as the suspect was placed later in the sequential lineup. These results suggest that the sequential lineup advantage is dependent on lineup composition and suspect position. (c) 2008 APA, all rights reserved

11. Group-sequential analysis may allow for early trial termination

DEFF Research Database (Denmark)

Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich

2017-01-01

BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic trials, but rarely in diagnostic research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT measurements...
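As a hedged illustration of the group-sequential idea (not the analysis strategy of this particular study), a two-stage design with Pocock's constant boundary stops at the interim look when the z-statistic already crosses the critical value; 2.178 is the tabulated Pocock boundary for two equally spaced looks at two-sided α = 0.05. The function name and return labels are illustrative.

```python
def pocock_decision(interim_z, final_z=None, crit=2.178):
    """Two-stage Pocock group-sequential test (two-sided alpha = 0.05).

    Stop at the interim analysis if |z| exceeds the constant boundary;
    otherwise decide at the final analysis with the same boundary.
    """
    if abs(interim_z) >= crit:
        return "reject at interim"   # early termination, as discussed above
    if final_z is None:
        return "continue to final analysis"
    return "reject at final" if abs(final_z) >= crit else "accept"
```

The boundary is deliberately larger than the fixed-sample 1.96 so that the overall type I error stays at 5% despite the repeated look; that inflation of the critical value is the price paid for the chance of early termination.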

12. Sequential infiltration synthesis for advanced lithography

Energy Technology Data Exchange (ETDEWEB)

Darling, Seth B.; Elam, Jeffrey W.; Tseng, Yu-Chih; Peng, Qing

2017-10-10

A plasma etch resist material modified by an inorganic protective component via sequential infiltration synthesis (SIS) and methods of preparing the modified resist material. The modified resist material is characterized by an improved resistance to a plasma etching or related process relative to the unmodified resist material, thereby allowing formation of patterned features into a substrate material, which may be high-aspect ratio features. The SIS process forms the protective component within the bulk resist material through a plurality of alternating exposures to gas phase precursors which infiltrate the resist material. The plasma etch resist material may be initially patterned using photolithography, electron-beam lithography or a block copolymer self-assembly process.

13. Clinical evaluation of synthetic aperture sequential beamforming

DEFF Research Database (Denmark)

Hansen, Peter Møller; Hemmsen, Martin Christian; Lange, Theis

2012-01-01

In this study clinically relevant ultrasound images generated with synthetic aperture sequential beamforming (SASB) are compared to images generated with a conventional technique. The advantage of SASB is the ability to produce high resolution ultrasound images with a high frame rate and at the same time massively reduce the amount of generated data. SASB was implemented in a system consisting of a conventional ultrasound scanner connected to a PC via a research interface. This setup enables simultaneous recording with both SASB and the conventional technique. Eighteen volunteers were ultrasound scanned abdominally, and 84 sequence pairs were recorded. Each sequence pair consists of two simultaneous recordings of the same anatomical location with SASB and conventional B-mode imaging. The images were evaluated in terms of spatial resolution, contrast, unwanted artifacts, and penetration depth...

14. Sequential cooling insert for turbine stator vane

Science.gov (United States)

Jones, Russel B

2017-04-04

A sequential flow cooling insert for a turbine stator vane of a small gas turbine engine, where the impingement cooling insert is formed as a single piece from a metal additive manufacturing process such as 3D metal printing, and where the insert includes a plurality of rows of radial extending impingement cooling air holes alternating with rows of radial extending return air holes on a pressure side wall, and where the insert includes a plurality of rows of chordwise extending second impingement cooling air holes on a suction side wall. The insert includes alternating rows of radial extending cooling air supply channels and return air channels that form a series of impingement cooling on the pressure side followed by the suction side of the insert.

15. Gleason-Busch theorem for sequential measurements

Science.gov (United States)

Flatt, Kieran; Barnett, Stephen M.; Croke, Sarah

2017-12-01

Gleason's theorem is a statement that, given some reasonable assumptions, the Born rule used to calculate probabilities in quantum mechanics is essentially unique [A. M. Gleason, Indiana Univ. Math. J. 6, 885 (1957), 10.1512/iumj.1957.6.56050]. We show that Gleason's theorem contains within it also the structure of sequential measurements, and along with this the state update rule. We give a small set of axioms, which are physically motivated and analogous to those in Busch's proof of Gleason's theorem [P. Busch, Phys. Rev. Lett. 91, 120403 (2003), 10.1103/PhysRevLett.91.120403], from which the familiar Kraus operator form follows. An axiomatic approach has practical relevance as well as fundamental interest, in making clear those assumptions which underlie the security of quantum communication protocols. Interestingly, the two-time formalism is seen to arise naturally in this approach.

16. Multilevel sequential Monte-Carlo samplers

KAUST Repository

Jasra, Ajay

2016-01-01

Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given computational effort. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.
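The telescoping idea can be sketched on a toy problem: estimate E[u²] for u ~ U(0,1) using grid-based approximations at resolutions h_l = 2^−l, coupling each level difference through the same random draw. This is an illustrative sketch of plain multilevel Monte-Carlo only, not the SMC sampler of the paper (no particle weights or resampling are involved), and all names are assumptions.

```python
import random

def payoff(u, level):
    """Level-l approximation: evaluate u^2 on a grid of width 2^-level."""
    h = 2.0 ** -level
    return (int(u / h) * h) ** 2

def mlmc_estimate(max_level, samples_per_level, seed=0):
    """Telescoping multilevel estimator of E[u^2], u ~ U(0,1) (exact: 1/3).

    Each correction term couples the fine and coarse resolutions through
    the SAME random draw, which keeps the level differences small: this
    coupling is the source of the multilevel benefit described above.
    """
    rng = random.Random(seed)
    total = 0.0
    for level in range(max_level + 1):
        acc = 0.0
        for _ in range(samples_per_level):
            u = rng.random()
            fine = payoff(u, level)
            coarse = payoff(u, level - 1) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / samples_per_level
    return total
```

Because the variance of the level differences shrinks as the grids refine, far fewer samples are actually needed at the fine levels; the sketch uses a fixed budget per level only to stay short.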

17. Sequential Stereotype Priming: A Meta-Analysis.

Science.gov (United States)

Kidder, Ciara K; White, Katherine R; Hinojos, Michelle R; Sandoval, Mayra; Crites, Stephen L

2017-08-01

Psychological interest in stereotype measurement has spanned nearly a century, with researchers adopting implicit measures in the 1980s to complement explicit measures. One of the most frequently used implicit measures of stereotypes is the sequential priming paradigm. The current meta-analysis examines stereotype priming, focusing specifically on this paradigm. To contribute to ongoing discussions regarding methodological rigor in social psychology, one primary goal was to identify methodological moderators of the stereotype priming effect-whether priming is due to a relation between the prime and target stimuli, the prime and target response, participant task, stereotype dimension, stimulus onset asynchrony (SOA), and stimuli type. Data from 39 studies yielded 87 individual effect sizes from 5,497 participants. Analyses revealed that stereotype priming is significantly moderated by the presence of prime-response relations, participant task, stereotype dimension, target stimulus type, SOA, and prime repetition. These results carry both practical and theoretical implications for future research on stereotype priming.

18. Sequential Acral Lentiginous Melanomas of the Foot

Directory of Open Access Journals (Sweden)

Jiro Uehara

2010-12-01

Full Text Available A 64-year-old Japanese woman had a lightly brown-blackish pigmented macule (1.2 cm in diameter) on the left sole of her foot. She underwent surgical excision following a diagnosis of acral lentiginous melanoma (ALM), which was confirmed histopathologically. One month after the operation, a second melanoma lesion was noticed adjacent to the grafted site. Histopathologically, the two lesions had no continuity, but HMB-45 and cyclin D1 double-positive cells were detected not only on aggregates of atypical melanocytes but also on single cells near the cutting edge of the first lesion. The unique occurrence of a sequential lesion of a primary melanoma might be caused by stimulated subclinical field cells during the wound healing process following the initial operation. This case warrants further investigation to establish the appropriate surgical margin of ALM lesions.

19. Dancing Twins: Stellar Hierarchies That Formed Sequentially?

Science.gov (United States)

Tokovinin, Andrei

2018-04-01

This paper draws attention to the class of resolved triple stars with moderate ratios of inner and outer periods (possibly in a mean motion resonance) and nearly circular, mutually aligned orbits. Moreover, stars in the inner pair are twins with almost identical masses, while the mass sum of the inner pair is comparable to the mass of the outer component. Such systems could be formed either sequentially (inside-out) by disk fragmentation with subsequent accretion and migration, or by a cascade hierarchical fragmentation of a rotating cloud. Orbits of the outer and inner subsystems are computed or updated in four such hierarchies: LHS 1070 (GJ 2005, periods 77.6 and 17.25 years), HIP 9497 (80 and 14.4 years), HIP 25240 (1200 and 47.0 years), and HIP 78842 (131 and 10.5 years).

1. Sequential Therapy in Metastatic Renal Cell Carcinoma

Directory of Open Access Journals (Sweden)

2016-04-01

Full Text Available The treatment of metastatic renal cell carcinoma (mRCC) has changed dramatically in the past decade. As the number of available agents, and related volume of research, has grown, it is increasingly complex to know how to optimally treat patients. The authors are practicing medical oncologists at the US Oncology Network, the largest community-based network of oncology providers in the country, and represent the leadership of the Network's Genitourinary Research Committee. We outline our thought process in approaching sequential therapy of mRCC and the use of real-world data to inform our approach. We also highlight the evolving literature that will impact practicing oncologists in the near future.

2. Microstructure history effect during sequential thermomechanical processing

International Nuclear Information System (INIS)

Yassar, Reza S.; Murphy, John; Burton, Christina; Horstemeyer, Mark F.; El kadiri, Haitham; Shokuhfar, Tolou

2008-01-01

The key to modeling material processing behavior is linking the microstructure evolution to its processing history. This paper quantifies various microstructural features of an aluminum automotive alloy that undergoes sequential thermomechanical processing comprising hot rolling of a 150-mm billet to 75 mm, rolling to 3 mm, annealing, and then cold rolling to a 0.8-mm-thick sheet. The microstructural content was characterized by means of electron backscatter diffraction, scanning electron microscopy, and transmission electron microscopy. The results clearly demonstrate the evolution of precipitate morphologies, dislocation structures, and grain orientation distributions. These data can be used to improve material models that claim to capture the history effects of the processed materials.

3. Prosody and alignment: a sequential perspective

Science.gov (United States)

Szczepek Reed, Beatrice

2010-12-01

In their analysis of a corpus of classroom interactions in an inner-city high school, Roth and Tobin describe how teachers and students accomplish interactional alignment by prosodically matching each other's turns. Prosodic matching and specific prosodic patterns are interpreted as signs of, and contributions to, successful interactional outcomes and positive emotions. Lack of prosodic matching and other specific prosodic patterns are interpreted as features of unsuccessful interactions and negative emotions. This forum focuses on the article's analysis of the relation between interpersonal alignment, emotion and prosody. It argues that prosodic matching, and other prosodic linking practices, play a primarily sequential role, i.e. one that displays the way in which participants place and design their turns in relation to other participants' turns. Prosodic matching, rather than being a conversational action in itself, is argued to be an interactional practice (Schegloff 1997), which is not always employed for the accomplishment of 'positive', or aligning, actions.

4. New sequential separation procedure for Sr, Nd and Pb isotope ratio measurement in geological material using MC-ICP-MS and TIMS

Digital Repository Service at National Institute of Oceanography (India)

Makishima, A.; Nath, B.N.; Nakamura, E.

A new 3-step sequential separation chemistry for Sr, Nd and Pb from silicate samples, which is suitable for isotope analysis by MC-ICP-MS as well as TIMS, has been developed. The chemistry is designed to minimize the number of evaporation steps...

5. Monitoring sequential electron transfer with EPR

International Nuclear Information System (INIS)

Thurnauer, M.C.; Feezel, L.L.; Snyder, S.W.; Tang, J.; Norris, J.R.; Morris, A.L.; Rustandi, R.R.

1989-01-01

A completely general model which treats electron spin polarization (ESP) found in a system in which radical pairs with different magnetic interactions are formed sequentially has been described. This treatment has been applied specifically to the ESP found in the bacterial reaction center. Test cases show clearly how parameters such as structure, lifetime, and magnetic interactions within the successive radical pairs affect the ESP, and demonstrate that previous treatments of this problem have been incomplete. The photosynthetic bacterial reaction center protein is an ideal system for testing the general model of ESP. The radical pair which exhibits ESP, P870⁺Q⁻ (P870⁺ is the oxidized, primary electron donor, a bacteriochlorophyll special pair, and Q⁻ is the reduced, primary quinone acceptor), is formed via sequential electron transport through the intermediary radical pair P870⁺I⁻ (I⁻ is the reduced, intermediary electron acceptor, a bacteriopheophytin). In addition, it is possible to experimentally vary most of the important parameters, such as the lifetime of the intermediary radical pair and the magnetic interactions in each pair. It has been shown how selective isotopic substitution (¹H or ²H) on P870, I and Q affects the ESP of the EPR spectrum of P870⁺Q⁻, observed at two different microwave frequencies, in Fe²⁺-depleted bacterial reaction centers of Rhodobacter sphaeroides R26. Thus, the relative magnitudes of the magnetic properties (nuclear hyperfine and g-factor differences) which influence ESP development were varied. The results support the general model of ESP in that they suggest that the P870⁺Q⁻ radical pair interactions are the dominant source of ESP production in ²H bacterial reaction centers.

6. Production of microbial biomass protein by sequential culture fermentation of Arachniotus sp., and Candida utilis

International Nuclear Information System (INIS)

Ahmed, S.; Ahmad, F.; Hashmi, A.S.

2010-01-01

Sequential culture fermentation by Arachniotus sp. at 35 °C for 72 h, followed by Candida utilis fermentation at 35 °C for a further 72 h, resulted in higher production of microbial biomass protein. 6% (w/v) corn stover, 0.0075% CaCl₂·2H₂O, 0.005% MgSO₄·7H₂O, 0.01% KH₂PO₄, a C:N ratio of 30:1 and 1% molasses gave higher microbial biomass protein production by the sequential culture fermentation of Arachniotus sp. and C. utilis. The mixed microbial biomass protein produced in the 75-L fermentor contained 16.41%, 23.51%, 10.9%, 12.11% and 0.12% true protein, crude protein, crude fiber, ash and RNA content, respectively. The amino acid profile of the final mixed microbial biomass protein showed that it was enriched with essential amino acids. Thus, the potential utilization of corn stover can minimize the cost for growth of these microorganisms and enhance microbial biomass protein production by sequential culture fermentation. (author)

7. Novel 2D-sequential color code system employing Image Sensor Communications for Optical Wireless Communications

Directory of Open Access Journals (Sweden)

Trang Nguyen

2016-06-01

Full Text Available The IEEE 802.15.7r1 Optical Wireless Communications Task Group (TG7r1), also known as the revision of the IEEE 802.15.7 Visible Light Communication standard targeting the commercial usage of visible light communication systems, is of interest in this paper. The paper is mainly concerned with the Image Sensor Communications (ISC) of TG7r1; however, the major challenge facing ISC, as addressed in the Technical Considerations Document (TCD) of TG7r1, is image sensor compatibility among the variety of commercial cameras on the market. One of the most challenging but interesting compatibility requirements is the need to support frame rate variation. This paper proposes a novel design for a 2D sequential color code. Compared to a QR-code-based sequential transmission, the proposed 2D sequential code overcomes this challenge: it is compatible with different frame rate variations and different shutter operations, and can mitigate the rolling effect as well as the rotating effect while effectively minimizing transmission overhead. Practical implementations are demonstrated and a performance comparison is presented.

8. Minimal modification to tribimaximal mixing

International Nuclear Information System (INIS)

He Xiaogang; Zee, A.

2011-01-01

We explore some ways of minimally modifying the neutrino mixing matrix from tribimaximal, characterized by introducing at most one mixing angle and a CP violating phase, thus extending our earlier work. One minimal modification, motivated to some extent by group theoretic considerations, is a simple case with the elements Vα2 of the second column in the mixing matrix equal to 1/√3. Modifications keeping one of the columns or one of the rows unchanged from tribimaximal mixing all belong to this class of minimal modification. Some of the cases have interesting experimentally testable consequences. In particular, the T2K and MINOS collaborations have recently reported indications of a nonzero θ13. For the cases we consider, the new data sharply constrain the CP violating phase angle δ, with δ close to 0 (in some cases) and π disfavored.
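For concreteness, the tribimaximal mixing matrix that these modifications start from has the standard form (sign conventions vary between references):

```latex
U_{\mathrm{TB}} =
\begin{pmatrix}
\sqrt{2/3} & 1/\sqrt{3} & 0 \\
-1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2} \\
-1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2}
\end{pmatrix}
```

All three entries of the second column equal 1/√3, which is the feature preserved by the simple case quoted above, and the vanishing (1,3) entry corresponds to θ13 = 0, which is why a nonzero θ13 from T2K and MINOS forces some modification of the tribimaximal pattern.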

9. Topological gravity with minimal matter

International Nuclear Information System (INIS)

Li Keke

1991-01-01

Topological minimal matter, obtained by twisting the minimal N = 2 superconformal field theory, is coupled to two-dimensional topological gravity. The free field formulation of the coupled system allows explicit representations of the BRST charge, physical operators and their correlation functions. The contact terms of the physical operators may be evaluated by extending the argument used in a recent solution of topological gravity without matter. The consistency of the contact terms in correlation functions implies recursion relations which coincide with the Virasoro constraints derived from the multi-matrix models. Topological gravity with minimal matter thus provides the field theoretic description for the multi-matrix models of two-dimensional quantum gravity. (orig.)

10. Minimal Marking: A Success Story

Directory of Open Access Journals (Sweden)

Anne McNeilly

2014-11-01

Full Text Available The minimal-marking project conducted in Ryerson’s School of Journalism throughout 2012 and early 2013 resulted in significantly higher grammar scores in two first-year classes of minimally marked university students when compared to two traditionally marked classes. The “minimal-marking” concept (Haswell, 1983, which requires dramatically more student engagement, resulted in more successful learning outcomes for surface-level knowledge acquisition than the more traditional approach of “teacher-corrects-all.” Results suggest it would be effective, not just for grammar, punctuation, and word usage, the objective here, but for any material that requires rote-memory learning, such as the Associated Press or Canadian Press style rules used by news publications across North America.

11. UNCONSTRAINED MELTING AND SOLIDIFICATION INSIDE ...

African Journals Online (AJOL)

2015-09-01

There is a large number of experimental and numerical works on melting and solidification of PCM [6-10], and also on its usage for thermal management in buildings [11-14], electronic devices [15-16] and solar energy [17-20]. The most investigated geometries in melting and freezing processes are spheres (spherical...

12. Non-minimal inflation revisited

International Nuclear Information System (INIS)

2010-01-01

We reconsider an inflationary model in which the inflaton field is non-minimally coupled to gravity. We study the parameter space of the model up to the second (and in some cases third) order of the slow-roll parameters. We calculate inflation parameters in both the Jordan and Einstein frames, and the results are compared in these two frames and also with observations. Using the recent observational data from the combined WMAP5+SDSS+SNIa datasets, we study the constraints imposed on our model parameters, especially the non-minimal coupling ξ.

13. Minimal Flavor Constraints for Technicolor

DEFF Research Database (Denmark)

Sakuma, Hidenori; Sannino, Francesco

2010-01-01

We analyze the constraints on the vacuum polarization of the standard model gauge bosons from a minimal set of flavor observables valid for a general class of models of dynamical electroweak symmetry breaking. We will show that the constraints have a strong impact on the self-coupling and mas...

14. Harm minimization among teenage drinkers

DEFF Research Database (Denmark)

Jørgensen, Morten Hulvej; Curtis, Tine; Christensen, Pia Haudrup

2007-01-01

AIM: To examine strategies of harm minimization employed by teenage drinkers. DESIGN, SETTING AND PARTICIPANTS: Two periods of ethnographic fieldwork were conducted in a rural Danish community of approximately 2000 inhabitants. The fieldwork included 50 days of participant observation among 13... In regulating the social context of drinking they relied on their personal experiences more than on formalized knowledge about alcohol and harm, which they had learned from prevention campaigns and educational programmes. CONCLUSIONS: In this study we found that teenagers may help each other to minimize alcohol...

15. Context Effects in Forensic Entomology and Use of Sequential Unmasking in Casework.

Science.gov (United States)

Archer, Melanie S; Wallman, James F

2016-09-01

Context effects are pervasive in forensic science, and are being recognized by a growing number of disciplines as a threat to objectivity. Cognitive processes can be affected by extraneous context information, and many proactive scientists are therefore introducing context-minimizing systems into their laboratories. Forensic entomologists are also subject to context effects, both in the processes they undertake (e.g., evidence collection) and decisions they make (e.g., whether an invertebrate taxon is found in a certain geographic area). We stratify the risk of bias into low, medium, and high for the decisions and processes undertaken by forensic entomologists, and propose that knowledge of the time the deceased was last seen alive is the most potentially biasing piece of information for forensic entomologists. Sequential unmasking is identified as the best system for minimizing context information, illustrated with the results of a casework trial (n = 19) using this approach in Victoria, Australia. © 2016 American Academy of Forensic Sciences.

16. Isoperimetric inequalities for minimal graphs

International Nuclear Information System (INIS)

Pacelli Bessa, G.; Montenegro, J.F.

2007-09-01

Based on Markvorsen and Palmer's work on mean exit time and isoperimetric inequalities, we establish slightly better isoperimetric inequalities and mean exit time estimates for minimal graphs in N x R. We also prove isoperimetric inequalities for submanifolds of Hadamard spaces with tamed second fundamental form. (author)

17. A Defense of Semantic Minimalism

Science.gov (United States)

Kim, Su

2012-01-01

Semantic Minimalism is a position about the semantic content of declarative sentences, i.e., the content that is determined entirely by syntax. It is defined by the following two points: "Point 1": The semantic content is a complete/truth-conditional proposition. "Point 2": The semantic content is useful to a theory of…

18. Torsional Rigidity of Minimal Submanifolds

DEFF Research Database (Denmark)

Markvorsen, Steen; Palmer, Vicente

2006-01-01

We prove explicit upper bounds for the torsional rigidity of extrinsic domains of minimal submanifolds $P^m$ in ambient Riemannian manifolds $N^n$ with a pole $p$. The upper bounds are given in terms of the torsional rigidities of corresponding Schwarz symmetrizations of the domains in warped...

19. The debate on minimal deterrence

International Nuclear Information System (INIS)

Arbatov, A.; Karp, R.C.; Toth, T.

1993-01-01

Revitalization of debates on minimal nuclear deterrence at the present time is induced by the end of the Cold War and a number of unilateral and bilateral actions by the great powers to curtail the nuclear arms race and reduce nuclear weapons arsenals.

20. LLNL Waste Minimization Program Plan

International Nuclear Information System (INIS)

1990-01-01

This document is the February 14, 1990 version of the LLNL Waste Minimization Program Plan (WMPP). The Waste Minimization Policy field has undergone continuous changes since its formal inception in the 1984 HSWA legislation. The first LLNL WMPP, Revision A, is dated March 1985. A series of informal revisions were made on approximately a semi-annual basis. This Revision 2 is the third formal issuance of the WMPP document. EPA has issued a proposed new policy statement on source reduction and recycling. This policy reflects a preventative strategy to reduce or eliminate the generation of environmentally-harmful pollutants which may be released to the air, land surface, water, or ground water. In accordance with this new policy, new guidance to hazardous waste generators on the elements of a Waste Minimization Program was issued. In response to these policies, DOE has revised and issued implementation guidance for DOE Order 5400.1, Waste Minimization Plan and Waste Reduction reporting of DOE Hazardous, Radioactive, and Radioactive Mixed Wastes, final draft January 1990. This WMPP is formatted to meet the current DOE guidance outlines. The current WMPP will be revised to reflect all of these proposed changes when guidelines are established. Updates, changes and revisions to the overall LLNL WMPP will be made as appropriate to reflect ever-changing regulatory requirements. 3 figs., 4 tabs

1. Minimizing TLD-DRD differences

International Nuclear Information System (INIS)

Riley, D.L.; McCoy, R.A.; Connell, W.D.

1987-01-01

When substantial differences exist in exposures recorded by TLDs and DRDs, it is often necessary to perform an exposure investigation to reconcile the difference. In working with several operating plants, the authors have observed a number of causes for these differences. This paper outlines these observations and discusses procedures that can be used to minimize them.

2. Acquiring minimally invasive surgical skills

NARCIS (Netherlands)

Hiemstra, Ellen

2012-01-01

Many topics in surgical skills education have been implemented without a solid scientific basis. For that reason we have tried to find this scientific basis. We have focused on training and evaluation of minimally invasive surgical skills in a training setting and in practice in the operating room.

3. Campbell and moment measures for finite sequential spatial processes

NARCIS (Netherlands)

M.N.M. van Lieshout (Marie-Colette)

2006-01-01

We define moment and Campbell measures for sequential spatial processes, prove a Campbell-Mecke theorem, and relate the results to their counterparts in the theory of point processes. In particular, we show that any finite sequential spatial process model can be derived as the vector

4. Simultaneous versus sequential penetrating keratoplasty and cataract surgery.

Science.gov (United States)

Hayashi, Ken; Hayashi, Hideyuki

2006-10-01

To compare the surgical outcomes of simultaneous penetrating keratoplasty and cataract surgery with those of sequential surgery. Thirty-nine eyes of 39 patients scheduled for simultaneous keratoplasty and cataract surgery and 23 eyes of 23 patients scheduled for sequential keratoplasty and secondary phacoemulsification surgery were recruited. Refractive error, regular and irregular corneal astigmatism determined by Fourier analysis, and endothelial cell loss were studied at 1 week and 3, 6, and 12 months after combined surgery in the simultaneous surgery group or after subsequent phacoemulsification surgery in the sequential surgery group. At 3 and more months after surgery, mean refractive error was significantly greater in the simultaneous surgery group than in the sequential surgery group, although no difference was seen at 1 week. The refractive error at 12 months was within 2 D of that targeted in 15 eyes (39%) in the simultaneous surgery group and within 2 D of that targeted in 16 eyes (70%) in the sequential surgery group; the incidence was significantly greater in the sequential group (P = 0.0344). The regular and irregular astigmatism were not significantly different between the groups at 3 and more months after surgery. Likewise, no significant difference was found in the percentage of endothelial cell loss between the groups. Although corneal astigmatism and endothelial cell loss were not different, refractive error from target refraction was greater after simultaneous keratoplasty and cataract surgery than after sequential surgery, indicating a better outcome after sequential surgery than after simultaneous surgery.

5. Reading Remediation Based on Sequential and Simultaneous Processing.

Science.gov (United States)

Gunnison, Judy; And Others

1982-01-01

The theory postulating a dichotomy between sequential and simultaneous processing is reviewed and its implications for remediating reading problems are reviewed. Research is cited on sequential-simultaneous processing for early and advanced reading. A list of remedial strategies based on the processing dichotomy addresses decoding and lexical…

6. Induction of simultaneous and sequential malolactic fermentation in durian wine.

Science.gov (United States)

Taniasuri, Fransisca; Lee, Pin-Rou; Liu, Shao-Quan

2016-08-02

This study represented for the first time the impact of malolactic fermentation (MLF) induced by Oenococcus oeni and its inoculation strategies (simultaneous vs. sequential) on the fermentation performance as well as aroma compound profile of durian wine. There was no negative impact of simultaneous inoculation of O. oeni and Saccharomyces cerevisiae on the growth and fermentation kinetics of S. cerevisiae as compared to sequential fermentation. Simultaneous MLF did not lead to an excessive increase in volatile acidity as compared to sequential MLF. The kinetic changes of organic acids (i.e. malic, lactic, succinic, acetic and α-ketoglutaric acids) varied with simultaneous and sequential MLF relative to yeast alone. MLF, regardless of inoculation mode, resulted in higher production of fermentation-derived volatiles as compared to control (alcoholic fermentation only), including esters, volatile fatty acids, and terpenes, except for higher alcohols. Most indigenous volatile sulphur compounds in durian were decreased to trace levels with little differences among the control, simultaneous and sequential MLF. Among the different wines, the wine with simultaneous MLF had higher concentrations of terpenes and acetate esters while sequential MLF had increased concentrations of medium- and long-chain ethyl esters. Relative to alcoholic fermentation only, both simultaneous and sequential MLF reduced acetaldehyde substantially with sequential MLF being more effective. These findings illustrate that MLF is an effective and novel way of modulating the volatile and aroma compound profile of durian wine. Copyright © 2016 Elsevier B.V. All rights reserved.

7. A Survey of Multi-Objective Sequential Decision-Making

NARCIS (Netherlands)

Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.

2013-01-01

Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential

8. Dynamics-based sequential memory: Winnerless competition of patterns

International Nuclear Information System (INIS)

Seliger, Philip; Tsimring, Lev S.; Rabinovich, Mikhail I.

2003-01-01

We introduce a biologically motivated dynamical principle of sequential memory which is based on winnerless competition (WLC) of event images. This mechanism is implemented in a two-layer neural model of sequential spatial memory. We present the learning dynamics which leads to the formation of a WLC network. After learning, the system is capable of associative retrieval of prerecorded sequences of patterns

9. Sequential, progressive, equal-power, reflective beam-splitter arrays

Science.gov (United States)

Manhart, Paul K.

2017-11-01

The equations to calculate the equal-power reflectivity of a sequential series of beam splitters are presented. Non-sequential optical design examples are offered for uniform illumination using diode lasers. Objects created using Boolean operators and swept surfaces can reflect light into predefined elevation and azimuth angles. Analysis of the illumination patterns for the array is also presented.
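The abstract does not reproduce the equations, but the standard lossless result is that, for N equal-power outputs, the k-th splitter in the chain must reflect 1/(N - k + 1) of the power reaching it (so the last element is a full mirror). The sketch below assumes exactly that relation; the function names are illustrative, not from the paper:

```python
def equal_power_reflectivities(n):
    """Reflectivity of the k-th splitter (1-indexed) so that each of the
    n tapped output beams carries 1/n of the input power (lossless case)."""
    return [1.0 / (n - k + 1) for k in range(1, n + 1)]

def output_powers(reflectivities, p_in=1.0):
    """Power tapped off at each splitter as the beam propagates down the chain."""
    powers = []
    remaining = p_in
    for r in reflectivities:
        powers.append(remaining * r)   # reflected (tapped) portion
        remaining *= (1.0 - r)         # transmitted portion continues
    return powers
```

For n = 4 this gives reflectivities 1/4, 1/3, 1/2, 1, and each tapped beam carries a quarter of the input power.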

10. The sequential price of anarchy for atomic congestion games

NARCIS (Netherlands)

de Jong, Jasper; Uetz, Marc Jochen; Liu, Tie-Yan; Qi, Qi; Ye, Yinyu

2014-01-01

In situations without central coordination, the price of anarchy relates the quality of any Nash equilibrium to the quality of a global optimum. Instead of assuming that all players choose their actions simultaneously, we consider games where players choose their actions sequentially. The sequential

11. Quantum Probability Zero-One Law for Sequential Terminal Events

Science.gov (United States)

Rehder, Wulf

1980-07-01

On the basis of the Jauch-Piron quantum probability calculus a zero-one law for sequential terminal events is proven, and the significance of certain crucial axioms in the quantum probability calculus is discussed. The result shows that the Jauch-Piron set of axioms is appropriate for the non-Boolean algebra of sequential events.

12. Lineup Composition, Suspect Position, and the Sequential Lineup Advantage

Science.gov (United States)

Carlson, Curt A.; Gronlund, Scott D.; Clark, Steven E.

2008-01-01

N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate…

13. Accounting for Heterogeneous Returns in Sequential Schooling Decisions

NARCIS (Netherlands)

Zamarro, G.

2006-01-01

This paper presents a method for estimating returns to schooling that takes into account that returns may be heterogeneous among agents and that educational decisions are made sequentially. A sequential decision model is interesting because it explicitly considers that the level of education of each

14. Assessing the effect of sodium dichloroisocyanurate concentration on transfer of Salmonella enterica serotype Typhimurium in wash water for production of minimally processed iceberg lettuce (Lactuca sativa L.).

Science.gov (United States)

Maffei, D F; Sant'Ana, A S; Monteiro, G; Schaffner, D W; Franco, B D G M

2016-06-01

This study evaluated the impact of sodium dichloroisocyanurate (5, 10, 20, 30, 40, 50 and 250 mg l⁻¹) in wash water on transfer of Salmonella Typhimurium from contaminated lettuce to wash water and then to other noncontaminated lettuces washed sequentially in the same water. Experiments were designed mimicking the conditions commonly seen in minimally processed vegetable (MPV) processing plants in Brazil. The scenarios were as follows: (1) Washing one inoculated lettuce portion in nonchlorinated water, followed by washing 10 noninoculated portions sequentially. (2) Washing one inoculated lettuce portion in chlorinated water followed by washing five noninoculated portions sequentially. (3) Washing five inoculated lettuce portions in chlorinated water sequentially, followed by washing five noninoculated portions sequentially. (4) Washing five noninoculated lettuce portions in chlorinated water sequentially, followed by washing five inoculated portions sequentially and then by washing five noninoculated portions sequentially in the same water. Salm. Typhimurium transfer from inoculated lettuce to wash water and further dissemination to noninoculated lettuces occurred when nonchlorinated water was used (scenario 1). When chlorinated water was used (scenarios 2, 3 and 4), no measurable Salm. Typhimurium transfer occurred if the sanitizer was ≥10 mg l⁻¹. Use of sanitizers in correct concentrations is important to minimize the risk of microbial transfer during MPV washing. In this study, the impact of sodium dichloroisocyanurate in the wash water on transfer of Salmonella Typhimurium from inoculated lettuce to wash water and then to other noninoculated lettuces washed sequentially in the same water was evaluated. The use of chlorinated water, at concentrations above 10 mg l⁻¹, effectively prevented Salm. Typhimurium transfer under several different washing scenarios. Conversely, when nonchlorinated water was used, Salm. Typhimurium transfer occurred in

15. The Unconstrained Event Bulletin (UEB) for the IMS Seismic Network Spanning the Period May 15-28, 2010: a New Resource for Algorithm Development and Testing

Science.gov (United States)

Brogan, R.; Young, C. J.; Ballard, S.

2017-12-01

A major problem with developing new data processing algorithms for seismic event monitoring is the lack of standard, high-quality "ground-truth" data sets to test against. The unfortunate effect of this is that new algorithms are often developed and tested with new data sets, making comparison of algorithms difficult and subjective. In an effort towards resolving this problem, we have developed the Unconstrained Event Bulletin (UEB), a ground-truth data set from the International Monitoring System (IMS) primary and auxiliary seismic networks for a two-week period in May 2010. All UEB analysis was performed by the same expert, who has more than 30 years of experience analyzing seismic data for nuclear explosion monitoring. We used the most complete International Data Centre (IDC) analyst-reviewed event bulletin (the Late Event Bulletin or LEB) as a starting point for this analysis. To make the UEB more complete, we relaxed the minimum event definition criteria to a pair of P-type and S-type phases at a single station, with azimuth/slowness used as defining. To add even more events that our analyst recognized and did not want to omit, in rare cases events were constructed using only one P-phase. Perhaps most importantly, our analyst spent on average more than 60 hours of analysis per day of data, far more than was possible in the production of the LEB. The result was that while the LEB had 2,101 events for the 2-week time period, we ended up with 11,435 events in the UEB, an increase of over 400%. New events are located all over the world and include both earthquakes and manmade events such as mining explosions. Our intent is to make our UEB data set openly available for all researchers to use for testing detection, correlation, and location algorithms, thus making it much easier to objectively compare different research efforts. Acknowledgement: Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and

16. Hazardous waste minimization tracking system

International Nuclear Information System (INIS)

Railan, R.

1994-01-01

Under RCRA sections 3002(b) and 3005(h), hazardous waste generators and owners/operators of treatment, storage, and disposal facilities (TSDFs) are required to certify that they have a program in place to reduce the volume or quantity and toxicity of hazardous waste to the degree determined to be economically practicable. In many cases there are environmental, as well as economic, benefits for agencies that pursue pollution prevention options. Several state governments have already enacted waste minimization legislation (e.g., the Massachusetts Toxic Use Reduction Act of 1989, and the Oregon Toxic Use Reduction Act and Hazardous Waste Reduction Act, July 2, 1989). About twenty-six other states have established legislation that will mandate some type of waste minimization program and/or facility planning. The need to address the HAZMIN (Hazardous Waste Minimization) Program at government agencies and private industries has prompted us to identify the importance of managing the HAZMIN Program and tracking various aspects of the program, as well as the progress made in this area. "WASTE" is a tracking system, which can be used and modified to maintain the information related to a Hazardous Waste Minimization Program in a manageable fashion. This program maintains, modifies, and retrieves information related to hazardous waste minimization and recycling, and provides automated report-generating capabilities. It has a built-in menu, which can be printed either in part or in full. There are instructions on preparing the Annual Waste Report and the Annual Recycling Report. The program is very user friendly. This program is available on 3.5-inch or 5.25-inch floppy disks. A computer with 640K memory is required.

17. Sequential Administration of Carbon Nanotubes and Near Infrared Radiation for the Treatment of Gliomas

Directory of Open Access Journals (Sweden)

Tiago eSantos

2014-07-01

The objective was to use carbon nanotubes (CNT) coupled with near infrared radiation (NIR) to induce hyperthermia, as a novel non-ionizing radiation treatment for primary brain tumors, glioblastoma multiforme (GBM). In this study we report the therapeutic potential of hyperthermia-induced thermal ablation using the sequential administration of carbon nanotubes and NIR. In vitro studies were performed using glioma tumor cell lines (U251, U87, LN229, T98G). Glioma cells were incubated with CNTs for 24 hours followed by exposure to NIR for 10 minutes. Glioma cells preferentially internalized CNTs, which upon NIR exposure generated heat, causing necrotic cell death. There were minimal effects on normal cells, which correlates with their minimal uptake of CNTs. Furthermore, this protocol caused cell death in glioma cancer stem cells, and in drug-resistant as well as drug-sensitive glioma cells. This sequential hyperthermia therapy was also effective in vivo, in the rodent tumor model, resulting in tumor shrinkage and no recurrence after only one treatment. In conclusion, this sequence of selective CNT administration followed by NIR activation provides a new approach to the treatment of glioma, particularly drug-resistant gliomas.

18. Contribution to computer aided design of digital circuits - Minimization of alphanumeric expressions - Program CHOPIN

International Nuclear Information System (INIS)

Blanca, Ernest

1974-10-01

Alphanumeric Boolean expressions, written in the form of sums of products and/or products of sums with many brackets, may be minimized in two steps: a syntactic recognition analysis using an operator-precedence grammar, followed by a syntactic reduction analysis. These two phases of execution and the different programs of the corresponding machine algorithm are described. Examples of the minimization of alphanumeric Boolean expressions written with brackets, usage notes for the program CHOPIN, and theoretical considerations related to languages, grammars, operator-precedence grammars, sequential systems, Boolean sets, Boolean representations and the treatment of Boolean expressions, and Boolean matrices and their use in grammar theory are discussed. (author) [fr
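CHOPIN itself is not reproduced in this record, but its two-phase idea (first recognize the product terms, then reduce them) can be illustrated with a toy sum-of-products reducer. The parser and the two reduction rules below (combining x·y + x·y' = x, and absorption x + x·y = x) are simplifications for illustration, not the report's algorithm:

```python
def parse_sop(expr):
    """Phase 1 (recognition): "ab + ab'" -> list of literal sets, where a
    trailing apostrophe marks a complemented variable."""
    terms = []
    for product in expr.replace(" ", "").split("+"):
        lits, i = set(), 0
        while i < len(product):
            lit = product[i]
            if i + 1 < len(product) and product[i + 1] == "'":
                lit += "'"
                i += 1
            lits.add(lit)
            i += 1
        terms.append(frozenset(lits))
    return terms

def reduce_terms(terms):
    """Phase 2 (reduction): repeatedly apply combining and absorption."""
    terms = list(dict.fromkeys(terms))          # drop duplicate terms
    changed = True
    while changed:
        changed = False
        for a in terms:
            for b in terms:
                if a == b:
                    continue
                diff = a ^ b
                # combining rule: x.y + x.y' = x
                if len(diff) == 2 and len(a) == len(b):
                    l1, l2 = diff
                    if l1.rstrip("'") == l2.rstrip("'"):
                        merged = frozenset(a - diff)
                        terms = list(dict.fromkeys(
                            [t for t in terms if t not in (a, b)] + [merged]))
                        changed = True
                        break
                # absorption rule: x + x.y = x
                if a < b:                       # a is a proper subset of b
                    terms = [t for t in terms if t != b]
                    changed = True
                    break
            if changed:
                break
    return terms
```

For example, "ab + ab'" reduces to the single term a, as does "a + ab" by absorption. Full minimization of arbitrary expressions would require a complete method such as Quine-McCluskey.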

19. Combined Orbital Fractures: Surgical Strategy of Sequential Repair

Directory of Open Access Journals (Sweden)

Su Won Hur

2015-07-01

Full Text Available BackgroundReconstruction of combined orbital floor and medial wall fractures with a comminuted inferomedial strut (IMS is challenging and requires careful practice. We present our surgical strategy and postoperative outcomes.MethodsWe divided 74 patients who underwent the reconstruction of the orbital floor and medial wall concomitantly into a comminuted IMS group (41 patients and non-comminuted IMS group (33 patients. In the comminuted IMS group, we first reconstructed the floor stably and then the medial wall by using separate implant pieces. In the non-comminuted IMS group, we reconstructed the floor and the medial wall with a single large implant.ResultsIn the follow-up of 6 to 65 months, most patients with diplopia improved in the first-week except one, who eventually improved at 1 year. All patients with an EOM limitation improved during the first month of follow-up. Enophthalmos (displacement, 2 mm was observed in two patients. The orbit volume measured on the CT scans was statistically significantly restored in both groups. No complications related to the surgery were observed.ConclusionsWe recommend the reconstruction of orbit walls in the comminuted IMS group by using the following surgical strategy: usage of multiple pieces of rigid implants instead of one large implant, sequential repair first of the floor and then of the medial wall, and a focus on the reconstruction of key areas. Our strategy of step-by-step reconstruction has the benefits of easy repair, less surgical trauma, and minimal stress to the surgeon.

20. Computer program for the determination of minimal cardiac transit times

International Nuclear Information System (INIS)

Bosiljanoff, P.; Herzog, H.; Schmid, A.; Sommer, D.; Vyska, K.; Feinendegen, L.E.

1982-10-01

An Anger-type gamma camera is used to register the first pass of a radioactive blood-flow tracer through the heart. The acquired data are processed by a suitable computer program yielding time-activity curves for sequential heart segments, which are selected by the region-of-interest technique. The program prints the minimal cardiac transit times, in terms of total transit times as well as segmental transit times, for the right atrium, right ventricle, lung, left atrium and left ventricle. The measured values are normalized to a heart rate of 80/min and are compared to normal mean values. The deviation from the normal mean values is characterized by a coefficient F. Moreover, these findings are qualitatively rated. (orig./MG)
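The abstract does not give the normalization formula. One plausible reading of "normalized to a rate of 80/min" is to express the measured transit time in heartbeats and convert back to seconds at the 80/min reference rate; the coefficient F is likewise assumed here to be a plain ratio against the normal mean. Both assumptions are illustrative:

```python
# Hedged sketch: assumed normalization, not the published program's formula.

def normalize_transit_time(t_seconds, heart_rate_bpm, ref_rate_bpm=80.0):
    beats = t_seconds * heart_rate_bpm / 60.0    # transit time in beats
    return beats * 60.0 / ref_rate_bpm           # seconds at the reference rate

def deviation_coefficient(measured, normal_mean):
    # Assumed form of the coefficient F comparing a value to the normal mean.
    return measured / normal_mean
```

Under this reading, a 6 s transit measured at 100 beats/min corresponds to 7.5 s at the 80/min reference rate.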

1. Constrained treatment planning using sequential beam selection

International Nuclear Information System (INIS)

Woudstra, E.; Storchi, P.R.M.

2000-01-01

In this paper an algorithm is described for automated treatment plan generation. The algorithm aims at delivery of the prescribed dose to the target volume without violation of constraints for target, organs at risk and the surrounding normal tissue. Pre-calculated dose distributions for all candidate orientations are used as input. Treatment beams are selected in a sequential way. A score function designed for beam selection is used for the simultaneous selection of beam orientations and weights. In order to determine the optimum choice for the orientation and the corresponding weight of each new beam, the score function is first redefined to account for the dose distribution of the previously selected beams. Addition of more beams to the plan is stopped when the target dose is reached or when no additional dose can be delivered without violating a constraint. In the latter case the score function is modified by importance factor changes to enforce better sparing of the organ with the limiting constraint and the algorithm is run again. (author)
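The greedy loop described above (score-driven joint selection of orientation and weight, stopping when the target dose is met or every addition violates a constraint) can be sketched as follows. The score used here, target-dose gain minus a penalty on organ-at-risk dose, is an invented stand-in for the paper's score function, and all names and numbers are illustrative:

```python
# Hedged sketch of sequential (greedy) beam selection. Each candidate beam
# orientation carries a precomputed dose contribution per unit weight, as in
# the paper's use of pre-calculated dose distributions.

def select_beams(candidates, target_dose, oar_limit, weights=(0.2, 0.5, 1.0)):
    """candidates: name -> (target dose per unit weight, OAR dose per unit weight)."""
    plan = []                       # chosen (beam name, weight) pairs
    t_dose, o_dose = 0.0, 0.0       # accumulated target / organ-at-risk dose
    while t_dose < target_dose:
        best = None
        for name, (dt, do) in candidates.items():
            for w in weights:       # simultaneous choice of orientation and weight
                new_t = min(t_dose + w * dt, target_dose)
                new_o = o_dose + w * do
                if new_o > oar_limit:            # constraint violated: reject
                    continue
                score = (new_t - t_dose) - 0.5 * w * do   # invented score
                if best is None or score > best[0]:
                    best = (score, name, w, new_t, new_o)
        if best is None:            # no feasible addition: stop early
            break
        _, name, w, t_dose, o_dose = best
        plan.append((name, w))
    return plan, t_dose, o_dose
```

In the paper, an early stop would then trigger importance-factor changes to the score function and a rerun; that outer loop is omitted here.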

2. Phenomenology of the next sequential lepton

International Nuclear Information System (INIS)

Rizzo, T.G.

1980-01-01

We consider the phenomenology of a sequential, charged lepton in the mass range 6-13 GeV. We find the semileptonic branching ratio of such a lepton to be ≈13%; the dominant two-body modes are found to include the decay L → ν_L F* with a branching ratio ≈6%. In this analysis we assume that the mass of the lepton under consideration is lighter than the t quark, such that decays such as L → ν_L t̄q, where q = (d, s, or b), are kinematically forbidden. We also find that decays such as L → ν_L B* (c̄b) can also be as large as ≈6%, depending on the mixing angles; the lifetime of such a lepton is found to be ≈2.6 × 10^-12 M_L^-5 sec, where M_L is in GeV

3. The Origin of Sequential Chromospheric Brightenings

Science.gov (United States)

Kirk, M. S.; Balasubramaniam, K. S.; Jackiewicz, J.; Gilbert, H. R.

2017-06-01

Sequential chromospheric brightenings (SCBs) are often observed in the immediate vicinity of erupting flares and are associated with coronal mass ejections. Since their initial discovery in 2005, there have been several subsequent investigations of SCBs. These studies have used differing detection and analysis techniques, making it difficult to compare results between studies. This work employs the automated detection algorithm of Kirk et al. (Solar Phys. 283, 97, 2013) to extract the physical characteristics of SCBs in 11 flares of varying size and intensity. We demonstrate that the magnetic substructure within the SCB appears to have a significantly smaller area than the corresponding Hα emission. We conclude that SCBs originate in the lower corona around 0.1 R_{⊙} above the photosphere, propagate away from the flare center at speeds of 35-85 km s^{-1}, and have peak photosphere magnetic intensities of 148±2.9 G. In light of these measurements, we infer SCBs to be distinctive chromospheric signatures of erupting coronal mass ejections.

4. Sequential decoders for large MIMO systems

KAUST Repository

Ali, Konpal S.

2014-05-01

Due to their ability to provide high data rates, multiple-input multiple-output (MIMO) systems have become increasingly popular. Decoding these systems with acceptable error performance is computationally very demanding. In this paper, we employ the sequential decoder using the Fano algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs: low values of the bias result in excellent performance but at the expense of high complexity, and vice versa for higher bias values. Numerical results show that moderate bias values yield a decent performance-complexity trade-off. We also attempt to bound the error by bounding the bias, using the minimum distance of a lattice. The variations in complexity with SNR show an interesting trend that leaves room for considerable improvement. Our work is compared against linear decoders (LDs) aided with element-based lattice reduction (ELR) and complex Lenstra-Lenstra-Lovász (CLLL) reduction. © 2014 IFIP.
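The Fano algorithm's backtracking logic is involved; the sketch below instead uses the closely related best-first (stack) sequential decoder to illustrate how a bias term subtracted per tree level trades error performance for complexity: a larger bias rewards deeper (cheaper-to-finish) paths, while a smaller one explores the tree more thoroughly. The setup and names are illustrative assumptions, not the paper's implementation:

```python
import heapq

def stack_sequential_decode(H, y, bias=0.0, alphabet=(-1, 1)):
    """Best-first (stack) sequential decoding sketch for y ≈ H·x with
    x_i in alphabet and H lower-triangular (as after a QR step), so that
    symbol k only enters rows 0..k. The stored path metric is the partial
    squared distance minus bias*depth. Returns (estimate, nodes expanded)."""
    n = len(y)
    heap = [(0.0, 0, ())]                     # (metric, depth, partial path)
    expanded = 0
    while heap:
        metric, depth, partial = heapq.heappop(heap)
        expanded += 1
        if depth == n:                        # first full-length path wins
            return list(partial), expanded
        for s in alphabet:                    # extend the path by one symbol
            cand = partial + (s,)
            r = y[depth] - sum(H[depth][j] * cand[j] for j in range(depth + 1))
            heapq.heappush(heap, (metric + r * r - bias, depth + 1, cand))
    return None, expanded
```

With bias = 0 and a noiseless lower-triangular system, the decoder expands only the nodes on the correct path; raising the bias prunes the search further at the cost of reliability under noise.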

5. Social Influences in Sequential Decision Making.

Directory of Open Access Journals (Sweden)

Markus Schöbel

People often make decisions in a social environment. The present work examines social influence on people's decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others' authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority's decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions.
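The information-cascade setup described above can be sketched with a simple log-likelihood model: a binary state (+1/-1), a private signal of accuracy q, and the observed choices of earlier actors. A weight beta scales the social evidence; beta = 1 approximates the normative Bayesian account, while beta < 1 overweights private information, as the study reports participants did. This parameterization is an illustrative assumption, not the authors' fitted model:

```python
import math

def choose(private_signal, observed_choices, q=0.7, beta=1.0):
    """Sequential binary choice. private_signal is +1 or -1 with accuracy q;
    observed_choices are earlier actors' decisions (+1/-1). Each choice is
    treated as one signal's worth of evidence, scaled by beta."""
    llr = math.log(q / (1 - q))            # evidence carried by one signal
    total = llr * (private_signal + beta * sum(observed_choices))
    if total == 0:                         # tie: follow the private signal
        return private_signal
    return 1 if total > 0 else -1
```

With beta = 1, two earlier +1 choices override a contradicting private -1 signal (a cascade forms); with a strongly private-weighted beta such as 0.3, the private signal wins out.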

6. Social Influences in Sequential Decision Making

Science.gov (United States)

Schöbel, Markus; Rieskamp, Jörg; Huber, Rafael

2016-01-01

People often make decisions in a social environment. The present work examines social influence on people’s decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others’ authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority's decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions. PMID:26784448

7. Sequential acquisition of mutations in myelodysplastic syndromes.

Science.gov (United States)

Makishima, Hideki

2017-01-01

Recent progress in next-generation sequencing technologies allows us to discover frequent mutations throughout the coding regions of myelodysplastic syndromes (MDS), potentially providing virtually a complete spectrum of driver mutations in this disease. As many study groups have shown, such driver mutations are acquired in a gene-specific fashion. For instance, DDX41 mutations are observed in germline cells long before MDS presentation. In blood samples from healthy elderly individuals, somatic DNMT3A and TET2 mutations are detected as age-related clonal hematopoiesis and are believed to be a risk factor for hematological neoplasms. In MDS, mutations of genes such as NRAS and FLT3, designated as Type-1 genes, may be significantly associated with leukemic evolution. Another type (Type-2) of genes, including RUNX1 and GATA2, is related to progression from low-risk to high-risk MDS. Overall, various driver mutations are acquired sequentially in MDS, each at a specific time, in either germline cells, normal hematopoietic cells, or clonal MDS cells.

8. Building a Lego wall: Sequential action selection.

Science.gov (United States)

Arnold, Amy; Wing, Alan M; Rotshtein, Pia

2017-05-01

The present study draws together two distinct lines of enquiry into the selection and control of sequential action: motor sequence production and action selection in everyday tasks. Participants were asked to build 2 different Lego walls. The walls were designed to have hierarchical structures with shared and dissociated colors and spatial components. Participants built 1 wall at a time, under low and high load cognitive states. Selection times for correctly completed trials were measured using 3-dimensional motion tracking. The paradigm enabled precise measurement of the timing of actions, while using real objects to create an end product. The experiment demonstrated that action selection was slowed at decision boundary points, relative to boundaries where no between-wall decision was required. Decision points also affected selection time prior to the actual selection window. Dual-task conditions increased selection errors. Errors mostly occurred at boundaries between chunks and especially when these required decisions. The data support hierarchical control of sequenced behavior. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

9. Sequential experimental design based generalised ANOVA

Energy Technology Data Exchange (ETDEWEB)

Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

2016-07-15

Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. To address this issue, this paper presents a novel distribution-adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been applied to predicting the probability of failure in three structural mechanics problems. It is observed that the proposed approach yields an accurate and computationally efficient estimate of the failure probability.

10. Social Influences in Sequential Decision Making.

Science.gov (United States)

Schöbel, Markus; Rieskamp, Jörg; Huber, Rafael

2016-01-01

People often make decisions in a social environment. The present work examines social influence on people's decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others' authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority's decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions.

11. Minimalism and the Pragmatic Frame

Directory of Open Access Journals (Sweden)

Ana Falcato

2016-02-01

Full Text Available In the debate between literalism and contextualism in semantics, Kent Bach’s project is often taken to stand on the latter side of the divide. In this paper I argue that this is a misleading assumption, and I justify this claim by contrasting Bach’s assessment of the theoretical eliminability of the minimal propositions arguably expressed by well-formed sentences with standard minimalist views, and by further contrasting his account of the division of interpretative processes ascribable to the semantics and pragmatics of a language with a parallel analysis carried out by the most radical opponent of semantic minimalism, i.e., occasionalism. If my analysis proves right, the sum of its conclusions amounts to a refutation of Bach’s main dichotomies.

12. Principle of minimal work fluctuations.

Science.gov (United States)

Xiao, Gaoyang; Gong, Jiangbin

2015-08-01

Understanding and manipulating work fluctuations in microscale and nanoscale systems are of both fundamental and practical interest. For example, in considering the Jarzynski equality 〈e^{-βW}〉 = e^{-βΔF}, a change in the fluctuations of e^{-βW} may impact how rapidly the statistical average of e^{-βW} converges towards the theoretical value e^{-βΔF}, where W is the work, β is the inverse temperature, and ΔF is the free energy difference between two equilibrium states. Motivated by our previous study aiming at the suppression of work fluctuations, here we obtain a principle of minimal work fluctuations. In brief, adiabatic processes as treated in quantum and classical adiabatic theorems yield the minimal fluctuations in e^{-βW}. In the quantum domain, if a system initially prepared at thermal equilibrium is subjected to a work protocol but isolated from a bath during the time evolution, then a quantum adiabatic process without energy level crossing (or an assisted adiabatic process reaching the same final states as in a conventional adiabatic process) yields the minimal fluctuations in e^{-βW}, where W is the quantum work defined by two energy measurements at the beginning and at the end of the process. In the classical domain, where the classical work protocol is realizable by an adiabatic process, the classical adiabatic process also yields the minimal fluctuations in e^{-βW}. Numerical experiments based on a Landau-Zener process confirm our theory in the quantum domain, and our theory in the classical domain explains our previous numerical findings regarding the suppression of classical work fluctuations [G. Y. Xiao and J. B. Gong, Phys. Rev. E 90, 052132 (2014)].
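The convergence issue discussed in this abstract can be illustrated numerically. The sketch below (an illustration, not the authors' code) draws Gaussian work samples, for which the exact free energy difference is ΔF = μ - βσ²/2, and compares the Jarzynski estimator against it; the values of μ, σ and the seed are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0

def jarzynski_estimate(mu, sigma, n):
    """Estimate dF as -ln<exp(-beta*W)>/beta from n Gaussian work samples.
    For W ~ N(mu, sigma^2) the exact value is mu - beta*sigma**2/2."""
    w = rng.normal(mu, sigma, size=n)
    return -np.log(np.mean(np.exp(-beta * w))) / beta

for sigma in (0.5, 2.0):
    exact = 1.0 - beta * sigma**2 / 2   # dF for mu = 1.0
    est = jarzynski_estimate(1.0, sigma, 100_000)
    print(f"sigma={sigma}: estimated dF={est:.3f}, exact dF={exact:.3f}")
```

Larger σ means larger fluctuations in e^{-βW}, and the exponential average then converges much more slowly, which is the practical motivation for minimizing work fluctuations.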

13. Optimizing Processes to Minimize Risk

Science.gov (United States)

Loyd, David

2017-01-01

NASA, like other hazardous industries, has suffered catastrophic losses. Human error will likely never be completely eliminated as a factor in our failures. When risk cannot be eliminated, focus on mitigating the worst consequences and on recovering operations. Bolstering processes to emphasize the role of integration and problem solving is key to success. Building an effective safety culture bolsters skill-based performance that minimizes risk and encourages successful engagement.

14. Minimal Length, Measurability and Gravity

Directory of Open Access Journals (Sweden)

Alexander Shalyt-Margolin

2016-03-01

Full Text Available The present work is a continuation of the author's previous papers on the subject. In terms of the measurability (or measurable quantities) notion introduced in a minimal length theory, consideration is first given to a quantum theory in the momentum representation. The same terms are then used to consider the Markov gravity model, which here illustrates the general approach to studies of gravity in terms of measurable quantities.

15. [Minimally invasive coronary artery surgery].

Science.gov (United States)

Zalaquett, R; Howard, M; Irarrázaval, M J; Morán, S; Maturana, G; Becker, P; Medel, J; Sacco, C; Lema, G; Canessa, R; Cruz, F

1999-01-01

16. Minimal massive 3D gravity

International Nuclear Information System (INIS)

Bergshoeff, Eric; Merbis, Wout; Hohm, Olaf; Routh, Alasdair J; Townsend, Paul K

2014-01-01

We present an alternative to topologically massive gravity (TMG) with the same ‘minimal’ bulk properties; i.e. a single local degree of freedom that is realized as a massive graviton in linearization about an anti-de Sitter (AdS) vacuum. However, in contrast to TMG, the new ‘minimal massive gravity’ has both a positive energy graviton and positive central charges for the asymptotic AdS-boundary conformal algebra. (paper)

17. Construction schedules slack time minimizing

Science.gov (United States)

Krzemiński, Michał

2017-07-01

The article presents two original models for minimizing the downtime of work brigades. The models have been developed for construction schedules executed using the uniform work method. Application of flow shop models is possible and useful for the realization of large objects that can be divided into plots. The article also presents a condition describing which model should be used, as well as a brief example of schedule optimization. The optimization results confirm the value of the newly developed models.

18. Acquiring minimally invasive surgical skills

OpenAIRE

Hiemstra, Ellen

2012-01-01

Many topics in surgical skills education have been implemented without a solid scientific basis. For that reason we have tried to establish this scientific basis. We have focused on the training and evaluation of minimally invasive surgical skills, both in a training setting and in practice in the operating room. This thesis has led to greater insight into the organization of surgical skills training during the residency training of surgical medical specialists.

19. Comparison of ablation centration after bilateral sequential versus simultaneous LASIK.

Science.gov (United States)

Lin, Jane-Ming; Tsai, Yi-Yu

2005-01-01

To compare ablation centration after bilateral sequential and simultaneous myopic LASIK. A retrospective randomized case series was performed of 670 eyes of 335 consecutive patients who had undergone either bilateral sequential (group 1) or simultaneous (group 2) myopic LASIK between July 2000 and July 2001 at the China Medical University Hospital, Taichung, Taiwan. The ablation centrations of the first and second eyes in the two groups were compared 3 months postoperatively. Of 670 eyes, 274 eyes (137 patients) comprised the sequential group and 396 eyes (198 patients) comprised the simultaneous group. Three months post-operatively, 220 eyes of 110 patients (80%) in the sequential group and 236 eyes of 118 patients (60%) in the simultaneous group provided topographic data for centration analysis. For the first eyes, mean decentration was 0.39 +/- 0.26 mm in the sequential group and 0.41 +/- 0.19 mm in the simultaneous group (P = .30). For the second eyes, mean decentration was 0.28 +/- 0.23 mm in the sequential group and 0.30 +/- 0.21 mm in the simultaneous group (P = .36). Decentration in the second eyes significantly improved in both groups (group 1, P = .02; group 2, P sequential group and 0.32 +/- 0.18 mm in the simultaneous group (P = .33). The difference of ablation center angles between the first and second eyes was 43.2 sequential group and 45.1 +/- 50.8 degrees in the simultaneous group (P = .42). Simultaneous bilateral LASIK is comparable to sequential surgery in ablation centration.

20. The sequential trauma score - a new instrument for the sequential mortality prediction in major trauma*

Directory of Open Access Journals (Sweden)

Huber-Wagner S

2010-05-01

Full Text Available Abstract Background There are several well established scores for the assessment of the prognosis of major trauma patients, all of which have in common that they can be calculated at the earliest during the intensive care unit stay. We intended to develop a sequential trauma score (STS) that allows prognosis at several early stages based on the information that is available at a particular time. Study design In a retrospective, multicenter study using data derived from the Trauma Registry of the German Trauma Society (2002-2006), we identified the most relevant prognostic factors from the patients' basic data (P), the prehospital phase (A), and the early (B1) and late (B2) trauma room phases. Univariate and logistic regression models as well as score quality criteria and the explanatory power have been calculated. Results A total of 2,354 patients with complete data were identified. From the patients' basic data (P), logistic regression showed that age was a significant predictor of survival (AUC model P, area under the curve = 0.63). Logistic regression of the prehospital data (A) showed that blood pressure, pulse rate, Glasgow coma scale (GCS), and anisocoria were significant predictors (AUC model A = 0.76; AUC model P+A = 0.82). Logistic regression of the early trauma room phase (B1) showed peripheral oxygen saturation, GCS, anisocoria, base excess, and thromboplastin time to be significant predictors of survival (AUC model B1 = 0.78; AUC model P+A+B1 = 0.85). Multivariate analysis of the late trauma room phase (B2) detected cardiac massage, abbreviated injury score (AIS) of the head ≥ 3, the maximum AIS, and the need for transfusion or massive blood transfusion to be the most important predictors (AUC model B2 = 0.84; AUC final model P+A+B1+B2 = 0.90). The explanatory power - a tool for the assessment of the relative impact of each segment on mortality - is 25% for P, 7% for A, 17% for B1 and 51% for B2. A spreadsheet for the easy calculation of the sequential trauma
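The core idea of such a sequential score, that each phase adds predictors and the AUC grows as phases accumulate, can be illustrated with synthetic data (a sketch only; the predictors, effect sizes, and seed are invented and unrelated to the registry data):

```python
import numpy as np

rng = np.random.default_rng(2)

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Synthetic illustration: each phase contributes an informative predictor,
# so the cumulative risk score separates outcomes progressively better.
n = 2000
died = rng.random(n) < 0.2
phase_p = rng.normal(died * 0.5, 1.0)   # e.g. basic data such as age
phase_a = rng.normal(died * 0.8, 1.0)   # e.g. prehospital vitals
phase_b = rng.normal(died * 1.2, 1.0)   # e.g. trauma-room findings

for name, score in [("P", phase_p),
                    ("P+A", phase_p + phase_a),
                    ("P+A+B", phase_p + phase_a + phase_b)]:
    print(f"AUC {name}: {auc(score, died.astype(int)):.2f}")
```

The printed AUCs increase from the P-only score to the cumulative P+A+B score, mirroring the pattern of rising AUCs reported in the abstract.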

1. Fast sequential Monte Carlo methods for counting and optimization

CERN Document Server

2013-01-01

A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

2. Sequential formation of subgroups in OB associations

International Nuclear Information System (INIS)

1977-01-01

We reconsider the structure and formation of OB associations in view of recent radio and infrared observations of the adjacent molecular clouds. As a result of this reexamination, we propose that OB subgroups are formed in a step-by-step process which involves the propagation of ionization (I) and shock (S) fronts through a molecular cloud complex. OB stars formed at the edge of a molecular cloud drive these I-S fronts into the cloud. A layer of dense neutral material accumulates between the I and S fronts and eventually becomes gravitationally unstable. This process is analyzed in detail. Several arguments concerning the temperature and mass of this layer suggest that a new OB subgroup will form. After approximately one-half million years, these stars will emerge from and disrupt the star-forming layer. A new shock will be driven into the remaining molecular cloud and will initiate another cycle of star formation. Several observed properties of OB associations are shown to follow from a sequential star-forming mechanism. These include the spatial separation and systematic differences in age of OB subgroups in a given association, the regularity of subgroup masses, the alignment of subgroups along the galactic plane, and their physical expansion. Detailed observations of ionization fronts, masers, IR sources, and molecular clouds are also in agreement with this model. Finally, this mechanism provides a means of dissipating a molecular cloud and exposing less massive stars (e.g., T Tauri stars) which may have formed ahead of the shock as part of the original cloud collapsed and fragmented

3. District heating in sequential energy supply

International Nuclear Information System (INIS)

2012-01-01

Highlights: ► European excess heat recovery and utilisation by district heat distribution. ► Heat recovery in district heating systems – a structural energy efficiency measure. ► Introduction of new theoretical concepts to express excess heat recovery. ► Fourfold potential for excess heat utilisation in EU27 compared to current levels. ► Large scale excess heat recovery – a collaborative challenge for future Europe. -- Abstract: Increased recovery of excess heat from thermal power generation and industrial processes has great potential to reduce primary energy demands in EU27. In this study, current excess heat utilisation levels by means of district heat distribution are assessed and expressed by concepts such as recovery efficiency, heat recovery rate, and heat utilisation rate. For two chosen excess heat activities, current average EU27 heat recovery levels are compared to currently best Member State practices, whereby future potentials of European excess heat recovery and utilisation are estimated. The principle of sequential energy supply is elaborated to capture the conceptual idea of excess heat recovery in district heating systems as a structural and organisational energy efficiency measure. The general conditions discussed concerning expansion of heat recovery into district heating systems include infrastructure investments in district heating networks, collaboration agreements, maintained value chains, policy support, world market energy prices, allocation of synergy benefits, and local initiatives. The main conclusion from this study is that a future fourfold increase of current EU27 excess heat utilisation by means of district heat distribution to residential and service sectors is conceived as plausible if applying best Member State practice. This estimation is higher than the threefold increase with respect to direct feasible distribution costs estimated by the same authors in a previous study. Hence, no direct barriers appear with

4. An Efficient System Based On Closed Sequential Patterns for Web Recommendations

OpenAIRE

Utpala Niranjan; R.B.V. Subramanyam; V-Khana

2010-01-01

Sequential pattern mining has, since its introduction, received considerable attention among researchers, with broad applications. Sequential pattern algorithms generally face problems when mining long sequential patterns or when using a very low support threshold. One possible solution to such problems is mining closed sequential patterns, which are a condensed representation of sequential patterns. Recently, several researchers have utilized the sequential pattern discovery for d...

5. Minimal families of curves on surfaces

KAUST Repository

Lubbes, Niels

2014-01-01

A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal families of a given surface.

6. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

Energy Technology Data Exchange (ETDEWEB)

Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

2016-10-01

The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
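The EnKF analysis step that underlies this kind of parameter estimation can be sketched as follows (a toy illustration, not the SEOD method itself; the linear observation operator, ensemble size, and noise levels are assumptions made for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(ensemble, obs, obs_op, obs_var):
    """One stochastic EnKF analysis step for a scalar observation.

    ensemble: (n_ens, n_par) array of parameter vectors.
    obs_op:   function mapping a parameter vector to a predicted observation.
    """
    n_ens = ensemble.shape[0]
    pred = np.array([obs_op(m) for m in ensemble])   # predicted observations
    a_m = ensemble - ensemble.mean(axis=0)           # parameter anomalies
    a_d = pred - pred.mean()                         # prediction anomalies
    cov_md = a_m.T @ a_d / (n_ens - 1)               # parameter-obs cross-covariance
    var_d = a_d @ a_d / (n_ens - 1) + obs_var        # innovation variance
    gain = cov_md / var_d                            # Kalman gain, shape (n_par,)
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), n_ens)
    return ensemble + np.outer(perturbed - pred, gain)

# Toy problem: estimate k in y = k * x from one noisy observation at x = 2
truth_k = 3.0
prior = rng.normal(0.0, 2.0, size=(200, 1))          # prior ensemble for k
y_obs = truth_k * 2.0 + rng.normal(0.0, 0.1)
posterior = enkf_update(prior, y_obs, lambda m: 2.0 * m[0], obs_var=0.01)
print(posterior.mean())   # pulled from the prior mean of 0 toward the true k = 3
```

An optimal-design method such as the one in this record would choose *where* to observe (here, the value of x) so that each such update is maximally informative.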

7. Minimal families of curves on surfaces

KAUST Repository

Lubbes, Niels

2014-11-01

A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal families of a given surface.The classification of minimal families of curves can be reduced to the classification of minimal families which cover weak Del Pezzo surfaces. We classify the minimal families of weak Del Pezzo surfaces and present a table with the number of minimal families of each weak Del Pezzo surface up to Weyl equivalence.As an application of this classification we generalize some results of Schicho. We classify algebraic surfaces that carry a family of conics. We determine the minimal lexicographic degree for the parametrization of a surface that carries at least 2 minimal families. © 2014 Elsevier B.V.

8. EXPERIMENTS TOWARDS DETERMINING BEST TRAINING SAMPLE SIZE FOR AUTOMATED EVALUATION OF DESCRIPTIVE ANSWERS THROUGH SEQUENTIAL MINIMAL OPTIMIZATION

Directory of Open Access Journals (Sweden)

Sunil Kumar C

2014-01-01

Full Text Available With the number of students growing each year, there is a strong need for automated systems capable of evaluating descriptive answers. Unfortunately, there aren't many systems capable of performing this task. In this paper, we use a machine learning tool called LightSIDE to accomplish auto evaluation and scoring of descriptive answers. Our experiments are designed to cater to our primary goal of identifying the optimum training sample size so as to get optimum auto scoring. Besides the technical overview and the experiment design, the paper also covers the challenges and benefits of the system. We also discuss interdisciplinary areas for future research on this topic.

9. Further comments on the sequential probability ratio testing methods

Energy Technology Data Exchange (ETDEWEB)

Kulacsy, K. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics

1997-05-23

The Bayesian method for belief updating proposed in Racz (1996) is examined. The interpretation of the belief function introduced therein is found, and the method is compared to the classical binary Sequential Probability Ratio Testing method (SPRT). (author).
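For readers unfamiliar with the classical binary SPRT that this record compares against, a minimal sketch follows (an illustration, not the paper's implementation; the thresholds use Wald's standard approximations, and the Bernoulli likelihood model is an assumption made for the example):

```python
import math

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Classical binary SPRT for Bernoulli data: H0: p = p0 vs H1: p = p1.

    Accumulates the log-likelihood ratio sample by sample and stops as soon
    as it crosses one of Wald's approximate thresholds. Returns 'H0', 'H1',
    or 'continue' if the data ran out before a decision."""
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0
    llr = 0.0
    for x in samples:
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "H1"
        if llr <= lower:
            return "H0"
    return "continue"

print(sprt([1] * 20, p0=0.1, p1=0.5))   # a run of successes decides for H1
```

The sequential character is the point: the test stops as soon as the evidence suffices, rather than after a fixed sample size.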

10. Sequential lineups: shift in criterion or decision strategy?

Science.gov (United States)

Gronlund, Scott D

2004-04-01

R. C. L. Lindsay and G. L. Wells (1985) argued that a sequential lineup enhanced discriminability because it elicited use of an absolute decision strategy. E. B. Ebbesen and H. D. Flowe (2002) argued that a sequential lineup led witnesses to adopt a more conservative response criterion, thereby affecting bias, not discriminability. Height was encoded as absolute (e.g., 6 ft [1.83 m] tall) or relative (e.g., taller than). If a sequential lineup elicited an absolute decision strategy, the principle of transfer-appropriate processing predicted that performance should be best when height was encoded absolutely. Conversely, if a simultaneous lineup elicited a relative decision strategy, performance should be best when height was encoded relatively. The predicted interaction was observed, providing direct evidence for the decision strategies explanation of what happens when witnesses view a sequential lineup.

11. Relations between the simultaneous and sequential transfer of two nucleons

International Nuclear Information System (INIS)

Satchler, G.R.

1982-01-01

The author discusses the perturbative treatment of simultaneous and sequential two-nucleon transfer reactions with special regards to the DWBA. As examples the (t,p), (p,t), and (α,d) reactions are considered. (HSI)

12. Retrieval of sea surface velocities using sequential Ocean Colour ...


pended sediment dispersion patterns, in sequential two time-lapsed images. … face advective velocities consists essentially of identifying the … matrix is time consuming, a significant reduction … Chauhan, P. 2002 Personal Communication.

13. Process tomography via sequential measurements on a single quantum system

CSIR Research Space (South Africa)

Bassa, H

2015-09-01

Full Text Available The authors utilize a discrete (sequential) measurement protocol to investigate quantum process tomography of a single two-level quantum system, with an unknown initial state, undergoing Rabi oscillations. The ignorance of the dynamical parameters...

14. Generalized infimum and sequential product of quantum effects

International Nuclear Information System (INIS)

Li Yuan; Sun Xiuhong; Chen Zhengli

2007-01-01

The quantum effects for a physical system can be described by the set E(H) of positive operators on a complex Hilbert space H that are bounded above by the identity operator I. For A, B ∈ E(H), the operation of sequential product A∘B = A^{1/2}BA^{1/2} was proposed as a model for sequential quantum measurements. A nice investigation of the properties of the sequential product has been carried out [Gudder, S. and Nagy, G., 'Sequential quantum measurements', J. Math. Phys. 42, 5212 (2001)]. In this note, we extend some results of this reference. In particular, a gap in the proof of Theorem 3.2 in this reference is overcome. In addition, some properties of the generalized infimum A ⊓ B are studied
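A small numerical sketch of the sequential product A∘B = A^{1/2}BA^{1/2} (illustrative only; the particular 2×2 effects A and B are arbitrary assumptions) can check the basic property that the sequential product of two effects is again an effect:

```python
import numpy as np

def psd_sqrt(a):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(a)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.conj().T

def sequential_product(a, b):
    """Sequential product A∘B = A^{1/2} B A^{1/2} of two quantum effects."""
    ra = psd_sqrt(a)
    return ra @ b @ ra

# Two effects on C^2 (Hermitian matrices with spectrum in [0, 1]):
A = np.array([[0.5, 0.0], [0.0, 1.0]])
B = np.array([[0.25, 0.25], [0.25, 0.75]])
C = sequential_product(A, B)

# A∘B is again an effect: Hermitian with eigenvalues in [0, 1]
assert np.allclose(C, C.conj().T)
evals = np.linalg.eigvalsh(C)
assert evals.min() >= -1e-12 and evals.max() <= 1 + 1e-12
```

Note that the product is generally not commutative: A∘B and B∘A differ unless A and B commute, which is what makes the order of sequential measurements physically meaningful.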

15. Sequential Low Cost Interventions Double Hand Hygiene Rates ...

African Journals Online (AJOL)

Sequential Low Cost Interventions Double Hand Hygiene Rates Among Medical Teams in a Resource Limited Setting. Results of a Hand Hygiene Quality Improvement Project Conducted At University Teaching Hospital of Kigali (Chuk), Kigali, Rwanda.

16. The impact of eyewitness identifications from simultaneous and sequential lineups.

Science.gov (United States)

Wright, Daniel B

2007-10-01

Recent guidelines in the US allow either simultaneous or sequential lineups to be used for eyewitness identification. This paper investigates how potential jurors weight the probative value of the different outcomes from both of these types of lineups. Participants (n=340) were given a description of a case that included some exonerating and some incriminating evidence. There was either a simultaneous or a sequential lineup. Depending on the condition, an eyewitness chose the suspect, chose a filler, or made no identification. The participant had to judge the guilt of the suspect and decide whether to render a guilty verdict. For both simultaneous and sequential lineups an identification had a large effect, increasing the probability of a guilty verdict. There were no reliable effects detected between making no identification and identifying a filler. The effect sizes were similar for simultaneous and sequential lineups. These findings are important for judges and other legal professionals to know for trials involving lineup identifications.

17. A sequential mixed methods research approach to investigating HIV ...

African Journals Online (AJOL)

2016-09-03

Sep 3, 2016 ... Sequential mixed methods research is an effective approach for ... show the effectiveness of the research method. ... qualitative data before quantitative datasets ..... whereby both types of data are collected simultaneously.

18. LLNL Waste Minimization Program Plan

International Nuclear Information System (INIS)

1990-05-01

This document is the February 14, 1990 version of the LLNL Waste Minimization Program Plan (WMPP). New legislation is now being introduced at the federal level; its passage will result in new EPA regulations and also DOE orders. At the state level, the Hazardous Waste Reduction and Management Review Act of 1989 was signed by the Governor. DHS is currently promulgating regulations to implement the new law. EPA has issued a proposed new policy statement on source reduction and recycling. This policy reflects a preventative strategy to reduce or eliminate the generation of environmentally harmful pollutants which may be released to the air, land surface, water, or ground water. In accordance with this policy, new guidance to hazardous waste generators on the elements of a Waste Minimization Program was issued. This WMPP is formatted to meet the current DOE guidance outlines. The current WMPP will be revised to reflect all of these proposed changes when guidelines are established. Updates, changes and revisions to the overall LLNL WMPP will be made as appropriate to reflect ever-changing regulatory requirements

19. Symmetry breaking for drag minimization

Science.gov (United States)

Roper, Marcus; Squires, Todd M.; Brenner, Michael P.

2005-11-01

For locomotion at high Reynolds numbers, drag minimization favors fore-aft asymmetric slender shapes with blunt noses and sharp trailing edges. On the other hand, in an inertialess fluid the drag experienced by a body is independent of whether it travels forward or backward through the fluid, so there is no advantage to having a single preferred swimming direction. In fact, numerically determined minimum-drag shapes are known to exhibit almost no fore-aft asymmetry even at moderate Re. We show that asymmetry persists, albeit extremely weakly, down to vanishingly small Re, scaling asymptotically as Re^3. The need to minimize drag to maximize speed for a given propulsive capacity gives one possible mechanism for the increasing asymmetry in the body plans seen in nature, as organisms increase in size and swimming speed from bacteria like E. coli up to pursuit-predator fish such as tuna. If it is the dominant mechanism, then this signature scaling will be observed in the shapes of motile micro-organisms.

20. Concatenated coding system with iterated sequential inner decoding

DEFF Research Database (Denmark)

1995-01-01

We describe a concatenated coding system with iterated sequential inner decoding. The system uses convolutional codes of very long constraint length and operates on iterations between an inner Fano decoder and an outer Reed-Solomon decoder.

1. Multichannel, sequential or combined X-ray spectrometry

International Nuclear Information System (INIS)

Florestan, J.

1979-01-01

X-ray spectrometer qualities and defects are evaluated for the sequential and multichannel categories. The multichannel X-ray spectrometer has the advantage of time coherency, and its results can be more reproducible; on the other hand, some spatial incoherency limits low-percentage and trace applications, especially when backgrounds are highly variable. In this last case, the sequential X-ray spectrometer would again prove very useful [fr

2. A Survey of Multi-Objective Sequential Decision-Making

OpenAIRE

Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.

2013-01-01

Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-obj...

3. Configural and component processing in simultaneous and sequential lineup procedures

OpenAIRE

Flowe, HD; Smith, HMJ; Karoğlu, N; Onwuegbusi, TO; Rai, L

2015-01-01

Configural processing supports accurate face recognition, yet it has never been examined within the context of criminal identification lineups. We tested, using the inversion paradigm, the role of configural processing in lineups. Recent research has found that face discrimination accuracy in lineups is better in a simultaneous compared to a sequential lineup procedure. Therefore, we compared configural processing in simultaneous and sequential lineups to examine whether there are differences...

4. Sequential weak continuity of null Lagrangians at the boundary

Czech Academy of Sciences Publication Activity Database

Kalamajska, A.; Kraemer, S.; Kružík, Martin

2014-01-01

Roč. 49, 3/4 (2014), s. 1263-1278 ISSN 0944-2669 R&D Projects: GA ČR GAP201/10/0357 Institutional support: RVO:67985556 Keywords : null Lagrangians * nonhomogeneous nonlinear mappings * sequential weak/in measure continuity Subject RIV: BA - General Mathematics Impact factor: 1.518, year: 2014 http://library.utia.cas.cz/separaty/2013/MTR/kruzik-sequential weak continuity of null lagrangians at the boundary.pdf

5. Sequential bayes estimation algorithm with cubic splines on uniform meshes

International Nuclear Information System (INIS)

Hossfeld, F.; Mika, K.; Plesser-Walk, E.

1975-11-01

After outlining the principles of some recent developments in parameter estimation, a sequential numerical algorithm for generalized curve-fitting applications is presented, combining results from statistical estimation concepts and spline analysis. Due to its recursive nature, the algorithm can be used most efficiently in online experimentation. Using computer-simulated and experimental data, the efficiency and flexibility of this sequential estimation procedure are extensively demonstrated. (orig.) [de

6. Sequential contrast-enhanced MR imaging of the penis.

Science.gov (United States)

Kaneko, K; De Mouy, E H; Lee, B E

1994-04-01

To determine the enhancement patterns of the penis at magnetic resonance (MR) imaging. Sequential contrast material-enhanced MR images of the penis in a flaccid state were obtained in 16 volunteers (12 with normal penile function and four with erectile dysfunction). Subjects with normal erectile function showed gradual and centrifugal enhancement of the corpora cavernosa, while those with erectile dysfunction showed poor enhancement with abnormal progression. Sequential contrast-enhanced MR imaging provides additional morphologic information for the evaluation of erectile dysfunction.

7. Group-sequential analysis may allow for early trial termination

DEFF Research Database (Denmark)

Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich

2017-01-01

BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic, but rarely in diagnostic research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT measurements... assumed to be normally distributed, and sequential one-sided hypothesis tests on the population standard deviation of the differences against a hypothesised value of 1.5 were performed, employing an alpha spending function. The fixed-sample analysis (N = 45) was compared with the group-sequential analysis strategies comprising one (at N = 23), two (at N = 15, 30), or three interim analyses (at N = 11, 23, 34), respectively, which were defined post hoc. RESULTS: When performing interim analyses with one third and two thirds of patients, sufficient agreement could be concluded after the first interim analysis...
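The alpha spending approach mentioned in this abstract can be sketched numerically. The following is an illustrative Python sketch, not the authors' analysis: the O'Brien-Fleming-type Lan-DeMets spending function and the overall one-sided alpha of 0.05 are assumptions, and only the interim sample sizes (N = 11, 23, 34 of 45) are taken from the record.

```python
from statistics import NormalDist

def obf_spending(t, alpha=0.05):
    """Cumulative type-I error spent at information fraction t (0 < t <= 1),
    using the Lan-DeMets O'Brien-Fleming-type spending function."""
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    return 2.0 * (1.0 - NormalDist().cdf(z / t ** 0.5))

# Interim looks at N = 11, 23, 34 out of a fixed-sample N = 45, as in the
# three-interim design described above.
N = 45
for n in (11, 23, 34, 45):
    t = n / N
    print(f"n = {n:2d}, information fraction = {t:.2f}, "
          f"cumulative alpha spent = {obf_spending(t):.5f}")
```

By construction the function spends very little alpha at early looks and exactly the full alpha at t = 1, which is what makes early stopping possible without inflating the overall error rate.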

8. Laparoscopic colonic resection in inflammatory bowel disease: minimal surgery, minimal access and minimal hospital stay.

LENUS (Irish Health Repository)

Boyle, E

2008-11-01

Laparoscopic surgery for inflammatory bowel disease (IBD) is technically demanding but can offer improved short-term outcomes. The introduction of minimally invasive surgery (MIS) as the default operative approach for IBD, however, may have inherent learning curve-associated disadvantages. We hypothesise that the establishment of MIS as the standard operative approach does not increase patient morbidity as assessed in the initial period of its introduction into a specialised unit, and that it confers earlier postoperative gastrointestinal recovery and reduced hospitalisation compared with conventional open resection.

9. Minimally invasive aortic valve replacement

DEFF Research Database (Denmark)

Foghsgaard, Signe; Schmidt, Thomas Andersen; Kjaergard, Henrik K

2009-01-01

In this descriptive prospective study, we evaluate the outcomes of surgery in 98 patients who were scheduled to undergo minimally invasive aortic valve replacement. These patients were compared with a group of 50 patients who underwent scheduled aortic valve replacement through a full sternotomy... operations were completed as mini-sternotomies, 4 died later of noncardiac causes. The aortic cross-clamp and perfusion times were significantly different across all groups (P ...)... replacement is an excellent operation in selected patients, but its true advantages over conventional aortic valve replacement (other than a smaller scar) await evaluation by means of a randomized clinical trial. The "extended mini-aortic valve replacement" operation, on the other hand, is a risky procedure that should...

10. Minimization over randomly selected lines

Directory of Open Access Journals (Sweden)

Ismet Sahin

2013-07-01

Full Text Available This paper presents a population-based evolutionary optimization method for minimizing a given cost function. The mutation operator of this method selects randomly oriented lines in the cost function domain, constructs quadratic functions interpolating the cost function at three different points over each line, and uses extrema of the quadratics as mutated points. The crossover operator modifies each mutated point based on components of two points in population, instead of one point as is usually performed in other evolutionary algorithms. The stopping criterion of this method depends on the number of almost degenerate quadratics. We demonstrate that the proposed method with these mutation and crossover operations achieves faster and more robust convergence than the well-known Differential Evolution and Particle Swarm algorithms.
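The mutation step described above can be sketched in a few lines of Python. This is an illustrative reconstruction under stated assumptions, not the author's implementation: the direction distribution, the fixed step used to place the three interpolation points, and all names are invented for the sketch.

```python
import random

def quadratic_vertex(a, fa, b, fb, c, fc):
    """Abscissa of the extremum of the parabola through (a,fa), (b,fb), (c,fc)."""
    num = fa * (b**2 - c**2) + fb * (c**2 - a**2) + fc * (a**2 - b**2)
    den = 2.0 * (fa * (b - c) + fb * (c - a) + fc * (a - b))
    return num / den if den != 0 else b  # degenerate quadratic: keep the midpoint

def mutate(x, f, step=1.0):
    """One random-line mutation of the point x (a list of floats): interpolate
    f at three points along a randomly oriented line and move to the
    quadratic's extremum."""
    d = [random.gauss(0.0, 1.0) for _ in x]      # random line orientation
    ts = (-step, 0.0, step)                      # three points along the line
    fs = [f([xi + t * di for xi, di in zip(x, d)]) for t in ts]
    t_star = quadratic_vertex(ts[0], fs[0], ts[1], fs[1], ts[2], fs[2])
    return [xi + t_star * di for xi, di in zip(x, d)]

# On a convex quadratic the interpolation is exact, so a single mutation
# lands on the minimizer of the cost function along the chosen line.
sphere = lambda v: sum(vi * vi for vi in v)
x0 = [3.0, -2.0]
x1 = mutate(x0, sphere)
print(sphere(x0), sphere(x1))
```

The full method additionally applies a two-parent crossover to each mutated point and stops when too many quadratics are almost degenerate; only the mutation operator is sketched here.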

11. Strategies to Minimize Antibiotic Resistance

Directory of Open Access Journals (Sweden)

Sang Hee Lee

2013-09-01

Full Text Available Antibiotic resistance can be reduced by using antibiotics prudently based on guidelines of antimicrobial stewardship programs (ASPs) and various data such as pharmacokinetic (PK) and pharmacodynamic (PD) properties of antibiotics, diagnostic testing, antimicrobial susceptibility testing (AST), clinical response, and effects on the microbiota, as well as by new antibiotic developments. The controlled use of antibiotics in food animals is another cornerstone among efforts to reduce antibiotic resistance. All major resistance-control strategies recommend education for patients, children (e.g., through schools and day care), the public, and relevant healthcare professionals (e.g., primary-care physicians, pharmacists, and medical students) regarding unique features of bacterial infections and antibiotics, prudent antibiotic prescribing as a positive construct, and personal hygiene (e.g., handwashing). The problem of antibiotic resistance can be minimized only by concerted efforts of all members of society for ensuring the continued efficiency of antibiotics.

12. A minimally invasive smile enhancement.

Science.gov (United States)

Peck, Fred H

2014-01-01

Minimally invasive dentistry refers to a wide variety of dental treatments. On the restorative aspect of dental procedures, direct resin bonding can be a very conservative treatment option for the patient. When tooth structure does not need to be removed, the patient benefits. Proper treatment planning is essential to determine how conservative the restorative treatment will be. This article describes the diagnosis, treatment options, and procedural techniques in the restoration of 4 maxillary anterior teeth with direct composite resin. The procedural steps are reviewed with regard to placing the composite and the variety of colors needed to ensure a natural result. Finishing and polishing of the composite are critical to ending with a natural looking dentition that the patient will be pleased with for many years.

13. Waste minimization in analytical methods

International Nuclear Information System (INIS)

Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.

1995-01-01

The US Department of Energy (DOE) will require a large number of waste characterizations over a multi-year period to accomplish the Department's goals in environmental restoration and waste management. Estimates vary, but two million analyses annually are expected. The waste generated by the analytical procedures used for characterizations is a significant source of new DOE waste. Success in reducing the volume of secondary waste and the costs of handling this waste would significantly decrease the overall cost of this DOE program. Selection of appropriate analytical methods depends on the intended use of the resultant data. It is not always necessary to use a high-powered analytical method, typically at higher cost, to obtain data needed to make decisions about waste management. Indeed, for samples taken from some heterogeneous systems, the meaning of high accuracy becomes clouded if the data generated are intended to measure a property of this system. Among the factors to be considered in selecting the analytical method are the lower limit of detection, accuracy, turnaround time, cost, reproducibility (precision), interferences, and simplicity. Occasionally, there must be tradeoffs among these factors to achieve the multiple goals of a characterization program. The purpose of the work described here is to add waste minimization to the list of characteristics to be considered. In this paper the authors present results of modifying analytical methods for waste characterization to reduce both the cost of analysis and volume of secondary wastes. Although tradeoffs may be required to minimize waste while still generating data of acceptable quality for the decision-making process, they have data demonstrating that wastes can be reduced in some cases without sacrificing accuracy or precision

14. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

Directory of Open Access Journals (Sweden)

Gonglin Yuan

Full Text Available Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: 1) βk ≥ 0; 2) the search direction has the trust region property without the use of any line search method; 3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.

15. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

Science.gov (United States)

Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

2015-01-01

Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: 1) βk ≥ 0; 2) the search direction has the trust region property without the use of any line search method; 3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
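For context, the βk ≥ 0 property named in this abstract is shared by the classical PRP+ scheme, which truncates the Polak-Ribière-Polyak beta at zero. The sketch below is that textbook variant with a simple Armijo backtracking line search, not the authors' modified methods (which achieve descent without any line search); all names are illustrative.

```python
def prp_plus(f, grad, x, iters=200, tol=1e-8):
    """Generic PRP+ conjugate gradient with an Armijo backtracking line search."""
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(iters):
        if sum(gi * gi for gi in g) ** 0.5 < tol:
            break
        gd = sum(gi * di for gi, di in zip(g, d))
        if gd >= 0:  # safeguard: fall back to steepest descent
            d = [-gi for gi in g]
            gd = -sum(gi * gi for gi in g)
        t, fx = 1.0, f(x)
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * gd:
            t *= 0.5  # backtrack until the Armijo condition holds
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        y = [gn - gi for gn, gi in zip(g_new, g)]  # gradient difference
        # Polak-Ribiere-Polyak beta truncated at zero, so beta_k >= 0
        beta = max(0.0, sum(gn * yi for gn, yi in zip(g_new, y))
                   / sum(gi * gi for gi in g))
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# Convex quadratic test problem with minimizer at (1, -2)
quad = lambda v: (v[0] - 1.0) ** 2 + 10.0 * (v[1] + 2.0) ** 2
dquad = lambda v: [2.0 * (v[0] - 1.0), 20.0 * (v[1] + 2.0)]
x_min = prp_plus(quad, dquad, [0.0, 0.0])
print(x_min)  # close to [1.0, -2.0]
```

The truncation `max(0.0, ...)` is what guarantees property 1); the papers' contribution lies in modified directions whose trust-region and sufficient-descent properties hold independently of the line search, which this generic sketch does not reproduce.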

16. Simultaneous Versus Sequential Ptosis and Strabismus Surgery in Children.

Science.gov (United States)

Revere, Karen E; Binenbaum, Gil; Li, Jonathan; Mills, Monte D; Katowitz, William R; Katowitz, James A

The authors sought to compare the clinical outcomes of simultaneous versus sequential ptosis and strabismus surgery in children. Retrospective, single-center cohort study of children requiring both ptosis and strabismus surgery on the same eye. Simultaneous surgeries were performed during a single anesthetic event; sequential surgeries were performed at least 7 weeks apart. Outcomes were ptosis surgery success (margin reflex distance 1 ≥ 2 mm, good eyelid contour, and good eyelid crease); strabismus surgery success (ocular alignment within 10 prism diopters of orthophoria and/or improved head position); surgical complications; and reoperations. Fifty-six children were studied, 38 had simultaneous surgery and 18 sequential. Strabismus surgery was performed first in 38/38 simultaneous and 6/18 sequential cases. Mean age at first surgery was 64 months, with mean follow up 27 months. A total of 75% of children had congenital ptosis; 64% had comitant strabismus. A majority of ptosis surgeries were frontalis sling (59%) or Fasanella-Servat (30%) procedures. There were no significant differences between simultaneous and sequential groups with regards to surgical success rates, complications, or reoperations (all p > 0.28). In the first comparative study of simultaneous versus sequential ptosis and strabismus surgery, no advantage for sequential surgery was seen. Despite a theoretical risk of postoperative eyelid malposition or complications when surgeries were performed in a combined manner, the rate of such outcomes was not increased with simultaneous surgeries. Performing ptosis and strabismus surgery together appears to be clinically effective and safe, and reduces anesthesia exposure during childhood.

17. A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.

Science.gov (United States)

Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L

2016-03-01

Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge. Copyright © 2015

18. Human visual system automatically encodes sequential regularities of discrete events.

Science.gov (United States)

Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki

2010-06-01

For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential

19. Primal Interior Point Method for Minimization of Generalized Minimax Functions

Czech Academy of Sciences Publication Activity Database

2010-01-01

Roč. 46, č. 4 (2010), s. 697-721 ISSN 0023-5954 R&D Projects: GA ČR GA201/09/1957 Institutional research plan: CEZ:AV0Z10300504 Keywords : unconstrained optimization * large-scale optimization * nonsmooth optimization * generalized minimax optimization * interior-point methods * modified Newton methods * variable metric methods * global convergence * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.461, year: 2010 http://dml.cz/handle/10338.dmlcz/140779

20. Sequential assessment of prey through the use of multiple sensory cues by an eavesdropping bat

Science.gov (United States)

Page, Rachel A.; Schnelle, Tanja; Kalko, Elisabeth K. V.; Bunge, Thomas; Bernal, Ximena E.

2012-06-01

1. Expert system for identification of simultaneous and sequential reactor fuel failures with gas tagging

Science.gov (United States)

Gross, Kenny C.

1994-01-01

Failure of a fuel element in a nuclear reactor core is determined by a gas tagging failure detection system and method. Failures are catalogued and characterized after the event so that samples of the reactor's cover gas are taken at regular intervals and analyzed by mass spectroscopy. Employing a first set of systematic heuristic rules which are applied in a transformed node space allows the number of node combinations which must be processed within a barycentric algorithm to be substantially reduced. A second set of heuristic rules treats the tag nodes of the most recent one or two leakers as "background" gases, further reducing the number of trial node combinations. Lastly, a "fuzzy" set theory formalism minimizes experimental uncertainties in the identification of the most likely volumes of tag gases. This approach allows for the identification of virtually any number of sequential leaks and up to five simultaneous gas leaks from fuel elements.

2. Expert system for identification of simultaneous and sequential reactor fuel failures with gas tagging

International Nuclear Information System (INIS)

Gross, K.C.

1994-01-01

Failure of a fuel element in a nuclear reactor core is determined by a gas tagging failure detection system and method. Failures are catalogued and characterized after the event so that samples of the reactor's cover gas are taken at regular intervals and analyzed by mass spectroscopy. Employing a first set of systematic heuristic rules which are applied in a transformed node space allows the number of node combinations which must be processed within a barycentric algorithm to be substantially reduced. A second set of heuristic rules treats the tag nodes of the most recent one or two leakers as "background" gases, further reducing the number of trial node combinations. Lastly, a "fuzzy" set theory formalism minimizes experimental uncertainties in the identification of the most likely volumes of tag gases. This approach allows for the identification of virtually any number of sequential leaks and up to five simultaneous gas leaks from fuel elements. 14 figs

3. Multistrain models predict sequential multidrug treatment strategies to result in less antimicrobial resistance than combination treatment

DEFF Research Database (Denmark)

Ahmad, Amais; Zachariasen, Camilla; Christiansen, Lasse Engbo

2016-01-01

Background: Combination treatment is increasingly used to fight infections caused by bacteria resistant to two or more antimicrobials. While multiple studies have evaluated treatment strategies to minimize the emergence of resistant strains for single antimicrobial treatment, fewer studies have... generated by a mathematical model of the competitive growth of multiple strains of Escherichia coli. Results: Simulation studies showed that sequential use of tetracycline and ampicillin reduced the level of double resistance, when compared to the combination treatment. The effect of the cycling frequency... frequency did not play a role in suppressing the growth of resistant strains, but the specific order of the two antimicrobials did. Predictions made from the study could be used to redesign multidrug treatment strategies not only for intramuscular treatment in pigs, but also for other dosing routes.

4. A novel non-sequential hydrogen-pulsed deep reactive ion etching of silicon

International Nuclear Information System (INIS)

Gharooni, M; Mohajerzadeh, A; Sandoughsaz, A; Khanof, S; Mohajerzadeh, S; Asl-Soleimani, E

2013-01-01

A non-sequential pulsed-mode deep reactive ion etching of silicon is reported that employs continuous etching and passivation based on SF6 and H2 gases. The passivation layer, an important step for deep vertical etching of silicon, is made feasible by hydrogen pulses in proper time slots. By adjusting the etching parameters such as plasma power, H2 and SF6 flows, and hydrogen pulse timing, the process can be controlled for minimum underetch and high etch rate at the same time. High-aspect-ratio features can be realized with low-density plasma power and by controlling the reaction chemistry. The so-called reactive ion etching lag has been minimized by operating the reactor at higher pressures. X-ray photoelectron spectroscopy and scanning electron microscopy have been used to study the formation of the passivation layer and the passivation mechanism. (paper)

5. Cyclone Simulation via Action Minimization

Science.gov (United States)

Plotkin, D. A.; Weare, J.; Abbot, D. S.

2016-12-01

A postulated impact of climate change is an increase in the intensity of tropical cyclones (TCs). This hypothesized effect results from the fact that TCs are powered by subsaturated boundary layer air picking up water vapor from the surface ocean as it flows inwards towards the eye. This water vapor serves as the energy input for TCs, which can be idealized as heat engines. The inflowing air has a nearly identical temperature to the surface ocean; therefore, warming of the surface leads to a warmer atmospheric boundary layer. By the Clausius-Clapeyron relationship, warmer boundary layer air can hold more water vapor and thus results in more energetic storms. Changes in TC intensity are difficult to predict due to the presence of fine structures (e.g. convective structures and rainbands) with length scales of less than 1 km, while general circulation models (GCMs) generally have horizontal resolutions of tens of kilometers. The models are therefore unable to capture these features, which are critical to accurately simulating cyclone structure and intensity. Further, strong TCs are rare events, meaning that long multi-decadal simulations are necessary to generate meaningful statistics about intense TC activity. This adds to the computational expense, making it yet more difficult to generate accurate statistics about long-term changes in TC intensity due to global warming via direct simulation. We take an alternative approach, applying action minimization techniques developed in molecular dynamics to the WRF weather/climate model. We construct artificial model trajectories that lead from quiescent (TC-free) states to TC states, then minimize the deviation of these trajectories from true model dynamics. We can thus create Monte Carlo model ensembles that are biased towards cyclogenesis, which reduces computational expense by limiting time spent in non-TC states. This allows for: 1) selective interrogation of model states with TCs; 2) finding the likeliest paths for

6. Native Frames: Disentangling Sequential from Concerted Three-Body Fragmentation

Science.gov (United States)

Rajput, Jyoti; Severt, T.; Berry, Ben; Jochim, Bethany; Feizollah, Peyman; Kaderiya, Balram; Zohrabi, M.; Ablikim, U.; Ziaee, Farzaneh; Raju P., Kanaka; Rolles, D.; Rudenko, A.; Carnes, K. D.; Esry, B. D.; Ben-Itzhak, I.

2018-03-01

A key question concerning the three-body fragmentation of polyatomic molecules is the distinction of sequential and concerted mechanisms, i.e., the stepwise or simultaneous cleavage of bonds. Using laser-driven fragmentation of OCS into O⁺ + C⁺ + S⁺ and employing coincidence momentum imaging, we demonstrate a novel method that enables the clear separation of sequential and concerted breakup. The separation is accomplished by analyzing the three-body fragmentation in the native frame associated with each step and taking advantage of the rotation of the intermediate molecular fragment, CO²⁺ or CS²⁺, before its unimolecular dissociation. This native-frame method works for any projectile (electrons, ions, or photons), provides details on each step of the sequential breakup, and enables the retrieval of the relevant spectra for sequential and concerted breakup separately. Specifically, this allows the determination of the branching ratio of all these processes in OCS³⁺ breakup. Moreover, we find that the first step of sequential breakup is tightly aligned along the laser polarization and identify the likely electronic states of the intermediate dication that undergo unimolecular dissociation in the second step. Finally, the separated concerted breakup spectra show clearly that the central carbon atom is preferentially ejected perpendicular to the laser field.

7. Synthesizing genetic sequential logic circuit with clock pulse generator.

Science.gov (United States)

Chuang, Chia-Hua; Lin, Chun-Liang

2014-05-28

Rhythmic clocks occur widely in biological systems and control several aspects of cell physiology; different cell types are supplied with various rhythmic frequencies. How to synthesize a specific clock signal is a preliminary but necessary step toward the further development of a biological computer. This paper presents a genetic sequential logic circuit with a clock pulse generator based on a synthesized genetic oscillator, which generates a consecutive clock signal whose frequency is an inverse integer multiple of that of the genetic oscillator. An analogous electronic waveform-shaping circuit is constructed from a series of genetic buffers to shape the logic high/low levels of an oscillation input in a basic sinusoidal cycle and generate a pulse-width-modulated (PWM) output with various duty cycles. By controlling the threshold level of the genetic buffer, a genetic clock pulse signal whose frequency is consistent with the genetic oscillator is synthesized. A synchronous genetic counter circuit based on the topology of the digital sequential logic circuit is triggered by the clock pulse to synthesize a clock signal with an inverse multiple frequency of the genetic oscillator. The function acts like a frequency divider in electronic circuits, which plays a key role in sequential logic circuits with a specific operational frequency. A cascaded genetic logic circuit generating clock pulse signals is proposed. Based on an analogous implementation of digital sequential logic circuits, genetic sequential logic circuits can be constructed by the proposed approach to generate various clock signals from an oscillation signal.
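The frequency-divider behavior of the counter circuit described above has a simple electronic analogue: a synchronous counter that emits one output pulse per N input clock pulses, so the output frequency is 1/N of the input. The toy Python sketch below is purely illustrative of that divide-by-N behavior, not a model of the genetic circuit itself.

```python
def frequency_divider(clock_pulses, n):
    """Yield 1 on every n-th input clock pulse, else 0, dividing the clock
    frequency by the integer n (an inverse integer multiple, as above)."""
    count = 0
    for pulse in clock_pulses:
        if pulse:                 # rising edge of the input clock
            count = (count + 1) % n
        yield 1 if (pulse and count == 0) else 0

clock = [1, 0] * 8                      # 8 input pulses over 16 time steps
divided = list(frequency_divider(clock, 4))
print(sum(divided))  # 2 output pulses: input frequency divided by 4
```

Cascading such dividers, as the paper does with its cascaded genetic logic circuit, yields successive divisions of the base oscillator frequency.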

8. Minimalism through intraoperative functional mapping.

Science.gov (United States)

Berger, M S

1996-01-01

Intraoperative stimulation mapping may be used to avoid unnecessary risk to functional regions subserving language and sensori-motor pathways. Based on the data presented here, language localization is variable across the population, with certainty existing only for the inferior frontal region responsible for motor speech. Anatomical landmarks such as the anterior temporal tip for temporal lobe language sites and the posterior aspect of the lateral sphenoid wing for the frontal lobe language zones are unreliable in avoiding postoperative aphasias. Thus, individual mapping to identify essential language sites has the greatest likelihood of avoiding permanent deficits in naming, reading, and motor speech. In a similar approach, motor and sensory pathways from the cortex and underlying white matter may be reliably stimulated and mapped in both awake and asleep patients. Although these techniques require additional operative time and modestly priced equipment, the result is often gratifying, as postoperative morbidity has been greatly reduced in the process of incorporating these surgical strategies. The patient's quality of life is improved in terms of seizure control, with or without antiepileptic drugs. This avoids having to perform a second costly operative procedure, which is routinely done when extraoperative stimulation and recording is performed via subdural grids. In addition, an aggressive tumor resection at the initial operation lengthens the time to tumor recurrence and often obviates the need for a subsequent reoperation. Thus, intraoperative functional mapping may best be described as a surgical technique that results in "minimalism in the long term".

9. Against explanatory minimalism in psychiatry

Directory of Open Access Journals (Sweden)

Tim Thornton

2015-12-01

The idea that psychiatry contains, in principle, a series of levels of explanation has been criticized not only as empirically false but also, by Campbell, as unintelligible because it presupposes a discredited pre-Humean view of causation. Campbell's criticism is based on an interventionist-inspired denial that mechanisms and rational connections underpin physical and mental causation respectively and hence underpin levels of explanation. These claims echo some superficially similar remarks in Wittgenstein's Zettel. But attention to the context of Wittgenstein's remarks suggests a reason to reject explanatory minimalism in psychiatry and reinstate a Wittgensteinian notion of levels of explanation. Only in a context broader than the one provided by interventionism is the ascription of propositional attitudes, even in the puzzling case of delusions, justified. Such a view, informed by Wittgenstein, can reconcile the idea that the ascription of mental phenomena presupposes a particular level of explanation with the rejection of an a priori claim about its connection to a neurological level of explanation.

10. Against Explanatory Minimalism in Psychiatry.

Science.gov (United States)

Thornton, Tim

2015-01-01

The idea that psychiatry contains, in principle, a series of levels of explanation has been criticized not only as empirically false but also, by Campbell, as unintelligible because it presupposes a discredited pre-Humean view of causation. Campbell's criticism is based on an interventionist-inspired denial that mechanisms and rational connections underpin physical and mental causation, respectively, and hence underpin levels of explanation. These claims echo some superficially similar remarks in Wittgenstein's Zettel. But attention to the context of Wittgenstein's remarks suggests a reason to reject explanatory minimalism in psychiatry and reinstate a Wittgensteinian notion of levels of explanation. Only in a context broader than the one provided by interventionism is the ascription of propositional attitudes, even in the puzzling case of delusions, justified. Such a view, informed by Wittgenstein, can reconcile the idea that the ascription of mental phenomena presupposes a particular level of explanation with the rejection of an a priori claim about its connection to a neurological level of explanation.

11. Robotic assisted minimally invasive surgery

Directory of Open Access Journals (Sweden)

Palep Jaydeep

2009-01-01

The term "robot" was coined by the Czech playwright Karel Čapek in 1921 in his play Rossum's Universal Robots, from the Czech word robota, meaning forced labor. The era of robots in surgery commenced in 1993, when the first AESOP (a voice-controlled camera holder) prototype robot was used clinically; it was marketed in 1994, after clearance by the US FDA, as the first surgical robot. Since then, many robot prototypes, such as the EndoAssist (Armstrong Healthcare Ltd., High Wycombe, Bucks, UK) and the FIPS endoarm (Karlsruhe Research Center, Karlsruhe, Germany), have been developed to extend the functions of the robot and increase its utility. Integrated Surgical Systems (now Intuitive Surgical, Inc.) redesigned the SRI Green Telepresence Surgery system and created the da Vinci Surgical System®, classified as a master-slave surgical system. It uses true 3-D visualization and EndoWrist® instruments. It was approved by the FDA in July 2000 for general laparoscopic surgery and in November 2002 for mitral valve repair surgery. The da Vinci robot is currently used in various fields such as urology, general surgery, gynecology, cardio-thoracic, pediatric, and ENT surgery. It provides several advantages over conventional laparoscopy, such as 3-D vision, motion scaling, intuitive movements, visual immersion, and tremor filtration. The advent of robotics has increased the use of minimally invasive surgery among laparoscopically naïve surgeons and expanded the repertoire of experienced surgeons to include more advanced and complex reconstructions.

12. Is non-minimal inflation eternal?

International Nuclear Information System (INIS)

Feng, Chao-Jun; Li, Xin-Zhou

2010-01-01

The possibility that non-minimal coupling inflation could be eternal is investigated. We calculate the quantum fluctuation of the inflaton in a Hubble time and find that it has the same value as in the minimal case in the slow-roll limit. Armed with this result, we study some concrete non-minimal inflationary models, including chaotic inflation and natural inflation, in which the inflaton is non-minimally coupled to gravity. We find that non-minimal coupling inflation can be eternal in some regions of parameter space.

13. Lexical decoder for continuous speech recognition: sequential neural network approach

International Nuclear Information System (INIS)

Iooss, Christine

1991-01-01

The work presented in this dissertation concerns the study of a connectionist architecture for treating sequential inputs. In this context, the model proposed by J.L. Elman, a recurrent multilayer network, is used, and its abilities and limits are evaluated. Modifications are made in order to treat erroneous or noisy sequential inputs and to classify patterns. The application context of this study is the realization of a lexical decoder for analytical multi-speaker continuous speech recognition. Lexical decoding is performed from lattices of phonemes obtained after an acoustic-phonetic decoding stage relying on a k-nearest-neighbors search technique. Tests are done on sentences formed from a lexicon of 20 words. The results obtained show the ability of the proposed connectionist model to take sequentiality into account at the input level, to memorize the context, and to treat noisy or erroneous inputs. (author) [fr

14. Computing Sequential Equilibria for Two-Player Games

DEFF Research Database (Denmark)

Miltersen, Peter Bro; Sørensen, Troels Bjerre

2006-01-01

Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall using linear programming and avoiding conversion to normal form. Koller and Pfeffer pointed out that the strategies obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for the practical applications. We show how to remove this deficiency by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial time. In addition, the equilibrium we find is normal-form perfect. Our technique generalizes to general-sum games...
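The sequence-form linear programs discussed above require an LP solver. As a self-contained illustration of equilibrium computation in the zero-sum, normal-form special case, the sketch below uses fictitious play instead: in two-player zero-sum games the empirical play frequencies converge to minimax strategies. The payoff matrix and round count are illustrative assumptions; this is a stand-in, not the authors' algorithm.

```python
def fictitious_play(payoff, rounds=20000):
    """Fictitious play on a zero-sum matrix game (row player maximizes).
    Each round, both players best-respond to the opponent's empirical
    mixture of past actions; the frequencies converge to minimax play."""
    m, n = len(payoff), len(payoff[0])
    row_counts, col_counts = [0] * m, [0] * n
    row_counts[0] += 1   # seed with one arbitrary action each
    col_counts[0] += 1
    for _ in range(rounds):
        br_row = max(range(m), key=lambda i: sum(
            payoff[i][j] * col_counts[j] for j in range(n)))
        br_col = min(range(n), key=lambda j: sum(
            payoff[i][j] * row_counts[i] for i in range(m)))
        row_counts[br_row] += 1
        col_counts[br_col] += 1
    tr, tc = sum(row_counts), sum(col_counts)
    return [c / tr for c in row_counts], [c / tc for c in col_counts]

# matching pennies: the unique minimax strategy is (1/2, 1/2) for both players
pennies = [[1, -1], [-1, 1]]
row_mix, col_mix = fictitious_play(pennies)
```

Fictitious play converges slowly and, unlike the LP approach of the paper, gives no sequential-rationality guarantee in extensive-form games; it only illustrates what "computing an equilibrium" means on the smallest possible example.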

15. Computing sequential equilibria for two-player games

DEFF Research Database (Denmark)

Miltersen, Peter Bro

2006-01-01

Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall using linear programming and avoiding conversion to normal form. Their algorithm has been used by AI researchers for constructing prescriptive strategies for concrete, often fairly large games. Koller and Pfeffer pointed out that the strategies obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for the practical applications. We show how to remove this deficiency by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial...

16. A Bayesian sequential processor approach to spectroscopic portal system decisions

Energy Technology Data Exchange (ETDEWEB)

Sale, K; Candy, J; Breitfeller, E; Guidry, B; Manatt, D; Gosnell, T; Chambers, D

2007-07-31

The development of faster, more reliable techniques to detect radioactive contraband in a portal-type scenario is an extremely important problem, especially in this era of constant terrorist threats. Toward this goal, the development of a model-based Bayesian sequential data processor for the detection problem is discussed. In the sequential processor, each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data, rather than waiting for a fixed counting interval before any analysis is performed. In this paper, the Bayesian model-based approach, the physics and signal processing models, and the decision functions are discussed along with the first results of our research.
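The stop-as-soon-as-justified idea can be sketched with a sequential probability-ratio style test on Poisson counts: update the log-likelihood ratio after every datum and declare a decision once it crosses a threshold. The background/source rates, the odds threshold, and the stdlib Poisson sampler below are illustrative assumptions, not the paper's physics or signal-processing model (which also uses pulse arrival times).

```python
import math
import random

def poisson(lam, rng):
    """Knuth's multiplicative algorithm for Poisson sampling (stdlib-only)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def sequential_detect(counts, bkg_rate, src_rate, odds=99.0):
    """Update the log-likelihood ratio after every count datum and stop as
    soon as the evidence is decisive, instead of waiting for a fixed
    counting interval.  H1: background + source; H0: background only."""
    lam0, lam1 = bkg_rate, bkg_rate + src_rate
    upper, lower = math.log(odds), -math.log(odds)
    llr = 0.0
    for n, k in enumerate(counts, start=1):
        llr += k * math.log(lam1 / lam0) - (lam1 - lam0)
        if llr >= upper:
            return True, n    # source declared present after n data
        if llr <= lower:
            return False, n   # source declared absent after n data
    return None, len(counts)  # undecided within the available data

rng = random.Random(1)
# simulate data with a source actually present (illustrative rates)
data = [poisson(3.0, rng) for _ in range(500)]
detected, steps = sequential_detect(data, bkg_rate=1.0, src_rate=2.0)
```

With a genuinely present source the decision typically arrives after only a handful of data, which is exactly the advantage the abstract claims over fixed-interval counting.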

17. Configural and component processing in simultaneous and sequential lineup procedures.

Science.gov (United States)

Flowe, Heather D; Smith, Harriet M J; Karoğlu, Nilda; Onwuegbusi, Tochukwu O; Rai, Lovedeep

2016-01-01

Configural processing supports accurate face recognition, yet it has never been examined within the context of criminal identification lineups. We tested, using the inversion paradigm, the role of configural processing in lineups. Recent research has found that face discrimination accuracy in lineups is better in a simultaneous than in a sequential lineup procedure. Therefore, we compared configural processing in simultaneous and sequential lineups to examine whether there are differences. Participants viewed a crime video and then attempted to identify the perpetrator from a simultaneous or sequential lineup. The test faces were presented either upright or inverted, as previous research has shown that inverting test faces disrupts configural processing. The size of the inversion effect for faces was the same across lineup procedures, indicating that configural processing underlies face recognition in both procedures. Discrimination accuracy was comparable across lineup procedures in both the upright and inverted conditions. Theoretical implications of the results are discussed.

18. Visual short-term memory for sequential arrays.

Science.gov (United States)

Kumar, Arjun; Jiang, Yuhong

2005-04-01

The capacity of visual short-term memory (VSTM) for a single visual display has been investigated in past research, but VSTM for multiple sequential arrays has been explored only recently. In this study, we investigate the capacity of VSTM across two sequential arrays separated by a variable stimulus onset asynchrony (SOA). VSTM for spatial locations (Experiment 1), colors (Experiments 2-4), orientations (Experiments 3 and 4), and conjunction of color and orientation (Experiment 4) were tested, with the SOA across the two sequential arrays varying from 100 to 1,500 msec. We find that VSTM for the trailing array is much better than VSTM for the leading array, but when averaged across the two arrays VSTM has a constant capacity independent of the SOA. We suggest that multiple displays compete for retention in VSTM and that separating information into two temporally discrete groups does not enhance the overall capacity of VSTM.

19. Minimal nuclear energy density functional

Science.gov (United States)

Bulgac, Aurel; Forbes, Michael McNeil; Jin, Shi; Perez, Rodrigo Navarro; Schunck, Nicolas

2018-04-01

We present a minimal nuclear energy density functional (NEDF) called "SeaLL1" that has the smallest number of possible phenomenological parameters to date. SeaLL1 is defined by seven significant phenomenological parameters, each related to a specific nuclear property. It describes the nuclear masses of even-even nuclei with a mean energy error of 0.97 MeV and a standard deviation of 1.46 MeV, two-neutron and two-proton separation energies with rms errors of 0.69 MeV and 0.59 MeV respectively, and the charge radii of 345 even-even nuclei with a mean error εr = 0.022 fm and a standard deviation σr = 0.025 fm. SeaLL1 incorporates constraints on the equation of state (EoS) of pure neutron matter from quantum Monte Carlo calculations with chiral effective field theory two-body (NN) interactions at the next-to-next-to-next-to-leading order (N3LO) level and three-body (NNN) interactions at the next-to-next-to-leading order (N2LO) level. Two of the seven parameters are related to the saturation density and the energy per particle of homogeneous symmetric nuclear matter, one is related to the nuclear surface tension, two are related to the symmetry energy and its density dependence, one is related to the strength of the spin-orbit interaction, and one is the coupling constant of the pairing interaction. We identify additional phenomenological parameters that have little effect on ground-state properties but can be used to fine-tune features such as the Thomas-Reiche-Kuhn sum rule, the excitation energy of the giant dipole and Gamow-Teller resonances, the static dipole electric polarizability, and the neutron skin thickness.

20. Minimal models of multidimensional computations.

Directory of Open Access Journals (Sweden)

Jeffrey D Fitzgerald

2011-03-01

The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.

1. Westinghouse Hanford Company waste minimization actions

International Nuclear Information System (INIS)

Greenhalgh, W.O.

1988-09-01

Companies that generate hazardous waste materials are now required by national regulations to establish a waste minimization program. Accordingly, in FY88 the Westinghouse Hanford Company formed a waste minimization team organization. The purpose of the team is to assist the company in its efforts to minimize the generation of waste, train personnel on waste minimization techniques, document successful waste minimization efforts, track dollar savings realized, and publicize and administer an employee incentive program. A number of significant actions have been successful, resulting in the savings of materials and dollars. The team itself has been successful in establishing some worthwhile minimization projects. This document briefly describes the waste minimization actions that have been successful to date. 2 refs., 26 figs., 3 tabs

2. Sequential analysis in neonatal research-systematic review.

Science.gov (United States)

Lava, Sebastiano A G; Elie, Valéry; Ha, Phuong Thi Viet; Jacqz-Aigrain, Evelyne

2018-05-01

As more new drugs are discovered, traditional designs come to their limits. Ten years after the adoption of the European Paediatric Regulation, we performed a systematic review in the US National Library of Medicine and Excerpta Medica databases of sequential trials involving newborns. Out of 326 identified scientific reports, 21 trials were included. They enrolled 2832 patients, of whom 2099 were analyzed: the median number of neonates included per trial was 48 (IQR 22-87), and the median gestational age was 28.7 (IQR 27.9-30.9) weeks. Eighteen trials used sequential techniques to determine sample size, while 3 used continual reassessment methods for dose finding. In the 16 studies reporting sufficient data, the sequential design allowed a non-significant reduction in the number of enrolled neonates, by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674), with respect to a traditional trial. When the number of neonates finally included in the analysis was considered, the difference became significant: 35 (57%) patients (IQR 10 to 136.5, p = 0.0033). Sequential trial designs have not been frequently used in neonatology. They can potentially reduce the number of patients in drug trials, although this is not always the case. What is known: • In evaluating rare diseases in fragile populations, traditional designs come to their limits. About 20% of pediatric trials are discontinued, mainly because of recruitment problems. What is new: • Sequential trials involving newborns have been used infrequently, and only a few (n = 21) are available for analysis. • The sequential design allowed a non-significant reduction in the number of enrolled neonates, by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674).

3. The pursuit of balance in sequential randomized trials

Directory of Open Access Journals (Sweden)

Raymond P. Guiteras

2016-06-01

In many randomized trials, subjects enter the sample sequentially. Because the covariates for all units are not known in advance, standard methods of stratification do not apply. We describe and assess the method of DA-optimal sequential allocation (Atkinson, 1982) for balancing stratification covariates across treatment arms. We provide simulation evidence that the method can provide substantial improvements in precision over commonly employed alternatives. We also describe our experience implementing the method in a field trial of a clean water and handwashing intervention in Dhaka, Bangladesh, the first time the method has been used. We provide advice and software for future researchers.
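The article's DA-optimal rule requires design matrices and regression machinery; a simpler stand-in that conveys the same idea of sequentially balancing covariates as subjects arrive is Pocock-Simon-style minimization with a biased coin. The covariates, the favoring probability, and the imbalance score below are illustrative assumptions, not Atkinson's rule.

```python
import random

def minimization_assign(subjects, p_favor=0.75, seed=0):
    """Sequentially assign arriving subjects to two arms, biasing each
    assignment toward the arm that reduces covariate imbalance.  This is
    Pocock-Simon-style minimization, a simplified stand-in for the
    DA-optimal sequential allocation discussed in the article."""
    rng = random.Random(seed)
    counts = [dict(), dict()]       # counts[arm][covariate][level] -> n
    assignments = []
    for covs in subjects:
        def imbalance(arm):
            # total level-wise imbalance if this subject joined `arm`
            score = 0
            for cov, level in covs.items():
                c0 = counts[0].get(cov, {}).get(level, 0)
                c1 = counts[1].get(cov, {}).get(level, 0)
                if arm == 0:
                    c0 += 1
                else:
                    c1 += 1
                score += abs(c0 - c1)
            return score
        if imbalance(0) < imbalance(1):
            favored = 0
        elif imbalance(1) < imbalance(0):
            favored = 1
        else:
            favored = rng.randrange(2)        # tie: pure randomization
        arm = favored if rng.random() < p_favor else 1 - favored
        for cov, level in covs.items():
            counts[arm].setdefault(cov, {}).setdefault(level, 0)
            counts[arm][cov][level] += 1
        assignments.append(arm)
    return assignments
```

Because each assignment is made with the covariates seen so far, the method needs no advance knowledge of the sample, which is exactly the constraint that rules out standard stratification.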

4. TELEGRAPHS TO INCANDESCENT LAMPS: A SEQUENTIAL PROCESS OF INNOVATION

Directory of Open Access Journals (Sweden)

Laurence J. Malone

2000-01-01

This paper outlines a sequential process of technological innovation in the emergence of the electrical industry in the United States from 1830 to 1880. Successive inventions that realized the commercial possibilities of electricity provided the foundation for an industry in which technical knowledge, invention, and diffusion were ultimately consolidated within the managerial structure of new firms. The genesis of the industry is traced, sequentially, through the development of the telegraph, arc light, and incandescent lamp. Exploring the origins of the telegraph and incandescent lamp reveals a process in which a series of inventions and firms resulted from successful efforts to use scientific principles to create new commodities and markets.

5. Properties of simultaneous and sequential two-nucleon transfer

International Nuclear Information System (INIS)

Pinkston, W.T.; Satchler, G.R.

1982-01-01

Approximate forms of the first- and second-order distorted-wave Born amplitudes are used to study the overall structure, particularly the selection rules, of the amplitudes for simultaneous and sequential transfer of two nucleons. The role of the spin-state assumed for the intermediate deuterons in sequential (t, p) reactions is stressed. The similarity of one-step and two-step amplitudes for (α, d) reactions is exhibited, and the consequent absence of any obvious J-dependence in their interference is noted. (orig.)

6. Event-shape analysis: Sequential versus simultaneous multifragment emission

International Nuclear Information System (INIS)

Cebra, D.A.; Howden, S.; Karn, J.; Nadasen, A.; Ogilvie, C.A.; Vander Molen, A.; Westfall, G.D.; Wilson, W.K.; Winfield, J.S.; Norbeck, E.

1990-01-01

The Michigan State University 4π array has been used to select central-impact-parameter events from the reaction ⁴⁰Ar + ⁵¹V at incident energies from 35 to 85 MeV/nucleon. The event shape in momentum space is an observable which is shown to be sensitive to the dynamics of the fragmentation process. A comparison of the experimental event-shape distribution to sequential- and simultaneous-decay predictions suggests that a transition in the breakup process may have occurred. At 35 MeV/nucleon, a sequential-decay simulation reproduces the data. For the higher energies, the experimental distributions fall between the two contrasting predictions

7. Sequential approach to Colombeau's theory of generalized functions

International Nuclear Information System (INIS)

Todorov, T.D.

1987-07-01

J.F. Colombeau's generalized functions are constructed as equivalence classes of the elements of a specially chosen ultrapower of the class of the C∞-functions. The elements of this ultrapower are considered as sequences of C∞-functions, so in a sense, the sequential construction presented here refers to the original Colombeau theory just as, for example, the Mikusinski sequential approach to the distribution theory refers to the original Schwartz theory of distributions. The paper could be used as an elementary introduction to the Colombeau theory in which recently a solution was found to the problem of multiplication of Schwartz distributions. (author). Refs

8. Sequential Combination of Electro-Fenton and Electrochemical Chlorination Processes for the Treatment of Anaerobically-Digested Food Wastewater.

Science.gov (United States)

Shin, Yong-Uk; Yoo, Ha-Young; Kim, Seonghun; Chung, Kyung-Mi; Park, Yong-Gyun; Hwang, Kwang-Hyun; Hong, Seok Won; Park, Hyunwoong; Cho, Kangwoo; Lee, Jaesang

2017-09-19

A two-stage sequential electro-Fenton (E-Fenton) oxidation followed by electrochemical chlorination (EC) was demonstrated to concomitantly treat high concentrations of organic carbon and ammonium nitrogen (NH₄⁺-N) in real anaerobically digested food wastewater (ADFW). The anodic Fenton process caused the rapid mineralization of phenol as a model substrate through the production of hydroxyl radical as the main oxidant. The electrochemical oxidation of NH₄⁺ by a dimensionally stable anode (DSA) resulted in temporal concentration profiles of combined and free chlorine species that were analogous to those during the conventional breakpoint chlorination of NH₄⁺. Together with the minimal production of nitrate, this confirmed that the conversion of NH₄⁺ to nitrogen gas was electrochemically achievable. The monitoring of treatment performance with varying key parameters (e.g., current density, H₂O₂ feeding rate, pH, NaCl loading, and DSA type) led to the optimization of the two component systems. The comparative evaluation of the two sequentially combined systems (i.e., the E-Fenton-EC system versus the EC-E-Fenton system) using a mixture of phenol and NH₄⁺ under the predetermined optimal conditions suggested the superiority of the E-Fenton-EC system in terms of treatment efficiency and energy consumption. Finally, the sequential E-Fenton-EC process effectively mineralized organic carbon and decomposed NH₄⁺-N in the real ADFW without an external supply of NaCl.

9. [Minimally invasive approach for cervical spondylotic radiculopathy].

Science.gov (United States)

Ding, Liang; Sun, Taicun; Huang, Yonghui

2010-01-01

To summarize recent minimally invasive approaches for cervical spondylotic radiculopathy (CSR), the recent literature at home and abroad concerning minimally invasive approaches for CSR was reviewed and summarized. There are currently two minimally invasive techniques for CSR: percutaneous puncture techniques and endoscopic techniques. The degenerated intervertebral disc is resected, or nucleolysis is performed, by percutaneous puncture techniques when CSR is caused by mild or moderate intervertebral disc herniation. Cervical microendoscopic discectomy and foraminotomy is an effective minimally invasive approach that provides a clear view. The endoscopic techniques are suitable for treating CSR caused by foraminal osteophytes, lateral disc herniations, local ligamentum flavum thickening, and spondylotic foraminal stenosis. The minimally invasive procedures have the advantages of simple handling, minimal invasiveness, and a low incidence of complications, but the scope of indications is at present relatively narrow.

10. Fast regularizing sequential subspace optimization in Banach spaces

International Nuclear Information System (INIS)

Schöpfer, F; Schuster, T

2009-01-01

We are concerned with fast computations of regularized solutions of linear operator equations in Banach spaces in the case where only noisy data are available. To this end, we modify recently developed sequential subspace optimization methods in such a way that the Bregman projections onto hyperplanes employed therein are replaced by Bregman projections onto stripes whose width is on the order of the noise level
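In the Hilbert-space special case, the Bregman projection onto a stripe reduces to an ordinary metric projection, which makes the idea easy to sketch: project cyclically onto stripes of half-width δ around each hyperplane, and take no step at all when the iterate is already inside the stripe. This ℓ², single-direction sketch is a simplified stand-in under stated assumptions; the Banach-space method in the paper uses duality mappings and larger search subspaces.

```python
def project_onto_stripe(x, a, b, delta):
    """Metric projection of x onto the stripe {y : |<a, y> - b| <= delta}.
    In a Hilbert space the Bregman projection reduces to this; the general
    Banach-space method of the paper uses duality mappings instead."""
    residual = sum(ai * xi for ai, xi in zip(a, x)) - b
    if abs(residual) <= delta:
        return x                       # already inside the stripe: no step
    shift = residual - delta if residual > 0 else residual + delta
    norm2 = sum(ai * ai for ai in a)
    return [xi - shift * ai / norm2 for ai, xi in zip(a, x)]

def stripe_kaczmarz(A, b, delta, sweeps=200):
    """Cyclic projections onto stripes of half-width delta around the
    hyperplanes <a_i, x> = b_i; delta plays the role of the noise level."""
    x = [0.0] * len(A[0])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x = project_onto_stripe(x, a_i, b_i, delta)
    return x

# small consistent system 2x + y = 3, x + 3y = 4 (exact solution (1, 1))
A, b = [[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0]
x = stripe_kaczmarz(A, b, delta=0.01)
```

Widening the hyperplanes into stripes is what makes the iteration noise-tolerant: the method stops correcting a row as soon as its residual is below the noise level, instead of chasing the noise.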

11. A sequential hypothesis test based on a generalized Azuma inequality

NARCIS (Netherlands)

Reijsbergen, D.P.; Scheinhardt, Willem R.W.; de Boer, Pieter-Tjerk

We present a new power-one sequential hypothesis test based on a bound for the probability that a bounded zero-mean martingale ever crosses a curve of the form $a(n+k)^b$. The proof of the bound is of independent interest.

12. Sequential and simultaneous revascularization in adult orthotopic piggyback liver transplantation

NARCIS (Netherlands)

Polak, WG; Miyamoto, S; Nemes, BA; Peeters, PMJG; de Jong, KP; Porte, RJ; Slooff, MJH

The aim of the study was to assess whether there is a difference in outcome after sequential or simultaneous revascularization during orthotopic liver transplantation (OLT) in terms of patient and graft survival, mortality, morbidity, and liver function. The study population consisted of 102 adult

13. A generally applicable sequential alkaline phosphatase immunohistochemical double staining

NARCIS (Netherlands)

van der Loos, Chris M.; Teeling, Peter

2008-01-01

A universal type of sequential double alkaline phosphatase immunohistochemical staining is described that can be used for formalin-fixed, paraffin-embedded and cryostat tissue sections from human and mouse origin. It consists of two alkaline phosphatase detection systems including enzymatic

14. Excessive pressure in multichambered cuffs used for sequential compression therapy

NARCIS (Netherlands)

Segers, P; Belgrado, JP; Leduc, A; Leduc, O; Verdonck, P

2002-01-01

Background and Purpose. Pneumatic compression devices, used as part of the therapeutic strategy for lymphatic drainage, often have cuffs with multiple chambers that are inflated sequentially. The purpose of this study was to investigate (1) the relationship between cuff chamber pressure

15. Retrieval of sea surface velocities using sequential Ocean Colour

The Indian remote sensing satellite, IRS-P4 (Oceansat-I), launched on May 26th, 1999, carried two sensors on board, i.e., the Ocean Colour Monitor (OCM) and the Multi-frequency Scanning Microwave Radiometer (MSMR), dedicated to oceanographic research. Sequential data of IRS-P4 OCM has been analysed over parts ...

16. Sequential and Biomechanical Factors Constrain Timing and Motion in Tapping

NARCIS (Netherlands)

Loehr, J.D.; Palmer, C.

2009-01-01

The authors examined how timing accuracy in tapping sequences is influenced by sequential effects of preceding finger movements and biomechanical interdependencies among fingers. Skilled pianists tapped sequences at three rates; in each sequence, a finger whose motion was more or less independent of

17. What determines the impact of context on sequential action?

NARCIS (Netherlands)

Ruitenberg, M.F.L.; Verwey, Willem B.; Abrahamse, E.L.

2015-01-01

In the current study we build on earlier observations that memory-based sequential action is better in the original learning context than in other contexts. We examined whether changes in the perceptual context have differential impact across distinct processing phases (preparation versus execution

18. The Efficacy of Sequential Therapy in Eradication of Helicobacter ...

African Journals Online (AJOL)

2017-05-22

May 22, 2017 ... pylori (H. pylori) eradication rates of standard triple, sequential and quadruple therapies including claritromycin regimes in this study. Materials and Methods: A total of 160 patients with dyspeptic symptoms were enrolled to the study. The patients were randomized to four groups of treatment protocols.

19. The efficacy of sequential therapy in eradication of Helicobacter ...

African Journals Online (AJOL)

... the Helicobacter pylori (H. pylori) eradication rates of standard triple, sequential and quadruple therapies including claritromycin regimes in this study. Materials and Methods: A total of 160 patients with dyspeptic symptoms were enrolled to the study. The patients were randomized to four groups of treatment protocols.

20. In Vivo Evaluation of Synthetic Aperture Sequential Beamforming

DEFF Research Database (Denmark)

Hemmsen, Martin Christian; Hansen, Peter Møller; Lange, Theis

2012-01-01

Ultrasound in vivo imaging using synthetic aperture sequential beamforming (SASB) is compared with conventional imaging in a double-blinded study using side-by-side comparisons. The objective is to evaluate whether the image quality in terms of penetration depth, spatial resolution, contrast...

1. Quantum chromodynamics as the sequential fragmenting with inactivation

International Nuclear Information System (INIS)

Botet, R.

1996-01-01

We investigate the relation between the modified leading log approximation of the perturbative QCD and the sequential binary fragmentation process. We will show that in the absence of inactivation, this process is equivalent to the QCD gluodynamics. The inactivation term yields a precise prescription of how to include the hadronization in the QCD equations. (authors)

2. Quantum chromodynamics as the sequential fragmenting with inactivation

Energy Technology Data Exchange (ETDEWEB)

Botet, R. [Paris-11 Univ., 91 - Orsay (France). Lab. de Physique des Solides; Ploszajczak, M. [Grand Accelerateur National d`Ions Lourds (GANIL), 14 - Caen (France)

1996-12-31

We investigate the relation between the modified leading log approximation of the perturbative QCD and the sequential binary fragmentation process. We will show that in the absence of inactivation, this process is equivalent to the QCD gluodynamics. The inactivation term yields a precise prescription of how to include the hadronization in the QCD equations. (authors). 15 refs.

3. The Motivating Language of Principals: A Sequential Transformative Strategy

Science.gov (United States)

Holmes, William Tobias

2012-01-01

This study implemented a Sequential Transformative Mixed Methods design with teachers (as recipients) and principals (to give voice) in the examination of principal talk in two different school accountability contexts (Continuously Improving and Continuously Zigzag) using the conceptual framework of Motivating Language Theory. In phase one,…

4. The sequential price of anarchy for atomic congestion games

NARCIS (Netherlands)

de Jong, Jasper; Uetz, Marc Jochen

2014-01-01

In situations without central coordination, the price of anarchy relates the quality of any Nash equilibrium to the quality of a global optimum. Instead of assuming that all players choose their actions simultaneously, here we consider games where players choose their actions sequentially. The

5. Sequential infiltration synthesis for enhancing multiple-patterning lithography

Science.gov (United States)

Darling, Seth B.; Elam, Jeffrey W.; Tseng, Yu-Chih

2017-06-20

Simplified methods of multiple-patterning photolithography use sequential infiltration synthesis to modify the photoresist so that it withstands plasma etching better than unmodified resist, replacing one or more hard masks and/or a freezing step in MPL processes, including litho-etch-litho-etch and litho-freeze-litho-etch photolithography.

6. The sequential structure of brain activation predicts skill.

Science.gov (United States)

Anderson, John R; Bothell, Daniel; Fincham, Jon M; Moon, Jungaa

2016-01-29

In an fMRI study, participants were trained to play a complex video game. They were scanned early and then again after substantial practice. While better players showed greater activation in one region (the right dorsal striatum), their relative skill was better diagnosed by considering the sequential structure of whole-brain activation. Using a cognitive model that played this game, we extracted a characterization of the mental states involved in playing a game and the statistical structure of the transitions among these states. There was a strong correspondence between this measure of sequential structure and the skill of different players. Using multi-voxel pattern analysis, it was possible to recognize, with relatively high accuracy, the cognitive states participants were in during particular scans. We used the sequential structure of these activation-recognized states to predict the skill of individual players. These findings indicate that important features of information-processing strategies can be identified from a model-based analysis of the sequential structure of brain activation. Copyright © 2015 Elsevier Ltd. All rights reserved.
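
The transition statistic described above can be illustrated with a toy first-order model. The sketch below (my construction, not the authors' pipeline; the state labels and data are invented) estimates a transition matrix over a decoded sequence of cognitive states:

```python
# Hypothetical sketch: estimate a first-order transition matrix over decoded
# cognitive states; players could then be compared by the structure of T.
import numpy as np

def transition_matrix(states, n_states):
    """Row-normalized counts of state-to-state transitions."""
    T = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    row = T.sum(axis=1, keepdims=True)
    return np.divide(T, row, out=np.zeros_like(T), where=row > 0)

seq = [0, 1, 2, 0, 1, 2, 2, 0]   # invented state sequence
T = transition_matrix(seq, 3)
```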

7. A path-level exact parallelization strategy for sequential simulation

Science.gov (United States)

Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

2018-01-01

Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.
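
The path re-arrangement stage can be sketched as follows. Assuming, as a simplification of the authors' rule, that two nodes conflict whenever either lies in the other's search neighbourhood, consecutive non-conflicting nodes along the path can be grouped into batches that are safe to simulate in parallel while preserving the visiting order:

```python
# Illustrative sketch (not the paper's implementation): split a simulation path
# into consecutive batches of mutually non-conflicting nodes. Within a batch,
# no node lies in another's search neighbourhood, so the batch members can be
# simulated in parallel without changing the sequential result.
def batch_path(path, neighbours):
    """path: ordered node ids; neighbours: node id -> set of nodes in its
    search neighbourhood. Returns a list of batches preserving path order."""
    batches, current = [], []
    for node in path:
        conflict = any(node in neighbours[m] or m in neighbours[node]
                       for m in current)
        if conflict:
            batches.append(current)
            current = [node]
        else:
            current.append(node)
    if current:
        batches.append(current)
    return batches
```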

8. A sequential mixed methods research approach to investigating HIV ...

African Journals Online (AJOL)

Sequential mixed methods research is an effective approach for investigating complex problems, but it has not been extensively used in construction management research. In South Africa, the HIV/AIDS pandemic has seen construction management taking on a vital responsibility since the government called upon the ...

NARCIS (Netherlands)

Yu, C.; Hoogenboom, P.C.J.; Rots, J.G.; Saouma, V.; Bolander, J.; Landis, E.

2016-01-01

Sequentially linear analysis (SLA) is an alternative to the Newton-Raphson method for analyzing the nonlinear behavior of reinforced concrete and masonry structures. In this paper SLA is extended to load cases that are applied one after the other, for example first dead load and then wind load. It

10. Concurrent Learning of Control in Multi agent Sequential Decision Tasks

Science.gov (United States)

2018-04-17

Concurrent Learning of Control in Multi-agent Sequential Decision Tasks. The overall objective of this project was to develop multi-agent reinforcement learning (MARL) approaches for intelligent agents to autonomously learn distributed control policies in decentralized partially observable settings. The project developed algorithms for concurrent learning of policies in Dec-POMDPs, established performance bounds, and evaluated these algorithms both theoretically and empirically.

11. Sequential stenotic strictures of the small bowel leading to obstruction

Institute of Scientific and Technical Information of China (English)

2007-01-01

Small bowel obstructions (SBOs) are primarily caused by adhesions, hernias, neoplasms, or inflammatory strictures. Intraluminal strictures are an uncommon cause of SBO. This report describes our findings in a unique case of sequential, stenotic intraluminal strictures of the small intestine, discusses the differential diagnosis of intraluminal intestinal strictures, and reviews the literature regarding intraluminal pathology.

12. Decomposition of Copper (II) Sulfate Pentahydrate: A Sequential Gravimetric Analysis.

Science.gov (United States)

Harris, Arlo D.; Kalbus, Lee H.

1979-01-01

Describes an improved experiment of the thermal dehydration of copper (II) sulfate pentahydrate. The improvements described here are control of the temperature environment and a quantitative study of the decomposition reaction to a thermally stable oxide. Data will suffice to show sequential gravimetric analysis. (Author/SA)

13. Sequential Analysis of Metals in Municipal Dumpsite Composts of ...

African Journals Online (AJOL)

... Ni) in Municipal dumpsite compost were determined by the sequential extraction method. Chemical parameters such as pH, conductivity, and organic carbon contents of the samples were also determined. Analysis of the extracts was carried out by atomic absorption spectrophotometer machine (Buck Scientific VPG 210).

14. Investigation of the sequential validity of quality improvement team ...

African Journals Online (AJOL)

Background: Self-assessment is widely used by quality improvement (QI) teams in health care improvement collaboratives to assess their own performance. There is mixed evidence on the validity of this approach. This study investigated the sequential validity of self-assessments in a QI HIV collaborative in Tanzania.

15. On the origin of reproducible sequential activity in neural circuits

Science.gov (United States)

Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.

2004-12-01

Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in the sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. The SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in the presence of noise, in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
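
The firing-rate dynamics referred to above take the generalized Lotka-Volterra form da_i/dt = a_i(σ_i − Σ_j ρ_ij a_j). A minimal numerical sketch, with an asymmetric inhibition matrix and parameter values invented for illustration (not taken from the paper), is:

```python
# Illustrative only: three-unit generalized Lotka-Volterra rate model with
# asymmetric inhibition, integrated by the Euler method. All parameter values
# are assumptions chosen to keep the dynamics bounded.
import numpy as np

def simulate(rho, sigma, a0, dt=0.01, steps=5000):
    """Euler-integrate da_i/dt = a_i * (sigma_i - sum_j rho_ij a_j)."""
    a = np.array(a0, dtype=float)
    traj = [a.copy()]
    for _ in range(steps):
        a = a + dt * a * (sigma - rho @ a)
        a = np.maximum(a, 1e-12)   # firing rates stay nonnegative
        traj.append(a.copy())
    return np.array(traj)

sigma = np.ones(3)
rho = np.array([[1.0, 0.5, 2.0],   # asymmetric inhibition: each unit inhibits
                [2.0, 1.0, 0.5],   # its successor weakly and its predecessor
                [0.5, 2.0, 1.0]])  # strongly (cyclic structure)
traj = simulate(rho, sigma, [0.9, 0.05, 0.05])
```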

16. The one-shot deviation principle for sequential rationality

DEFF Research Database (Denmark)

Hendon, Ebbe; Whitta-Jacobsen, Hans Jørgen; Sloth, Birgitte

1996-01-01

We present a decentralization result which is useful for practical and theoretical work with sequential equilibrium, perfect Bayesian equilibrium, and related equilibrium concepts for extensive form games. A weak consistency condition is sufficient to obtain an analogy to the well-known One-Stage-Deviation Principle for subgame perfect equilibrium...

17. A solution for automatic parallelization of sequential assembly code

Directory of Open Access Journals (Sweden)

Kovačević Đorđe

2013-01-01

Since modern multicore processors can execute existing sequential programs only on a single core, there is a strong need for automatic parallelization of program code. Relying on existing algorithms, this paper describes a new software tool for parallelization of sequential assembly code. The main goal is to develop a parallelizer which reads sequential assembly code and outputs parallelized code for a MIPS processor with multiple cores. The idea is the following: the parser translates the assembly input file into program objects suitable for further processing. After that, static single assignment is performed. Based on the data-flow graph, the parallelization algorithm distributes instructions across cores. Once the sequential code is parallelized, registers are allocated with a linear-scan allocation algorithm, and the final result is distributed assembly code for each of the cores. We evaluate the speedup on a matrix multiplication example processed by the parallelizer. The result is an almost linear speedup of code execution, which increases with the number of cores: the speedup on two cores is 1.99, while on 16 cores it is 13.88.
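
As a sanity check on the reported figures (my calculation, not part of the paper), fitting Amdahl's law to the two-core speedup of 1.99 predicts roughly 14.9 on 16 cores, so the measured 13.88 is consistent with near-complete parallelization plus some coordination overhead:

```python
# Fit Amdahl's law S(n) = 1 / ((1 - p) + p/n) to the reported 2-core speedup,
# then predict the 16-core speedup. Purely illustrative arithmetic.
def amdahl(p, n):
    """Speedup on n cores given parallelizable fraction p."""
    return 1.0 / ((1.0 - p) + p / n)

# Solving amdahl(p, 2) == 1.99 for p gives p = 2 * (1 - 1/1.99) ~= 0.995
p = 2.0 * (1.0 - 1.0 / 1.99)
predicted_16 = amdahl(p, 16)   # ~14.9, vs. the measured 13.88
```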

18. Making Career Decisions--A Sequential Elimination Approach.

Science.gov (United States)

Gati, Itamar

1986-01-01

Presents a model for career decision making based on the sequential elimination of occupational alternatives, an adaptation for career decisions of Tversky's (1972) elimination-by-aspects theory of choice. The expected utility approach is reviewed as a representative compensatory model for career decisions. Advantages, disadvantages, and…

19. Biohydrogen production from beet molasses by sequential dark and photofermentation

NARCIS (Netherlands)

Özgür, E.; Mars, A.E.; Peksel, B.; Louwerse, A.; Yücel, M.; Gündüz, U.; Claassen, P.A.M.; Eroglu, I.

2010-01-01

Biological hydrogen production using renewable resources is a promising possibility to generate hydrogen in a sustainable way. In this study, a sequential dark and photofermentation has been employed for biohydrogen production using sugar beet molasses as a feedstock. An extreme thermophile

20. Influence of synchronous and sequential stimulation on muscle fatigue

NARCIS (Netherlands)

Thomsen, M.; Thomsen, M.; Veltink, Petrus H.

1997-01-01

In acute experiments the sciatic nerve of the rat is electrically stimulated to induce fatigue in the medial Gastrocnemius muscle. Fatigue tests are carried out using intermittent stimulation of different compartments (sequential) or a single compartment (synchronous) of the sciatic nerve. The

1. Sequential Computerized Mastery Tests--Three Simulation Studies

Science.gov (United States)

Wiberg, Marie

2006-01-01

A simulation study of a sequential computerized mastery test is carried out with items modeled with the 3 parameter logistic item response theory model. The examinees' responses are either identically distributed, not identically distributed, or not identically distributed together with estimation errors in the item characteristics. The…

2. A Relational Account of Call-by-Value Sequentiality

DEFF Research Database (Denmark)

Riecke, Jon Gary; Sandholm, Anders Bo

2002-01-01

We construct a model for FPC, a purely functional, sequential, call-by-value language. The model is built from partial continuous functions, in the style of Plotkin, further constrained to be uniform with respect to a class of logical relations. We prove that the model is fully abstract....

3. Sequential kidney scintiscanning before and after vascular reconstruction

International Nuclear Information System (INIS)

Siems, H.H.; Allenberg, J.R.; Hupp, T.; Clorius, J.H.

1985-01-01

In this follow-up study, sequential scintigraphy was performed on 20 selected patients up to 3.4 years after operation; the results are compared with the pre-operative examinations and with the surgical effect on the increased blood pressure. (orig./MG) [de

4. Corporate tax minimization and stock price reactions

OpenAIRE

Blaufus, Kay; Möhlmann, Axel; Schwäbe, Alexander

2016-01-01

Tax minimization strategies may lead to significant tax savings, which could, in turn, increase firm value. However, such strategies are also associated with significant costs, such as expected penalties and planning, agency, and reputation costs. The overall impact of firms' tax minimization strategies on firm value is, therefore, unclear. To investigate whether corporate tax minimization increases firm value, we analyze the stock price reaction to news concerning corporate tax avoidance or ...

5. Trial Sequential Analysis in systematic reviews with meta-analysis

Directory of Open Access Journals (Sweden)

Jørn Wetterslev

2017-03-01

Background: Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size, accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). Methods: We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. Results: The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentist approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis, as well as the diversity (D²) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in
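
The diversity-adjusted required information size can be sketched numerically. The snippet below uses the standard two-proportion sample-size formula inflated by 1/(1 − D²); the event rates, alpha, power, and diversity value are invented for illustration:

```python
# Hedged sketch of a diversity-adjusted required information size (RIS) for a
# meta-analysis comparing two proportions. Formula follows the common textbook
# two-group sample-size calculation; all inputs below are invented.
from statistics import NormalDist

def required_information_size(p_ctrl, p_exp, alpha=0.05, power=0.90,
                              diversity=0.0):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)           # desired power
    per_group = ((z_a + z_b) ** 2
                 * (p_ctrl * (1 - p_ctrl) + p_exp * (1 - p_exp))
                 / (p_ctrl - p_exp) ** 2)
    total = 2 * per_group
    return total / (1 - diversity)   # heterogeneity (D^2) inflates the RIS

# e.g. 20% vs 15% event rates, D^2 = 0.25 -> roughly 3,200 participants
ris = required_information_size(0.20, 0.15, diversity=0.25)
```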

6. Safety control and minimization of radioactive wastes

International Nuclear Information System (INIS)

Wang Jinming; Rong Feng; Li Jinyan; Wang Xin

2010-01-01

Compared with the developed countries, the safety control and minimization of radwastes in China are under-developed. Research on measures for the safety control and minimization of radwastes is very important for controlling radwaste safety, reducing treatment and disposal costs, and limiting environmental radiation hazards. This paper systematically discusses the safety control and minimization of radwastes produced in the nuclear fuel cycle, nuclear technology applications and the decommissioning of nuclear facilities, and provides measures and methods for the safety control and minimization of radwastes. (authors)

7. Managerial adjustment and its limits: sequential fault in comparative perspective

Directory of Open Access Journals (Sweden)

Flávio da Cunha Rezende

2008-01-01

This article focuses on explanations for sequential faults in administrative reform. It deals with the limits of managerial adjustment in an approach that attempts to connect theory and empirical data, articulating three levels of analysis. The first level presents comparative evidence of sequential fault within reforms in national governments through a set of indicators geared toward understanding changes in the role of the state. In light of analyses of a representative set of comparative studies on reform implementation, the second analytical level identifies four typical mechanisms present in explanations of managerial adjustment faults. In this way, we seek to configure an explanatory matrix for theories of sequential fault. Next we discuss the experience of management reform in the Brazilian context, paying special attention to one of the mechanisms that creates fault: the control dilemma. The major hypotheses that guide our article are that reforms lead to sequential fault and that there are at least four causal mechanisms that produce it: (a) transaction costs involved in producing reforms; (b) performance legacy; (c) predominance of fiscal adjustment; and (d) the control dilemma. These mechanisms act separately or in concert to decrease the chances of a transformation of State managerial patterns. The major evidence analyzed in this article lends consistency to the general argument that reforms have failed in their attempts to reduce public expenses, alter patterns of resource allocation, reduce the labor force and change the role of the State. Our major conclusion is that reforms fail sequentially and that managerial adjustment displays considerable limitations, particularly those of a political nature.

8. Research on parallel algorithm for sequential pattern mining

Science.gov (United States)

Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

2008-03-01

Sequential pattern mining is the mining of frequent sequences related to time or other orders from a sequence database. Its initial motivation was to discover the laws of customer purchasing over a time period by finding frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to business databases; it has extended to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data of sequential pattern mining have the following characteristics: massive data volume and distributed storage. Most existing sequential pattern mining algorithms have not considered these characteristics together. According to the traits mentioned above and combining parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and utilizes the divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets applying the frequency concept and search-space partition theory, and the second task is to build frequent sequences using depth-first search at each processor. The algorithm only needs to access the database twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Based on a random data generation procedure and different information structures, this paper simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has excellent speedup and efficiency.
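
The divide-and-conquer idea, partitioning the search space by frequent one-item prefixes so that each partition can be mined depth-first on a separate processor, can be illustrated with a toy miner (my sketch, not the SPP implementation):

```python
# Toy frequent-sequence miner: each frequent single item seeds an independent
# prefix partition, grown depth-first; the partitions are embarrassingly
# parallel. Database and minimum support are invented.
from collections import Counter

def supports(db, seq):
    """Count sequences in db containing seq as a (non-contiguous) subsequence."""
    def contains(s):
        it = iter(s)
        return all(item in it for item in seq)   # 'in' consumes the iterator
    return sum(contains(s) for s in db)

def mine_partition(db, prefix, items, minsup):
    """Depth-first growth of one prefix partition."""
    found = []
    for item in items:
        cand = prefix + [item]
        if supports(db, cand) >= minsup:
            found.append(cand)
            found.extend(mine_partition(db, cand, items, minsup))
    return found

db = [['a', 'b', 'c'], ['a', 'c'], ['a', 'b', 'c'], ['b', 'c']]
minsup = 2
freq = sorted(i for i, c in Counter(x for s in db for x in set(s)).items()
              if c >= minsup)
# each frequent item seeds an independent, parallelizable partition
patterns = [p for it in freq
            for p in [[it]] + mine_partition(db, [it], freq, minsup)]
```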

9. Entropy generation minimization: A practical approach for performance evaluation of temperature cascaded co-generation plants

KAUST Repository

Myat, Aung; Thu, Kyaw; Kim, Youngdeuk; Saha, Bidyut Baran; Ng, K. C.

2012-01-01

We present a practical tool that employs the entropy generation minimization (EGM) approach for an in-depth performance evaluation of a co-generation plant with a temperature-cascaded concept. The co-generation plant produces useful effects sequentially, i.e., (i) electricity from the micro-turbines, (ii) low-pressure steam at 250 °C or about 8-10 bar, (iii) cooling capacity of 4 refrigeration tons (RTons) and (iv) dehumidification of outdoor air for air-conditioned space. The main objective is to determine the most efficient configuration for producing power and heat. We employed EGM, which seeks to minimize the dissipative losses and maximize the cycle efficiency of the individual thermally activated systems. The minimization of dissipative losses is performed in two steps, namely, (i) adjusting heat source temperatures for the heat-fired cycles and (ii) using a Genetic Algorithm (GA) to explore the sensitivity of heat transfer areas, flow rates of working fluids, and inlet temperatures of heat sources and coolant over the anticipated range of operation to achieve maximum efficiency. With EGM equipped with GA, we verified that local minimization of entropy generation at each of the heat-activated processes leads to the maximum efficiency of the system. © 2012.

10. Entropy generation minimization: A practical approach for performance evaluation of temperature cascaded co-generation plants

KAUST Repository

Myat, Aung

2012-10-01

We present a practical tool that employs the entropy generation minimization (EGM) approach for an in-depth performance evaluation of a co-generation plant with a temperature-cascaded concept. The co-generation plant produces useful effects sequentially, i.e., (i) electricity from the micro-turbines, (ii) low-pressure steam at 250 °C or about 8-10 bar, (iii) cooling capacity of 4 refrigeration tons (RTons) and (iv) dehumidification of outdoor air for air-conditioned space. The main objective is to determine the most efficient configuration for producing power and heat. We employed EGM, which seeks to minimize the dissipative losses and maximize the cycle efficiency of the individual thermally activated systems. The minimization of dissipative losses is performed in two steps, namely, (i) adjusting heat source temperatures for the heat-fired cycles and (ii) using a Genetic Algorithm (GA) to explore the sensitivity of heat transfer areas, flow rates of working fluids, and inlet temperatures of heat sources and coolant over the anticipated range of operation to achieve maximum efficiency. With EGM equipped with GA, we verified that local minimization of entropy generation at each of the heat-activated processes leads to the maximum efficiency of the system. © 2012.

11. Minimizing opportunity costs to aquatic connectivity restoration while controlling an invasive species.

Science.gov (United States)

Milt, Austin W; Diebel, Matthew W; Doran, Patrick J; Ferris, Michael C; Herbert, Matthew; Khoury, Mary L; Moody, Allison T; Neeson, Thomas M; Ross, Jared; Treska, Ted; O'Hanley, Jesse R; Walter, Lisa; Wangen, Steven R; Yacobson, Eugene; McIntyre, Peter B

2018-03-08

Controlling invasive species is critical for conservation but can have unintended consequences for native species and divert resources away from other efforts. This dilemma occurs on a grand scale in the North American Great Lakes, where dams and culverts block tributary access to habitat of desirable fish species and are a lynchpin of long-standing efforts to limit ecological damage inflicted by the invasive, parasitic sea lamprey (Petromyzon marinus). Habitat restoration and sea-lamprey control create conflicting goals for managing aging infrastructure. We used optimization to minimize opportunity costs of habitat gains for 37 desirable migratory fishes that arose from restricting sea lamprey access (0-25% increase) when selecting barriers for removal under a limited budget (US$1-105 million). Imposing limits on sea lamprey habitat reduced gains in tributary access for desirable species by 15-50% relative to an unconstrained scenario. Additional investment to offset the effect of limiting sea-lamprey access resulted in high opportunity costs for 30 of 37 species (e.g., an additional US$20-80 million for lake sturgeon [Acipenser fulvescens]) and often required ≥5% increase in sea-lamprey access to identify barrier-removal solutions adhering to the budget and limiting access. Narrowly distributed species exhibited the highest opportunity costs but benefited more at less cost when small increases in sea-lamprey access were allowed. Our results illustrate the value of optimization in limiting opportunity costs when balancing invasion control against restoration benefits for diverse desirable species. Such trade-off analyses are essential to the restoration of connectivity within fragmented rivers without unleashing invaders. © 2018 Society for Conservation Biology.
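
The underlying trade-off can be caricatured as a constrained selection problem. The brute-force toy below (my construction with invented numbers, not the authors' optimization model) picks barrier removals that maximize habitat gain under a budget and a cap on added sea-lamprey access:

```python
# Toy barrier-removal selection: maximize habitat gain subject to a budget and
# a limit on extra sea-lamprey access. Barrier data are invented; a real
# instance would use integer programming rather than brute force.
from itertools import combinations

barriers = [  # (name, cost $M, habitat gain, extra lamprey access %)
    ("B1", 3.0, 40.0, 2.0),
    ("B2", 5.0, 65.0, 9.0),
    ("B3", 2.0, 25.0, 0.0),
    ("B4", 4.0, 50.0, 6.0),
]

def best_plan(budget, lamprey_cap):
    """Enumerate all removal subsets; keep the feasible one with most gain."""
    best, best_gain = (), -1.0
    for r in range(len(barriers) + 1):
        for plan in combinations(barriers, r):
            cost = sum(b[1] for b in plan)
            gain = sum(b[2] for b in plan)
            access = sum(b[3] for b in plan)
            if cost <= budget and access <= lamprey_cap and gain > best_gain:
                best, best_gain = plan, gain
    return [b[0] for b in best], best_gain
```

Tightening the lamprey cap forces the optimizer to forgo high-gain removals, which is exactly the opportunity cost the study quantifies.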

12. Minimizing electrode contamination in an electrochemical cell

Science.gov (United States)

Kim, Yu Seung; Zelenay, Piotr; Johnston, Christina

2014-12-09

An electrochemical cell assembly that is expected to prevent or at least minimize electrode contamination includes one or more getters that trap a component or components leached from a first electrode and prevents or at least minimizes them from contaminating a second electrode.

13. Matthew Arnold and Minimal Competency Testing.

Science.gov (United States)

Tuman, Myron C.

1979-01-01

Presents arguments by Robert Lowe and Matthew Arnold on the 19th century British "Payment by Results" Plan, whereby schools received funds for students who passed minimal competency tests. Emphasizes that the Victorian experience produced acrimonious teachers with low morale and encourages contemporary minimal testing advocates not to…

14. Minimally processed fruit salad enriched with Lactobacillus ...

African Journals Online (AJOL)

2015-06-17

Minimal processing promotes browning of some vegetal tissues due to cell membrane disruption, which results in the release of oxidative enzymes. This study evaluated the efficiency of citric acid, ascorbic acid, sodium metabisulfite and L-cysteine hydrochloride to retard enzymatic browning of minimally.

15. The minimal manual: is less really more?

NARCIS (Netherlands)

Lazonder, Adrianus W.; van der Meij, Hans

1993-01-01

Carroll, Smith-Kerker, Ford and Mazur-Rimetz (The minimal manual, Human-Computer Interaction , 3, 123-153, 1987) have introduced the minimal manual as an alternative to standard self-instruction manuals. While their research indicates strong gains, only a few attempts have been made to validate

16. Y-12 Plant waste minimization strategy

International Nuclear Information System (INIS)

Kane, M.A.

1987-01-01

The 1984 Amendments to the Resource Conservation and Recovery Act (RCRA) mandate that waste minimization be a major element of hazardous waste management. In response to this mandate and the increasing costs for waste treatment, storage, and disposal, the Oak Ridge Y-12 Plant developed a waste minimization program to encompass all types of wastes. Thus, waste minimization has become an integral part of the overall waste management program. Unlike traditional approaches, waste minimization focuses on controlling waste at the beginning of production instead of the end. This approach includes: (1) substituting nonhazardous process materials for hazardous ones, (2) recycling or reusing waste effluents, (3) segregating nonhazardous waste from hazardous and radioactive waste, and (4) modifying processes to generate less waste or less toxic waste. An effective waste minimization program must provide the appropriate incentives for generators to reduce their waste and provide the necessary support mechanisms to identify opportunities for waste minimization. This presentation focuses on the Y-12 Plant's strategy to implement a comprehensive waste minimization program. This approach consists of four major program elements: (1) promotional campaign, (2) process evaluation for waste minimization opportunities, (3) waste generation tracking system, and (4) information exchange network. The presentation also examines some of the accomplishments of the program and issues which need to be resolved

17. Making the Most of Minimalism in Music.

Science.gov (United States)

Geiersbach, Frederick J.

1998-01-01

Describes the minimalist movement in music. Discusses generations of minimalist musicians and, in general, the minimalist approach. Considers various ways that minimalist strategies can be integrated into the music classroom focusing on (1) minimalism and (2) student-centered composition and principles of minimalism for use with elementary band…

18. The relative volume growth of minimal submanifolds

DEFF Research Database (Denmark)

Markvorsen, Steen; Palmer, V.

2002-01-01

The volume growth of certain well-defined subsets of minimal submanifolds in Riemannian spaces is compared with the volume growth of balls and spheres in space forms of constant curvature.

19. Specialized minimal PDFs for optimized LHC calculations

NARCIS (Netherlands)

Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Rojo, Juan

2016-01-01

We present a methodology for the construction of parton distribution functions (PDFs) designed to provide an accurate representation of PDF uncertainties for specific processes or classes of processes with a minimal number of PDF error sets: specialized minimal PDF sets, or SM-PDFs. We construct

20. Minimally processed fruit salad enriched with Lactobacillus ...

African Journals Online (AJOL)

Minimal processing promotes browning of some vegetal tissues due to cell membrane disruption, which results in the release of oxidative enzymes. This study evaluated the efficiency of citric acid, ascorbic acid, sodium metabisulfite and L-cysteine hydrochloride to retard enzymatic browning of minimally processed fruit ...

1. Results of simultaneous and sequential pediatric liver and kidney transplantation.

Science.gov (United States)

Rogers, J; Bueno, J; Shapiro, R; Scantlebury, V; Mazariegos, G; Fung, J; Reyes, J

2001-11-27

The indications for simultaneous and sequential pediatric liver (LTx) and kidney (KTx) transplantation have not been well defined. We herein report the results of our experience with these procedures in children with end-stage liver disease and/or subsequent end-stage renal disease. Between 1984 and 1995, 12 LTx recipients received 15 kidney allografts. Eight simultaneous and seven sequential LTx/KTx were performed. There were six males and six females, with a mean age of 10.9 years (1.5-23.7). One of the eight simultaneous LTx/KTx was part of a multivisceral allograft. Five KTx were performed at varied intervals after successful LTx, one KTx was performed after a previous simultaneous LTx/KTx, and one KTx was performed after previous sequential LTx/KTx. Immunosuppression was with tacrolimus or cyclosporine and steroids. Indications for LTx were oxalosis (four), congenital hepatic fibrosis (two), cystinosis (one), polycystic liver disease (one), A-1-A deficiency (one), Total Parenteral Nutrition (TPN)-related (one), cryptogenic cirrhosis (one), and hepatoblastoma (one). Indications for KTx were oxalosis (four), drug-induced (four), polycystic kidney disease (three), cystinosis (one), and glomerulonephritis (one). With a mean follow-up of 58 months (0.9-130), the overall patient survival rate was 58% (7/12). One-year and 5-year actuarial patient survival rates were 66% and 58%, respectively. Patient survival rates at 1 year after KTx according to United Network for Organ Sharing (liver) status were 100% for status 3, 50% for status 2, and 0% for status 1. The overall renal allograft survival rate was 47%. Actuarial renal allograft survival rates were 53% at 1 and 5 years. The overall hepatic allograft survival rate was equivalent to the overall patient survival rate (58%). Six of seven surviving patients have normal renal allograft function, and one patient has moderate chronic allograft nephropathy. All surviving patients have normal hepatic allograft function. Six

2. Retrieval of sea surface velocities using sequential ocean colour monitor (OCM) data

Digital Repository Service at National Institute of Oceanography (India)

A method for retrieving sea surface velocities has been developed. The method is based on matching suspended-sediment dispersion patterns in two sequential time-lapsed images. The pattern matching is performed on an atmospherically corrected and geo-referenced sequential pair of images by Maximum...

3. Simultaneous minimizing monitor units and number of segments without leaf end abutment for segmental intensity modulated radiation therapy delivery

International Nuclear Information System (INIS)

Li Kaile; Dai Jianrong; Ma Lijun

2004-01-01

Leaf-end abutment is seldom studied when delivering segmental intensity modulated radiation therapy (IMRT) fields. We developed an efficient leaf sequencing method to eliminate leaf-end abutment for segmental IMRT delivery. Our method uses simple matrix and sorting operations to obtain a solution that simultaneously minimizes the total monitor units and the number of segments without leaf-end abutment between segments. We implemented and demonstrated our method for multiple clinical cases and compared the results with those of an exhaustive search method. We found that our solution without leaf-end abutment was equivalent to the unconstrained solutions in terms of minimum total monitor units and minimum number of leaf segments. We conclude that leaf-end abutment fields can be avoided without affecting the efficiency of segmental IMRT delivery. The major strength of our method is its simplicity and high computing speed. This potentially provides a useful means for generating segmental IMRT fields that require high spatial resolution or complex intensity distributions

4. Intimately coupling of photolysis accelerates nitrobenzene biodegradation, but sequential coupling slows biodegradation

Energy Technology Data Exchange (ETDEWEB)

Yang, Lihui [Department of Environmental Science and Engineering, College of Life and Environmental Science, Shanghai Normal University, Shanghai 200234 (China); Zhang, Yongming, E-mail: zhym@shnu.edu.cn [Department of Environmental Science and Engineering, College of Life and Environmental Science, Shanghai Normal University, Shanghai 200234 (China); Bai, Qi; Yan, Ning; Xu, Hua [Department of Environmental Science and Engineering, College of Life and Environmental Science, Shanghai Normal University, Shanghai 200234 (China); Rittmann, Bruce E. [Swette Center for Environmental Biotechnology, Biodesign Institute, Arizona State University, Tempe, AZ 85287-5701 (United States)

2015-04-28

5. Intimately coupling of photolysis accelerates nitrobenzene biodegradation, but sequential coupling slows biodegradation

International Nuclear Information System (INIS)

Yang, Lihui; Zhang, Yongming; Bai, Qi; Yan, Ning; Xu, Hua; Rittmann, Bruce E.

2015-01-01

6. Simultaneous determination of rutin and ascorbic acid in a sequential injection lab-at-valve system.

Science.gov (United States)

Al-Shwaiyat, Mohammed Khair E A; Miekh, Yuliia V; Denisenko, Tatyana A; Vishnikin, Andriy B; Andruch, Vasil; Bazel, Yaroslav R

2018-02-05

A green, simple, accurate and highly sensitive sequential injection lab-at-valve procedure has been developed for the simultaneous determination of ascorbic acid (Asc) and rutin using 18-molybdo-2-phosphate Wells-Dawson heteropoly anion (18-MPA). The method is based on the dependence of the reaction rate between 18-MPA and reducing agents on the solution pH. Only Asc is capable of interacting with 18-MPA at pH 4.7, while at pH 7.4 the reaction proceeds with both Asc and rutin simultaneously. In order to improve the precision and sensitivity of the analysis, minimize reagent consumption and remove the Schlieren effect, the manifold for the sequential injection analysis was supplemented with an external reaction chamber, and the reaction mixture was segmented. Reduction of 18-MPA by reducing agents forms one- and two-electron heteropoly blues, and the fraction of one-electron heteropoly blue increases at low concentrations of the reducer. Measuring the absorbance at a wavelength corresponding to the isosbestic point allows strictly linear calibration graphs to be obtained. The calibration curves were linear in the concentration ranges of 0.3-24 mg L-1 and 0.2-14 mg L-1, with detection limits of 0.13 mg L-1 and 0.09 mg L-1 for rutin and Asc, respectively. The determination of rutin was possible in the presence of up to a 20-fold molar excess of Asc. The method was applied to the determination of Asc and rutin in ascorutin tablets with acceptable accuracy and precision (1-2%). Copyright © 2017 Elsevier B.V. All rights reserved.
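
The linear-calibration step at the isosbestic point can be sketched as a least-squares fit; the readings below and the 3.3·s/slope detection-limit convention are illustrative assumptions, not values taken from this record:

```python
import numpy as np

def fit_calibration(conc, absorbance):
    """Least-squares calibration line with a detection-limit estimate.

    LOD is taken as 3.3 * s / slope, a common convention (an assumption
    for illustration; the paper's own LOD procedure is not stated here).
    """
    slope, intercept = np.polyfit(conc, absorbance, 1)
    residuals = absorbance - (slope * conc + intercept)
    s = residuals.std(ddof=2)          # standard error about the fitted line
    lod = 3.3 * s / slope
    return slope, intercept, lod

# Hypothetical absorbance readings vs. rutin concentration (mg/L)
conc = np.array([0.5, 2.0, 5.0, 10.0, 20.0])
absorbance = np.array([0.021, 0.079, 0.198, 0.402, 0.801])
slope, intercept, lod = fit_calibration(conc, absorbance)
```

A strictly linear response, as the isosbestic-point measurement provides, is what makes this single-line fit valid over the whole range.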

7. Sequential Foreign Investments, Regional Technology Platforms and the Evolution of Japanese Multinationals in East Asia

OpenAIRE

Song, Jaeyong

2001-01-01

In this paper, we investigate the firm-level mechanisms that underlie the sequential foreign direct investment (FDI) decisions of multinational corporations (MNCs). To understand inter-firm heterogeneity in the sequential FDI behaviors of MNCs, we develop a firm capability-based model of sequential FDI decisions. In the setting of Japanese electronics MNCs in East Asia, we empirically examine how prior investments in firm capabilities affect sequential investments into existing produ...

8. Programme for test generation for combinatorial and sequential systems

International Nuclear Information System (INIS)

Tran Huy Hoan

1973-01-01

This research thesis reports on the computer-assisted search for tests aimed at failure detection in combinatorial and sequential logic circuits. Because the aim is to handle complex circuits with many modules, such as those found in large-scale integrated (LSI) circuits, the author used propagation paths. He reports the development of a method that is valid for combinatorial systems and for several sequential circuits comprising elementary logic modules and JK and RS flip-flops. The method was implemented on an IBM 360/91 computer in PL/1. The memory space used is limited and adjustable with respect to circuit size, and computing time is short compared with that of other programmes. The solution is practical and efficient for failure testing and localisation

9. Moving mesh generation with a sequential approach for solving PDEs

DEFF Research Database (Denmark)

In moving mesh methods, physical PDEs and a mesh equation derived from equidistribution of an error metric (the so-called monitor function) are solved simultaneously, and mesh points are dynamically concentrated in steep regions (Lim et al., 2001). However, the simultaneous solution procedure ... a simple and robust moving mesh algorithm in one or multiple dimensions. In this study, we propose a sequential solution procedure with two separate parts: a prediction step to obtain an approximate solution at the next time level (integration of the physical PDEs) and a regridding step at the next time level (mesh generation and solution interpolation). Convection terms, which appear in the physical PDEs and in the mesh equation, are discretized by a WENO (Weighted Essentially Non-Oscillatory) scheme in conservative form. This sequential approach keeps the advantages of robustness and simplicity of the static ...
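
The equidistribution idea behind the mesh equation can be sketched in one dimension: place the nodes so that each cell carries an equal share of the integrated monitor function. The monitor choice and all names below are illustrative assumptions, not the authors' scheme:

```python
import numpy as np

def equidistribute(x, monitor, n_nodes):
    """1-D equidistribution sketch: each mesh cell receives an equal
    share of the integrated monitor function."""
    m = monitor(x)
    # cumulative "error mass" along a fine fixed background grid (trapezoid rule)
    cum = np.concatenate([[0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, cum[-1], n_nodes)
    return np.interp(targets, cum, x)   # invert the cumulative map

# Arc-length-type monitor concentrating nodes near a steep front at x = 0.5
x_bg = np.linspace(0.0, 1.0, 1001)
monitor = lambda x: np.sqrt(1.0 + (50.0 * np.cosh(50.0 * (x - 0.5)) ** -2) ** 2)
mesh = equidistribute(x_bg, monitor, 21)
```

The resulting cells shrink sharply around x = 0.5, which is exactly the dynamic concentration on steep regions that the record describes.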

10. Competence and Praxis: Sequential Analysis in German Sociology

Directory of Open Access Journals (Sweden)

Kai-Olaf Maiwald

2005-09-01

Full Text Available In German social research nowadays most qualitative methodologies employ sequential analysis. This article explores the similarities and differences in conceptualising and practising this method. First, the working consensus, conceived as a shared set of methodological assumptions, is explicated. Second, with regard to three major paradigms of qualitative research in Germany—conversation analysis, objective hermeneutics, and hermeneutic sociology of knowledge—the different ways of doing sequential analysis are investigated to locate the points of departure from a working consensus. It is argued that differences arise from different case-perspectives and, relative to that, from different modes of introducing general knowledge, i.e. knowledge that is not specific for the analysed case, into the interpretation. An important notion to emerge from the comparison is the distinction between competence and praxis. URN: urn:nbn:de:0114-fqs0503310

11. Bidding in sequential electricity markets: The Nordic case

DEFF Research Database (Denmark)

Boomsma, Trine Krogh; Juul, Nina; Fleten, Stein-Erik

2014-01-01

For electricity market participants trading in sequential markets with differences in price levels and risk exposure, coordinated bidding is highly relevant. We consider a Nordic power producer who engages in the day-ahead spot market and the near real-time balancing market. In both markets, clearing prices and dispatched volumes are unknown at the time of bidding. However, in the balancing market, the agent faces an additional risk of not being dispatched. Taking into account the sequential clearing of these markets and the gradual realization of market prices, we formulate the bidding problem as a multi-stage stochastic program. We investigate whether higher risk exposure can explain the hesitation, often observed in practice, to bid into the balancing market, even in cases of higher expected price levels. Furthermore, we quantify the gain from coordinated bidding, and by deriving ...

12. POLYP: an automatic device for drawing sequential samples of gas

Energy Technology Data Exchange (ETDEWEB)

Gaglione, P; Koechler, C; Stanchi, L

1974-12-01

POLYP is an automatic device consisting of electronic equipment which sequentially drives 8 small pumps for drawing samples of gas. The electronic circuit is driven by a quartz oscillator and allows for the preselection of a waiting time in such a manner that a set of similar instruments placed at suitable positions in the open country will start simultaneously. At the same time the first pump of each instrument will inflate a plastic bag for a preset time. The other seven pumps will then inflate the other bags sequentially. The instrument is powered by rechargeable batteries and realized with C-MOS integrated circuits for nearly negligible power consumption. As it is intended for field operation, it is waterproof.

13. Variation among heritage speakers: Sequential vs. simultaneous bilinguals

Directory of Open Access Journals (Sweden)

Teresa Lee

2013-08-01

Full Text Available This study examines the differences in the grammatical knowledge of two types of heritage speakers of Korean. Early simultaneous bilinguals are exposed to both English and the heritage language from birth, whereas early sequential bilinguals are exposed to the heritage language first and then to English upon schooling. A listening comprehension task involving relative clauses was conducted with 51 beginning-level Korean heritage speakers. The results showed that the early sequential bilinguals exhibited much more accurate knowledge than the early simultaneous bilinguals, who lacked rudimentary knowledge of Korean relative clauses. Drawing on the findings of adult and child Korean L1 data on the acquisition of relative clauses, the performance of each group is discussed with respect to attrition and incomplete acquisition of the heritage language.

14. Comparisons of memory for nonverbal auditory and visual sequential stimuli.

Science.gov (United States)

McFarland, D J; Cacace, A T

1995-01-01

Properties of auditory and visual sensory memory were compared by examining subjects' recognition performance of randomly generated binary auditory sequential frequency patterns and binary visual sequential color patterns within a forced-choice paradigm. Experiment 1 demonstrated serial-position effects in auditory and visual modalities consisting of both primacy and recency effects. Experiment 2 found that retention of auditory and visual information was remarkably similar when assessed across a 10s interval. Experiments 3 and 4, taken together, showed that the recency effect in sensory memory is affected more by the type of response required (recognition vs. reproduction) than by the sensory modality employed. These studies suggest that auditory and visual sensory memory stores for nonverbal stimuli share similar properties with respect to serial-position effects and persistence over time.

15. Sequential function approximation on arbitrarily distributed point sets

Science.gov (United States)

Wu, Kailiang; Xiu, Dongbin

2018-02-01

We present a randomized iterative method for sequentially approximating an unknown function on an arbitrary point set. The method is based on a recently developed sequential approximation (SA) method, which approximates a target function using one data point at each step and avoids matrix operations. The focus of this paper is on data sets with highly irregular distributions of points. We present a nearest neighbor replacement (NNR) algorithm, which allows one to sample irregular data sets in a near-optimal manner. We provide mathematical justification and error estimates for the NNR algorithm. Extensive numerical examples are also presented to demonstrate that the NNR algorithm delivers satisfactory convergence for the SA method on data sets with high irregularity in their point distributions.
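
The one-point-at-a-time, matrix-free flavor of such sequential approximation can be illustrated with a Kaczmarz-style sketch; this is a generic stand-in under stated assumptions, not the authors' SA method or their NNR sampling:

```python
import numpy as np

def sequential_fit(points, values, basis, n_sweeps=200, seed=0):
    """One-point-at-a-time least-squares fit (Kaczmarz-style sketch).

    Each step uses a single data point to apply a rank-one correction to
    the coefficient vector -- no matrix factorizations, mirroring the
    matrix-free spirit of sequential approximation.
    """
    rng = np.random.default_rng(seed)
    c = np.zeros(len(basis))
    idx = np.arange(len(points))
    for _ in range(n_sweeps):
        for i in rng.permutation(idx):          # randomized pass over the data
            phi = np.array([b(points[i]) for b in basis])
            residual = values[i] - phi @ c
            c += residual * phi / (phi @ phi)   # project onto this data point
    return c

# Recover f(x) = 1 + 2x on irregularly spaced points with basis {1, x}
basis = [lambda x: 1.0, lambda x: x]
xs = np.array([0.1, 0.35, 0.4, 0.42, 0.9])      # highly uneven spacing
ys = 1.0 + 2.0 * xs
coeffs = sequential_fit(xs, ys, basis)
```

For a consistent system this iteration converges to the exact coefficients; irregular point distributions slow it down, which is the difficulty the NNR algorithm in the record addresses.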

16. Sequentially generated states for the study of two dimensional systems

Energy Technology Data Exchange (ETDEWEB)

Banuls, Mari-Carmen; Cirac, J. Ignacio [Max-Planck-Institut fuer Quantenoptik, Garching (Germany); Perez-Garcia, David [Depto. Analisis Matematico, Universidad Complutense de Madrid (Spain); Wolf, Michael M. [Niels Bohr Institut, Copenhagen (Denmark); Verstraete, Frank [Fakultaet fuer Physik, Universitaet Wien (Austria)

2009-07-01

The family of Matrix Product States represents a powerful tool for the study of physical one-dimensional quantum many-body systems, such as spin chains. Moreover, Matrix Product States can be defined as the family of quantum states that can be sequentially generated in a one-dimensional system. We have introduced a new family of states which extends this sequential definition to two dimensions. As in Matrix Product States, expectation values of few-body observables can be efficiently evaluated and, for the case of translationally invariant systems, the correlation functions decay exponentially with the distance. We show that such states are a subclass of Projected Entangled Pair States and investigate their suitability for approximating the ground states of local Hamiltonians.

17. POLYP: an automatic device for drawing sequential samples of gas

International Nuclear Information System (INIS)

Gaglione, P.; Koechler, C.; Stanchi, L.

1974-12-01

POLYP is an automatic device consisting of electronic equipment which sequentially drives 8 small pumps for drawing samples of gas. The electronic circuit is driven by a quartz oscillator and allows for the preselection of a waiting time in such a manner that a set of similar instruments placed at suitable positions in the open country will start simultaneously. At the same time the first pump of each instrument will inflate a plastic bag for a preset time. Thereafter the other seven pumps will inflate the other bags sequentially. The instrument is powered by rechargeable batteries and realized with C-MOS integrated circuits for nearly negligible power consumption. As it is intended for field operation, it is waterproof

18. Safeguarding a Lunar Rover with Wald's Sequential Probability Ratio Test

Science.gov (United States)

Furlong, Michael; Dille, Michael; Wong, Uland; Nefian, Ara

2016-01-01

The virtual bumper is a safeguarding mechanism for autonomous and remotely operated robots. In this paper we take a new approach to the virtual bumper system by using an old statistical test. Using a modified version of Wald's sequential probability ratio test, we demonstrate that we can reduce the number of false positives reported by the virtual bumper, thereby saving valuable mission time. We use the sequential probability ratio to control vehicle speed in the presence of possible obstacles in order to increase certainty about whether or not obstacles are present. Our new algorithm reduces the chances of collision by approximately 98% relative to traditional virtual bumper safeguarding without speed control.
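
Wald's sequential probability ratio test itself can be sketched generically; the obstacle-detection specifics below (likelihood models, thresholds, sample values) are illustrative assumptions, not the paper's implementation:

```python
import math

def sprt(samples, pdf0, pdf1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test.

    Accumulates the log-likelihood ratio of H1 over H0 one observation
    at a time and stops as soon as a decision boundary is crossed.
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log(pdf1(x) / pdf0(x))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

# Hypothetical sensor model: Gaussian readings with unit variance,
# H0: mean 0 ("clear") vs H1: mean 1 ("obstacle").
def gauss(mu):
    return lambda x: math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

decision, n_used = sprt([0.9, 1.1, 1.2, 0.8, 1.0, 1.3], gauss(0.0), gauss(1.0))
# decision == "H1" after 6 observations
```

Slowing the vehicle buys the test more observations before a decision is forced, which is how sequential testing and speed control combine in the record's approach.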

19. Plasmon-driven sequential chemical reactions in an aqueous environment.

Science.gov (United States)

Zhang, Xin; Wang, Peijie; Zhang, Zhenglong; Fang, Yurui; Sun, Mengtao

2014-06-24

Plasmon-driven sequential chemical reactions were successfully realized in an aqueous environment. In an electrochemical environment, sequential chemical reactions were driven by an applied potential and laser irradiation. Furthermore, the rate of the chemical reaction was controlled via pH, which provides indirect evidence that the hot electrons generated from plasmon decay play an important role in plasmon-driven chemical reactions. In acidic conditions, the hot electrons were captured by the abundant H(+) in the aqueous environment, which prevented the chemical reaction. The developed plasmon-driven chemical reactions in an aqueous environment will significantly expand the applications of plasmon chemistry and may provide a promising avenue for green chemistry using plasmon catalysis in aqueous environments under irradiation by sunlight.

20. Sequential segmental classification of feline congenital heart disease.

Science.gov (United States)

Scansen, Brian A; Schneider, Matthias; Bonagura, John D

2015-12-01

Feline congenital heart disease is less commonly encountered in veterinary medicine than acquired feline heart diseases such as cardiomyopathy. Understanding the wide spectrum of congenital cardiovascular disease demands a familiarity with a variety of lesions, occurring both in isolation and in combination, along with an appreciation of complex nomenclature and variable classification schemes. This review begins with an overview of congenital heart disease in the cat, including proposed etiologies and prevalence, examination approaches, and principles of therapy. Specific congenital defects are presented and organized by a sequential segmental classification with respect to their morphologic lesions. Highlights of diagnosis, treatment options, and prognosis are offered. It is hoped that this review will provide a framework for approaching congenital heart disease in the cat, and more broadly in other animal species based on the sequential segmental approach, which represents an adaptation of the common methodology used in children and adults with congenital heart disease. Copyright © 2015 Elsevier B.V. All rights reserved.