WorldWideScience

Sample records for sequential indicator simulation

  1. The use of sequential indicator simulation to characterize geostatistical uncertainty

    International Nuclear Information System (INIS)

    Hansen, K.M.

    1992-10-01

    Sequential indicator simulation (SIS) is a geostatistical technique designed to aid in the characterization of uncertainty about the structure or behavior of natural systems. This report discusses a simulation experiment designed to study the quality of uncertainty bounds generated using SIS. The results indicate that, while SIS may produce reasonable uncertainty bounds in many situations, factors like the number and location of available sample data, the quality of variogram models produced by the user, and the characteristics of the geologic region to be modeled can all have substantial effects on the accuracy and precision of estimated confidence limits. It is recommended that users of SIS conduct validation studies for the technique on their particular regions of interest before accepting the output uncertainty bounds.
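
    A minimal sketch of the core SIS loop may help make the technique above concrete. It assumes a single threshold, an exponential indicator covariance, simple kriging of the indicator with the global proportion as prior mean, and a plain nearest-neighbour search; these are illustrative choices, not the setup of the report, and production codes such as GSLIB's SISIM additionally handle multiple thresholds, search ellipsoids and full order-relation corrections.

        import numpy as np

        rng = np.random.default_rng(0)

        def icov(h, a=10.0):
            """Exponential indicator covariance with unit sill and practical range a."""
            return np.exp(-3.0 * h / a)

        def sis_single_threshold(grid_xy, data_xy, data_ind, p_global, a=10.0, nmax=8):
            """Simulate a binary indicator (1 = above threshold) at every grid node."""
            known_xy = [tuple(p) for p in data_xy]        # conditioning locations
            known_i = [float(v) for v in data_ind]        # conditioning indicators
            sim = np.empty(len(grid_xy))
            for idx in rng.permutation(len(grid_xy)):     # random simulation path
                x = np.asarray(grid_xy[idx], float)
                pts = np.asarray(known_xy)
                vals = np.asarray(known_i)
                d = np.linalg.norm(pts - x, axis=1)
                nb = np.argsort(d)[:nmax]                 # nearest conditioning points
                C = icov(np.linalg.norm(pts[nb, None] - pts[None, nb], axis=-1), a)
                c0 = icov(d[nb], a)
                w = np.linalg.solve(C + 1e-9 * np.eye(len(nb)), c0)   # simple kriging
                p = p_global + w @ (vals[nb] - p_global)  # local prob. of being above
                p = float(np.clip(p, 0.0, 1.0))           # order-relation correction
                sim[idx] = float(rng.random() < p)        # draw from the local ccdf
                known_xy.append(tuple(x))                 # simulated node becomes data
                known_i.append(sim[idx])
            return sim

        # Example: 20 x 20 grid conditioned on 30 scattered indicator data
        grid = np.array([(i, j) for i in range(20) for j in range(20)], float)
        obs_xy = rng.uniform(0, 20, size=(30, 2))
        obs_ind = (rng.random(30) < 0.3).astype(float)
        realization = sis_single_threshold(grid, obs_xy, obs_ind, p_global=0.3)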

  2. [Using sequential indicator simulation method to define risk areas of soil heavy metals in farmland].

    Science.gov (United States)

    Yang, Hao; Song, Ying Qiang; Hu, Yue Ming; Chen, Fei Xiang; Zhang, Rui

    2018-05-01

    The heavy metals in soil have serious impacts on safety, the ecological environment and human health because of their toxicity and accumulation. It is therefore necessary to efficiently identify risk areas of heavy metals in farmland soil, which is of great significance for environmental protection, pollution warning and farmland risk control. We collected 204 samples and analyzed the contents of seven heavy metals (Cu, Zn, Pb, Cd, Cr, As, Hg) in Zengcheng District of Guangzhou, China. To overcome problems with the data, including outliers and skewed distributions, and the smoothing effect of traditional kriging methods, we used the sequential indicator simulation method (SISIM) to map the spatial distribution of heavy metals, and combined it with the Hakanson index method to identify potential ecological risk areas of heavy metals in farmland. The results showed that: (1) With similar spatial prediction accuracy for soil heavy metals, SISIM reproduced local detail better than ordinary kriging at small scales. Compared with indicator kriging, SISIM had a lower error rate (4.9%-17.1%) in the uncertainty evaluation of heavy-metal risk identification. SISIM showed less smoothing and was more suitable for assessing the spatial uncertainty of soil heavy metals and for risk identification. (2) There was no pollution in Zengcheng's farmland. Moderate potential ecological risk was found in the southern part of the study area due to enterprise production, human activities, and river sediments. This study combined sequential indicator simulation with the Hakanson risk index method and effectively overcame the information loss from outliers and the smoothing effect of traditional kriging. It provides a new way to identify heavy-metal risk areas of farmland under uneven sampling.
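
    For readers unfamiliar with the Hakanson index used above, the sketch below shows the usual calculation: each metal's toxic-response factor is multiplied by the ratio of its measured concentration to a background concentration, and the products are summed into a risk index RI. The toxic-response factors are the values commonly cited from Hakanson (1980); the sample and background concentrations are placeholders, not the Zengcheng reference values used by the authors.

        # Toxic-response factors commonly attributed to Hakanson (1980)
        T_R = {"Cu": 5, "Zn": 1, "Pb": 5, "Cd": 30, "Cr": 2, "As": 10, "Hg": 40}

        def hakanson_ri(measured_mg_kg, background_mg_kg):
            """Return (per-metal ecological risk factors E_r, total risk index RI)."""
            er = {m: T_R[m] * measured_mg_kg[m] / background_mg_kg[m] for m in T_R}
            return er, sum(er.values())

        # Usage with made-up concentrations (mg/kg); background values are placeholders
        sample = {"Cu": 25, "Zn": 80, "Pb": 40, "Cd": 0.15, "Cr": 60, "As": 9, "Hg": 0.12}
        background = {"Cu": 17, "Zn": 47, "Pb": 36, "Cd": 0.06, "Cr": 50, "As": 9, "Hg": 0.08}
        er, ri = hakanson_ri(sample, background)
        # RI below about 150 is usually read as low potential ecological risk,
        # 150-300 as moderate, 300-600 as considerable, above 600 as high.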

  3. Using sequential indicator simulation to assess the uncertainty of delineating heavy-metal contaminated soils

    International Nuclear Information System (INIS)

    Juang, Kai-Wei; Chen, Yue-Shin; Lee, Dar-Yuan

    2004-01-01

    Mapping the spatial distribution of soil pollutants is essential for delineating contaminated areas. Currently, geostatistical interpolation, kriging, is increasingly used to estimate pollutant concentrations in soils. The kriging-based approach, indicator kriging (IK), may be used to model the uncertainty of mapping. However, a smoothing effect is usually produced when using kriging in pollutant mapping. The detailed spatial patterns of pollutants could, therefore, be lost. The local uncertainty of mapping pollutants derived by the IK technique is referred to as the conditional cumulative distribution function (ccdf) for one specific location (i.e. single-location uncertainty). The local uncertainty information obtained by IK is not sufficient, as the uncertainty of mapping at several locations simultaneously (i.e. multi-location uncertainty or spatial uncertainty) is required to assess the reliability of the delineation of contaminated areas. The simulation approach, sequential indicator simulation (SIS), which has the ability to model not only single, but also multi-location uncertainties, was used, in this study, to assess the uncertainty of the delineation of heavy metal contaminated soils. To illustrate this, a data set of Cu concentrations in soil from Taiwan was used. The results show that contour maps of Cu concentrations generated by the SIS realizations exhausted all the spatial patterns of Cu concentrations without the smoothing effect found when using the kriging method. Based on the SIS realizations, the local uncertainty of Cu concentrations at a specific location x' refers to the probability of the Cu concentration z(x') being higher than the defined threshold level of contamination (z_c). This can be written as Prob_SIS[z(x') > z_c], representing the probability of contamination. The probability map of Prob_SIS[z(x') > z_c] can then be used for delineating contaminated areas. In addition, the multi-location uncertainty of an area A
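
    The probability-of-contamination map described above is obtained directly from the stack of SIS realizations: at each location the probability is the fraction of realizations in which the simulated concentration exceeds z_c. The short sketch below illustrates this post-processing step; array shapes, the threshold value and the 0.5 probability cut-off are illustrative assumptions, not values from the paper.

        import numpy as np

        def exceedance_probability(realizations, z_c):
            """realizations: array of shape (n_real, ny, nx) of simulated concentrations."""
            return (realizations > z_c).mean(axis=0)   # Prob_SIS[z(x') > z_c] per node

        # Example with synthetic realizations on a 50 x 50 grid
        rng = np.random.default_rng(1)
        reals = rng.lognormal(mean=3.0, sigma=0.5, size=(200, 50, 50))
        prob_map = exceedance_probability(reals, z_c=120.0)   # e.g. 120 mg/kg Cu
        contaminated = prob_map > 0.5   # one possible delineation rule (probability cut-off)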

  4. Uncertainty assessment of PM2.5 contamination mapping using spatiotemporal sequential indicator simulations and multi-temporal monitoring data

    Science.gov (United States)

    Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang

    2016-04-01

    Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on a number of STSIS procedures, we assessed various types of mapping uncertainty, including single-location uncertainties over one day and over multiple days, and multi-location uncertainties over one day and over multiple days. A comparison of the STSIS technique with the SIS technique indicates that better performance was obtained with the STSIS method.
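
    The abstract does not specify which non-separable spatiotemporal semivariogram model was used. As an illustration only, the sketch below evaluates one widely used non-separable family, the product-sum covariance model, of the kind on which a spatiotemporal indicator kriging or simulation scheme can be built; all parameter values are invented.

        import numpy as np

        def c_exp(h, sill, a):
            """Exponential covariance with practical range a."""
            return sill * np.exp(-3.0 * h / a)

        def product_sum_cov(h_space, h_time, k1=0.5, k2=0.3, k3=0.2,
                            sill_s=1.0, a_s=80.0,    # spatial range, e.g. km
                            sill_t=1.0, a_t=5.0):    # temporal range, e.g. days
            """Non-separable product-sum model: C(h, u) = k1*Cs(h)*Ct(u) + k2*Cs(h) + k3*Ct(u)."""
            cs = c_exp(h_space, sill_s, a_s)
            ct = c_exp(h_time, sill_t, a_t)
            return k1 * cs * ct + k2 * cs + k3 * ct

        # Covariance between two PM2.5 monitoring records 30 km and 2 days apart
        c = product_sum_cov(h_space=30.0, h_time=2.0)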

  5. A path-level exact parallelization strategy for sequential simulation

    Science.gov (United States)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential Simulation is a well-known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, the Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or for classes defined by K different thresholds (continuous case). Similarly, the Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelizing the SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation of non-conflicting nodes. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedups in the best scenarios using 16 threads of execution on a single machine.
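
    The idea behind the path-level strategy can be illustrated as follows: once the simulation path is fixed, a node depends only on earlier-path nodes that fall inside its search neighbourhood, so nodes can be grouped into dependency levels whose members can be simulated concurrently while reproducing the serial result. The sketch below is a conceptual illustration with a radius-based neighbourhood, not the authors' re-arrangement algorithm.

        import numpy as np

        def conflict_free_levels(path_xy, search_radius):
            """Assign each node of an already-ordered simulation path to a parallel level."""
            path_xy = np.asarray(path_xy, float)
            levels = np.zeros(len(path_xy), dtype=int)
            for n in range(len(path_xy)):
                d = np.linalg.norm(path_xy[:n] - path_xy[n], axis=1)
                deps = np.where(d <= search_radius)[0]     # earlier nodes this node depends on
                levels[n] = levels[deps].max() + 1 if deps.size else 0
            return levels   # nodes sharing a level can be simulated in parallel

        # Example: a random path over a 30 x 30 grid with an 8-unit search radius
        rng = np.random.default_rng(2)
        grid = np.array([(i, j) for i in range(30) for j in range(30)], float)
        path = grid[rng.permutation(len(grid))]
        lvl = conflict_free_levels(path, search_radius=8.0)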

  6. Sequentially linear analysis for simulating brittle failure

    NARCIS (Netherlands)

    van de Graaf, A.V.

    2017-01-01

    The numerical simulation of brittle failure at structural level with nonlinear finite element analysis (NLFEA) remains a challenge due to robustness issues. We attribute these problems to the dimensions of real-world structures combined with softening behavior and negative tangent stiffness at

  7. Sequential Computerized Mastery Tests--Three Simulation Studies

    Science.gov (United States)

    Wiberg, Marie

    2006-01-01

    A simulation study of a sequential computerized mastery test is carried out with items modeled with the 3 parameter logistic item response theory model. The examinees' responses are either identically distributed, not identically distributed, or not identically distributed together with estimation errors in the item characteristics. The…

  8. Selective Sequential Zero-Base Budgeting Procedures Based on Total Factor Productivity Indicators

    OpenAIRE

    A. Ishikawa; E. F. Sudit

    1981-01-01

    The authors' purpose in this paper is to develop productivity-based sequential budgeting procedures designed to expedite identification of major problem areas in budgetary performance, as well as to reduce the costs associated with comprehensive zero-base analyses. The concept of total factor productivity is reviewed and its relations to ordinary and zero-based budgeting are discussed in detail. An outline for a selective sequential analysis based on monitoring of three key indicators of (a) i...

  9. Accelerating Sequential Gaussian Simulation with a constant path

    Science.gov (United States)

    Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus

    2018-03-01

    Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
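
    A minimal sketch of the constant-path idea described above: because the path, and therefore every node's neighbourhood configuration, is identical across realizations, the kriging weights and variances can be computed once and re-used, leaving only a weighted mean and a random draw per node per realization. The simple-kriging setup, covariance model and parameters below are illustrative assumptions, not the implementation of the paper.

        import numpy as np

        rng = np.random.default_rng(3)

        def cov(h, a=15.0):
            return np.exp(-3.0 * h / a)           # unit-sill exponential covariance

        def precompute(grid_xy, data_xy, nmax=12, a=15.0):
            """One pass along the constant path: store neighbours, weights, variances."""
            path = rng.permutation(len(grid_xy))
            locs = [tuple(p) for p in data_xy]    # conditioning + previously simulated
            plan = []
            for idx in path:
                x = np.asarray(grid_xy[idx], float)
                pts = np.asarray(locs)
                d = np.linalg.norm(pts - x, axis=1)
                nb = np.argsort(d)[:nmax]
                C = cov(np.linalg.norm(pts[nb, None] - pts[None, nb], axis=-1), a)
                c0 = cov(d[nb], a)
                w = np.linalg.solve(C + 1e-9 * np.eye(len(nb)), c0)
                var = max(1.0 - w @ c0, 1e-12)    # simple-kriging variance
                plan.append((idx, nb.copy(), w, var))
                locs.append(tuple(x))             # node joins the conditioning set
            return plan

        def realize(plan, data_val, n_grid):
            """Re-use the stored weights for one realization of a zero-mean field."""
            vals = list(np.asarray(data_val, float))
            sim = np.empty(n_grid)
            for idx, nb, w, var in plan:
                mean = w @ np.asarray(vals)[nb]
                sim[idx] = rng.normal(mean, np.sqrt(var))
                vals.append(sim[idx])
            return sim

        grid = np.array([(i, j) for i in range(25) for j in range(25)], float)
        data_xy = rng.uniform(0, 25, size=(20, 2))
        data_val = rng.normal(0, 1, size=20)
        plan = precompute(grid, data_xy)                                   # done once
        reals = [realize(plan, data_val, len(grid)) for _ in range(50)]    # cheap loop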

  10. Reliability assessment of restructured power systems using reliability network equivalent and pseudo-sequential simulation techniques

    International Nuclear Information System (INIS)

    Ding, Yi; Wang, Peng; Goel, Lalit; Billinton, Roy; Karki, Rajesh

    2007-01-01

    This paper presents a technique to evaluate the reliability of a restructured power system with a bilateral market. The proposed technique is based on the combination of the reliability network equivalent and pseudo-sequential simulation approaches. The reliability network equivalent techniques have been implemented in the Monte Carlo simulation procedure to reduce the computational burden of the analysis. Pseudo-sequential simulation has been used to increase the computational efficiency of the non-sequential simulation method and to model the chronological aspects of market trading and system operation. Multi-state Markov models for generation and transmission systems are proposed and implemented in the simulation. A new load-shedding scheme is proposed for conditions of generation inadequacy and network congestion to minimize load curtailment. The IEEE reliability test system (RTS) is used to illustrate the technique. (author)

  11. Sequential use of simulation and optimization in analysis and planning

    Science.gov (United States)

    Hans R. Zuuring; Jimmie D. Chew; J. Greg Jones

    2000-01-01

    Management activities are analyzed at landscape scales employing both simulation and optimization. SIMPPLLE, a stochastic simulation modeling system, is initially applied to assess the risks associated with a specific natural process occurring on the current landscape without management treatments, but with fire suppression. These simulation results are input into...

  12. The effect of sequential coupling on radial displacement accuracy in electromagnetic inside-bead forming: simulation and experimental analysis using Maxwell and ABAQUS software

    Energy Technology Data Exchange (ETDEWEB)

    Chaharmiri, Rasoul; Arezoodar, Alireza Fallahi [Amirkabir University, Tehran (Iran, Islamic Republic of)

    2016-05-15

    Electromagnetic forming (EMF) is a high strain rate forming technology which can effectively deform and shape highly electrically conductive materials at room temperature. In this study, the electromagnetic and mechanical parts of the process were simulated using Maxwell and ABAQUS software, respectively. To link the two packages, two coupling approaches, 'loose' and 'sequential', were applied. This paper aims to investigate how sequential coupling affects radial displacement accuracy, as an indicator of the tube's final shape, at various discharge voltages. The results indicated good agreement for both approaches at lower discharge voltages, with more accurate results for sequential coupling; at high discharge voltages, however, loose coupling produced a non-negligible overestimation of about 43%, which was reduced to a difference of only 8.2% by applying sequential coupling in the case studied. Therefore, in order to reach more accurate predictions, applying sequential coupling, especially at higher discharge voltages, is strongly recommended.

  13. Simulation Study of Real Time 3-D Synthetic Aperture Sequential Beamforming for Ultrasound Imaging

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian; Rasmussen, Morten Fischer; Stuart, Matthias Bo

    2014-01-01

    This paper presents a new beamforming method for real-time three-dimensional (3-D) ultrasound imaging using a 2-D matrix transducer. To obtain images with sufficient resolution and contrast, several thousand elements are needed. The proposed method reduces the required channel count from ... in the main system. The real-time imaging capability is achieved using a synthetic aperture beamforming technique, utilizing the transmit events to generate a set of virtual elements that in combination can generate an image. The two core capabilities in combination are named Synthetic Aperture Sequential Beamforming (SASB). Simulations are performed to evaluate the image quality of the presented method in comparison to parallel beamforming utilizing 16 receive beamformers. As indicators of image quality, the detail resolution and cystic resolution are determined for a set of scatterers at a depth of 90 mm...

  14. Key performance indicators for successful simulation projects

    OpenAIRE

    Jahangirian, M; Taylor, SJE; Young, T; Robinson, S

    2016-01-01

    There are many factors that may contribute to the successful delivery of a simulation project. To provide a structured approach to assessing the impact various factors have on project success, we propose a top-down framework whereby 15 Key Performance Indicators (KPI) are developed that represent the level of successfulness of simulation projects from various perspectives. They are linked to a set of Critical Success Factors (CSF) as reported in the simulation literature. A single measure cal...

  15. Simulation based sequential Monte Carlo methods for discretely observed Markov processes

    OpenAIRE

    Neal, Peter

    2014-01-01

    Parameter estimation for discretely observed Markov processes is a challenging problem. However, simulation of Markov processes is straightforward using the Gillespie algorithm. We exploit this ease of simulation to develop an effective sequential Monte Carlo (SMC) algorithm for obtaining samples from the posterior distribution of the parameters. In particular, we introduce two key innovations, coupled simulations, which allow us to study multiple parameter values on the basis of a single sim...

  16. Cross-sectional versus sequential quality indicators of risk factor management in patients with type 2 diabetes

    NARCIS (Netherlands)

    Voorham, Jaco; Denig, Petra; Wolffenbuttel, Bruce H. R.; Haaijer-Ruskamp, Flora M.

    Background: The fairness of quality assessment methods is under debate. Quality indicators incorporating the longitudinal nature of care have been advocated but their usefulness in comparison to more commonly used cross-sectional measures is not clear. Aims: To compare cross-sectional and sequential

  17. Indications of de Sitter spacetime from classical sequential growth dynamics of causal sets

    International Nuclear Information System (INIS)

    Ahmed, Maqbool; Rideout, David

    2010-01-01

    A large class of the dynamical laws for causal sets described by a classical process of sequential growth yields a cyclic universe, whose cycles of expansion and contraction are punctuated by single 'origin elements' of the causal set. We present evidence that the effective dynamics of the immediate future of one of these origin elements, within the context of the sequential growth dynamics, yields an initial period of de Sitter-like exponential expansion, and argue that the resulting picture has many attractive features as a model of the early universe, with the potential to solve some of the standard model puzzles without any fine-tuning.

  18. Study of sequential disinfection for the inactivation of protozoa and indicator microorganisms in wastewater

    Directory of Open Access Journals (Sweden)

    Raphael Corrêa Medeiros

    2015-05-01

    Sewage disinfection has the primary objective of inactivating pathogenic organisms to prevent the dissemination of waterborne diseases. This study analyzed individual disinfection, with chlorine and with ultraviolet radiation, and sequential disinfection (chlorine followed by UV radiation). The tests were conducted with anaerobic effluent in batch, at laboratory scale, with two dosages of chlorine (10 and 20 mg L-1) and of UV (2.5 and 6.1 Wh m-3). In addition, to guarantee the presence of cysts in the tests, 10^4 cysts of Giardia spp. per liter were inoculated. The resistance order was as follows: E. coli = total coliforms < Clostridium perfringens < Giardia spp. Furthermore, synergistic effects reached 0.06 to 1.42 log of inactivation in sequential disinfection for both of the most resistant microorganisms.
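
    A brief arithmetic sketch of how the synergistic effect of sequential chlorine-UV disinfection is commonly quantified: the log inactivation of the sequential process minus the sum of the log inactivations achieved by each agent alone. The counts below are illustrative, not the study's data.

        import math

        def log_inactivation(n0, n):
            """Log10 reduction from an initial count n0 to a final count n."""
            return math.log10(n0 / n)

        n0 = 1.0e6                                    # initial count per 100 mL (illustrative)
        cl_only = log_inactivation(n0, 2.0e3)         # chlorine alone: about 2.7 log
        uv_only = log_inactivation(n0, 5.0e3)         # UV alone: about 2.3 log
        sequential = log_inactivation(n0, 2.5e0)      # chlorine followed by UV: about 5.6 log
        synergy = sequential - (cl_only + uv_only)    # about 0.6 log of extra inactivation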

  19. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis

    NARCIS (Netherlands)

    A. Tran-Duy (An); A. Boonen (Annelies); M.A.F.J. van de Laar (Mart); A. Franke (Andre); J.L. Severens (Hans)

    2011-01-01

    Objective: To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). Methods: Discrete event simulation paradigm was selected for model

  20. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis

    NARCIS (Netherlands)

    Tran-Duy, A.; Boonen, A.; Laar, M.A.F.J.; Franke, A.C.; Severens, J.L.

    2011-01-01

    Objective To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). Methods Discrete event simulation paradigm was selected for model development. Drug

  1. A preliminary simulative assessment of disproportionality indices

    OpenAIRE

    Migheli, Matteo; Ortona, Guido; Ponzano, Ferruccio

    2009-01-01

    What do indices of disproportionality actually measure? They provide an aggregate estimation of the difference between votes cast and seats assignment, but the relation between the value of the indices and the will of the voters is highly questionable. The reason is that when casting the vote the voter is deeply affected by the electoral system itself, possibly more deeply than s/he understands. The aim of this paper is to assess the performance of the most used indices of disproportionality ...

  2. Improving multiple-point-based a priori models for inverse problems by combining Sequential Simulation with the Frequency Matching Method

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine

    In order to move beyond simplified covariance based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions ‘learned’ from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori...

  3. A computationally efficient Bayesian sequential simulation approach for the assimilation of vast and diverse hydrogeophysical datasets

    Science.gov (United States)

    Nussbaumer, Raphaël; Gloaguen, Erwan; Mariéthoz, Grégoire; Holliger, Klaus

    2016-04-01

    Bayesian sequential simulation (BSS) is a powerful geostatistical technique, which notably has shown significant potential for the assimilation of datasets that are diverse with regard to the spatial resolution and their relationship. However, these types of applications of BSS require a large number of realizations to adequately explore the solution space and to assess the corresponding uncertainties. Moreover, such simulations generally need to be performed on very fine grids in order to adequately exploit the technique's potential for characterizing heterogeneous environments. Correspondingly, the computational cost of BSS algorithms in their classical form is very high, which so far has limited an effective application of this method to large models and/or vast datasets. In this context, it is also important to note that the inherent assumption regarding the independence of the considered datasets is generally regarded as being too strong in the context of sequential simulation. To alleviate these problems, we have revisited the classical implementation of BSS and incorporated two key features to increase the computational efficiency. The first feature is a combined quadrant spiral - superblock search, which targets run-time savings on large grids and adds flexibility with regard to the selection of neighboring points using equal directional sampling and treating hard data and previously simulated points separately. The second feature is a constant path of simulation, which enhances the efficiency for multiple realizations. We have also modified the aggregation operator to be more flexible with regard to the assumption of independence of the considered datasets. This is achieved through log-linear pooling, which essentially allows for attributing weights to the various data components. Finally, a multi-grid simulating path was created to enforce large-scale variance and to allow for adapting parameters, such as, for example, the log-linear weights or the type
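
    The log-linear pooling operator mentioned above can be sketched as follows: the pooled conditional distribution is proportional to the product of the individual conditional distributions raised to source-specific weights, which relaxes the strict independence assumption by allowing each data source to be up- or down-weighted. The discrete (binned) version below, with invented weights and toy densities, is an illustration rather than the authors' implementation.

        import numpy as np

        def log_linear_pool(pdfs, weights):
            """pdfs: (n_sources, n_bins) discrete densities; returns the pooled density."""
            w = np.asarray(weights)[:, None]
            logp = np.sum(w * np.log(np.asarray(pdfs) + 1e-300), axis=0)
            pooled = np.exp(logp - logp.max())       # numerically stable normalisation
            return pooled / pooled.sum()

        # Example: pool a geophysics-derived pdf with a geostatistical (kriging) pdf
        bins = np.linspace(-3, 3, 61)
        pdf_geophys = np.exp(-0.5 * ((bins - 0.8) / 0.6) ** 2); pdf_geophys /= pdf_geophys.sum()
        pdf_kriging = np.exp(-0.5 * ((bins + 0.2) / 1.0) ** 2); pdf_kriging /= pdf_kriging.sum()
        pooled = log_linear_pool([pdf_geophys, pdf_kriging], weights=[0.7, 0.3])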

  4. Sequential Gaussian co-simulation of rate decline parameters of longwall gob gas ventholes

    Science.gov (United States)

    Karacan, C. Özgen; Olea, Ricardo A.

    2013-01-01

    Gob gas ventholes (GGVs) are used to control methane inflows into a longwall mining operation by capturing the gas within the overlying fractured strata before it enters the work environment. Using geostatistical co-simulation techniques, this paper maps the parameters of their rate decline behaviors across the study area, a longwall mine in the Northern Appalachian basin. Geostatistical gas-in-place (GIP) simulations were performed, using data from 64 exploration boreholes, and GIP data were mapped within the fractured zone of the study area. In addition, methane flowrates monitored from 10 GGVs were analyzed using decline curve analyses (DCA) techniques to determine parameters of decline rates. Surface elevation showed the most influence on methane production from GGVs and thus was used to investigate its relation with DCA parameters using correlation techniques on normal-scored data. Geostatistical analysis was pursued using sequential Gaussian co-simulation with surface elevation as the secondary variable and with DCA parameters as the primary variables. The primary DCA variables were effective percentage decline rate, rate at production start, rate at the beginning of forecast period, and production end duration. Co-simulation results were presented to visualize decline parameters at an area-wide scale. Wells located at lower elevations, i.e., at the bottom of valleys, tend to perform better in terms of their rate declines compared to those at higher elevations. These results were used to calculate drainage radii of GGVs using GIP realizations. The calculated drainage radii are close to ones predicted by pressure transient tests.
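
    The abstract does not state which decline-curve parameterization was fitted to the GGV flowrates; as an illustration of the kind of decline curve analysis involved, the sketch below uses the standard Arps exponential/hyperbolic family and derives an effective annual decline from it. All parameter values are invented.

        import numpy as np

        def arps_rate(t, qi, di, b=0.0):
            """Arps decline: exponential when b == 0, hyperbolic for 0 < b <= 1."""
            t = np.asarray(t, dtype=float)
            if b == 0.0:
                return qi * np.exp(-di * t)
            return qi / (1.0 + b * di * t) ** (1.0 / b)

        t_days = np.arange(0, 366)                            # one year of daily rates
        q = arps_rate(t_days, qi=50_000.0, di=0.004, b=0.5)   # initial rate, nominal decline
        effective_annual_decline = 1.0 - q[-1] / q[0]         # one reading of the effective decline rate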

  5. Monte Carlo simulation of the sequential probability ratio test for radiation monitoring

    International Nuclear Information System (INIS)

    Coop, K.L.

    1984-01-01

    A computer program simulates the Sequential Probability Ratio Test (SPRT) using Monte Carlo techniques. The program, SEQTEST, performs random-number sampling of either a Poisson or a normal distribution to simulate radiation monitoring data. The results are expressed in terms of the detection probabilities and the average time required for a trial. The computed SPRT results can be compared with tabulated single interval test (SIT) values to determine the better statistical test for particular monitoring applications. Use of the SPRT in a hand-and-foot alpha monitor shows that the SPRT provides better detection probabilities while generally requiring less counting time. Calculations are also performed for a monitor where the SPRT is not permitted to take longer than the single interval test. Although the performance of the SPRT is degraded by this restriction, the detection probabilities are still similar to the SIT values, and average counting times are always less than 75% of the SIT time. Some optimal conditions for use of the SPRT are described. The SPRT should be the test of choice in many radiation monitoring situations. 6 references, 8 figures, 1 table
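
    A hedged Monte Carlo sketch of an SPRT for Poisson monitoring data, in the spirit of the SEQTEST program described above: repeated simulated trials estimate the detection (or false-alarm) probability and the average number of counting intervals. The background and alarm rates, error levels and truncation limit below are illustrative, not the values used in the report.

        import numpy as np

        rng = np.random.default_rng(4)

        def sprt_trial(true_rate, lam0, lam1, alpha=0.05, beta=0.05, max_intervals=100):
            """Run one SPRT trial; return (decision, number of counting intervals used)."""
            upper = np.log((1 - beta) / alpha)        # accept H1 (contamination present)
            lower = np.log(beta / (1 - alpha))        # accept H0 (background only)
            llr = 0.0
            for n in range(1, max_intervals + 1):
                x = rng.poisson(true_rate)            # counts observed in one interval
                llr += x * np.log(lam1 / lam0) - (lam1 - lam0)   # Poisson log-likelihood ratio
                if llr >= upper:
                    return "alarm", n
                if llr <= lower:
                    return "clear", n
            return "undecided", max_intervals         # truncated test

        def simulate(true_rate, lam0=5.0, lam1=10.0, n_trials=5_000):
            results = [sprt_trial(true_rate, lam0, lam1) for _ in range(n_trials)]
            p_alarm = np.mean([r[0] == "alarm" for r in results])
            mean_time = np.mean([r[1] for r in results])
            return p_alarm, mean_time

        print(simulate(true_rate=5.0))    # false-alarm probability, mean intervals
        print(simulate(true_rate=10.0))   # detection probability, mean intervals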

  6. Inhomogeneities detection in annual precipitation time series in Portugal using direct sequential simulation

    Science.gov (United States)

    Caineta, Júlio; Ribeiro, Sara; Costa, Ana Cristina; Henriques, Roberto; Soares, Amílcar

    2014-05-01

    Climate data homogenisation is of major importance in monitoring climate change, the validation of weather forecasting, general circulation and regional atmospheric models, modelling of erosion, drought monitoring, and other studies of hydrological and environmental impacts. This is because non-climatic factors can cause discontinuities in time series which may hide the true climatic signal and patterns, and thus potentially bias the conclusions of those studies. In the last two decades, many methods have been developed to identify and remove these inhomogeneities. One of these is based on geostatistical simulation (DSS - direct sequential simulation), where local probability density functions (pdf) are calculated at candidate monitoring stations, using spatial and temporal neighbouring observations, and then used for the detection of inhomogeneities. This approach has previously been applied to detect inhomogeneities in four precipitation series (wet day count) from a network with 66 monitoring stations located in the southern region of Portugal (1980-2001). That study revealed promising results and the potential advantages of geostatistical techniques for inhomogeneity detection in climate time series. This work extends that case study and investigates the application of the geostatistical stochastic approach to ten precipitation series that were previously classified as inhomogeneous by one of six absolute homogeneity tests (Mann-Kendall test, Wald-Wolfowitz runs test, Von Neumann ratio test, Standard normal homogeneity test (SNHT) for a single break, Pettitt test, and Buishand range test). Moreover, a sensitivity analysis is implemented to investigate the number of simulated realisations that should be used to accurately infer the local pdfs. Accordingly, the number of simulations per iteration is increased from 50 to 500, resulting in more representative local pdfs. A set of default and recommended settings is provided, which will help

  7. Assessment of groundwater level estimation uncertainty using sequential Gaussian simulation and Bayesian bootstrapping

    Science.gov (United States)

    Varouchakis, Emmanouil; Hristopulos, Dionissios

    2015-04-01

    Space-time geostatistical approaches can improve the reliability of dynamic groundwater level models in areas with limited spatial and temporal data. Space-time residual Kriging (STRK) is a reliable method for spatiotemporal interpolation that can incorporate auxiliary information. The method usually leads to an underestimation of the prediction uncertainty. The uncertainty of spatiotemporal models is usually estimated by determining the space-time Kriging variance or by means of cross validation analysis. For de-trended data the former is not usually applied when complex spatiotemporal trend functions are assigned. A Bayesian approach based on the bootstrap idea and sequential Gaussian simulation are employed to determine the uncertainty of the spatiotemporal model (trend and covariance) parameters. These stochastic modelling approaches produce multiple realizations, rank the prediction results on the basis of specified criteria and capture the range of the uncertainty. The correlation of the spatiotemporal residuals is modeled using a non-separable space-time variogram based on the Spartan covariance family (Hristopulos and Elogne 2007, Varouchakis and Hristopulos 2013). We apply these simulation methods to investigate the uncertainty of groundwater level variations. The available dataset consists of bi-annual (dry and wet hydrological period) groundwater level measurements in 15 monitoring locations for the time period 1981 to 2010. The space-time trend function is approximated using a physical law that governs the groundwater flow in the aquifer in the presence of pumping. The main objective of this research is to compare the performance of two simulation methods for prediction uncertainty estimation. In addition, we investigate the performance of the Spartan spatiotemporal covariance function for spatiotemporal geostatistical analysis. Hristopulos, D.T. and Elogne, S.N. 2007. Analytic properties and covariance functions for a new class of generalized Gibbs

  8. A comparison of an algorithm for automated sequential beam orientation selection (Cycle) with simulated annealing

    International Nuclear Information System (INIS)

    Woudstra, Evert; Heijmen, Ben J M; Storchi, Pascal R M

    2008-01-01

    Some time ago we developed and published a new deterministic algorithm (called Cycle) for automatic selection of beam orientations in radiotherapy. This algorithm is a plan generation process aiming at the prescribed PTV dose within hard dose and dose-volume constraints. The algorithm allows a large number of input orientations to be used and selects only the most efficient orientations, surviving the selection process. Efficiency is determined by a score function and is more or less equal to the extent of uninhibited access to the PTV for a specific beam during the selection process. In this paper we compare the capabilities of fast-simulated annealing (FSA) and Cycle for cases where local optima are supposed to be present. Five pancreas and five oesophagus cases previously treated in our institute were selected for this comparison. Plans were generated for FSA and Cycle, using the same hard dose and dose-volume constraints, and the largest possible achieved PTV doses as obtained from these algorithms were compared. The largest achieved PTV dose values were generally very similar for the two algorithms. In some cases FSA resulted in a slightly higher PTV dose than Cycle, at the cost of switching on substantially more beam orientations than Cycle. In other cases, when Cycle generated the solution with the highest PTV dose using only a limited number of non-zero weight beams, FSA seemed to have some difficulty in switching off the unfavourable directions. Cycle was faster than FSA, especially for large-dimensional feasible spaces. In conclusion, for the cases studied in this paper, we have found that despite the inherent drawback of sequential search as used by Cycle (where Cycle could probably get trapped in a local optimum), Cycle is nevertheless able to find comparable or sometimes slightly better treatment plans in comparison with FSA (which in theory finds the global optimum) especially in large-dimensional beam weight spaces.

  9. Using Zebra-speech to study sequential and simultaneous speech segregation in a cochlear-implant simulation.

    Science.gov (United States)

    Gaudrain, Etienne; Carlyon, Robert P

    2013-01-01

    Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish the target and the masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed.

  10. Management of Industrial Performance Indicators: Regression Analysis and Simulation

    Directory of Open Access Journals (Sweden)

    Walter Roberto Hernandez Vergara

    2017-11-01

    Stochastic methods can be used in problem solving and in the explanation of natural phenomena through the application of statistical procedures. This article aims to combine regression analysis and systems simulation in order to facilitate the practical understanding of data analysis. The algorithms were developed in Microsoft Office Excel software, using statistical techniques such as regression theory, ANOVA and Cholesky factorization, which made it possible to create models of single and multiple systems with up to five independent variables. For the analysis of these models, Monte Carlo simulation and analysis of industrial performance indicators were used, resulting in numerical indices intended to improve the management of targets for compliance indicators by identifying system instability, correlations and anomalies. The analytical models presented in the survey showed satisfactory results, with numerous possibilities for industrial and academic applications as well as potential for deployment in new analytical techniques.
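
    A short sketch of the combination described above: correlated inputs are generated via a Cholesky factorization of a correlation matrix, propagated through a fitted regression model by Monte Carlo simulation, and the simulated indicator is compared against a performance target. It is written in Python rather than Excel, and the coefficients, correlation matrix and target are placeholders, not the article's data.

        import numpy as np

        rng = np.random.default_rng(5)

        # Assumed linear model for an indicator y driven by two correlated inputs x1, x2
        beta = np.array([2.0, 0.8, -0.5])          # intercept, slope for x1, slope for x2
        corr = np.array([[1.0, 0.6],
                         [0.6, 1.0]])
        means = np.array([10.0, 5.0])
        stds = np.array([1.5, 0.8])

        L = np.linalg.cholesky(corr)               # Cholesky factor of the correlation matrix
        z = rng.standard_normal((100_000, 2))      # independent standard normal draws
        x = means + (z @ L.T) * stds               # correlated, scaled and shifted inputs
        y = beta[0] + x @ beta[1:] + rng.normal(0, 0.3, size=len(x))   # regression + noise

        target = 8.0
        print("P(indicator meets target):", np.mean(y >= target))
        print("mean, std of simulated indicator:", y.mean(), y.std())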

  11. Sequential hemi-body radiotherapy in advanced multiple myeloma. [Side effects of indicated x-ray therapy]

    Energy Technology Data Exchange (ETDEWEB)

    Jaffe, J.P.; Bosch, A.; Raich, P.C.

    1979-01-01

    Eleven patients with advanced multiple myeloma refractory to standard chemotherapy were treated with a regimen of sequential hemi-body radiotherapy consisting of 800 rad midplane in a single dose to each half. 9/10 patients experienced significant relief of skeletal pain and there were 5/11 objective tumor responses with one complete remission. Treatment-related morbidity was significant and consisted primarily of nausea and emesis, bone marrow suppression, and pneumonitis. This therapy is helpful in the management of advanced myeloma, and should be studied earlier in the course of the disease.

  12. Evaluating medical student engagement during virtual patient simulations: a sequential, mixed methods study.

    Science.gov (United States)

    McCoy, Lise; Pettit, Robin K; Lewis, Joy H; Allgood, J Aaron; Bay, Curt; Schwartz, Frederic N

    2016-01-16

    Student engagement is an important domain for medical education; however, it is difficult to quantify. The goal of this study was to investigate the utility of virtual patient simulations (VPS) for increasing medical student engagement. Our aims were specifically to investigate how, and to what extent, the VPS foster student engagement. This study took place at A.T. Still University School of Osteopathic Medicine in Arizona (ATSU-SOMA), in the USA. First year medical students (n = 108) worked in teams to complete a series of four in-class virtual patient case studies. Student engagement was measured, defined as flow, interest, and relevance. These dimensions were measured using four data collection instruments: researcher observations, classroom photographs, tutor feedback, and an electronic exit survey. Qualitative data were analyzed using a grounded theory approach. Triangulation of findings between the four data sources indicates that VPS foster engagement in three facets: 1) Flow. In general, students enjoyed the activities and were absorbed in the task at hand. 2) Interest. Students demonstrated interest in the activities, as evidenced by enjoyment, active discussion, and humor. Students remarked upon elements that caused cognitive dissonance: excessive text and classroom noise generated by multi-media and peer conversations. 3) Relevance. VPS were relevant in terms of situational clinical practice, exam preparation, and obtaining concrete feedback on clinical decisions. Researchers successfully introduced a new learning platform into the medical school curriculum. The data collected during this study were also used to improve new learning modules and techniques associated with implementing them in the classroom. Results of this study assert that virtual patient simulations foster engagement in terms of flow, relevance, and interest.

  13. Customized sequential designs for random simulation experiments: Kriging metamodeling and bootstrapping

    NARCIS (Netherlands)

    Beers, van W.C.M.; Kleijnen, J.P.C.

    2005-01-01

    This paper proposes a novel method to select an experimental design for interpolation in random simulation, especially discrete event simulation. (Though the paper focuses on Kriging, this design approach may also apply to other types of metamodels such as linear regression models.) Assuming that

  14. Sequential Monte Carlo simulation of collision risk in free flight air traffic

    NARCIS (Netherlands)

    Blom, H.A.P.; Bakker, G.; Krystul, J.; Everdij, M.H.C.; Klein Obbink, B.; Klompstra, M.B.

    2005-01-01

    Within HYBRIDGE a novel approach to speeding up Monte Carlo simulation of rare events has been developed. In the current report this method is extended for application to simulating collisions with a stochastic dynamical model of an air traffic operational concept. Subsequently this extended Monte

  15. Exploiting neurovascular coupling: a Bayesian sequential Monte Carlo approach applied to simulated EEG fNIRS data

    Science.gov (United States)

    Croce, Pierpaolo; Zappasodi, Filippo; Merla, Arcangelo; Chiarelli, Antonio Maria

    2017-08-01

    Objective. Electrical and hemodynamic brain activity are linked through the neurovascular coupling process and they can be simultaneously measured through integration of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Thanks to the lack of electro-optical interference, the two procedures can be easily combined and, whereas EEG provides electrophysiological information, fNIRS can provide measurements of two hemodynamic variables, such as oxygenated and deoxygenated hemoglobin. A Bayesian sequential Monte Carlo approach (particle filter, PF) was applied to simulated recordings of electrical and neurovascular mediated hemodynamic activity, and the advantages of a unified framework were shown. Approach. Multiple neural activities and hemodynamic responses were simulated in the primary motor cortex of a subject brain. EEG and fNIRS recordings were obtained by means of forward models of volume conduction and light propagation through the head. A state space model of combined EEG and fNIRS data was built and its dynamic evolution was estimated through a Bayesian sequential Monte Carlo approach (PF). Main results. We showed the feasibility of the procedure and the improvements in both electrical and hemodynamic brain activity reconstruction when using the PF on combined EEG and fNIRS measurements. Significance. The investigated procedure allows one to combine the information provided by the two methodologies, and, by taking advantage of a physical model of the coupling between electrical and hemodynamic response, to obtain a better estimate of brain activity evolution. Despite the high computational demand, application of such an approach to in vivo recordings could fully exploit the advantages of this combined brain imaging technology.
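
    As an illustration of the Bayesian sequential Monte Carlo (particle filter) machinery referred to above, the sketch below runs a bootstrap particle filter on a toy one-dimensional linear-Gaussian state-space model. The actual EEG-fNIRS state-space model of the paper is far richer; everything here, including the model and noise levels, is an invented example.

        import numpy as np

        rng = np.random.default_rng(6)

        # Toy model: x_t = 0.9 * x_{t-1} + process noise; y_t = x_t + measurement noise
        T, n_particles = 100, 2000
        true_x = np.zeros(T); y = np.zeros(T)
        for t in range(1, T):
            true_x[t] = 0.9 * true_x[t - 1] + rng.normal(0, 0.5)
            y[t] = true_x[t] + rng.normal(0, 0.8)

        particles = rng.normal(0, 1, n_particles)
        estimates = np.zeros(T)
        for t in range(1, T):
            particles = 0.9 * particles + rng.normal(0, 0.5, n_particles)   # propagate
            logw = -0.5 * ((y[t] - particles) / 0.8) ** 2                    # Gaussian likelihood
            w = np.exp(logw - logw.max()); w /= w.sum()                      # normalised weights
            estimates[t] = w @ particles                                     # posterior mean
            idx = rng.choice(n_particles, n_particles, p=w)                  # resample
            particles = particles[idx]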

  16. Modelling and sequential simulation of multi-tubular metallic membrane and techno-economics of a hydrogen production process employing thin-layer membrane reactor

    KAUST Repository

    Shafiee, Alireza

    2016-09-24

    A theoretical model for a multi-tubular palladium-based membrane is proposed in this paper and validated against experimental data for two different sized membrane modules that operate at high temperatures. The model is used in a sequential simulation format to describe and analyse pure hydrogen and hydrogen binary mixture separations, and is then extended to simulate an industrial scale membrane unit. This model is used as a sub-routine within an ASPEN Plus model to simulate a membrane reactor in a steam reforming hydrogen production plant. A techno-economic analysis is then conducted using the validated model for a plant producing 300 TPD of hydrogen. The plant utilises a thin (2.5 μm), defect-free and selective layer (Pd75Ag25 alloy) membrane reactor. The economic sensitivity analysis proves useful in finding the optimum operating condition that achieves the minimum hydrogen production cost at the break-even point. A hydrogen production cost of 1.98 $/kg is estimated, while the cost of the thin-layer selective membrane is found to constitute 29% of the total process capital cost. These results indicate the competitiveness of this thin-layer membrane process against conventional methods of hydrogen production. © 2016 Hydrogen Energy Publications LLC

  17. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis.

    Science.gov (United States)

    Tran-Duy, An; Boonen, Annelies; van de Laar, Mart A F J; Franke, Angelinus C; Severens, Johan L

    2011-12-01

    To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). The discrete event simulation paradigm was selected for model development. Drug efficacy was modelled as changes in disease activity (Bath Ankylosing Spondylitis Disease Activity Index (BASDAI)) and functional status (Bath Ankylosing Spondylitis Functional Index (BASFI)), which were linked to costs and health utility using statistical models fitted on the basis of an observational AS cohort. Published clinical data were used to estimate drug efficacy and time to events. Two strategies were compared: (1) five available non-steroidal anti-inflammatory drugs (strategy 1) and (2) the same as strategy 1 plus two tumour necrosis factor α inhibitors (strategy 2). 13,000 patients were followed up individually until death. For probabilistic sensitivity analysis, Monte Carlo simulations were performed with 1000 sets of parameters sampled from the appropriate probability distributions. The models successfully generated valid data on treatments, BASDAI, BASFI, utility, quality-adjusted life years (QALYs) and costs at time points with intervals of 1-3 months over the simulation length of 70 years. The incremental cost per QALY gained in strategy 2 compared with strategy 1 was €35,186. At a willingness-to-pay threshold of €80,000, it was 99.9% certain that strategy 2 was cost-effective. The modelling framework provides great flexibility to implement complex algorithms representing treatment selection, disease progression and changes in costs and utilities over time for patients with AS. Results obtained from the simulation are plausible.

  18. Sequential UASB and dual media packed-bed reactors for domestic wastewater treatment - experiment and simulation.

    Science.gov (United States)

    Rodríguez-Gómez, Raúl; Renman, Gunno

    2016-01-01

    A wastewater treatment system composed of an upflow anaerobic sludge blanket (UASB) reactor followed by a packed-bed reactor (PBR) filled with Sorbulite(®) and Polonite(®) filter material was tested in a laboratory bench-scale experiment. The system was operated for 50 weeks and achieved very efficient total phosphorus (P) removal (99%), 7-day biochemical oxygen demand removal (99%) and pathogenic bacteria reduction (99%). However, total nitrogen was only moderately reduced in the system (40%). A model focusing on simulation of organic material, solids and size of granules was then implemented and validated for the UASB reactor. Good agreement between the simulated and measured results demonstrated the capacity of the model to predict the behaviour of solids and chemical oxygen demand, which is critical for successful P removal and recovery in the PBR.

  19. Simulation of skill acquisition in sequential learning of a computer game

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Nielsen, Finn Ravnsbjerg; Rasmussen, Jens

    1995-01-01

    The paper presents some theoretical assumptions about the cognitive control mechanisms of subjects learning to play a computer game. A simulation model has been developed to investigate these assumptions. The model is an automaton, reacting to instruction-like cue action rules. The prototypical performances of 23 experimental subjects at succeeding levels of training are compared to the performance of the model. The findings are interpreted in terms of a general taxonomy for cognitive task analysis.

  20. The influence of simultaneous or sequential test conditions in the properties of industrial polymers, submitted to PWR accident simulations

    International Nuclear Information System (INIS)

    Carlin, F.; Alba, C.; Chenion, J.; Gaussens, G.; Henry, J.Y.

    1986-10-01

    The effect of PWR plant normal and accident operating conditions on polymers forms the basis of the nuclear qualification of safety-related containment equipment. This study was carried out at the request of safety organizations. Its purpose was to check whether accident simulations carried out sequentially during equipment qualification tests would lead to the same deterioration as that caused by an accident involving simultaneous irradiation and thermodynamic effects. The IPSN, DAS and the United States NRC collaborated in preparing this study. The work carried out by the ORIS Company, as well as the results obtained from measurement of the mechanical properties of 8 industrial polymers, are described in this report. The results are given in the conclusion. They tend to show that, overall, the most suitable test cycle for simulating accident operating conditions would be one which includes irradiation followed by a consecutive thermodynamic shock. The results of this study have been compared with those of a previous study, which included the same test cycles except for more severe thermo-ageing. This comparison, which was made on three elastomers, shows that ageing after the accident has a different effect on each material.

  1. Identification and Relative Quantification of Bioactive Peptides Sequentially Released during Simulated Gastrointestinal Digestion of Commercial Kefir.

    Science.gov (United States)

    Liu, Yufang; Pischetsrieder, Monika

    2017-03-08

    Health-promoting effects of kefir may be partially caused by bioactive peptides. To evaluate their formation or degradation during gastrointestinal digestion, we monitored changes of the peptide profile in a model of (1) oral, (2) gastric, and (3) small intestinal digestion of kefir. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry analyses revealed clearly different profiles between digests 2/3 and kefir/digest 1. Subsequent ultraperformance liquid chromatography-electrospray ionization-tandem mass spectrometry identified 92 peptides in total (25, 25, 43, and 30, partly overlapping, in kefir and digests 1, 2, and 3, respectively), including 16 peptides with ascribed bioactivity. Relative quantification in scheduled multiple reaction monitoring mode showed that many bioactive peptides were released by simulated digestion. Most prominently, the concentration of the angiotensin-converting enzyme inhibitor β-casein 203-209 increased approximately 10 000-fold after combined oral, gastric, and intestinal digestion. Thus, physiological digestive processes may promote bioactive peptide formation from proteins and oligopeptides in kefir. Furthermore, bioactive peptides present in certain compartments of the gastrointestinal tract may exert local physiological effects.

  2. Potential uranium supply system based upon computer simulation of sequential exploration and decisions under risk

    International Nuclear Information System (INIS)

    Ortiz-Vertiz, S.R.

    1991-01-01

    A Monte Carlo simulation system was used to estimate potential supply of roll-type deposits. The system takes a given uranium-endowment probability distribution and aims at two major and interrelated objectives: (1) to design a system that estimates potential supply even when prices are much higher than previous or current prices; and (2) to account fully for the cost of discovering and mining the individual mineral deposits contained in given endowment. Achievement of these objectives constitutes the major contribution of this study. To accomplish them, the system considers: cost of risk, return on investment, cost of failures during the search process, discovery depletion, and effect of physical characteristics of the deposits on exploration and mining costs. It also considers that when economic conditions, such as product price, are outside historical experience, existing behavioral rules - exploration drilling density, stopping rules, minimum attractive deposit size and grade, and mining parameters - are irrelevant. The system architecture is general and can be used with an exploration model prepared specifically for other minerals

  3. Neural control of muscle force: indications from a simulation model

    Science.gov (United States)

    Luca, Carlo J. De

    2013-01-01

    We developed a model to investigate the influence of the muscle force twitch on the simulated firing behavior of motoneurons and muscle force production during voluntary isometric contractions. The input consists of an excitatory signal common to all the motor units in the pool of a muscle, consistent with the “common drive” property. Motor units respond with a hierarchically structured firing behavior wherein at any time and force, firing rates are inversely proportional to recruitment threshold, as described by the “onion skin” property. Time- and force-dependent changes in muscle force production are introduced by varying the motor unit force twitches as a function of time or by varying the number of active motor units. A force feedback adjusts the input excitation, maintaining the simulated force at a target level. The simulations replicate motor unit behavior characteristics similar to those reported in previous empirical studies of sustained contractions: 1) the initial decrease and subsequent increase of firing rates, 2) the derecruitment and recruitment of motor units throughout sustained contractions, and 3) the continual increase in the force fluctuation caused by the progressive recruitment of larger motor units. The model cautions the use of motor unit behavior at recruitment and derecruitment without consideration of changes in the muscle force generation capacity. It describes an alternative mechanism for the reserve capacity of motor units to generate extraordinary force. It supports the hypothesis that the control of motoneurons remains invariant during force-varying and sustained isometric contractions. PMID:23236008

  4. Predictive neuromechanical simulations indicate why walking performance declines with ageing.

    Science.gov (United States)

    Song, Seungmoon; Geyer, Hartmut

    2018-04-01

    Although the natural decline in walking performance with ageing affects the quality of life of a growing elderly population, its physiological origins remain unknown. By using predictive neuromechanical simulations of human walking with age-related neuro-musculo-skeletal changes, we find evidence that the loss of muscle strength and muscle contraction speed dominantly contribute to the reduced walking economy and speed. The findings imply that focusing on recovering these muscular changes may be the only effective way to improve performance in elderly walking. More generally, the work is of interest for investigating the physiological causes of altered gait due to age, injury and disorders. Healthy elderly people walk slower and energetically less efficiently than young adults. This decline in walking performance lowers the quality of life for a growing ageing population, and understanding its physiological origin is critical for devising interventions that can delay or revert it. However, the origin of the decline in walking performance remains unknown, as ageing produces a range of physiological changes whose individual effects on gait are difficult to separate in experiments with human subjects. Here we use a predictive neuromechanical model to separately address the effects of common age-related changes to the skeletal, muscular and nervous systems. We find in computer simulations of this model that the combined changes produce gait consistent with elderly walking and that mainly the loss of muscle strength and mass reduces energy efficiency. In addition, we find that the slower preferred walking speed of elderly people emerges in the simulations when adapting to muscle fatigue, again mainly caused by muscle-related changes. The results suggest that a focus on recovering these muscular changes may be the only effective way to improve performance in elderly walking. © 2018 The Authors. The Journal of Physiology © 2018 The Physiological Society.

  5. Sequential Dependencies in Driving

    Science.gov (United States)

    Doshi, Anup; Tran, Cuong; Wilder, Matthew H.; Mozer, Michael C.; Trivedi, Mohan M.

    2012-01-01

    The effect of recent experience on current behavior has been studied extensively in simple laboratory tasks. We explore the nature of sequential effects in the more naturalistic setting of automobile driving. Driving is a safety-critical task in which delayed response times may have severe consequences. Using a realistic driving simulator, we find…

  6. Preliminary results of sequential monitoring of simulated clandestine graves in Colombia, South America, using ground penetrating radar and botany.

    Science.gov (United States)

    Molina, Carlos Martin; Pringle, Jamie K; Saumett, Miguel; Hernández, Orlando

    2015-03-01

    In most Latin American countries there are significant numbers of missing people and forced disappearances, 68,000 alone currently in Colombia. Successful detection of shallow buried human remains by forensic search teams is difficult in varying terrain and climates. This research created three simulated clandestine burial styles at two different depths commonly encountered in Latin America to gain knowledge of optimum forensic geophysics detection techniques. Repeated monitoring of the graves post-burial was undertaken by ground penetrating radar. 2D radar profile results show reasonable detection of half-clothed pig cadavers up to 19 weeks after burial, with decreasing confidence after this time. Simulated burials using skeletonized human remains could not be imaged after 19 weeks of burial, and beheaded and burnt human remains could not be detected at any point during the survey period. Horizontal radar time slices showed good early results up to 19 weeks after burial, as more area was covered and bi-directional surveys were collected, but these decreased in amplitude over time. Deeper burials were all harder to image than shallower ones. Analysis of excavated soil found soil moisture content almost double that reported from temperate-climate studies. Vegetation variations over the simulated graves were also noted, which would provide promising indicators for grave detection. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  7. Simulation modeling analysis of sequential relations among therapeutic alliance, symptoms, and adherence to child-centered play therapy between a child with autism spectrum disorder and two therapists.

    Science.gov (United States)

    Goodman, Geoff; Chung, Hyewon; Fischel, Leah; Athey-Lloyd, Laura

    2017-07-01

    This study examined the sequential relations among three pertinent variables in child psychotherapy: therapeutic alliance (TA) (including ruptures and repairs), autism symptoms, and adherence to child-centered play therapy (CCPT) process. A 2-year CCPT of a 6-year-old Caucasian boy diagnosed with autism spectrum disorder was conducted weekly with two doctoral-student therapists, working consecutively for 1 year each, in a university-based community mental-health clinic. Sessions were video-recorded and coded using the Child Psychotherapy Process Q-Set (CPQ), a measure of the TA, and an autism symptom measure. Sequential relations among these variables were examined using simulation modeling analysis (SMA). In Therapist 1's treatment, unexpectedly, autism symptoms decreased three sessions after a rupture occurred in the therapeutic dyad. In Therapist 2's treatment, adherence to CCPT process increased 2 weeks after a repair occurred in the therapeutic dyad. The TA decreased 1 week after autism symptoms increased. Finally, adherence to CCPT process decreased 1 week after autism symptoms increased. The authors concluded that (1) sequential relations differ by therapist even though the child remains constant, (2) therapeutic ruptures can have an unexpected effect on autism symptoms, and (3) changes in autism symptoms can precede as well as follow changes in process variables.

  8. Operational reliability evaluation of restructured power systems with wind power penetration utilizing reliability network equivalent and time-sequential simulation approaches

    DEFF Research Database (Denmark)

    Ding, Yi; Cheng, Lin; Zhang, Yonghong

    2014-01-01

    In the last two decades, wind power generation has been rapidly and widely developed in many regions and countries for tackling the problems of environmental pollution and sustainability of energy supply. However, the high share of intermittent and fluctuating wind power production has also...... and reserve providers, fast reserve providers and transmission network in restructured power systems. A contingency management schema for real-time operation considering its coupling with the day-ahead market is proposed. The time-sequential Monte Carlo simulation is used to model the chronological

  9. Sequential Banking.

    OpenAIRE

    Bizer, David S; DeMarzo, Peter M

    1992-01-01

    The authors study environments in which agents may borrow sequentially from more than one lender. Although debt is prioritized, additional lending imposes an externality on prior debt because, with moral hazard, the probability of repayment of prior loans decreases. Equilibrium interest rates are higher than they would be if borrowers could commit to borrow from at most one bank. Even though the loan terms are less favorable than they would be under commitment, the indebtedness of borrowers i...

  10. Sequential Probability Ratio Tests: Conservative and Robust

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; Shi, Wen

    2017-01-01

    In practice, most computers generate simulation outputs sequentially, so it is attractive to analyze these outputs through sequential statistical methods such as sequential probability ratio tests (SPRTs). We investigate several SPRTs for choosing between two hypothesized values for the mean output
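
    As an illustration of the sequential tests referred to above, the sketch below implements Wald's SPRT for choosing between two hypothesized values of a simulation's mean output, assuming normally distributed outputs with known variance; the error rates and the toy output generator are illustrative assumptions, not the specific SPRT variants the authors investigate.

```python
# Minimal sketch of Wald's sequential probability ratio test (SPRT) for choosing
# between two hypothesized values of a simulation's mean output, assuming
# normally distributed outputs with known variance. Parameter values and the
# data-generating function are illustrative only.
import math
import random

def sprt_normal_mean(sample_stream, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Return ('H0' or 'H1', n_observations) after a sequential decision."""
    upper = math.log((1.0 - beta) / alpha)   # accept H1 when LLR >= upper
    lower = math.log(beta / (1.0 - alpha))   # accept H0 when LLR <= lower
    llr, n = 0.0, 0
    for x in sample_stream:
        n += 1
        # log-likelihood ratio increment for N(mu1, sigma) versus N(mu0, sigma)
        llr += (x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "inconclusive", n

def simulation_outputs(true_mean, sigma, limit=10_000):
    """Stand-in for a simulator that emits outputs one at a time."""
    for _ in range(limit):
        yield random.gauss(true_mean, sigma)

if __name__ == "__main__":
    random.seed(1)
    decision, n = sprt_normal_mean(simulation_outputs(10.5, 2.0),
                                   mu0=10.0, mu1=11.0, sigma=2.0)
    print(f"decision={decision} after n={n} sequential outputs")
```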

  11. Geostatistical modeling of the gas emission zone and its in-place gas content for Pittsburgh-seam mines using sequential Gaussian simulation

    Science.gov (United States)

    Karacan, C.O.; Olea, R.A.; Goodman, G.

    2012-01-01

    Determination of the size of the gas emission zone, the locations of gas sources within, and especially the amount of gas retained in those zones is one of the most important steps for designing a successful methane control strategy and an efficient ventilation system in longwall coal mining. The formation of the gas emission zone and the potential amount of gas-in-place (GIP) that might be available for migration into a mine are factors of local geology and rock properties that usually show spatial variability in continuity and may also show geometric anisotropy. Geostatistical methods are used here for modeling and prediction of gas amounts and for assessing their associated uncertainty in gas emission zones of longwall mines for methane control. This study used core data obtained from 276 vertical exploration boreholes drilled from the surface to the bottom of the Pittsburgh coal seam in a mining district in the Northern Appalachian basin. After identifying important coal and non-coal layers for the gas emission zone, univariate statistical and semivariogram analyses were conducted for data from different formations to define the distribution and continuity of various attributes. Sequential simulations performed stochastic assessment of these attributes, such as gas content, strata thickness, and strata displacement. These analyses were followed by calculations of gas-in-place and their uncertainties in the Pittsburgh seam caved zone and fractured zone of longwall mines in this mining district. Grid blanking was used to isolate the volume over the actual panels from the entire modeled district and to calculate gas amounts that were directly related to the emissions in longwall mines. Results indicated that gas-in-place in the Pittsburgh seam, in the caved zone and in the fractured zone, as well as displacements in major rock units, showed spatial correlations that could be modeled and estimated using geostatistical methods. This study showed that GIP volumes may
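
    As a rough illustration of the sequential simulation approach named in the abstract, the sketch below runs a one-dimensional sequential Gaussian simulation with simple kriging under an exponential covariance; the covariance model, range, conditioning data, and the use of all previously simulated nodes (rather than a search neighborhood) are simplifying assumptions, not the study's implementation.

```python
# Minimal 1D sketch of sequential Gaussian simulation (SGS): grid nodes are
# visited along a random path and each value is drawn from the simple-kriging
# conditional distribution given the data and previously simulated nodes.
import numpy as np

def exp_cov(h, sill=1.0, rng=10.0):
    """Exponential covariance model (illustrative choice)."""
    return sill * np.exp(-np.abs(h) / rng)

def sgs_1d(grid_x, data_x, data_z, sill=1.0, rng=10.0, seed=0):
    rs = np.random.default_rng(seed)
    known_x = list(data_x)
    known_z = list(data_z)           # data assumed already normal-score transformed
    sim = np.full(len(grid_x), np.nan)
    for i in rs.permutation(len(grid_x)):        # random path over the grid
        x0 = grid_x[i]
        xs, zs = np.array(known_x), np.array(known_z)
        C = exp_cov(xs[:, None] - xs[None, :], sill, rng)   # data-to-data covariance
        c0 = exp_cov(xs - x0, sill, rng)                    # data-to-node covariance
        w = np.linalg.solve(C + 1e-9 * np.eye(len(xs)), c0) # simple-kriging weights
        mean = w @ zs                       # simple kriging with zero global mean
        var = max(sill - w @ c0, 1e-12)     # kriging variance
        sim[i] = rs.normal(mean, np.sqrt(var))
        known_x.append(x0)                  # simulated node becomes conditioning data
        known_z.append(sim[i])
    return sim

if __name__ == "__main__":
    grid = np.arange(0.0, 50.0, 1.0)
    realization = sgs_1d(grid, data_x=[5.5, 25.3, 40.7], data_z=[0.8, -1.2, 0.3])
    print(np.round(realization[:10], 2))
```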

  12. Simulation of machine-specific topographic indices for use across platforms.

    Science.gov (United States)

    Mahmoud, Ashraf M; Roberts, Cynthia; Lembach, Richard; Herderick, Edward E; McMahon, Timothy T

    2006-09-01

    The objective of this project is to simulate the current published topographic indices used for the detection and evaluation of keratoconus to allow their application to maps acquired from multiple topographic machines. A retrospective analysis was performed on 21 eyes of 14 previously diagnosed keratoconus patients from a single practice using a Tomey TMS-1, an Alcon EyeMap, and a Keratron Topographer. Maps that could not be processed or that contained processing errors were excluded from analysis. Topographic indices native to each of the three devices were recorded from each map. Software was written in ANSI standard C to simulate the indices based on the published formulas and/or descriptions to extend the functionality of The Ohio State University Corneal Topography Tool (OSUCTT), a software package designed to accept the input from many corneal topographic devices and provide consistent display and analysis. Twenty indices were simulated. Linear regression analysis was performed between each simulated index and the corresponding native index. A cross-platform comparison using regression analysis was also performed. All simulated indices were significantly correlated with the corresponding native indices. Cross-platform comparisons may be limited for specific indices.

  13. A simulation to study the feasibility of improving the temporal resolution of LAGEOS geodynamic solutions by using a sequential process noise filter

    Science.gov (United States)

    Hartman, Brian Davis

    1995-01-01

    A key drawback to estimating geodetic and geodynamic parameters over time based on satellite laser ranging (SLR) observations is the inability to accurately model all the forces acting on the satellite. Errors associated with the observations and the measurement model can detract from the estimates as well. These 'model errors' corrupt the solutions obtained from the satellite orbit determination process. Dynamical models for satellite motion utilize known geophysical parameters to mathematically detail the forces acting on the satellite. However, these parameters, while estimated as constants, vary over time. These temporal variations must be accounted for in some fashion to maintain meaningful solutions. The primary goal of this study is to analyze the feasibility of using a sequential process noise filter for estimating geodynamic parameters over time from the Laser Geodynamics Satellite (LAGEOS) SLR data. This evaluation is achieved by first simulating a sequence of realistic LAGEOS laser ranging observations. These observations are generated using models with known temporal variations in several geodynamic parameters (along track drag and the J(sub 2), J(sub 3), J(sub 4), and J(sub 5) geopotential coefficients). A standard (non-stochastic) filter and a stochastic process noise filter are then utilized to estimate the model parameters from the simulated observations. The standard non-stochastic filter estimates these parameters as constants over consecutive fixed time intervals. Thus, the resulting solutions contain constant estimates of parameters that vary in time which limits the temporal resolution and accuracy of the solution. The stochastic process noise filter estimates these parameters as correlated process noise variables. As a result, the stochastic process noise filter has the potential to estimate the temporal variations more accurately since the constraint of estimating the parameters as constants is eliminated. A comparison of the temporal
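
    The contrast described above, between constant-over-interval estimates and a process-noise filter that lets parameters drift, can be illustrated with a scalar Kalman filter whose state follows a random walk; the linear toy model, noise levels, and drift rate below are illustrative assumptions, not the LAGEOS orbit-determination setup.

```python
# Minimal sketch: a sequential filter with process noise tracks a slowly
# drifting parameter that a constant (batch) estimate smears into one value.
import numpy as np

def kalman_random_walk(obs, obs_var, process_var, x0=0.0, p0=1e3):
    """Scalar Kalman filter with state model x_k = x_{k-1} + w_k, w_k ~ N(0, q)."""
    x, p = x0, p0
    estimates = []
    for z in obs:
        p = p + process_var            # predict: the parameter may have drifted
        k = p / (p + obs_var)          # Kalman gain
        x = x + k * (z - x)            # update with the new observation
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    truth = 1.0 + 0.002 * np.arange(500)          # slowly drifting "geodynamic" parameter
    obs = truth + rng.normal(0.0, 0.5, truth.size)
    tracked = kalman_random_walk(obs, obs_var=0.25, process_var=1e-4)
    constant = np.full_like(truth, obs.mean())    # non-stochastic, constant-over-interval estimate
    print("RMS error, constant estimate:   ", np.sqrt(np.mean((constant - truth) ** 2)).round(3))
    print("RMS error, process-noise filter:", np.sqrt(np.mean((tracked - truth) ** 2)).round(3))
```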

  14. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis

    In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon codes with non-uniform profile. With this scheme decoding with good performance...... is possible as low as Eb/No=0.6 dB, which is about 1.7 dB below the signal-to-noise ratio that marks the cut-off rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability...... of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters....

  15. Dietary fibers from mushroom Sclerotia: 2. In vitro mineral binding capacity under sequential simulated physiological conditions of the human gastrointestinal tract.

    Science.gov (United States)

    Wong, Ka-Hing; Cheung, Peter C K

    2005-11-30

    The in vitro mineral binding capacity of three novel dietary fibers (DFs) prepared from mushroom sclerotia, namely, Pleurotus tuber-regium, Polyporous rhinocerus, and Wolfiporia cocos, to Ca, Mg, Cu, Fe, and Zn under sequential simulated physiological conditions of the human stomach, small intestine, and colon was investigated and compared. Apart from releasing most of their endogenous Ca (ranging from 96.9 to 97.9% removal) and Mg (ranging from 95.9 to 96.7% removal), simulated physiological conditions of the stomach also attenuated the possible adverse binding effect of the three sclerotial DFs to the exogenous minerals by lowering their cation-exchange capacity (ranging from 20.8 to 32.3%) and removing a substantial amount of their potential mineral chelators, including protein (ranging from 16.2 to 37.8%) and phytate (ranging from 58.5 to 64.2%). The in vitro mineral binding capacity of the three sclerotial DFs under simulated physiological conditions of the small intestine was found to be low, especially for Ca (ranging from 4.79 to 5.91% binding) and Mg (ranging from 3.16 to 4.18% binding), and was highly correlated (r > 0.97) with their residual protein contents. Under simulated physiological conditions of the colon with slightly acidic pH (5.80), only bound Ca was readily released (ranging from 34.2 to 72.3% release) from the three sclerotial DFs, and their potential enhancing effect on passive Ca absorption in the human large intestine was also discussed.

  16. Reliability analysis with the simulator S.ESCAF of a very complex sequential system: the electrical power supply system of a nuclear reactor

    International Nuclear Information System (INIS)

    Blot, M.

    1987-06-01

    The reliability analysis of complex sequential systems, in which the order of arrival of the events must be taken into account, can be very difficult, because the use of the classical modelling technique of Markov diagrams leads to an important limitation on the number of components that can be handled. The desk-top apparatus S.ESCAF, which electronically simulates the behaviour of the system being studied very closely and is very easy to use, even by a non-specialist in electronics, avoids these limitations and considerably enlarges the analysis possibilities. This paper shows the application of the S.ESCAF method to the electrical power supply system of a nuclear reactor. This system requires the simulation of more than forty components with about sixty events such as failure, repair and refusal to start. A comparison of the times necessary to perform the analysis by these means and by other methods is described, and the advantages of S.ESCAF are presented.

  17. Study on Evaluation Indicators System of Crowd Management for Transfer Stations Based on Pedestrian Simulation

    Directory of Open Access Journals (Sweden)

    Guanghou Zhang

    2011-12-01

    Improving safety and convenience of transfer is one of the most vital tasks in subway system planning, design and operation management. Because of complicated space layouts and crowded pedestrian flows, crowd control is a big challenge for the management of transfer stations. Thus, a quantitative evaluation should be done before improvement measures are carried out. A literature review showed that present evaluation indicators for crowd management in subway systems were all based on fixed values or experience. Dynamic effects caused by pedestrian congestion and various facility combinations cannot be represented by these indicators. Thus, in this paper, based on a pedestrian simulation tool, a dynamic evaluation indicator system for crowd management was established from the standpoints of safety, cost-effectiveness and comfort. In order to aid decision makers in identifying the most appropriate scenario to improve the effectiveness of crowd management, Matter-Element Analysis (MEA) was used to rate different scenarios. A pedestrian simulation model of a planned intermodal transfer station was built and four different scenarios were tested to demonstrate how to use this indicator system. Simulation results were evaluated based on the dynamic indicator system and MEA. The application results show that the dynamic evaluation indicator system is operational and can reflect the level of crowd management in a transfer station comprehensively and precisely.

  18. How Well Can We Detect Lineage-Specific Diversification-Rate Shifts? A Simulation Study of Sequential AIC Methods.

    Science.gov (United States)

    May, Michael R; Moore, Brian R

    2016-11-01

    Evolutionary biologists have long been fascinated by the extreme differences in species numbers across branches of the Tree of Life. This has motivated the development of statistical methods for detecting shifts in the rate of lineage diversification across the branches of phylogenetic trees. One of the most frequently used methods, MEDUSA, explores a set of diversification-rate models, where each model assigns branches of the phylogeny to a set of diversification-rate categories. Each model is first fit to the data, and the Akaike information criterion (AIC) is then used to identify the optimal diversification model. Surprisingly, the statistical behavior of this popular method is uncharacterized, which is a concern in light of: (1) the poor performance of the AIC as a means of choosing among models in other phylogenetic contexts; (2) the ad hoc algorithm used to visit diversification models; and (3) errors that we reveal in the likelihood function used to fit diversification models to the phylogenetic data. Here, we perform an extensive simulation study demonstrating that MEDUSA (1) has a high false-discovery rate (on average, spurious diversification-rate shifts are identified [Formula: see text] of the time), and (2) provides biased estimates of diversification-rate parameters. Understanding the statistical behavior of MEDUSA is critical both to empirical researchers, in order to clarify whether these methods can make reliable inferences from empirical datasets, and to theoretical biologists, in order to clarify the specific problems that need to be solved to develop more reliable approaches for detecting shifts in the rate of lineage diversification. [Akaike information criterion; extinction; lineage-specific diversification rates; phylogenetic model selection; speciation.]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  19. Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping

    Science.gov (United States)

    Leberl, F.

    1975-01-01

    Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.

  20. Application of sequential leaching, risk indices and multivariate statistics to evaluate heavy metal contamination of estuarine sediments: Dhamara Estuary, East Coast of India.

    Science.gov (United States)

    Asa, Subas Chandra; Rath, Prasanta; Panda, Unmesh Chandra; Parhi, Pankaj Kumar; Bramha, Satyanarayan

    2013-08-01

    In the present study, the concentrations of selected trace metals (Fe, Mn, Ni, Co, Pb, Zn, Cu, Cr and Cd) were measured in the Brahmani-Baitarani river complex along with the Dhamara estuary and its near shore. Chemical partitioning was carried out to establish the association of metals with different geochemical phases. The exchangeable fraction carries the highest environmental risk among the non-lithogenous phases due to its greater potential for mobility into pore water, with Cd, Zn and Cr showing the highest bio-availability. The metals Mn, Zn, Cd and Cu represent an appreciable portion of the carbonate phase. Fe-Mn oxides act as an efficient scavenger for most of the metals, playing a prime role in controlling their fate and transport. Among the non-lithogenous phases, apart from the reducible fraction, Cr showed a significant enrichment in the organic phase. Risk assessment code values indicate that all metals except Fe fall within the medium-risk zone. In the estuarine zone, Cd, Zn, Pb and Cr are released at 32.43, 26.10, 21.81 and 20%, respectively, and this significant bio-availability poses a high ecological risk. A quantitative approach was made through the use of different risk indices, namely the enrichment factor, geo-accumulation index and pollution load index. Factor analysis indicates that Fe-Mn oxides/hydroxides seem to play an important role in scavenging metals in the riverine zone, organic precipitation and adsorption to fine silt and clay particles dominate in the estuarine zone, while co-precipitation with Fe could be the controlling mechanism in the coastal zone. Canonical discriminant function analysis was highly successful in discriminating the groups as predicted.

  1. Virtual Reality Simulation as a Tool to Monitor Surgical Performance Indicators: VIRESI Observational Study.

    Science.gov (United States)

    Muralha, Nuno; Oliveira, Manuel; Ferreira, Maria Amélia; Costa-Maia, José

    2017-05-31

    Virtual reality simulation is a topic of discussion as a complementary tool to traditional laparoscopic surgical training in the operating room. However, it is unclear whether virtual reality training can have an impact on the surgical performance of advanced laparoscopic procedures. Our objective was to assess the ability of the virtual reality simulator LAP Mentor to identify and quantify changes in surgical performance indicators after LAP Mentor training for digestive anastomosis. Twelve surgeons from Centro Hospitalar de São João in Porto (Portugal) performed two sessions of advanced task 5: anastomosis in LAP Mentor, before and after completing the tutorial, and were evaluated on 34 surgical performance indicators. The results show that six surgical performance indicators changed significantly after LAP Mentor training; in particular, the surgeons performed the task significantly faster, as the median 'total time' was significantly reduced. These findings support the use of virtual reality training simulation as a benchmark tool to assess the surgical performance of Portuguese surgeons. LAP Mentor is able to identify variations in surgical performance indicators of digestive anastomosis.

  2. Particle–Mixing Simulations Using DEM and Comparison of the Performance of Mixing Indices

    International Nuclear Information System (INIS)

    Cho, Migyung

    2017-01-01

    Mixing of molecular grains having different characteristics is very important in many industries such as the food and pharmaceutical industries. With the development of computer simulations, it is common practice to find the optimal mixing conditions through a simulation before the actual mixing task to estimate the proper level of mixing. Accordingly, there has been an increasing need for a mixing index to measure the mix of particles in the simulation process. Mixing indices, which have been widely used so far, can largely be classified into two types: first is the statistical-based mixing index, which is prepared using the sampling method, and the second is the mixing index that is prepared using all the particles. In this paper, we calculated mixing indices in different ways for the data in the course of mixing the particles using the DEM simulation. Additionally, we compared the performance, advantages, and disadvantages of each mixing index. Therefore, I propose a standard that can be used to select an appropriate mixing index.

  3. Particle–Mixing Simulations Using DEM and Comparison of the Performance of Mixing Indices

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Migyung [Tongmyong Univ., Busan (Korea, Republic of)

    2017-02-15

    Mixing of molecular grains having different characteristics is very important in many industries such as the food and pharmaceutical industries. With the development of computer simulations, it is common practice to find the optimal mixing conditions through a simulation before the actual mixing task to estimate the proper level of mixing. Accordingly, there has been an increasing need for a mixing index to measure the mix of particles in the simulation process. Mixing indices, which have been widely used so far, can largely be classified into two types: first is the statistical-based mixing index, which is prepared using the sampling method, and the second is the mixing index that is prepared using all the particles. In this paper, we calculated mixing indices in different ways for the data in the course of mixing the particles using the DEM simulation. Additionally, we compared the performance, advantages, and disadvantages of each mixing index. Therefore, I propose a standard that can be used to select an appropriate mixing index.
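
    As a reference point for the statistical (sampling-based) family of indices mentioned in the two records above, the sketch below computes the Lacey mixing index from particle data such as a DEM snapshot; the binning scheme and the synthetic particle arrays are illustrative assumptions, and this is only one example index, not necessarily one of those compared in the paper.

```python
# Minimal sketch of the Lacey mixing index, a sampling-based measure that
# compares the observed variance of cell concentrations with the variances of
# a fully segregated and a perfectly random mixture.
import numpy as np

def lacey_index(labels, cell_ids):
    """labels: 0/1 particle type; cell_ids: sampling cell of each particle."""
    p = labels.mean()                         # overall fraction of type-1 particles
    cells = np.unique(cell_ids)
    conc = np.array([labels[cell_ids == c].mean() for c in cells])
    n_mean = labels.size / cells.size         # mean particles per sampling cell
    s2 = conc.var()                           # observed variance of cell concentrations
    s2_unmixed = p * (1.0 - p)                # variance of a fully segregated system
    s2_mixed = p * (1.0 - p) / n_mean         # variance of a perfectly random mixture
    return (s2_unmixed - s2) / (s2_unmixed - s2_mixed)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 4000
    labels = (rng.random(n) < 0.5).astype(int)        # two particle species
    x = rng.random(n)                                 # 1D particle positions in [0, 1)
    cell_ids = np.minimum((x * 20).astype(int), 19)   # 20 sampling cells
    print("Lacey index (values near 1 indicate a well-mixed state):",
          round(lacey_index(labels, cell_ids), 3))
```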

  4. Spectral Bio-indicator Simulations for Tracking Photosynthetic Activities in a Corn Field

    Science.gov (United States)

    Cheng, Yen-Ben; Middleton, Elizabeth M.; Huemmrich, K. Fred; Zhang, Qingyuan; Corp, Lawrence; Campbell, Petya; Kustas, William

    2011-01-01

    Accurate assessment of vegetation canopy optical properties plays a critical role in monitoring natural and managed ecosystems under environmental changes. In this context, radiative transfer (RT) models simulating vegetation canopy reflectance have been demonstrated to be a powerful tool for understanding and estimating spectral bio-indicators. In this study, two narrow-band spectroradiometers were used to acquire observations over corn canopies for two summers. These in situ spectral data were then used to validate a two-layer Markov chain-based canopy reflectance model for simulating the Photochemical Reflectance Index (PRI), which has been widely used in recent studies of vegetation photosynthetic light use efficiency (LUE). The in situ PRI derived from narrow-band hyperspectral reflectance exhibited clear responses to: 1) viewing geometry, which affects the sampled light environment; and 2) seasonal variation corresponding to the growth stage. The RT model (ACRM) successfully simulated the responses to the variable viewing geometry. The best simulations were obtained when the model was run in two-layer mode using the sunlit leaves as the upper layer and shaded leaves as the lower layer. Simulated PRI values yielded much better correlations with in situ observations when the cornfield was dominated by green foliage during the early growth, vegetative and reproductive stages (r = 0.78 to 0.86) than in the later senescent stage (r = 0.65). Further sensitivity analyses were conducted to show the important influences of leaf area index (LAI) and the sunlit/shaded leaf ratio on PRI observations.

  5. Synthetic Aperture Sequential Beamforming

    DEFF Research Database (Denmark)

    Kortbek, Jacob; Jensen, Jørgen Arendt; Gammelmark, Kim Løkke

    2008-01-01

    A synthetic aperture focusing (SAF) technique denoted Synthetic Aperture Sequential Beamforming (SASB), suitable for 2D and 3D imaging, is presented. The technique differs from prior SAF approaches in that SAF is performed on pre-beamformed data rather than on channel data. The objective is to improve and obtain a more range-independent lateral resolution compared to conventional dynamic receive focusing (DRF) without compromising frame rate. SASB is a two-stage procedure using two separate beamformers. First, a set of B-mode image lines using a single focal point in both transmit and receive is stored. The second stage applies the focused image lines from the first stage as input data. The SASB method has been investigated using simulations in Field II and by off-line processing of data acquired with a commercial scanner. The performance of SASB with a static image object is compared with DRF...

  6. Spectral indices of cardiovascular adaptations to short-term simulated microgravity exposure

    Science.gov (United States)

    Patwardhan, A. R.; Evans, J. M.; Berk, M.; Grande, K. J.; Charles, J. B.; Knapp, C. F.

    1995-01-01

    We investigated the effects of exposure to microgravity on the baseline autonomic balance in cardiovascular regulation using spectral analysis of cardiovascular variables measured during supine rest. Heart rate, arterial pressure, radial flow, thoracic fluid impedance and central venous pressure were recorded from nine volunteers before and after simulated microgravity, produced by 20 hours of 6 degrees head down bedrest plus furosemide. Spectral powers increased after simulated microgravity in the low frequency region (centered at about 0.03 Hz) in arterial pressure, heart rate and radial flow, and decreased in the respiratory frequency region (centered at about 0.25 Hz) in heart rate. Reduced heart rate power in the respiratory frequency region indicates reduced parasympathetic influence on the heart. A concurrent increase in the low frequency power in arterial pressure, heart rate, and radial flow indicates increased sympathetic influence. These results suggest that the baseline autonomic balance in cardiovascular regulation is shifted towards increased sympathetic and decreased parasympathetic influence after exposure to short-term simulated microgravity.
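
    The spectral indices reported above are band powers of cardiovascular time series. The sketch below estimates power in a low-frequency band (around 0.03 Hz) and a respiratory band (around 0.25 Hz) from a Welch periodogram; the synthetic heart-rate series, band edges, and sampling rate are illustrative assumptions only.

```python
# Minimal sketch of a band-power computation of the kind behind LF/respiratory
# spectral indices of an evenly resampled cardiovascular signal.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, f_lo, f_hi):
    """Integrate the Welch power spectral density over [f_lo, f_hi]."""
    f, pxx = welch(signal, fs=fs, nperseg=min(len(signal), 1024))
    mask = (f >= f_lo) & (f <= f_hi)
    return np.sum(pxx[mask]) * (f[1] - f[0])   # rectangle-rule integral over the band

if __name__ == "__main__":
    fs = 4.0                                   # Hz, evenly resampled heart-rate series
    t = np.arange(0, 600, 1.0 / fs)            # 10 minutes of supine rest
    rng = np.random.default_rng(3)
    hr = (70
          + 2.0 * np.sin(2 * np.pi * 0.03 * t)   # low-frequency oscillation (~0.03 Hz)
          + 1.0 * np.sin(2 * np.pi * 0.25 * t)   # respiratory-band oscillation (~0.25 Hz)
          + rng.normal(0, 0.5, t.size))
    lf = band_power(hr, fs, 0.02, 0.06)
    rf = band_power(hr, fs, 0.15, 0.40)
    print(f"low-frequency power: {lf:.2f}, respiratory-band power: {rf:.2f}")
```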

  7. Changes in thermo-tolerance and survival under simulated gastrointestinal conditions of Salmonella Enteritidis PT4 and Salmonella Typhimurium PT4 in chicken breast meat after exposure to sequential stresses.

    Science.gov (United States)

    Melo, Adma Nadja Ferreira de; Souza, Geany Targino de; Schaffner, Donald; Oliveira, Tereza C Moreira de; Maciel, Janeeyre Ferreira; Souza, Evandro Leite de; Magnani, Marciane

    2017-06-19

    This study assessed changes in thermo-tolerance and the capability to survive simulated gastrointestinal conditions of Salmonella Enteritidis PT4 and Salmonella Typhimurium PT4 inoculated in chicken breast meat following exposure to stresses (cold, acid and osmotic) commonly imposed during food processing. The effects of the stress imposed by exposure to oregano (Origanum vulgare L.) essential oil (OVEO) on thermo-tolerance were also assessed. After exposure to cold stress (5°C for 5 h) in chicken breast meat, the test strains were sequentially exposed to the different stressing substances (lactic acid, NaCl or OVEO) at sub-lethal amounts, which were defined considering previously determined minimum inhibitory concentrations, and finally to thermal treatment (55°C for 30 min). Resistant cells from distinct sequential treatments were exposed to simulated gastrointestinal conditions. The exposure to cold stress did not result in increased tolerance to acid stress (lactic acid: 5 and 2.5 μL/g) for either strain. Cells of S. Typhimurium PT4 and S. Enteritidis PT4 previously exposed to acid stress showed significantly higher thermo-tolerance in both strains. The cells that survived the sequential stress exposure (resistant) showed significantly higher tolerance under gastrointestinal conditions, indicating that exposure to sequential stresses can increase thermo-tolerance and enhance the survival under gastrointestinal conditions of S. Enteritidis PT4 and S. Typhimurium PT4. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. [Subjective sensations indicating simulator sickness and fatigue after exposure to virtual reality].

    Science.gov (United States)

    Malińska, Marzena; Zuzewicz, Krystyna; Bugajska, Joanna; Grabowski, Andrzej

    2014-01-01

    The study assessed the incidence and intensity of subjective symptoms indicating simulator sickness among persons with no inclination to motion sickness, immersed in virtual reality (VR) by watching an hour-long movie in stereoscopic (three-dimensional, 3D) and non-stereoscopic (two-dimensional, 2D) versions and after an hour-long training session using virtual reality (sVR). The sample comprised 20 healthy young men with no inclination to motion sickness. The participants' subjective sensations indicating symptoms of simulator sickness were assessed using a questionnaire completed by the participants immediately, 20 min and 24 h after the test. Grandjean's scale was used to assess fatigue and mood. The symptoms were observed immediately after the exposure to sVR. Their intensity was higher than after watching the 2D and 3D movies. A significant relationship was found between eye pain and the type of exposure (2D, 3D and sVR) (χ²(2) = 6.225, p ≤ 0.05); a relationship between excessive perspiration and exposure to the 3D movie and sVR was also noted (χ²(1) = 9.173, p ≤ 0.01). Some symptoms were still observed 20 min after exposure to sVR. The comparison of Grandjean's scale results before and after the sVR training showed significant differences in 11 out of 14 subscales. Before and after exposure to the 3D movie, the differences were significant only for the "tired-fatigued" subscale (Z = 2.501, p ≤ 0.012), in favor of "fatigued". Based on the subjective sensation of discomfort after watching 2D and 3D movies, it is impossible to predict symptoms of simulator sickness after training using sVR.

  9. Simulations indicate that scores of lionfish (Pterois volitans) colonized the Atlantic Ocean

    Directory of Open Access Journals (Sweden)

    Jason D. Selwyn

    2017-12-01

    The invasion of the western Atlantic Ocean by the Indo-Pacific red lionfish (Pterois volitans) has had devastating consequences for marine ecosystems. Estimating the number of colonizing lionfish can be useful in identifying the introduction pathway and can inform policy decisions aimed at preventing similar invasions. It is well-established that at least ten lionfish were initially introduced. However, that estimate has not faced probabilistic scrutiny and is based solely on the number of haplotypes in the maternally-inherited mitochondrial control region. To rigorously estimate the number of lionfish that were introduced, we used a forward-time, Wright-Fisher, population genetic model in concert with a demographic, life-history model to simulate the invasion across a range of source population sizes and colonizing population fecundities. Assuming a balanced sex ratio and no Allee effects, the simulations indicate that the Atlantic population was founded by 118 (54–514, 95% HPD) lionfish from the Indo-Pacific, the Caribbean by 84 (22–328, 95% HPD) lionfish from the Atlantic, and the Gulf of Mexico by at least 114 (no upper bound on 95% HPD) lionfish from the Caribbean. Increasing the size, and therefore diversity, of the Indo-Pacific source population and the fecundity of the founding population caused the number of colonists to decrease, but with rapidly diminishing returns. When the simulation was parameterized to minimize the number of colonists (high θ and relative fecundity), 96 (48–216, 95% HPD) colonists were most likely. In a more realistic scenario with Allee effects (e.g., a 50% reduction in fecundity) plaguing the colonists, the most likely number of lionfish increased to 272 (106–950, 95% HPD). These results, in combination with other published data, support the hypothesis that lionfish were introduced to the Atlantic via the aquarium trade, rather than shipping. When building the model employed here, we made assumptions that minimize

  10. Simulations indicate that scores of lionfish (Pterois volitans) colonized the Atlantic Ocean.

    Science.gov (United States)

    Selwyn, Jason D; Johnson, John E; Downey-Wall, Alan M; Bynum, Adam M; Hamner, Rebecca M; Hogan, J Derek; Bird, Christopher E

    2017-01-01

    The invasion of the western Atlantic Ocean by the Indo-Pacific red lionfish ( Pterois volitans ) has had devastating consequences for marine ecosystems. Estimating the number of colonizing lionfish can be useful in identifying the introduction pathway and can inform policy decisions aimed at preventing similar invasions. It is well-established that at least ten lionfish were initially introduced. However, that estimate has not faced probabilistic scrutiny and is based solely on the number of haplotypes in the maternally-inherited mitochondrial control region. To rigorously estimate the number of lionfish that were introduced, we used a forward-time, Wright-Fisher, population genetic model in concert with a demographic, life-history model to simulate the invasion across a range of source population sizes and colonizing population fecundities. Assuming a balanced sex ratio and no Allee effects, the simulations indicate that the Atlantic population was founded by 118 (54-514, 95% HPD) lionfish from the Indo-Pacific, the Caribbean by 84 (22-328, 95% HPD) lionfish from the Atlantic, and the Gulf of Mexico by at least 114 (no upper bound on 95% HPD) lionfish from the Caribbean. Increasing the size, and therefore diversity, of the Indo-Pacific source population and fecundity of the founding population caused the number of colonists to decrease, but with rapidly diminishing returns. When the simulation was parameterized to minimize the number of colonists (high θ and relative fecundity), 96 (48-216, 95% HPD) colonists were most likely. In a more realistic scenario with Allee effects (e.g., 50% reduction in fecundity) plaguing the colonists, the most likely number of lionfish increased to 272 (106-950, 95% HPD). These results, in combination with other published data, support the hypothesis that lionfish were introduced to the Atlantic via the aquarium trade, rather than shipping. When building the model employed here, we made assumptions that minimize the number of
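
    A minimal sketch of the forward-time Wright-Fisher idea used in the study: start a colonizing population with F founders carrying distinct (e.g., mitochondrial) haplotypes, let the population grow with random reproduction, and count how many haplotypes survive drift. The growth rule, population cap, and founder number below are illustrative assumptions, not the authors' calibrated demographic and genetic model.

```python
# Minimal forward-time Wright-Fisher sketch: haplotype survival after a
# founding event, under random reproduction and simple geometric growth.
import numpy as np

def surviving_haplotypes(n_founders, generations=100, growth=1.5, cap=100_000, seed=None):
    rng = np.random.default_rng(seed)
    pop = np.arange(n_founders)                           # each founder has a unique haplotype
    for _ in range(generations):
        n_next = min(int(len(pop) * growth) + 1, cap)     # capped geometric growth
        pop = rng.choice(pop, size=n_next, replace=True)  # offspring sample parental haplotypes
    return np.unique(pop).size

if __name__ == "__main__":
    founders = 120
    counts = [surviving_haplotypes(founders, seed=s) for s in range(20)]
    print(f"{founders} founders -> surviving haplotype counts over 20 runs: {sorted(counts)}")
```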

  11. Sequential interactions, in which one player plays first and another responds, promote cooperation in evolutionary-dynamical simulations of single-shot Prisoner's Dilemma and Snowdrift games.

    Science.gov (United States)

    Laird, Robert A

    2018-05-21

    Cooperation is a central topic in evolutionary biology because (a) it is difficult to reconcile why individuals would act in a way that benefits others if such action is costly to themselves, and (b) it underpins many of the 'major transitions of evolution', making it essential for explaining the origins of successively higher levels of biological organization. Within evolutionary game theory, the Prisoner's Dilemma and Snowdrift games are the main theoretical constructs used to study the evolution of cooperation in dyadic interactions. In single-shot versions of these games, wherein individuals play each other only once, players typically act simultaneously rather than sequentially. Allowing one player to respond to the actions of its co-player (in the absence of any possibility of the responder being rewarded for cooperation or punished for defection, as in simultaneous or sequential iterated games) may seem to invite more incentive for exploitation and retaliation in single-shot games, compared to when interactions occur simultaneously, thereby reducing the likelihood that cooperative strategies can thrive. To the contrary, I use lattice-based, evolutionary-dynamical simulation models of single-shot games to demonstrate that under many conditions, sequential interactions have the potential to enhance unilaterally or mutually cooperative outcomes and increase the average payoff of populations, relative to simultaneous interactions, benefits that are especially prevalent in a spatially explicit context. This surprising result is attributable to the presence of conditional strategies that emerge in sequential games that can't occur in the corresponding simultaneous versions. Copyright © 2018 Elsevier Ltd. All rights reserved.
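
    A minimal sketch in the spirit of the lattice models described above: agents on a grid play a single-shot Prisoner's Dilemma with each neighbor, a randomly chosen player moves first, the other responds with a conditional rule, and agents then imitate their best-performing neighbor. The payoff values, lattice size, and imitation update are illustrative assumptions, not the author's exact model.

```python
# Minimal spatial, sequential, single-shot Prisoner's Dilemma with conditional
# responder strategies and best-neighbor imitation.
import numpy as np

R, S, T, P = 3.0, 0.0, 5.0, 1.0          # standard PD payoff ordering T > R > P > S
rng = np.random.default_rng(7)

def payoff(a, b):                         # a, b in {1: cooperate, 0: defect}
    return (R if b else S) if a else (T if b else P)

def play_sequential(s1, s2):
    """Strategies are (first_move, reply_to_C, reply_to_D); returns both payoffs."""
    first = s1[0]
    reply = s2[1] if first else s2[2]
    return payoff(first, reply), payoff(reply, first)

def step(grid):
    n = grid.shape[0]
    score = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            for di, dj in ((0, 1), (1, 0)):                # each lattice edge handled once
                k, l = (i + di) % n, (j + dj) % n
                a, b = ((i, j), (k, l)) if rng.random() < 0.5 else ((k, l), (i, j))
                pa, pb = play_sequential(grid[a], grid[b])  # a moves first, b responds
                score[a] += pa
                score[b] += pb
    new = grid.copy()
    for i in range(n):
        for j in range(n):
            nbrs = [(i, j), ((i + 1) % n, j), ((i - 1) % n, j),
                    (i, (j + 1) % n), (i, (j - 1) % n)]
            best = max(nbrs, key=lambda ij: score[ij])      # imitate the best-scoring neighbor
            new[i, j] = grid[best]
    return new

if __name__ == "__main__":
    size = 30
    grid = rng.integers(0, 2, size=(size, size, 3))  # random (first_move, reply_to_C, reply_to_D)
    for _ in range(50):
        grid = step(grid)
    print("fraction cooperating as first mover:", grid[:, :, 0].mean())
```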

  12. Adaptive sequential controller

    Energy Technology Data Exchange (ETDEWEB)

    El-Sharkawi, Mohamed A. (Renton, WA); Xing, Jian (Seattle, WA); Butler, Nicholas G. (Newberg, OR); Rodriguez, Alonso (Pasadena, CA)

    1994-01-01

    An adaptive sequential controller (50/50') for controlling a circuit breaker (52) or other switching device to substantially eliminate transients on a distribution line caused by closing and opening the circuit breaker. The device adaptively compensates for changes in the response time of the circuit breaker due to aging and environmental effects. A potential transformer (70) provides a reference signal corresponding to the zero crossing of the voltage waveform, and a phase shift comparator circuit (96) compares the reference signal to the time at which any transient was produced when the circuit breaker closed, producing a signal indicative of the adaptive adjustment that should be made. Similarly, in controlling the opening of the circuit breaker, a current transformer (88) provides a reference signal that is compared against the time at which any transient is detected when the circuit breaker last opened. An adaptive adjustment circuit (102) produces a compensation time that is appropriately modified to account for changes in the circuit breaker response, including the effect of ambient conditions and aging. When next opened or closed, the circuit breaker is activated at an appropriately compensated time, so that it closes when the voltage crosses zero and opens when the current crosses zero, minimizing any transients on the distribution line. Phase angle can be used to control the opening of the circuit breaker relative to the reference signal provided by the potential transformer.

  13. Adaptive sequential controller

    Science.gov (United States)

    El-Sharkawi, Mohamed A.; Xing, Jian; Butler, Nicholas G.; Rodriguez, Alonso

    1994-01-01

    An adaptive sequential controller (50/50') for controlling a circuit breaker (52) or other switching device to substantially eliminate transients on a distribution line caused by closing and opening the circuit breaker. The device adaptively compensates for changes in the response time of the circuit breaker due to aging and environmental effects. A potential transformer (70) provides a reference signal corresponding to the zero crossing of the voltage waveform, and a phase shift comparator circuit (96) compares the reference signal to the time at which any transient was produced when the circuit breaker closed, producing a signal indicative of the adaptive adjustment that should be made. Similarly, in controlling the opening of the circuit breaker, a current transformer (88) provides a reference signal that is compared against the time at which any transient is detected when the circuit breaker last opened. An adaptive adjustment circuit (102) produces a compensation time that is appropriately modified to account for changes in the circuit breaker response, including the effect of ambient conditions and aging. When next opened or closed, the circuit breaker is activated at an appropriately compensated time, so that it closes when the voltage crosses zero and opens when the current crosses zero, minimizing any transients on the distribution line. Phase angle can be used to control the opening of the circuit breaker relative to the reference signal provided by the potential transformer.

  14. Sequential charged particle reaction

    International Nuclear Information System (INIS)

    Hori, Jun-ichi; Ochiai, Kentaro; Sato, Satoshi; Yamauchi, Michinori; Nishitani, Takeo

    2004-01-01

    The effective cross sections for producing the sequential reaction products in F82H, pure vanadium and LiF for 14.9-MeV neutrons were obtained and compared with estimated values. Since the sequential reactions depend on the behavior of the secondary charged particles, the effective cross sections depend on the target nuclei and the material composition. The effective cross sections were also estimated using the EAF libraries and compared with the experimental values, and large discrepancies were found between the estimated and experimental values. Additionally, we show the contribution of the sequential reactions to the induced activity and dose rate in the boundary region with water. From the present study, it has been clarified that the sequential reactions are of great importance for evaluating the dose rates around the surface of cooling pipes and the activated corrosion products. (author)

  15. Selecting Policy Indicators and Developing Simulation Models for the National School Lunch and Breakfast Programs (Summary)

    OpenAIRE

    Lisa Dragoset; Anne Gordon

    2010-01-01

    This brief describes exploratory work to develop a simulation model to predict the potential implications of changes that may be coming in policies and practices related to school meals and school food environments.

  16. Restrained Proton Indicator in Combined Quantum-Mechanics/Molecular-Mechanics Dynamics Simulations of Proton Transfer through a Carbon Nanotube.

    Science.gov (United States)

    Duster, Adam W; Lin, Hai

    2017-09-14

    Recently, a collective variable "proton indicator" was proposed for tracking an excess proton solvated in bulk water in molecular dynamics simulations. In this work, we demonstrate the feasibility of utilizing the position of this proton indicator as a reaction coordinate to model an excess proton migrating through a hydrophobic carbon nanotube in combined quantum-mechanics/molecular-mechanics simulations. Our results indicate that applying a harmonic restraint to the proton indicator in the bulk solvent near the nanotube pore entrance leads to the recruitment of water molecules into the pore. This is consistent with an earlier study that employed a multistate empirical valence bond potential and a different representation (center of excess charge) of the proton. We attribute this water recruitment to the delocalized nature of the solvated proton, which prefers to be in high-dielectric bulk solvent. While water recruitment into the pore is considered an artifact in the present simulations (because of the artificially imposed restraint on the proton), if the proton were naturally restrained, it could assist in building water wires prior to proton transfer through the pore. The potential of mean force for a proton translocation through the water-filled pore was computed by umbrella sampling, where the bias potentials were applied to the proton indicator. The free energy curve and barrier heights agree reasonably with those in the literature. The results suggest that the proton indicator can be used as a reaction coordinate in simulations of proton transport in confined environments.

  17. Sequential boundaries approach in clinical trials with unequal allocation ratios

    Directory of Open Access Journals (Sweden)

    Ayatollahi Seyyed

    2006-01-01

    Background: In clinical trials, both unequal randomization designs and sequential analyses have ethical and economic advantages. In the single-stage design (SSD), however, if the sample size is not adjusted based on unequal randomization, the power of the trial will decrease, whereas with sequential analysis the power will always remain constant. Our aim was to compare the sequential boundaries approach with the SSD when the allocation ratio (R) was not equal. Methods: We evaluated the influence of R, the ratio of the patients in the experimental group to the standard group, on the statistical properties of two-sided tests, including the two-sided single triangular test (TT), the double triangular test (DTT) and the SSD, by multiple simulations. The average sample size numbers (ASNs) and power (1-β) were evaluated for all tests. Results: Our simulation study showed that choosing R = 2 instead of R = 1 increases the sample size of the SSD by 12% and the ASN of the TT and DTT by the same proportion. Moreover, when R = 2, compared to the adjusted SSD, using the TT or DTT allows one to retrieve the well-known reductions of ASN observed when R = 1, compared to the SSD. In addition, when R = 2, compared to the SSD, using the TT and DTT yields smaller reductions of ASN than when R = 1, but maintains the power of the test at its planned value. Conclusion: This study indicates that when the allocation ratio is not equal among the treatment groups, sequential analysis could indeed serve as a compromise between ethicists, economists and statisticians.

  18. The Bacterial Sequential Markov Coalescent.

    Science.gov (United States)

    De Maio, Nicola; Wilson, Daniel J

    2017-05-01

    Bacteria can exchange and acquire new genetic material from other organisms directly and via the environment. This process, known as bacterial recombination, has a strong impact on the evolution of bacteria, for example, leading to the spread of antibiotic resistance across clades and species, and to the avoidance of clonal interference. Recombination hinders phylogenetic and transmission inference because it creates patterns of substitutions (homoplasies) inconsistent with the hypothesis of a single evolutionary tree. Bacterial recombination is typically modeled as statistically akin to gene conversion in eukaryotes, i.e. , using the coalescent with gene conversion (CGC). However, this model can be very computationally demanding as it needs to account for the correlations of evolutionary histories of even distant loci. So, with the increasing popularity of whole genome sequencing, the need has emerged for a faster approach to model and simulate bacterial genome evolution. We present a new model that approximates the coalescent with gene conversion: the bacterial sequential Markov coalescent (BSMC). Our approach is based on a similar idea to the sequential Markov coalescent (SMC)-an approximation of the coalescent with crossover recombination. However, bacterial recombination poses hurdles to a sequential Markov approximation, as it leads to strong correlations and linkage disequilibrium across very distant sites in the genome. Our BSMC overcomes these difficulties, and shows a considerable reduction in computational demand compared to the exact CGC, and very similar patterns in simulated data. We implemented our BSMC model within new simulation software FastSimBac. In addition to the decreased computational demand compared to previous bacterial genome evolution simulators, FastSimBac provides more general options for evolutionary scenarios, allowing population structure with migration, speciation, population size changes, and recombination hotspots. FastSimBac is

  19. Production of DagA and ethanol by sequential utilization of sugars in a mixed-sugar medium simulating microalgal hydrolysate.

    Science.gov (United States)

    Park, Juyi; Hong, Soon-Kwang; Chang, Yong Keun

    2015-09-01

    A novel two-step fermentation process using a mixed-sugar medium mimicking microalgal hydrolysate has been proposed to avoid glucose repression and thus to maximize substrate utilization efficiency. When DagA, a β-agarase, was produced in one step in the mixed-sugar medium by a recombinant Streptomyces lividans, glucose was found to have negative effects on the consumption of the other sugars and on DagA biosynthesis, causing low substrate utilization efficiency and low DagA productivity. To overcome these difficulties, a new strategy of sequential substrate utilization was developed. In the first step, glucose was consumed by Saccharomyces cerevisiae together with galactose and mannose, producing ethanol, after which DagA was produced from the remaining sugars of xylose, rhamnose and ribose. Fucose was not consumed. By adopting this two-step process, the overall substrate utilization efficiency was increased approximately 3-fold with a nearly 2-fold improvement in DagA production, along with the additional benefit of ethanol production. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Interpreting ecological diversity indices applied to terminal restriction fragment length polymorphism data: insights from simulated microbial communities.

    Science.gov (United States)

    Blackwood, Christopher B; Hudleston, Deborah; Zak, Donald R; Buyer, Jeffrey S

    2007-08-01

    Ecological diversity indices are frequently applied to molecular profiling methods, such as terminal restriction fragment length polymorphism (T-RFLP), in order to compare diversity among microbial communities. We performed simulations to determine whether diversity indices calculated from T-RFLP profiles could reflect the true diversity of the underlying communities despite potential analytical artifacts. These include multiple taxa generating the same terminal restriction fragment (TRF) and rare TRFs being excluded by a relative abundance (fluorescence) threshold. True community diversity was simulated using the lognormal species abundance distribution. Simulated T-RFLP profiles were generated by assigning each species a TRF size based on an empirical or modeled TRF size distribution. With a typical threshold (1%), the only consistently useful relationship was between Smith and Wilson evenness applied to T-RFLP data (TRF-E(var)) and true Shannon diversity (H'), with correlations between 0.71 and 0.81. TRF-H' and true H' were well correlated in the simulations using the lowest number of species, but this correlation declined substantially in simulations using greater numbers of species, to the point where TRF-H' cannot be considered a useful statistic. The relationships between TRF diversity indices and true indices were sensitive to the relative abundance threshold, with greatly improved correlations observed using a 0.1% threshold, which was investigated for comparative purposes but is not possible to consistently achieve with current technology. In general, the use of diversity indices on T-RFLP data provides inaccurate estimates of true diversity in microbial communities (with the possible exception of TRF-E(var)). We suggest that, where significant differences in T-RFLP diversity indices were found in previous work, these should be reinterpreted as a reflection of differences in community composition rather than a true difference in community diversity.
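
    For reference, the two indices discussed above, Shannon diversity (H') and Smith and Wilson evenness (Evar), can be computed from a TRF profile as sketched below; the example peak areas and the 1% relative-abundance threshold are illustrative assumptions.

```python
# Minimal sketch of Shannon diversity (H') and Smith & Wilson evenness (Evar)
# computed from relative abundances such as TRF peak areas.
import numpy as np

def shannon(abundances):
    p = np.asarray(abundances, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))             # H' = -sum p_i ln p_i

def evar(abundances):
    x = np.asarray(abundances, dtype=float)
    x = x[x > 0]
    ln_x = np.log(x)
    v = np.mean((ln_x - ln_x.mean()) ** 2)     # variance of log abundances
    return 1.0 - (2.0 / np.pi) * np.arctan(v)  # Smith & Wilson (1996) evenness

if __name__ == "__main__":
    peak_areas = np.array([40.0, 25.0, 12.0, 8.0, 6.0, 4.0, 3.0, 1.2, 0.5, 0.3])
    rel = peak_areas / peak_areas.sum()
    kept = peak_areas[rel >= 0.01]             # apply a 1% relative-abundance threshold
    print(f"TRF-H' = {shannon(kept):.3f}, TRF-Evar = {evar(kept):.3f}")
```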

  1. Sequential stochastic optimization

    CERN Document Server

    Cairoli, Renzo

    1996-01-01

    Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. This book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-paramet

  2. Indicators of Simulated Driving Skills in Adolescents with Attention Deficit Hyperactivity Disorder

    Directory of Open Access Journals (Sweden)

    Sherrilene Classen PhD, MPH, OTR/L, FAOTA

    2014-01-01

    Adolescents with attention deficit hyperactivity disorder (ADHD) have an increased risk of committing traffic violations, and they are four times more likely than neurotypical peers to be involved in a crash, making them a potentially high-risk group for driving. We used a two-group design to measure differences in demographics, clinical off-road tests, and fitness-to-drive abilities in a driving simulator with nine adolescents with ADHD (mean age = 15.00, SD ± 1.00) compared to 22 healthy controls (HC) (mean age = 14.32, SD ± 0.716), as evaluated by an Occupational Therapist Certified Driving Rehabilitation Specialist (OT-CDRS). Despite few demographic differences, the adolescents with ADHD performed worse than the HC on tests of right visual acuity (F = 5.92, p = .036), right peripheral field (F = 6.85, p = .019), selective attention (U = 53.00, p = .046), and motor coordination (U = 53.00, p = .046). The ADHD group made more visual scanning (U = 52.50, p = .041), speed regulation (U = 28.00, p = .001), and total driving errors (U = 32.50, p = .003) on the simulator. Adolescents with ADHD performed worse on tests measuring visual, cognitive, motor, and pre-driving skills, and on a driving simulator. They may require the services of an OT-CDRS to determine their fitness-to-drive abilities prior to being referred for driver's education.

  3. Sequential memory: Binding dynamics

    Science.gov (United States)

    Afraimovich, Valentin; Gong, Xue; Rabinovich, Mikhail

    2015-10-01

    Temporal order memories are critical for everyday animal and human functioning. Experiments and our own experience show that the binding or association of various features of an event together and the maintaining of multimodality events in sequential order are the key components of any sequential memories—episodic, semantic, working, etc. We study the robustness of binding sequential dynamics based on our previously introduced model in the form of generalized Lotka-Volterra equations. In the phase space of the model, there exists a multi-dimensional binding heteroclinic network consisting of saddle equilibrium points and heteroclinic trajectories joining them. We prove here the robustness of the binding sequential dynamics, i.e., the feasibility phenomenon for coupled heteroclinic networks: for each collection of successive heteroclinic trajectories inside the unified networks, there is an open set of initial points such that the trajectory going through each of them follows the prescribed collection staying in a small neighborhood of it. We also show that the symbolic complexity function of the system restricted to this neighborhood is a polynomial of degree L - 1, where L is the number of modalities.
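
    As a rough illustration of the dynamics underlying such models, the sketch below integrates generalized Lotka-Volterra equations whose asymmetric inhibition matrix produces sequential, heteroclinic-like switching between saddle states. The network size, coupling values and integration scheme are illustrative assumptions, not the authors' binding network.

      import numpy as np

      n = 5                               # number of competing elements (modalities)
      sigma = np.ones(n)                  # growth rates
      rho = 1.5 * np.ones((n, n))         # strong mutual inhibition ...
      np.fill_diagonal(rho, 1.0)          # ... self-interaction normalized to 1
      for i in range(n):
          rho[(i + 1) % n, i] = 0.5       # successor only weakly inhibited, so
                                          # activity passes from element i to i+1

      a = np.full(n, 0.01)
      a[0] = 0.5                          # start near the first saddle
      dt, winners = 0.01, []
      for step in range(20000):
          a = a + dt * a * (sigma - rho @ a)   # da_i/dt = a_i (sigma_i - sum_j rho_ij a_j)
          a = np.maximum(a, 1e-10)             # small floor keeps the cycle from stalling
          if step % 2000 == 0:
              winners.append(int(np.argmax(a)))

      print("dominant element over time:", winners)   # switches through the sequence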

  4. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

  5. Regional climate model simulations indicate limited climatic impacts by operational and planned European wind farms.

    Science.gov (United States)

    Vautard, Robert; Thais, Françoise; Tobin, Isabelle; Bréon, François-Marie; Devezeaux de Lavergne, Jean-Guy; Colette, Augustin; Yiou, Pascal; Ruti, Paolo Michele

    2014-01-01

    The rapid development of wind energy has raised concerns about environmental impacts. Temperature changes are found in the vicinity of wind farms and previous simulations have suggested that large-scale wind farms could alter regional climate. However, assessments of the effects of realistic wind power development scenarios at the scale of a continent are missing. Here we simulate the impacts of current and near-future wind energy production according to European Union energy and climate policies. We use a regional climate model describing the interactions between turbines and the atmosphere, and find limited impacts. A statistically significant signal is only found in winter, with changes within ±0.3 °C and within 0-5% for precipitation. It results from the combination of local wind farm effects and changes due to a weak, but robust, anticyclonic-induced circulation over Europe. However, the impacts remain much weaker than the natural climate interannual variability and changes expected from greenhouse gas emissions.

  6. Global Sensitivity of Simulated Water Balance Indicators Under Future Climate Change in the Colorado Basin

    Science.gov (United States)

    Bennett, Katrina E.; Urrego Blanco, Jorge R.; Jonko, Alexandra; Bohn, Theodore J.; Atchley, Adam L.; Urban, Nathan M.; Middleton, Richard S.

    2018-01-01

    The Colorado River Basin is a fundamentally important river for society, ecology, and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent, and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. We combine global sensitivity analysis with a space-filling Latin Hypercube Sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach. We find that snow-dominated regions are much more sensitive to uncertainties in VIC parameters. Although baseflow and runoff changes respond to parameters used in previous sensitivity studies, we discover new key parameter sensitivities. For instance, changes in runoff and evapotranspiration are sensitive to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI) in the VIC model. It is critical for improved modeling to narrow uncertainty in these parameters through improved observations and field studies. This is important because LAI and albedo are anticipated to change under future climate and narrowing uncertainty is paramount to advance our application of models such as VIC for water resource management.
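
    As a small illustration of the space-filling sampling step mentioned above, the sketch below draws a Latin hypercube sample over a handful of hypothetical parameter ranges (stand-ins such as albedo and LAI, not the study's actual 46-parameter VIC setup); the samples could then feed a model or emulator for variance-based sensitivity analysis.

      import numpy as np

      def latin_hypercube(n_samples, bounds, seed=0):
          """Space-filling Latin hypercube sample; bounds is a list of (low, high)."""
          rng = np.random.default_rng(seed)
          d = len(bounds)
          # one stratum per sample in every dimension, jittered within the stratum
          strata = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
          u = (strata + rng.random((n_samples, d))) / n_samples
          lo = np.array([b[0] for b in bounds])
          hi = np.array([b[1] for b in bounds])
          return lo + u * (hi - lo)

      # hypothetical ranges: albedo, leaf area index, canopy fraction
      samples = latin_hypercube(100, [(0.1, 0.9), (0.5, 6.0), (0.1, 1.0)])
      print(samples.shape, samples.min(axis=0).round(2), samples.max(axis=0).round(2))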

  7. Integration of Tidal Prism Model and HSPF for simulating indicator bacteria in coastal watersheds

    Science.gov (United States)

    Sobel, Rose S.; Rifai, Hanadi S.; Petersen, Christina M.

    2017-09-01

    Coastal water quality is strongly influenced by tidal fluctuations and water chemistry. There is a need for rigorous models that are not computationally or economically prohibitive, but still allow simulation of the hydrodynamics and bacteria sources for coastal, tidally influenced streams and bayous. This paper presents a modeling approach that links a Tidal Prism Model (TPM) implemented in an Excel-based modeling environment with a watershed runoff model (Hydrologic Simulation Program FORTRAN, HSPF) for such watersheds. The TPM is a one-dimensional mass balance approach that accounts for loading from tidal exchange, runoff, point sources and bacteria die-off at an hourly time step resolution. The novel use of equal high-resolution time steps in this study allowed seamless integration of the TPM and HSPF. The linked model was calibrated to flow and E. coli data (for HSPF), and salinity and enterococci data (for the TPM) for a coastal stream in Texas. Sensitivity analyses showed the TPM to be most influenced by changes in net decay rates followed by tidal and runoff loads, respectively. Management scenarios were evaluated with the developed linked models to assess the impact of runoff load reductions and improved wastewater treatment plant quality and to determine the areas of critical need for such reductions. Achieving water quality standards for bacteria required load reductions that ranged from zero to 90% for the modeled coastal stream.
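
    A highly simplified, single-segment sketch of the hourly mass-balance idea described above is given below; the loads, exchange fraction and die-off rate are placeholder values, and the real TPM-HSPF linkage (multiple segments, point sources, calibrated loads) is not reproduced.

      def tidal_prism_step(C, V, Q_runoff, C_runoff, C_sea, alpha, k_decay, dt=1.0):
          """One hourly step of a one-segment tidal prism bacteria mass balance.

          C         concentration in the segment (e.g. MPN/100 mL)
          V         segment volume (m^3), assumed constant between steps
          Q_runoff  watershed inflow (m^3/h) carrying concentration C_runoff
          C_sea     concentration of the incoming tidal water
          alpha     fraction of the segment volume exchanged with the sea per hour
          k_decay   first-order net die-off rate (1/h)
          """
          mass = C * V
          mass += Q_runoff * C_runoff * dt          # runoff load (e.g. from HSPF)
          mass += alpha * V * (C_sea - C) * dt      # tidal exchange
          mass -= k_decay * mass * dt               # first-order die-off
          return mass / (V + Q_runoff * dt)         # dilute by the added runoff volume

      C = 1000.0
      for hour in range(48):
          C = tidal_prism_step(C, V=5e5, Q_runoff=2e3, C_runoff=4000.0,
                               C_sea=50.0, alpha=0.05, k_decay=0.03)
      print(f"concentration after 48 h: {C:.0f}")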

  8. Combined simulation of fatigue crack nucleation and propagation based on a damage indicator

    Directory of Open Access Journals (Sweden)

    M. Springer

    2016-10-01

    Fatigue considerations often distinguish between fatigue crack nucleation and fatigue crack propagation. The current work presents a modeling approach utilizing one Fatigue Damage Indicator to treat both in a unified way. The approach is implemented within the framework of the Finite Element Method. Multiaxial critical plane models with an extended damage accumulation are employed as Fatigue Indicators. Locations of fatigue crack emergence are predicted by these indicators and material degradation is utilized to model local material failure. The cyclic loading is continued on the now degraded structure and the next location prone to material failure is identified and degradation modeled. This way, fatigue crack propagation is represented by an evolving spatial zone of material failure. This propagating damage zone leads to a changing structural response of the pristine structure. By recourse to the Fatigue Damage Indicator a correlation between the number of applied load cycles and the changing structural behavior is established. Finally, the proposed approach is exemplified by cyclic bending experiments in the Low Cycle Fatigue regime.

  9. Utilizing In Situ Directional Hyperspectral Measurements to Validate Bio-Indicator Simulations for a Corn Crop Canopy

    Science.gov (United States)

    Cheng, Yen-Ben; Middleton, Elizabeth M.; Huemmrich, Karl F.; Zhang, Qingyuan; Campbell, Petya K. E.; Corp, Lawrence A.; Russ, Andrew L.; Kustas, William P.

    2010-01-01

    Two radiative transfer canopy models, SAIL and the two-layer Markov-Chain Canopy Reflectance Model (MCRM), were coupled with in situ leaf optical properties to simulate canopy-level spectral band ratio vegetation indices with the focus on the photochemical reflectance index in a cornfield. In situ hyperspectral measurements were made at both leaf and canopy levels. Leaf optical properties were obtained from both sunlit and shaded leaves. Canopy reflectance was acquired for eight different relative azimuth angles (psi) at three different view zenith angles (Theta (sub v)), and later used to validate model outputs. Field observations of photochemical reflectance index (PRI) for sunlit leaves exhibited lower values than shaded leaves, indicating higher light stress. Canopy PRI expressed obvious sensitivity to viewing geometry, as a function of both Theta (sub v) and psi . Overall, simulations from MCRM exhibited better agreements with in situ values than SAIL. When using only sunlit leaves as input, the MCRM-simulated PRI values showed satisfactory correlation and RMSE, as compared to in situ values. However, the performance of the MCRM model was significantly improved after defining a lower canopy layer comprised of shaded leaves beneath the upper sunlit leaf layer. Four other widely used band ratio vegetation indices were also studied and compared with the PRI results. MCRM simulations were able to generate satisfactory simulations for these other four indices when using only sunlit leaves as input; but unlike PRI, adding shaded leaves did not improve the performance of MCRM. These results support the hypothesis that the PRI is sensitive to physiological dynamics while the others detect static factors related to canopy structure. Sensitivity analysis was performed on MCRM in order to better understand the effects of structure related parameters on the PRI simulations. Leaf area index (LAI) showed the most significant impact on MCRM-simulated PRI among the parameters

  10. Indicators of Arctic Sea Ice Bistability in Climate Model Simulations and Observations

    Science.gov (United States)

    2014-09-30

    The work considers processes associated with the ice-albedo feedback and the seasonal melt and growth of sea ice, as well as horizontal climate variations on a global domain, and addresses the possibility that the climate system supports multiple Arctic sea ice states that are relevant for the evolution of sea ice during the next several ...

  11. [Simulation of vegetation indices optimizing under retrieval of vegetation biochemical parameters based on PROSPECT + SAIL model].

    Science.gov (United States)

    Wu, Ling; Liu, Xiang-Nan; Zhou, Bo-Tian; Liu, Chuan-Hao; Li, Lu-Feng

    2012-12-01

    This study analyzed the sensitivities of three vegetation biochemical parameters [chlorophyll content (Cab), leaf water content (Cw), and leaf area index (LAI)] to the changes of canopy reflectance, with the effects of each parameter on the wavelength regions of canopy reflectance considered, and selected three vegetation indices as the optimization comparison targets of cost function. Then, the Cab, Cw, and LAI were estimated, based on the particle swarm optimization algorithm and PROSPECT + SAIL model. The results showed that retrieval efficiency with vegetation indices as the optimization comparison targets of cost function was better than that with all spectral reflectance. The correlation coefficients (R²) between the measured and estimated values of Cab, Cw, and LAI were 90.8%, 95.7%, and 99.7%, and the root mean square errors of Cab, Cw, and LAI were 4.73 μg cm(-2), 0.001 g cm(-2), and 0.08, respectively. It was suggested that adopting vegetation indices as the optimization comparison targets of cost function could effectively improve the efficiency and precision of the retrieval of biochemical parameters based on PROSPECT + SAIL model.

  12. Sequential Power-Dependence Theory

    NARCIS (Netherlands)

    Buskens, Vincent; Rijt, Arnout van de

    2008-01-01

    Existing methods for predicting resource divisions in laboratory exchange networks do not take into account the sequential nature of the experimental setting. We extend network exchange theory by considering sequential exchange. We prove that Sequential Power-Dependence Theory—unlike

  13. Modelling sequentially scored item responses

    NARCIS (Netherlands)

    Akkermans, W.

    2000-01-01

    The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is

  14. Temperature-assisted solute focusing with sequential trap/release zones in isocratic and gradient capillary liquid chromatography: Simulation and experiment

    Science.gov (United States)

    Groskreutz, Stephen R.; Weber, Stephen G.

    2016-01-01

    In this work we characterize the development of a method to enhance temperature-assisted on-column solute focusing (TASF) called two-stage TASF. A new instrument was built to implement two-stage TASF consisting of a linear array of three independent, electronically controlled Peltier devices (thermoelectric coolers, TECs). Samples are loaded onto the chromatographic column with the first two TECs, TEC A and TEC B, cold. In the two-stage TASF approach TECs A and B are cooled during injection. TEC A is heated following sample loading. At some time following TEC A’s temperature rise, TEC B’s temperature is increased from the focusing temperature to a temperature matching that of TEC A. Injection bands are focused twice on-column, first on the initial TEC, e.g. single-stage TASF, then refocused on the second, cold TEC. Our goal is to understand the two-stage TASF approach in detail. We have developed a simple yet powerful digital simulation procedure to model the effect of changing temperature in the two focusing zones on retention, band shape and band spreading. The simulation can predict experimental chromatograms resulting from spatial and temporal temperature programs in combination with isocratic and solvent gradient elution. To assess the two-stage TASF method and the accuracy of the simulation well characterized solutes are needed. Thus, retention factors were measured at six temperatures (25–75 °C) at each of twelve mobile phases compositions (0.05–0.60 acetonitrile/water) for homologs of n-alkyl hydroxylbenzoate esters and n-alkyl p-hydroxyphenones. Simulations accurately reflect experimental results in showing that the two-stage approach improves separation quality. For example, two-stage TASF increased sensitivity for a low retention solute by a factor of 2.2 relative to single-stage TASF and 8.8 relative to isothermal conditions using isocratic elution. Gradient elution results for two-stage TASF were more encouraging. Application of two-stage TASF

  15. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1998-01-01

    We describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon (RS) codes with nonuniform profile. With this scheme decoding with good performance is possible as low as Eb/N0=0.6 dB, which is about 1.25 dB below the signal-to-noise ratio (SNR) that marks the cutoff rate for the full system. Accounting for about 0.45 dB due to the outer codes, sequential decoding takes place at about 1.7 dB below the SNR cutoff rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported...

  16. Sequential use of the STICS crop model and of the MACRO pesticide fate model to simulate pesticides leaching in cropping systems.

    Science.gov (United States)

    Lammoglia, Sabine-Karen; Moeys, Julien; Barriuso, Enrique; Larsbo, Mats; Marín-Benito, Jesús-María; Justes, Eric; Alletto, Lionel; Ubertosi, Marjorie; Nicolardot, Bernard; Munier-Jolain, Nicolas; Mamy, Laure

    2017-03-01

    The current challenge in sustainable agriculture is to introduce new cropping systems to reduce pesticides use in order to reduce ground and surface water contamination. However, it is difficult to carry out in situ experiments to assess the environmental impacts of pesticide use for all possible combinations of climate, crop, and soils; therefore, in silico tools are necessary. The objective of this work was to assess pesticides leaching in cropping systems coupling the performances of a crop model (STICS) and of a pesticide fate model (MACRO). STICS-MACRO has the advantage of being able to simulate pesticides fate in complex cropping systems and to consider some agricultural practices such as fertilization, mulch, or crop residues management, which cannot be accounted for with MACRO. The performance of STICS-MACRO was tested, without calibration, from measurements done in two French experimental sites with contrasted soil and climate properties. The prediction of water percolation and pesticides concentrations with STICS-MACRO was satisfactory, but it varied with the pedoclimatic context. The performance of STICS-MACRO was shown to be similar or better than that of MACRO. The improvement of the simulation of crop growth allowed better estimate of crop transpiration therefore of water balance. It also allowed better estimate of pesticide interception by the crop which was found to be crucial for the prediction of pesticides concentrations in water. STICS-MACRO is a new promising tool to improve the assessment of the environmental risks of pesticides used in cropping systems.

  17. Sequential spatial processes for image analysis

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette); V. Capasso

    2009-01-01

    We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high level vision problems. We focus on the problem of tracking a variable number of moving objects

  19. The effect of a graphical interpretation of a statistic trend indicator (Trigg's Tracking Variable) on the detection of simulated changes.

    Science.gov (United States)

    Kennedy, R R; Merry, A F

    2011-09-01

    Anaesthesia involves processing large amounts of information over time. One task of the anaesthetist is to detect substantive changes in physiological variables promptly and reliably. It has been previously demonstrated that a graphical trend display of historical data leads to more rapid detection of such changes. We examined the effect of a graphical indication of the magnitude of Trigg's Tracking Variable, a simple statistically based trend detection algorithm, on the accuracy and latency of the detection of changes in a micro-simulation. Ten anaesthetists each viewed 20 simulations with four variables displayed as the current value with a simple graphical trend display. Values for these variables were generated by a computer model, and updated every second; after a period of stability a change occurred to a new random value at least 10 units from baseline. In 50% of the simulations an indication of the rate of change was given by a five level graphical representation of the value of Trigg's Tracking Variable. Participants were asked to indicate when they thought a change was occurring. Changes were detected 10.9% faster with the trend indicator present (mean 13.1 [SD 3.1] cycles vs 14.6 [SD 3.4] cycles, 95% confidence interval 0.4 to 2.5 cycles, P = 0.013). There was no difference in accuracy of detection (median with trend detection 97% [interquartile range 95 to 100%], without trend detection 100% [98 to 100%], P = 0.8). We conclude that simple statistical trend detection may speed detection of changes during routine anaesthesia, even when a graphical trend display is present.
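
    For reference, Trigg's Tracking Variable is the exponentially smoothed forecast error divided by the exponentially smoothed absolute error; values near ±1 indicate a sustained trend. The sketch below is a generic illustration only; the smoothing constant, alarm threshold and five-level display binning used in the study are assumptions, not the authors' settings.

      def triggs_tracking_variable(values, alpha=0.2):
          """Yield Trigg's Tracking Variable for a stream of observations.

          TTV = smoothed_error / smoothed_absolute_error, both updated by
          exponential smoothing of the one-step forecast error.
          """
          forecast = values[0]
          smoothed_err, smoothed_abs_err = 0.0, 1e-9   # tiny start value avoids 0/0
          for x in values:
              err = x - forecast
              smoothed_err = alpha * err + (1 - alpha) * smoothed_err
              smoothed_abs_err = alpha * abs(err) + (1 - alpha) * smoothed_abs_err
              forecast += alpha * err                  # exponentially smoothed forecast
              yield smoothed_err / smoothed_abs_err

      readings = [70] * 30 + [70 + 2 * i for i in range(15)]   # stable, then a rising trend
      for t, ttv in enumerate(triggs_tracking_variable(readings)):
          if abs(ttv) > 0.7:                                   # illustrative alarm level
              print(f"trend flagged at sample {t}, TTV = {ttv:.2f}")
              break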

  20. Sequential decay of Reggeons

    International Nuclear Information System (INIS)

    Yoshida, Toshihiro

    1981-01-01

    Probabilities of meson production in the sequential decay of Reggeons, which are formed from the projectile and the target in the hadron-hadron to Reggeon-Reggeon processes, are investigated. It is assumed that pair creation of heavy quarks and simultaneous creation of two antiquark-quark pairs are negligible. The leading-order terms with respect to the ratio of creation probabilities of anti s s to anti u u (anti d d) are calculated. The production cross sections in the target fragmentation region are given in terms of probabilities in the initial decay of the Reggeons and an effect of many-particle production. (author)

  1. Sequential Monte Carlo with Highly Informative Observations

    OpenAIRE

    Del Moral, Pierre; Murray, Lawrence M.

    2014-01-01

    We propose sequential Monte Carlo (SMC) methods for sampling the posterior distribution of state-space models under highly informative observation regimes, a situation in which standard SMC methods can perform poorly. A special case is simulating bridges between given initial and final values. The basic idea is to introduce a schedule of intermediate weighting and resampling times between observation times, which guide particles towards the final state. This can always be done for continuous-...
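
    For orientation, the sketch below shows a plain bootstrap particle filter on a toy random-walk model, i.e. the baseline that the proposed bridging scheme improves on; the paper's schedule of intermediate weighting and resampling steps between observations is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(1)

      def bootstrap_particle_filter(y_obs, n_particles=1000, sigma_x=1.0, sigma_y=0.5):
          """Bootstrap SMC for x_t = x_{t-1} + N(0, sigma_x^2), y_t = x_t + N(0, sigma_y^2)."""
          x = rng.normal(0.0, 1.0, n_particles)
          means = []
          for y in y_obs:
              x = x + rng.normal(0.0, sigma_x, n_particles)     # propagate
              logw = -0.5 * ((y - x) / sigma_y) ** 2            # weight by the likelihood
              w = np.exp(logw - logw.max())
              w /= w.sum()
              x = x[rng.choice(n_particles, n_particles, p=w)]  # resample
              means.append(x.mean())
          return np.array(means)

      true_x = np.cumsum(rng.normal(0.0, 1.0, 50))
      y = true_x + rng.normal(0.0, 0.5, 50)
      est = bootstrap_particle_filter(y)
      print(f"RMSE of the filtered mean: {np.sqrt(np.mean((est - true_x) ** 2)):.3f}")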

  2. Sequential test procedures for inventory differences

    International Nuclear Information System (INIS)

    Goldman, A.S.; Kern, E.A.; Emeigh, C.W.

    1985-01-01

    By means of a simulation study, we investigated the appropriateness of Page's and power-one sequential tests on sequences of inventory differences obtained from an example materials control unit, a sub-area of a hypothetical UF6-to-U3O8 conversion process. The study examined detection probability and run length curves obtained from different loss scenarios. 12 refs., 10 figs., 2 tabs
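
    Page's test is the familiar one-sided CUSUM; a minimal sketch applied to a simulated sequence of standardized inventory differences is given below, with the reference value k and decision threshold h chosen purely for illustration.

      import numpy as np

      def page_cusum(diffs, k=0.5, h=5.0):
          """Page's one-sided CUSUM: S_t = max(0, S_{t-1} + x_t - k); alarm when S_t > h.
          Returns the 1-based index of the first alarm, or None."""
          s = 0.0
          for t, x in enumerate(diffs, start=1):
              s = max(0.0, s + x - k)
              if s > h:
                  return t
          return None

      rng = np.random.default_rng(2)
      # in control for 30 balance periods, then a sustained loss of one sigma per period
      diffs = np.concatenate([rng.normal(0.0, 1.0, 30), rng.normal(1.0, 1.0, 30)])
      print("alarm raised at period", page_cusum(diffs))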

  3. Molecular dynamics simulations indicate that deoxyhemoglobin, oxyhemoglobin, carboxyhemoglobin, and glycated hemoglobin under compression and shear exhibit an anisotropic mechanical behavior.

    Science.gov (United States)

    Yesudasan, Sumith; Wang, Xianqiao; Averett, Rodney D

    2018-05-01

    We developed a new mechanical model for determining the compression and shear mechanical behavior of four different hemoglobin structures. Previous studies on hemoglobin structures have focused primarily on overall mechanical behavior; however, this study investigates the mechanical behavior of hemoglobin, a major constituent of red blood cells, using steered molecular dynamics (SMD) simulations to obtain anisotropic mechanical behavior under compression and shear loading conditions. Four different configurations of hemoglobin molecules were considered: deoxyhemoglobin (deoxyHb), oxyhemoglobin (HbO2), carboxyhemoglobin (HbCO), and glycated hemoglobin (HbA1c). The SMD simulations were performed on the hemoglobin variants to estimate their unidirectional stiffness and shear stiffness. Although hemoglobin is structurally denoted as a globular protein due to its spherical shape and secondary structure, our simulation results show a significant variation in the mechanical strength in different directions (anisotropy) and also a strength variation among the four different hemoglobin configurations studied. The glycated hemoglobin molecule possesses an overall higher compressive mechanical stiffness and shear stiffness when compared to deoxyhemoglobin, oxyhemoglobin, and carboxyhemoglobin molecules. Further results from the models indicate that the hemoglobin structures studied possess a soft outer shell and a stiff core based on stiffness.

  4. Simulation and comparison of progression-free survival among patients with non-squamous non-small-cell lung cancer receiving sequential therapy.

    Science.gov (United States)

    Walzer, Stefan; Chouaid, Christos; Lister, Johanna; Gultyaev, Dmitry; Vergnenegre, Alain; de Marinis, Filippo; Meng, Jie; de Castro Carpeno, Javier; Crott, Ralph; Kleman, Martin; Ngoh, Charles

    2015-01-01

    In recent years, the treatment landscape in advanced non-squamous non-small-cell lung cancer (nsNSCLC) has changed. New therapies (e.g., bevacizumab indicated in first line) have become available and other therapies (e.g., pemetrexed in first line and second line) moved into earlier lines in the treatment paradigm. While there has been an expansion of the available treatment options, it is still a key research question which therapy sequence results in the best survival outcomes for patients with nsNSCLC. A therapy-sequencing disease model that approximates treatment outcomes in up to five lines of treatment was developed for patients with nsNSCLC. The primary source of data for progression-free survival (PFS) and time to death was published pivotal trial data. All patients were treatment-naïve and in the PFS state, received first-line treatment with either bevacizumab-based therapy or doublet chemotherapy (including the option of pemetrexed + cisplatin). Patients would then progress to a subsequent line of therapy, remain in PFS or die. In case of progression, it was assumed that each survivor would receive a subsequent line of therapy, based on EMA licensed therapies. Weibull distribution curves were fitted to the data. All bevacizumab-based first-line therapy sequences analyzed achieved total PFS of around 15 months. Bevacizumab + carboplatin + paclitaxel (first line) → pemetrexed (second line) → erlotinib (third line) → docetaxel (fourth line) resulted in total mean PFS time of 15.7 months, for instance. Sequences with pemetrexed in combination with cisplatin in first line achieved total PFS times between 12.6 and 12.8 months with a slightly higher total PFS time achieved when assuming pemetrexed continuation therapy in maintenance after pemetrexed + cisplatin in first-line induction. Overall survival results followed the same trend as PFS. The model suggests that treatment-sequencing strategies starting with a bevacizumab-based combination in first line

  5. Vulnerability of Agriculture to Climate Change as Revealed by Relationships between Simulated Crop Yield and Climate Change Indices

    Science.gov (United States)

    King, A. W.; Absar, S. M.; Nair, S.; Preston, B. L.

    2012-12-01

    The vulnerability of agriculture is among the leading concerns surrounding climate change. Agricultural production is influenced by drought and other extremes in weather and climate. In regions of subsistence farming, worst case reductions in yield lead to malnutrition and famine. Reduced surplus contributes to poverty in agrarian economies. In more economically diverse and industrialized regions, variations in agricultural yield can influence the regional economy through market mechanisms. The latter grows in importance as agriculture increasingly services the energy market in addition to markets for food and fiber. Agriculture is historically a highly adaptive enterprise and will respond to future changes in climate with a variety of adaptive mechanisms. Nonetheless, the risk, if not expectation, of increases in climate extremes and hazards exceeding historical experience motivates scientifically based anticipatory assessment of the vulnerability of agriculture to climate change. We investigate the sensitivity component of that vulnerability using EPIC, a well established field-scale model of cropping systems that includes the simulation of economic yield. The core of our analysis is the relationship between simulated yield and various indices of climate change, including the CCI/CLIVAR/JCOM ETCCDI indices, calculated from weather inputs to the model. We complement this core with analysis using the DSSAT cropping system model and exploration of relationships between historical yield statistics and climate indices calculated from weather records. Our analyses are for sites in the Southeast/Gulf Coast region of the United States. We do find "tight" monotonic relationships between annual yield and climate for some indices, especially those associated with available water. More commonly, however, we find an increase in the variability of yield as the index value becomes more extreme. Our findings contribute to understanding the sensitivity of crop yield as part of

  6. Simulated trends of extreme climate indices for the Carpathian basin using outputs of different regional climate models

    Science.gov (United States)

    Pongracz, R.; Bartholy, J.; Szabo, P.; Pieczka, I.; Torma, C. S.

    2009-04-01

    CM3 (run at 10 km horizontal resolution) was developed by Giorgi et al. and it is available from the ICTP (International Centre for Theoretical Physics). Analysis of the simulated daily temperature datasets suggests that the detected regional warming is expected to continue in the 21st century. Cold temperature extremes are projected to decrease while warm extremes tend to increase significantly. Expected changes of annual precipitation indices are small, but generally consistent with the detected trends of the 20th century. Based on the simulations, extreme precipitation events are expected to become more intense and more frequent in winter, while a general decrease of extreme precipitation indices is expected in summer.

  7. Evaluation of skill at simulating heatwave and heat-humidity indices in Global and Regional Climate Models

    Science.gov (United States)

    Goldie, J. K.; Alexander, L. V.; Lewis, S. C.; Sherwood, S. C.

    2017-12-01

    A wide body of literature now establishes the harm of extreme heat on human health, and work is now emerging on the projection of future health impacts. However, heat-health relationships vary across different populations (Gasparrini et al. 2015), so accurate simulation of regional climate is an important component of joint health impact projection. Here, we evaluate the ability of nine Global Climate Models (GCMs) from CMIP5 and the NARCliM Regional Climate Model to reproduce a selection of 15 health-relevant heatwave and heat-humidity indices over the historical period (1990-2005) using the Perkins skill score (Perkins et al. 2007) in five Australian cities. We explore the reasons for poor model skill, comparing these modelled distributions to both weather station observations and gridded reanalysis data. Finally, we show changes in the modelled distributions from the highest-performing models under RCP4.5 and RCP8.5 greenhouse gas scenarios and discuss the implications of simulated heat stress for future climate change adaptation. References: Gasparrini, Antonio, Yuming Guo, Masahiro Hashizume, Eric Lavigne, Antonella Zanobetti, Joel Schwartz, Aurelio Tobias, et al. "Mortality Risk Attributable to High and Low Ambient Temperature: A Multicountry Observational Study." The Lancet 386, no. 9991 (July 31, 2015): 369-75. doi:10.1016/S0140-6736(14)62114-0. Perkins, S. E., A. J. Pitman, N. J. Holbrook, and J. McAneney. "Evaluation of the AR4 Climate Models' Simulated Daily Maximum Temperature, Minimum Temperature, and Precipitation over Australia Using Probability Density Functions." Journal of Climate 20, no. 17 (September 1, 2007): 4356-76. doi:10.1175/JCLI4253.1.
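
    The Perkins skill score used here is the common area under the modelled and observed probability density functions; a small self-contained sketch (with synthetic data standing in for station observations and model output) is given below.

      import numpy as np

      def perkins_skill_score(model, obs, bins=50):
          """Overlap of the modelled and observed PDFs on a shared binning
          (1 = identical distributions, 0 = no overlap)."""
          edges = np.linspace(min(model.min(), obs.min()),
                              max(model.max(), obs.max()), bins + 1)
          f_model = np.histogram(model, bins=edges)[0] / len(model)
          f_obs = np.histogram(obs, bins=edges)[0] / len(obs)
          return np.minimum(f_model, f_obs).sum()

      rng = np.random.default_rng(3)
      obs = rng.normal(30.0, 4.0, 5000)      # synthetic "observed" daily maximum temperature
      model = rng.normal(31.0, 5.0, 5000)    # a model with a warm, over-dispersed bias
      print(f"Perkins skill score: {perkins_skill_score(model, obs):.2f}")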

  8. A novel method for the sequential removal and separation of multiple heavy metals from wastewater.

    Science.gov (United States)

    Fang, Li; Li, Liang; Qu, Zan; Xu, Haomiao; Xu, Jianfang; Yan, Naiqiang

    2018-01-15

    A novel method was developed and applied for the treatment of simulated wastewater containing multiple heavy metals. A sorbent of ZnS nanocrystals (NCs) was synthesized and showed extraordinary performance for the removal of Hg2+, Cu2+, Pb2+ and Cd2+. The removal efficiencies of Hg2+, Cu2+, Pb2+ and Cd2+ were 99.9%, 99.9%, 90.8% and 66.3%, respectively. Meanwhile, it was determined that the solubility product (Ksp) of heavy metal sulfides was closely related to the adsorption selectivity of various heavy metals on the sorbent. The removal efficiency of Hg2+ was higher than that of Cd2+, while the Ksp of HgS was lower than that of CdS. It indicated that preferential adsorption of heavy metals occurred when the Ksp of the heavy metal sulfide was lower. In addition, the differences in the Ksp of heavy metal sulfides allowed for the exchange of heavy metals, indicating the potential application for the sequential removal and separation of heavy metals from wastewater. According to the cumulative adsorption experimental results, multiple heavy metals were sequentially adsorbed and separated from the simulated wastewater in the order of the Ksp of their sulfides. This method holds the promise of sequentially removing and separating multiple heavy metals from wastewater. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. How to Read the Tractatus Sequentially

    Directory of Open Access Journals (Sweden)

    Tim Kraft

    2016-11-01

    One of the unconventional features of Wittgenstein’s Tractatus Logico-Philosophicus is its use of an elaborated and detailed numbering system. Recently, Bazzocchi, Hacker and Kuusela have argued that the numbering system means that the Tractatus must be read and interpreted not as a sequentially ordered book, but as a text with a two-dimensional, tree-like structure. Apart from being able to explain how the Tractatus was composed, the tree reading allegedly solves exegetical issues both on the local level (e.g. how 4.02 fits into the series of remarks surrounding it) and the global level (e.g. the relation between ontology and picture theory, solipsism and the eye analogy, resolute and irresolute readings). This paper defends the sequential reading against the tree reading. After presenting the challenges generated by the numbering system and the two accounts as attempts to solve them, it is argued that Wittgenstein’s own explanation of the numbering system, anaphoric references within the Tractatus and the exegetical issues mentioned above do not favour the tree reading, but a version of the sequential reading. This reading maintains that the remarks of the Tractatus form a sequential chain: the role of the numbers is to indicate how remarks on different levels are interconnected to form a concise, surveyable and unified whole.

  10. Decision-making in research tasks with sequential testing.

    Directory of Open Access Journals (Sweden)

    Thomas Pfeiffer

    BACKGROUND: In a recent controversial essay, published by JPA Ioannidis in PLoS Medicine, it has been argued that in some research fields, most of the published findings are false. Based on theoretical reasoning it can be shown that small effect sizes, error-prone tests, low priors of the tested hypotheses and biases in the evaluation and publication of research findings increase the fraction of false positives. These findings raise concerns about the reliability of research. However, they are based on a very simple scenario of scientific research, where single tests are used to evaluate independent hypotheses. METHODOLOGY/PRINCIPAL FINDINGS: In this study, we present computer simulations and experimental approaches for analyzing more realistic scenarios. In these scenarios, research tasks are solved sequentially, i.e. subsequent tests can be chosen depending on previous results. We investigate simple sequential testing and scenarios where only a selected subset of results can be published and used for future rounds of test choice. Results from computer simulations indicate that for the tasks analyzed in this study, the fraction of false among the positive findings declines over several rounds of testing if the most informative tests are performed. Our experiments show that human subjects frequently perform the most informative tests, leading to a decline of false positives as expected from the simulations. CONCLUSIONS/SIGNIFICANCE: For the research tasks studied here, findings tend to become more reliable over time. We also find that the performance in those experimental settings where not all performed tests could be published turned out to be surprisingly inefficient. Our results may help optimize existing procedures used in the practice of scientific research and provide guidance for the development of novel forms of scholarly communication.
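
    A stripped-down version of this kind of simulation can be written in a few lines: hypotheses with a low prior are tested with error-prone tests, only positives are carried forward, and the share of false findings among the positives is tracked across rounds. The prior, significance level and power below are illustrative assumptions, not the values used in the study.

      import numpy as np

      rng = np.random.default_rng(4)

      def false_positive_fraction(prior=0.1, alpha=0.05, power=0.8,
                                  n_hypotheses=100000, rounds=1):
          """Fraction of false findings among positives after repeated testing,
          where only hypotheses that tested positive are retested."""
          is_true = rng.random(n_hypotheses) < prior
          positive = np.ones(n_hypotheses, dtype=bool)
          for _ in range(rounds):
              p_pos = np.where(is_true, power, alpha)        # error-prone test
              positive &= rng.random(n_hypotheses) < p_pos   # keep only the positives
          n_pos = positive.sum()
          return (positive & ~is_true).sum() / n_pos if n_pos else 0.0

      for r in (1, 2, 3):
          print(f"after {r} round(s) of testing: "
                f"false positives among positives = {false_positive_fraction(rounds=r):.3f}")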

  11. Simultaneous versus sequential penetrating keratoplasty and cataract surgery.

    Science.gov (United States)

    Hayashi, Ken; Hayashi, Hideyuki

    2006-10-01

    To compare the surgical outcomes of simultaneous penetrating keratoplasty and cataract surgery with those of sequential surgery. Thirty-nine eyes of 39 patients scheduled for simultaneous keratoplasty and cataract surgery and 23 eyes of 23 patients scheduled for sequential keratoplasty and secondary phacoemulsification surgery were recruited. Refractive error, regular and irregular corneal astigmatism determined by Fourier analysis, and endothelial cell loss were studied at 1 week and 3, 6, and 12 months after combined surgery in the simultaneous surgery group or after subsequent phacoemulsification surgery in the sequential surgery group. At 3 and more months after surgery, mean refractive error was significantly greater in the simultaneous surgery group than in the sequential surgery group, although no difference was seen at 1 week. The refractive error at 12 months was within 2 D of that targeted in 15 eyes (39%) in the simultaneous surgery group and within 2 D in 16 eyes (70%) in the sequential surgery group; the incidence was significantly greater in the sequential group (P = 0.0344). The regular and irregular astigmatism was not significantly different between the groups at 3 and more months after surgery. No significant difference was also found in the percentage of endothelial cell loss between the groups. Although corneal astigmatism and endothelial cell loss were not different, refractive error from target refraction was greater after simultaneous keratoplasty and cataract surgery than after sequential surgery, indicating a better outcome after sequential surgery than after simultaneous surgery.

  12. Comparison of discrete ordinate and Monte Carlo simulations of polarized radiative transfer in two coupled slabs with different refractive indices.

    Science.gov (United States)

    Cohen, D; Stamnes, S; Tanikawa, T; Sommersten, E R; Stamnes, J J; Lotsberg, J K; Stamnes, K

    2013-04-22

    A comparison is presented of two different methods for polarized radiative transfer in coupled media consisting of two adjacent slabs with different refractive indices, each slab being a stratified medium with no change in optical properties except in the direction of stratification. One of the methods is based on solving the integro-differential radiative transfer equation for the two coupled slabs using the discrete ordinate approximation. The other method is based on probabilistic and statistical concepts and simulates the propagation of polarized light using the Monte Carlo approach. The emphasis is on non-Rayleigh scattering for particles in the Mie regime. Comparisons with benchmark results available for a slab with constant refractive index show that both methods reproduce these benchmark results when the refractive index is set to be the same in the two slabs. Computed results for test cases with coupling (different refractive indices in the two slabs) show that the two methods produce essentially identical results for identical input in terms of absorption and scattering coefficients and scattering phase matrices.

  13. Quantum Inequalities and Sequential Measurements

    International Nuclear Information System (INIS)

    Candelpergher, B.; Grandouz, T.; Rubinx, J.L.

    2011-01-01

    In this article, the peculiar context of sequential measurements is chosen in order to analyze the quantum specificity in the two most famous examples of Heisenberg and Bell inequalities: Results are found at some interesting variance with customary textbook materials, where the context of initial state re-initialization is described. A key-point of the analysis is the possibility of defining Joint Probability Distributions for sequential random variables associated to quantum operators. Within the sequential context, it is shown that Joint Probability Distributions can be defined in situations where not all of the quantum operators (corresponding to random variables) do commute two by two. (authors)

  14. Usage of link-level performance indicators for HSDPA network-level simulations in E-UMTS

    NARCIS (Netherlands)

    Brouwer, Frank; de Bruin, I.C.C.; Silva, João Carlos; Souto, Nuno; Cercas, Francisco; Correia, Américo

    2004-01-01

    The paper describes integration of HSDPA (high-speed downlink packet access) link-level simulation results into network-level simulations for enhanced UMTS. The link-level simulations model all physical layer features depicted in the 3GPP standards. These include: generation of transport blocks;

  15. Framework for sequential approximate optimization

    NARCIS (Netherlands)

    Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.

    2004-01-01

    An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python
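
    To make the idea concrete, a bare-bones sequential approximate optimization loop is sketched below: sample the expensive function around the current point, fit a cheap quadratic surrogate, minimize the surrogate inside a trust region, and accept or shrink. This is a generic illustration under simplified assumptions, not the API of the framework described in the record.

      import numpy as np

      def expensive_f(x):                       # stand-in for a costly simulation
          return (x[0] - 1.0) ** 2 + 3.0 * (x[1] + 0.5) ** 2

      def fit_quadratic(X, y):
          # least-squares surrogate y ~ c + b.x + a.x^2 (separable quadratic terms)
          basis = np.column_stack([np.ones(len(X)), X, X ** 2])
          return np.linalg.lstsq(basis, y, rcond=None)[0]

      def minimize_surrogate(coef, center, radius, rng, n_cand=2000):
          cand = center + rng.uniform(-radius, radius, size=(n_cand, len(center)))
          basis = np.column_stack([np.ones(n_cand), cand, cand ** 2])
          return cand[np.argmin(basis @ coef)]

      rng = np.random.default_rng(5)
      x, radius = np.array([3.0, 2.0]), 1.0
      for _ in range(10):
          X = x + rng.uniform(-radius, radius, size=(12, 2))   # sample the trust region
          y = np.array([expensive_f(p) for p in X])
          x_new = minimize_surrogate(fit_quadratic(X, y), x, radius, rng)
          if expensive_f(x_new) < expensive_f(x):              # accept improving steps,
              x = x_new
          else:                                                # otherwise shrink the region
              radius *= 0.5
      print("approximate minimizer:", np.round(x, 2))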

  16. Decision support for green supply chain operations by integrating dynamic simulation and LCA indicators: diaper case study.

    Science.gov (United States)

    Adhitya, Arief; Halim, Iskandar; Srinivasan, Rajagopalan

    2011-12-01

    As the issue of environmental sustainability is becoming an important business factor, companies are now looking for decision support tools to assess the fuller picture of the environmental impacts associated with their manufacturing operations and supply chain (SC) activities. Lifecycle assessment (LCA) is widely used to measure the environmental consequences assignable to a product. However, it is usually limited to a high-level snapshot of the environmental implications over the product value chain without consideration of the dynamics arising from the multitiered structure and the interactions along the SC. This paper proposes a framework for green supply chain management by integrating a SC dynamic simulation and LCA indicators to evaluate both the economic and environmental impacts of various SC decisions such as inventories, distribution network configuration, and ordering policy. The advantages of this framework are demonstrated through an industrially motivated case study involving diaper production. Three distinct scenarios are evaluated to highlight how the proposed approach enables integrated decision support for green SC design and operation.

  17. Sequentially pulsed traveling wave accelerator

    Science.gov (United States)

    Caporaso, George J [Livermore, CA; Nelson, Scott D [Patterson, CA; Poole, Brian R [Tracy, CA

    2009-08-18

    A sequentially pulsed traveling wave compact accelerator having two or more pulse forming lines each with a switch for producing a short acceleration pulse along a short length of a beam tube, and a trigger mechanism for sequentially triggering the switches so that a traveling axial electric field is produced along the beam tube in synchronism with an axially traversing pulsed beam of charged particles to serially impart energy to the particle beam.

  18. Heads-Up Display with Virtual Precision Approach Path Indicator as Implemented in a Real-Time Piloted Lifting-Body Simulation

    Science.gov (United States)

    Neuhaus, Jason R.

    2018-01-01

    This document describes the heads-up display (HUD) used in a piloted lifting-body entry, approach and landing simulation developed for the simulator facilities of the Simulation Development and Analysis Branch (SDAB) at NASA Langley Research Center. The HUD symbology originated with the piloted simulation evaluations of the HL-20 lifting body concept conducted in 1989 at NASA Langley. The original symbology was roughly based on Shuttle HUD symbology, as interpreted by Langley researchers. This document focuses on the addition of the precision approach path indicator (PAPI) lights to the HUD overlay.

  19. Comparison of Sequential and Variational Data Assimilation

    Science.gov (United States)

    Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht

    2017-04-01

    Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential in using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter which require no additional features within the modeling process, i.e. they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. We believe this is mainly attributable to the required computation of first-order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to a lack of comparison between both techniques. We contribute to filling this gap and present the results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise to precipitation and temperature to produce better initial estimates of an HBV model. The results are computed for a hindcast period and assessed using lead time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages/disadvantages in hydrological applications.
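
    For readers unfamiliar with the sequential side of this comparison, the sketch below performs one stochastic Ensemble Kalman Filter analysis step on a toy two-variable state; the observation operator, error statistics and ensemble size are arbitrary illustrations rather than the study's hydrological configuration.

      import numpy as np

      def enkf_analysis(X, y_obs, H, R, rng):
          """Stochastic EnKF update X_a = X_f + K (y + eps - H X_f).
          X: (members, state) forecast ensemble; H: observation operator; R: obs error cov."""
          n = X.shape[0]
          A = X - X.mean(axis=0)                        # ensemble anomalies
          P = A.T @ A / (n - 1)                         # sample forecast covariance
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
          eps = rng.multivariate_normal(np.zeros(len(y_obs)), R, size=n)  # perturbed obs
          return X + (y_obs + eps - X @ H.T) @ K.T

      rng = np.random.default_rng(6)
      ens = rng.normal([5.0, 2.0], [1.0, 0.5], size=(50, 2))   # 50 members, 2 state variables
      H = np.array([[1.0, 0.0]])                               # only the first state observed
      R = np.array([[0.2 ** 2]])
      updated = enkf_analysis(ens, np.array([6.0]), H, R, rng)
      print("prior mean:", ens.mean(axis=0).round(2),
            "posterior mean:", updated.mean(axis=0).round(2))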

  20. Time scale of random sequential adsorption.

    Science.gov (United States)

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) The kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per one RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
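
    A minimal two-dimensional RSA sketch (hard disks on a unit square, one placement attempt per simulation step, overlapping attempts rejected) is given below; the disk radius and number of attempts are arbitrary, and boundary effects are ignored.

      import numpy as np

      def rsa_disks(n_attempts=20000, radius=0.01, L=1.0, seed=7):
          """Random sequential adsorption of hard disks on an L x L surface."""
          rng = np.random.default_rng(seed)
          placed = np.empty((0, 2))
          for _ in range(n_attempts):                    # one attempt per RSA time step
              p = rng.uniform(0.0, L, 2)
              if placed.size == 0 or np.all(
                      np.sum((placed - p) ** 2, axis=1) >= (2 * radius) ** 2):
                  placed = np.vstack([placed, p])        # accepted: no overlap
          coverage = placed.shape[0] * np.pi * radius ** 2 / L ** 2
          return placed, coverage

      disks, coverage = rsa_disks()
      print(f"{len(disks)} disks adsorbed, surface coverage {coverage:.3f}")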

  1. Remarks on sequential designs in risk assessment

    International Nuclear Information System (INIS)

    Seidenfeld, T.

    1982-01-01

    The special merits of sequential designs are reviewed in light of particular challenges that attend risk assessment for human populations. The kinds of "statistical inference" are distinguished, and the design problem pursued is the clash between the Neyman-Pearson and Bayesian programs of sequential design. The value of sequential designs is discussed and the Neyman-Pearson vs. Bayesian sequential designs are probed in particular. Finally, warnings with sequential designs are considered, especially in relation to utilitarianism

  2. Sequential versus simultaneous market delineation

    DEFF Research Database (Denmark)

    Haldrup, Niels; Møllgaard, Peter; Kastberg Nielsen, Claus

    2005-01-01

    Delineation of the relevant market forms a pivotal part of most antitrust cases. The standard approach is sequential. First the product market is delineated, then the geographical market is defined. Demand and supply substitution in both the product dimension and the geographical dimension ... and geographical markets. Using a unique data set for prices of Norwegian and Scottish salmon, we propose a methodology for simultaneous market delineation and we demonstrate that compared to a sequential approach conclusions will be reversed. JEL: C3, K21, L41, Q22. Keywords: Relevant market, econometric delineation

  3. Sequential logic analysis and synthesis

    CERN Document Server

    Cavanagh, Joseph

    2007-01-01

    Until now, there was no single resource for actual digital system design. Using both basic and advanced concepts, Sequential Logic: Analysis and Synthesis offers a thorough exposition of the analysis and synthesis of both synchronous and asynchronous sequential machines. With 25 years of experience in designing computing equipment, the author stresses the practical design of state machines. He clearly delineates each step of the structured and rigorous design principles that can be applied to practical applications. The book begins by reviewing the analysis of combinatorial logic and Boolean a

  4. Learning sequential control in a Neural Blackboard Architecture for in situ concept reasoning

    NARCIS (Netherlands)

    van der Velde, Frank; Besold, Tarek R.; Lamb, Luis; Serafini, Luciano; Tabor, Whitney

    2016-01-01

    Simulations are presented and discussed of learning sequential control in a Neural Blackboard Architecture (NBA) for in situ concept-based reasoning. Sequential control is learned in a reservoir network, consisting of columns with neural circuits. This allows the reservoir to control the dynamics of

  5. The pursuit of balance in sequential randomized trials

    Directory of Open Access Journals (Sweden)

    Raymond P. Guiteras

    2016-06-01

    In many randomized trials, subjects enter the sample sequentially. Because the covariates for all units are not known in advance, standard methods of stratification do not apply. We describe and assess the method of DA-optimal sequential allocation (Atkinson, 1982) for balancing stratification covariates across treatment arms. We provide simulation evidence that the method can provide substantial improvements in precision over commonly employed alternatives. We also describe our experience implementing the method in a field trial of a clean water and handwashing intervention in Dhaka, Bangladesh, the first time the method has been used. We provide advice and software for future researchers.

  6. Event-shape analysis: Sequential versus simultaneous multifragment emission

    International Nuclear Information System (INIS)

    Cebra, D.A.; Howden, S.; Karn, J.; Nadasen, A.; Ogilvie, C.A.; Vander Molen, A.; Westfall, G.D.; Wilson, W.K.; Winfield, J.S.; Norbeck, E.

    1990-01-01

    The Michigan State University 4π array has been used to select central-impact-parameter events from the reaction 40Ar + 51V at incident energies from 35 to 85 MeV/nucleon. The event shape in momentum space is an observable which is shown to be sensitive to the dynamics of the fragmentation process. A comparison of the experimental event-shape distribution to sequential- and simultaneous-decay predictions suggests that a transition in the breakup process may have occurred. At 35 MeV/nucleon, a sequential-decay simulation reproduces the data. For the higher energies, the experimental distributions fall between the two contrasting predictions

  7. Evaluation Using Sequential Trials Methods.

    Science.gov (United States)

    Cohen, Mark E.; Ralls, Stephen A.

    1986-01-01

    Although dental school faculty as well as practitioners are interested in evaluating products and procedures used in clinical practice, research design and statistical analysis can sometimes pose problems. Sequential trials methods provide an analytical structure that is both easy to use and statistically valid. (Author/MLW)

  8. Attack Trees with Sequential Conjunction

    NARCIS (Netherlands)

    Jhawar, Ravi; Kordy, Barbara; Mauw, Sjouke; Radomirović, Sasa; Trujillo-Rasua, Rolando

    2015-01-01

    We provide the first formal foundation of SAND attack trees which are a popular extension of the well-known attack trees. The SAND attack tree formalism increases the expressivity of attack trees by introducing the sequential conjunctive operator SAND. This operator enables the modeling of

  9. Simulations

    CERN Document Server

    Ngada, Narcisse

    2015-06-15

    The complexity and cost of building and running high-power electrical systems make the use of simulations unavoidable. The simulations available today provide great understanding about how systems really operate. This paper helps the reader to gain an insight into simulation in the field of power converters for particle accelerators. Starting with the definition and basic principles of simulation, two simulation types, as well as their leading tools, are presented: analog and numerical simulations. Some practical applications of each simulation type are also considered. The final conclusion then summarizes the main important items to keep in mind before opting for a simulation tool or before performing a simulation.

  10. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations

    Science.gov (United States)

    Qin, Fangjun; Jiang, Sai; Zha, Feng

    2018-01-01

    In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms. PMID:29751538
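
    The core "one observation at a time" idea can be illustrated with a plain linear Kalman measurement update, sketched below. Note that this simple form refreshes the covariance at every step (the traditional sequential filter the abstract contrasts with), whereas the proposed SMEKF defers the covariance update until all vector observations have been processed; the attitude-specific multiplicative error state and reset are omitted entirely.

      import numpy as np

      def sequential_update(x, P, observations):
          """Process a list of (H, R, y) measurements one at a time.
          Each measurement refreshes the state before the next one is handled."""
          for H, R, y in observations:
              S = H @ P @ H.T + R                   # innovation covariance
              K = P @ H.T @ np.linalg.inv(S)        # gain for this measurement only
              x = x + K @ (y - H @ x)
              P = (np.eye(len(x)) - K @ H) @ P      # covariance updated per step here
          return x, P

      x0, P0 = np.zeros(3), np.eye(3)
      obs = [(np.array([[1.0, 0.0, 0.0]]), np.array([[0.1]]), np.array([0.9])),
             (np.array([[0.0, 1.0, 0.0]]), np.array([[0.1]]), np.array([-0.4]))]
      x, P = sequential_update(x0, P0, obs)
      print("updated state:", np.round(x, 3))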

  12. PREDICTION OF MACROECONOMIC INDICATORS OF THE INSURANCE MARKET DEVELOPMENT IN THE SVERDLOVSK REGION IN 2010 WITH APPLICATION OF THE SIMULATION MODELING METHOD

    Directory of Open Access Journals (Sweden)

    I. U. Vedmed

    2010-06-01

    Full Text Available Key parameters characterizing the level of development of the insurance business in a region are the indicators of insurance density and insurance penetration. To analyze the level of these indicators for the Sverdlovsk region market in 2010, simulation modeling methods have been applied. Two assumptions concerning the probabilistic distribution of the initial parameters have been considered: a normal distribution and a level distribution.

  13. A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.

    Science.gov (United States)

    Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L

    2016-03-01

    Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge. Copyright © 2015

  14. Atomistic simulations indicate cardiolipin to have an integral role in the structure of the cytochrome bc(1) complex

    DEFF Research Database (Denmark)

    Poyry, S.; Cramariuc, O.; Postila, P. A.

    2013-01-01

    ...by both ensuring the structural integrity of the protein complex and also by taking part in the proton uptake. Yet, the atom-scale understanding of these highly charged four-tail lipids in the cyt bc(1) function has remained quite unclear. We consider this issue through atomistic molecular dynamics simulations that are applied to the entire cyt bc(1) dimer of the purple photosynthetic bacterium Rhodobacter capsulatus embedded in a lipid bilayer. We find CLs to spontaneously diffuse to the dimer interface, to the immediate vicinity of the higher potential heme b groups of the complex's catalytic Q... In addition to the specific CL-protein interactions, we observe the protein domains on the positive side of the membrane to settle against the lipids, extending the description of the role of the surrounding lipid environment. Altogether, the simulations discussed in this article provide novel views into the dynamics...

  15. Estimation After a Group Sequential Trial.

    Science.gov (United States)

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.

  16. Sequential method for the assessment of innovations in computer assisted industrial processes

    International Nuclear Information System (INIS)

    Suarez Antola R.

    1995-01-01

    A sequential method for the assessment of innovations in industrial processes is proposed, using suitable combinations of mathematical modelling and numerical simulation of dynamics. Some advantages and limitations of the proposed method are discussed. tabs

  17. Temporal characteristics of radiologists’ and novices’ lesion detection in viewing medical images presented rapidly and sequentially

    Directory of Open Access Journals (Sweden)

    Ryoichi Nakashima

    2016-10-01

    Full Text Available Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about it. In particular, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Thus, target detection is aided when observers’ attention is captured by the onset signal of a suddenly appearing target among continuously moving distractors (i.e., a passive viewing strategy). This can be applied to stack viewing tasks, because lesions often show up as transient signals in medical images that are presented sequentially, simulating a dynamic and smoothly transforming progression of organ images. However, it is unclear whether observers can detect a target when it appears at the beginning of a sequential presentation, where the global apparent motion onset signal (i.e., the signal of the initiation of apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performances of radiologists and novices. Results show that the overall performance of radiologists is better than that of novices. Furthermore, the temporal locations of lesions in CT image sequences, i.e., when a lesion appears in an image sequence, do not affect the performance of radiologists, whereas they do affect the performance of novices. Results indicate that novices have greater difficulty detecting a lesion that appears early rather than late in the image sequence. We suggest that radiologists have mechanisms for detecting lesions in medical images with little attention that novices do not have. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as stack viewing tasks.

  18. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.; Shamma, Jeff S.

    2014-01-01

    incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well

  19. Evaluation of a continuous indicator for syndromic surveillance through simulation. Application to vector borne disease emergence detection in cattle using milk yield.

    Directory of Open Access Journals (Sweden)

    Aurélien Madouasse

    Full Text Available Two vector borne diseases, caused by the Bluetongue and Schmallenberg viruses respectively, have emerged in the European ruminant populations since 2006. Several diseases are transmitted by the same vectors and could emerge in the future. Syndromic surveillance, which consists of the routine monitoring of indicators for the detection of adverse health events, may allow early detection. Milk yield is routinely measured in a large proportion of dairy herds and could be incorporated as an indicator in a surveillance system. However, few studies have evaluated continuous indicators for syndromic surveillance. The aim of this study was to develop a framework for the quantification of both disease characteristics and model predictive abilities that are important for a continuous indicator to be sensitive, timely and specific for the detection of a vector-borne disease emergence. Emergences with a range of spread characteristics and effects on milk production were simulated. Milk yields collected monthly in 48 713 French dairy herds were used to simulate 576 disease emergence scenarios. First, the effect of disease characteristics on the sensitivity and timeliness of detection was assessed: Spatio-temporal clusters of low milk production were detected with a scan statistic using the difference between observed and simulated milk yields as input. In a second step, the system specificity was evaluated by running the scan statistic on the difference between observed and predicted milk yields, in the absence of simulated emergence. The timeliness of detection depended mostly on how easily the disease spread between and within herds. The time and location of the emergence or adding random noise to the simulated effects had a limited impact on the timeliness of detection. The main limitation of the system was the low specificity, i.e., the high number of clusters detected from the difference between observed and predicted productions, in the absence of

  20. Robustness of the Sequential Lineup Advantage

    Science.gov (United States)

    Gronlund, Scott D.; Carlson, Curt A.; Dailey, Sarah B.; Goodsell, Charles A.

    2009-01-01

    A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup…

  1. Analysis of noise pollution in an andesite quarry with the use of simulation studies and evaluation indices.

    Science.gov (United States)

    Kosała, Krzysztof; Stępień, Bartłomiej

    2016-01-01

    This paper presents the verification of two partial indices proposed for the evaluation of continuous and impulse noise pollution in quarries. These indices, together with the sound power of machines index and the noise hazard index at the workstation, are components of the global index of assessment of noise hazard in the working environment of a quarry. This paper shows the results of acoustic tests carried out in an andesite quarry. Noise generated by machines and from performed blasting works was investigated. On the basis of acoustic measurements carried out in real conditions, the sound power levels of machines and the phenomenon of explosion were determined and, based on the results, three-dimensional models of acoustic noise propagation in the quarry were developed. To assess the degree of noise pollution in the area of the quarry, the continuous and impulse noise indices were used.

  2. Random sequential adsorption of cubes

    Science.gov (United States)

    Cieśla, Michał; Kubala, Piotr

    2018-01-01

    Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
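
    As a rough illustration of the random sequential adsorption protocol studied here, the sketch below (Python; a two-dimensional simplification with axis-aligned squares rather than freely oriented cubes, so the orientation-sampling and intersection-test variants compared in the record are not reproduced) repeatedly proposes random positions and accepts a particle only if it overlaps none of those already placed.

```python
import random

def rsa_squares(box=1.0, side=0.05, attempts=200000, seed=0):
    """Random sequential adsorption of axis-aligned squares in a square box.

    Positions are proposed uniformly at random; a proposal is accepted only
    if the new square overlaps no previously accepted square.  Stopping
    after a fixed number of attempts serves as a crude proxy for saturation.
    """
    rng = random.Random(seed)
    placed = []                               # lower-left corners of accepted squares
    for _ in range(attempts):
        x = rng.uniform(0.0, box - side)
        y = rng.uniform(0.0, box - side)
        if all(abs(x - px) >= side or abs(y - py) >= side for px, py in placed):
            placed.append((x, y))
    packing_fraction = len(placed) * side ** 2 / box ** 2
    return placed, packing_fraction
```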

  3. Comparative MD Simulations Indicate a Dual Role for Arg1323.50 in Dopamine-Dependent D2R Activation.

    Directory of Open Access Journals (Sweden)

    Ralf C Kling

    Full Text Available Residue Arg3.50 belongs to the highly conserved DRY-motif of class A GPCRs, which is located at the bottom of TM3. On the one hand, Arg3.50 has been reported to help stabilize the inactive state of GPCRs, but on the other hand has also been shown to be crucial for stabilizing active receptor conformations and mediating receptor-G protein coupling. The combined results of these studies suggest that the exact function of Arg3.50 is likely to be receptor-dependent and must be characterized independently for every GPCR. Consequently, we now present comparative molecular-dynamics simulations that use our recently described inactive-state and Gα-bound active-state homology models of the dopamine D2 receptor (D2R), which are either bound to dopamine or ligand-free, performed to identify the function of Arg1323.50 in D2R. Our results are consistent with a dynamic model of D2R activation in which Arg1323.50 adopts a dual role, both by stabilizing the inactive-state receptor conformation and enhancing dopamine-dependent D2R-G protein coupling.

  4. Modeling and Predicting AD Progression by Regression Analysis of Sequential Clinical Data

    KAUST Repository

    Xie, Qing

    2016-02-23

    Alzheimer's Disease (AD) is currently attracting much attention in elders' care. With the increasing availability of massive clinical diagnosis data, especially medical brain-scan images, it is highly significant to precisely identify and predict potential AD progression based on the knowledge in the diagnosis data. In this paper, we follow a novel sequential learning framework to model the disease progression for AD patients' care. Different from the conventional approaches using only initial or static diagnosis data to model the disease progression for different durations, we design a score-involved approach and make use of the sequential diagnosis information in different disease stages to jointly simulate the disease progression. The actual clinical scores are utilized in progress to make the prediction more pertinent and reliable. We examined our approach by extensive experiments on the clinical data provided by the Alzheimer's Disease Neuroimaging Initiative (ADNI). The results indicate that the proposed approach is more effective at simulating and predicting the disease progression than the existing methods.

  5. Modeling and Predicting AD Progression by Regression Analysis of Sequential Clinical Data

    KAUST Repository

    Xie, Qing; Wang, Su; Zhu, Jia; Zhang, Xiangliang

    2016-01-01

    Alzheimer's Disease (AD) is currently attracting much attention in elders' care. With the increasing availability of massive clinical diagnosis data, especially medical brain-scan images, it is highly significant to precisely identify and predict potential AD progression based on the knowledge in the diagnosis data. In this paper, we follow a novel sequential learning framework to model the disease progression for AD patients' care. Different from the conventional approaches using only initial or static diagnosis data to model the disease progression for different durations, we design a score-involved approach and make use of the sequential diagnosis information in different disease stages to jointly simulate the disease progression. The actual clinical scores are utilized in progress to make the prediction more pertinent and reliable. We examined our approach by extensive experiments on the clinical data provided by the Alzheimer's Disease Neuroimaging Initiative (ADNI). The results indicate that the proposed approach is more effective at simulating and predicting the disease progression than the existing methods.

  6. Configural and component processing in simultaneous and sequential lineup procedures.

    Science.gov (United States)

    Flowe, Heather D; Smith, Harriet M J; Karoğlu, Nilda; Onwuegbusi, Tochukwu O; Rai, Lovedeep

    2016-01-01

    Configural processing supports accurate face recognition, yet it has never been examined within the context of criminal identification lineups. We tested, using the inversion paradigm, the role of configural processing in lineups. Recent research has found that face discrimination accuracy in lineups is better in a simultaneous compared to a sequential lineup procedure. Therefore, we compared configural processing in simultaneous and sequential lineups to examine whether there are differences. We had participants view a crime video, and then they attempted to identify the perpetrator from a simultaneous or sequential lineup. The test faces were presented either upright or inverted, as previous research has shown that inverting test faces disrupts configural processing. The size of the inversion effect for faces was the same across lineup procedures, indicating that configural processing underlies face recognition in both procedures. Discrimination accuracy was comparable across lineup procedures in both the upright and inversion condition. Theoretical implications of the results are discussed.

  7. Grounded running in quails: simulations indicate benefits of observed fixed aperture angle between legs before touch-down.

    Science.gov (United States)

    Andrada, Emanuel; Rode, Christian; Blickhan, Reinhard

    2013-10-21

    Many birds use grounded running (running without aerial phases) in a wide range of speeds. Contrary to walking and running, numerical investigations of this gait based on the BSLIP (bipedal spring loaded inverted pendulum) template are rare. To obtain template-related parameters of quails (e.g. leg stiffness) we used x-ray cinematography combined with ground reaction force measurements of quail grounded running. Interestingly, with speed the quails did not adjust the swing leg's angle of attack with respect to the ground but adapted the angle between legs (which we termed aperture angle), and fixed it about 30 ms before touchdown. In simulations with the BSLIP we compared this swing leg alignment policy with the fixed angle of attack with respect to the ground typically used in the literature. We found symmetric periodic grounded running in a simply connected subset comprising one third of the investigated parameter space. The fixed aperture angle strategy revealed improved local stability and surprising tolerance with respect to large perturbations. Starting with the periodic solutions, after step-down step-up or step-up step-down perturbations of 10% leg rest length, in the vast majority of cases the bipedal SLIP could accomplish at least 50 steps before falling. The fixed angle of attack strategy was not feasible. We propose that, in small animals in particular, grounded running may be a common gait that allows highly compliant systems to exploit energy storage without the necessity of quick changes in the locomotor program when facing perturbations. © 2013 Elsevier Ltd. All rights reserved.

  8. Sequential Triangle Strip Generator based on Hopfield Networks

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří; Lněnička, Radim

    2009-01-01

    Roč. 21, č. 2 (2009), s. 583-617 ISSN 0899-7667 R&D Projects: GA MŠk(CZ) 1M0545; GA AV ČR 1ET100300517; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10300504; CEZ:AV0Z10750506 Keywords : sequential triangle strip * combinatorial optimization * Hopfield network * minimum energy * simulated annealing Subject RIV: IN - Informatics, Computer Science Impact factor: 2.175, year: 2009

  9. Hydrological simulation approaches for BMPs and LID practices in highly urbanized area and development of hydrological performance indicator system

    Directory of Open Access Journals (Sweden)

    Yan-wei Sun

    2014-04-01

    Full Text Available Urbanization causes hydrological change and increases stormwater runoff volumes, leading to flooding, erosion, and the degradation of instream ecosystem health. Best management practices (BMPs), like detention ponds and infiltration trenches, have been widely used to control flood runoff events for the past decade. However, low impact development (LID) options have been proposed as an alternative approach to better mimic the natural flow regime by using decentralized designs to control stormwater runoff at the source, rather than at a centralized location in the watershed. For highly urbanized areas, LID stormwater management practices such as bioretention cells and porous pavements can be used to retrofit existing infrastructure and reduce runoff volumes and peak flows. This paper describes a modeling approach to incorporate these LID practices and the two BMPs of detention ponds and infiltration trenches in an existing hydrological model to estimate the impacts of BMPs and LID practices on the surface runoff. The modeling approach has been used in a parking lot located in Lenexa, Kansas, USA, to predict hydrological performance of BMPs and LID practices. A performance indicator system including the flow duration curve, peak flow frequency exceedance curve, and runoff coefficient has been developed in an attempt to represent impacts of BMPs and LID practices on the entire spectrum of the runoff regime. Results demonstrate that use of these BMPs and LID practices leads to significant stormwater control for small rainfall events and less control for flood events.
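
    Two of the hydrological performance indicators mentioned above are straightforward to compute from simulated or observed series. The sketch below (Python with NumPy; the function names and the Weibull plotting position are illustrative choices, not taken from the paper) derives a flow duration curve and a runoff coefficient.

```python
import numpy as np

def flow_duration_curve(flows):
    """Return (exceedance probability, flow) pairs for a runoff series."""
    q = np.sort(np.asarray(flows, dtype=float))[::-1]        # flows in descending order
    exceedance = np.arange(1, q.size + 1) / (q.size + 1.0)   # Weibull plotting position
    return exceedance, q

def runoff_coefficient(runoff_depth_mm, rainfall_depth_mm):
    """Ratio of total runoff depth to total rainfall depth over the record."""
    return float(np.sum(runoff_depth_mm)) / float(np.sum(rainfall_depth_mm))
```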

  10. Sequential series for nuclear reactions

    International Nuclear Information System (INIS)

    Izumo, Ko

    1975-01-01

    A new time-dependent treatment of nuclear reactions is given, in which the wave function of the compound nucleus is expanded by a sequential series of the reaction processes. The wave functions of the sequential series form another complete set of the compound nucleus in the limit Δt→0. It is pointed out that the wave function is characterized by the quantities: the number of degrees of freedom of motion n, the period of the motion (Poincaré cycle) t_n, the delay time t_(nμ) and the relaxation time τ_n to the equilibrium of the compound nucleus, instead of the usual quantum number λ, the energy eigenvalue E_λ and the total width Γ_λ of resonance levels, respectively. The transition matrix elements and the yields of nuclear reactions also become functions of time given by the Fourier transform of the usual ones. The Poincaré cycles of compound nuclei are compared with the observed correlations among resonance levels, which are about 10^-17–10^-16 s for medium and heavy nuclei and about 10^-20 s for the intermediate resonances. (auth.)

  11. On the origin of reproducible sequential activity in neural circuits

    Science.gov (United States)

    Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.

    2004-12-01

    Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model, that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in presence of noise in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
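
    The kind of sequential switching described by a stable heteroclinic sequence can be reproduced with a very small rate model. The sketch below (Python with NumPy) integrates a three-unit generalized Lotka-Volterra system with an asymmetric inhibition matrix of May-Leonard type; the parameter values are illustrative and not taken from the paper, but they satisfy the usual condition for a heteroclinic cycle, so the activity visits the three units in a reproducible order.

```python
import numpy as np

def glv_switching(T=300.0, dt=0.01, noise=1e-7, seed=1):
    """Three-unit Lotka-Volterra rate model with asymmetric inhibition.

    da_i/dt = a_i * (1 - sum_j rho[i, j] * a_j) + small noise.
    With rho below, unit 1 strongly suppresses unit 2, unit 2 suppresses
    unit 3, and unit 3 suppresses unit 1, producing sequential switching.
    """
    rng = np.random.default_rng(seed)
    rho = np.array([[1.0, 0.5, 2.0],
                    [2.0, 1.0, 0.5],
                    [0.5, 2.0, 1.0]])
    a = np.array([0.6, 0.2, 0.1])
    trajectory = []
    for _ in range(int(T / dt)):
        da = a * (1.0 - rho @ a) + noise * rng.standard_normal(3)
        a = np.clip(a + dt * da, 1e-12, None)   # simple Euler step, rates kept positive
        trajectory.append(a.copy())
    return np.array(trajectory)
```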

  12. Exploring the sequential lineup advantage using WITNESS.

    Science.gov (United States)

    Goodsell, Charles A; Gronlund, Scott D; Carlson, Curt A

    2010-12-01

    Advocates claim that the sequential lineup is an improvement over simultaneous lineup procedures, but no formal (quantitatively specified) explanation exists for why it is better. The computational model WITNESS (Clark, Appl Cogn Psychol 17:629-654, 2003) was used to develop theoretical explanations for the sequential lineup advantage. In its current form, WITNESS produced a sequential advantage only by pairing conservative sequential choosing with liberal simultaneous choosing. However, this combination failed to approximate four extant experiments that exhibited large sequential advantages. Two of these experiments became the focus of our efforts because the data were uncontaminated by likely suspect position effects. Decision-based and memory-based modifications to WITNESS approximated the data and produced a sequential advantage. The next step is to evaluate the proposed explanations and modify public policy recommendations accordingly.

  13. Sequential lineup presentation: Patterns and policy

    OpenAIRE

    Lindsay, R C L; Mansour, Jamal K; Beaudry, J L; Leach, A-M; Bertrand, M I

    2009-01-01

    Sequential lineups were offered as an alternative to the traditional simultaneous lineup. Sequential lineups reduce incorrect lineup selections; however, the accompanying loss of correct identifications has resulted in controversy regarding adoption of the technique. We discuss the procedure and research relevant to (1) the pattern of results found using sequential versus simultaneous lineups; (2) reasons (theory) for differences in witness responses; (3) two methodological issues; and (4) im...

  14. Biased lineups: sequential presentation reduces the problem.

    Science.gov (United States)

    Lindsay, R C; Lea, J A; Nosworthy, G J; Fulford, J A; Hector, J; LeVan, V; Seabrook, C

    1991-12-01

    Biased lineups have been shown to increase significantly false, but not correct, identification rates (Lindsay, Wallbridge, & Drennan, 1987; Lindsay & Wells, 1980; Malpass & Devine, 1981). Lindsay and Wells (1985) found that sequential lineup presentation reduced false identification rates, presumably by reducing reliance on relative judgment processes. Five staged-crime experiments were conducted to examine the effect of lineup biases and sequential presentation on eyewitness recognition accuracy. Sequential lineup presentation significantly reduced false identification rates from fair lineups as well as from lineups biased with regard to foil similarity, instructions, or witness attire, and from lineups biased in all of these ways. The results support recommendations that police present lineups sequentially.

  15. Immediate Sequential Bilateral Cataract Surgery

    DEFF Research Database (Denmark)

    Kessel, Line; Andresen, Jens; Erngaard, Ditte

    2015-01-01

    The aim of the present systematic review was to examine the benefits and harms associated with immediate sequential bilateral cataract surgery (ISBCS) with specific emphasis on the rate of complications, postoperative anisometropia, and subjective visual function in order to formulate evidence-based national Danish guidelines for cataract surgery. A systematic literature review in PubMed, Embase, and Cochrane central databases identified three randomized controlled trials that compared outcome in patients randomized to ISBCS or bilateral cataract surgery on two different dates. Meta-analyses were performed using the Cochrane Review Manager software. The quality of the evidence was assessed using the GRADE method (Grading of Recommendation, Assessment, Development, and Evaluation). We did not find any difference in the risk of complications or visual outcome in patients randomized to ISBCS or surgery...

  16. Random and cooperative sequential adsorption

    Science.gov (United States)

    Evans, J. W.

    1993-10-01

    Irreversible random sequential adsorption (RSA) on lattices, and continuum "car parking" analogues, have long received attention as models for reactions on polymer chains, chemisorption on single-crystal surfaces, adsorption in colloidal systems, and solid state transformations. Cooperative generalizations of these models (CSA) are sometimes more appropriate, and can exhibit richer kinetics and spatial structure, e.g., autocatalysis and clustering. The distribution of filled or transformed sites in RSA and CSA is not described by an equilibrium Gibbs measure. This is the case even for the saturation "jammed" state of models where the lattice or space cannot fill completely. However exact analysis is often possible in one dimension, and a variety of powerful analytic methods have been developed for higher dimensional models. Here we review the detailed understanding of asymptotic kinetics, spatial correlations, percolative structure, etc., which is emerging for these far-from-equilibrium processes.

  17. Intra-individual diagnostic image quality and organ-specific-radiation dose comparison between spiral cCT with iterative image reconstruction and z-axis automated tube current modulation and sequential cCT

    International Nuclear Information System (INIS)

    Wenz, Holger; Maros, Máté E.; Meyer, Mathias; Gawlitza, Joshua; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O.; Groden, Christoph; Henzler, Thomas

    2016-01-01

    •Superiority of spiral versus sequential cCT in image quality and organ-specific radiation dose. •Spiral cCT: lower organ-specific radiation dose in the eye lens compared to tilted sequential cCT. •State-of-the-art IR spiral cCT techniques have significant advantages over sequential cCT techniques. To prospectively evaluate image quality and organ-specific radiation dose of spiral cranial CT (cCT) combined with automated tube current modulation (ATCM) and iterative image reconstruction (IR) in comparison to sequential tilted cCT reconstructed with filtered back projection (FBP) without ATCM. 31 patients with a previously performed tilted non-contrast-enhanced sequential cCT acquisition on a 4-slice CT system with only FBP reconstruction and no ATCM were prospectively enrolled in this study for a clinically indicated cCT scan. All spiral cCT examinations were performed on a 3rd-generation dual-source CT system using ATCM in the z-axis direction. Images were reconstructed using both FBP and IR (levels 1–5). A Monte-Carlo-simulation-based analysis was used to compare organ-specific radiation dose. Subjective image quality for various anatomic structures was evaluated using a 4-point Likert scale and objective image quality was evaluated by comparing signal-to-noise ratios (SNR). Spiral cCT led to a significantly lower (p < 0.05) organ-specific radiation dose in all targets including the eye lens. Subjective image quality of spiral cCT datasets with an IR reconstruction level of 5 was rated significantly higher compared to the sequential cCT acquisitions (p < 0.0001). Consecutive mean SNR was significantly higher in all spiral datasets (FBP, IR 1–5) when compared to sequential cCT with a mean

  18. Sequential transformation of the structural and thermodynamic parameters of the complex particles, combining covalent conjugate (sodium caseinate + maltodextrin) with polyunsaturated lipids stabilized by a plant antioxidant, in the simulated gastro-intestinal conditions in vitro.

    Science.gov (United States)

    Antipova, Anna S; Zelikina, Darya V; Shumilina, Elena A; Semenova, Maria G

    2016-10-01

    The present work is focused on the structural transformation of the complexes, formed between covalent conjugate (sodium caseinate + maltodextrin) and an equimass mixture of the polyunsaturated lipids (PULs): (soy phosphatidylcholine + triglycerides of flaxseed oil) stabilized by a plant antioxidant (an essential oil of clove buds), in the simulated conditions of the gastrointestinal tract. The conjugate was used here as a food-grade delivery vehicle for the PULs. The release of these PULs at each stage of the simulated digestion was estimated. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Study of individual and group affective processes in the crew of a simulated mission to Mars: Positive affectivity as a valuable indicator of changes in the crew affectivity

    Science.gov (United States)

    Poláčková Šolcová, Iva; Lačev, Alek; Šolcová, Iva

    2014-07-01

    The success of a long-duration space mission depends on various technical demands as well as on the psychological (cognitive, affective, and motivational) adaptation of crewmembers and the quality of interactions within the crew. We examined the ways crewmembers of a 520-day simulated spaceflight to Mars (held in the Institute for Biomedical Problems, in Moscow) experienced and regulated their moods and emotions. Results show that crewmembers experienced predominantly positive emotions throughout their 520-day isolation and the changes in mood of the crewmembers were asynchronous and balanced. The study suggests that during the simulation, crewmembers experienced and regulated their emotions differently than they usually do in their everyday life. In isolation, crewmembers preferred to suppress and neutralize their negative emotions and express overtly only emotions with positive valence. Although the affective processes were almost invariable throughout the simulation, two periods of time when the level of positive emotions declined were identified. Regarding the findings, the paper suggests that changes in positive affectivity could be a more valuable indicator of human experience in demanding but professional environments than changes in negative affectivity. Finally, the paper discusses the phenomenology of emotions during a real space mission.

  20. Simulation

    DEFF Research Database (Denmark)

    Gould, Derek A; Chalmers, Nicholas; Johnson, Sheena J

    2012-01-01

    Recognition of the many limitations of traditional apprenticeship training is driving new approaches to learning medical procedural skills. Among simulation technologies and methods available today, computer-based systems are topical and bring the benefits of automated, repeatable, and reliable performance assessments. Human factors research is central to simulator model development that is relevant to real-world imaging-guided interventional tasks and to the credentialing programs in which it would be used.

  1. Research on parallel algorithm for sequential pattern mining

    Science.gov (United States)

    Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

    2008-03-01

    Sequential pattern mining is the mining of frequent sequences, ordered by time or other criteria, from a sequence database. Its initial motivation was to discover customer purchasing patterns over time by finding frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its applications have extended beyond business databases to new data sources such as the Web and to scientific fields such as DNA analysis. The data involved are typically massive and stored in a distributed fashion, characteristics that most existing sequential pattern mining algorithms do not address together. Motivated by these traits and drawing on parallel computing theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm follows the principle of pattern reduction and uses a divide-and-conquer strategy for parallelization. The first parallel task constructs frequent item sets using frequency concepts and search-space partitioning; the second builds frequent sequences by depth-first search at each processor. The algorithm needs to access the database only twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Using a random data generation procedure and differently designed information structures, the paper simulates the SPP algorithm in a concrete parallel environment and also implements the AprioriAll algorithm. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm achieves excellent speedup and efficiency.
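
    The distributed SPP algorithm itself is not reproduced here, but the pattern-growth idea it builds on (extending frequent prefixes depth-first instead of generating candidate sequences wholesale) can be sketched in a few lines. The following Python sketch is a single-process illustration with hypothetical names, not the SPP implementation.

```python
def mine_frequent_sequences(db, min_support):
    """Depth-first prefix growth over frequent item sequences.

    db is a list of sequences (lists of items).  A pattern is frequent if it
    occurs as a subsequence of at least min_support (>= 1) sequences in db.
    """
    def occurs_in(pattern, seq):
        i = 0
        for item in seq:
            if i < len(pattern) and item == pattern[i]:
                i += 1
        return i == len(pattern)

    def support(pattern):
        return sum(1 for seq in db if occurs_in(pattern, seq))

    items = sorted({item for seq in db for item in seq})
    frequent = {}

    def grow(prefix):
        for item in items:
            candidate = prefix + [item]
            s = support(candidate)
            if s >= min_support:                 # keep and extend this frequent prefix
                frequent[tuple(candidate)] = s
                grow(candidate)

    grow([])
    return frequent

# e.g. mine_frequent_sequences([["a", "b", "c"], ["a", "c"], ["b", "c"]], 2)
# yields {('a',): 2, ('a', 'c'): 2, ('b',): 2, ('b', 'c'): 2, ('c',): 3}
```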

  2. Fully vs. Sequentially Coupled Loads Analysis of Offshore Wind Turbines

    Energy Technology Data Exchange (ETDEWEB)

    Damiani, Rick; Wendt, Fabian; Musial, Walter; Finucane, Z.; Hulliger, L.; Chilka, S.; Dolan, D.; Cushing, J.; O'Connell, D.; Falk, S.

    2017-06-19

    The design and analysis methods for offshore wind turbines must consider the aerodynamic and hydrodynamic loads and response of the entire system (turbine, tower, substructure, and foundation) coupled to the turbine control system dynamics. Whereas a fully coupled (turbine and support structure) modeling approach is more rigorous, intellectual property concerns can preclude this approach. In fact, turbine control system algorithms and turbine properties are strictly guarded and often not shared. In many cases, a partially coupled analysis using separate tools and an exchange of reduced sets of data via sequential coupling may be necessary. In the sequentially coupled approach, the turbine and substructure designers will independently determine and exchange an abridged model of their respective subsystems to be used in their partners' dynamic simulations. Although the ability to achieve design optimization is sacrificed to some degree with a sequentially coupled analysis method, the central question here is whether this approach can deliver the required safety and how the differences in the results from the fully coupled method could affect the design. This work summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between these approaches through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.

  3. Trial Sequential Methods for Meta-Analysis

    Science.gov (United States)

    Kulinskaya, Elena; Wood, John

    2014-01-01

    Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…

  4. An automatic system for acidity determination based on sequential injection titration and the monosegmented flow approach.

    Science.gov (United States)

    Kozak, Joanna; Wójtowicz, Marzena; Gawenda, Nadzieja; Kościelniak, Paweł

    2011-06-15

    An automatic sequential injection system, combining monosegmented flow analysis, sequential injection analysis and sequential injection titration is proposed for acidity determination. The system enables controllable sample dilution and generation of standards of required concentration in a monosegmented sequential injection manner, sequential injection titration of the prepared solutions, data collecting, and handling. It has been tested on spectrophotometric determination of acetic, citric and phosphoric acids with sodium hydroxide used as a titrant and phenolphthalein or thymolphthalein (in the case of phosphoric acid determination) as indicators. Accuracy better than |4.4|% (RE) and repeatability better than 2.9% (RSD) have been obtained. It has been applied to the determination of total acidity in vinegars and various soft drinks. The system provides low sample (less than 0.3 mL) consumption. On average, analysis of a sample takes several minutes. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. The sequential structure of brain activation predicts skill.

    Science.gov (United States)

    Anderson, John R; Bothell, Daniel; Fincham, Jon M; Moon, Jungaa

    2016-01-29

    In an fMRI study, participants were trained to play a complex video game. They were scanned early and then again after substantial practice. While better players showed greater activation in one region (right dorsal striatum), their relative skill was better diagnosed by considering the sequential structure of whole brain activation. Using a cognitive model that played this game, we extracted a characterization of the mental states that are involved in playing a game and the statistical structure of the transitions among these states. There was a strong correspondence between this measure of sequential structure and the skill of different players. Using multi-voxel pattern analysis, it was possible to recognize, with relatively high accuracy, the cognitive states participants were in during particular scans. We used the sequential structure of these activation-recognized states to predict the skill of individual players. These findings indicate that important features about information-processing strategies can be identified from a model-based analysis of the sequential structure of brain activation. Copyright © 2015 Elsevier Ltd. All rights reserved.
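
    One simple way to capture the "sequential structure" referred to above is a row-normalized transition matrix over the recognized cognitive states, whose entries can then be used as predictors of skill. The sketch below (Python with NumPy) illustrates that bookkeeping only; it is not the authors' analysis pipeline.

```python
import numpy as np

def transition_matrix(state_sequence, n_states):
    """Row-normalized transition probabilities between labeled states.

    state_sequence : list of integer labels in [0, n_states), e.g. the
    cognitive state recognized for each scan of one participant.
    """
    counts = np.zeros((n_states, n_states))
    for a, b in zip(state_sequence[:-1], state_sequence[1:]):
        counts[a, b] += 1.0
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
```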

  6. Simulated Extreme Precipitation Indices over Northeast Brazil in Current Climate and Future Scenarios RCP4.5 and RCP8.5

    Science.gov (United States)

    Wender Santiago Marinho, Marcos; Araújo Costa, Alexandre; Cassain Sales, Domingo; Oliveira Guimarães, Sullyandro; Mariano da Silva, Emerson; das Chagas Vasconcelos Júnior, Francisco

    2013-04-01

    In this study, we analyzed extreme precipitation indices, for present and future modeled climates over Northeast Brazil (NEB), from CORDEX simulations over the domain of Tropical Americas. The period for the model validation was 1989-2007, using data from the European Centre (ECMWF) Reanalysis, ERA-Interim, as input to drive the regional model (RAMS 6.0). Reanalysis data were assimilated via both lateral boundaries and the entire domain (a much weaker "central nudging"). Six indices of extreme precipitation were calculated over NEB: the average number of days above 10, 20 and 30 mm in one year (R10, R20, R30), the number of consecutive dry days (CDD), the number of consecutive wet days (CWD) and the maximum rainfall in five consecutive days (RX5). Those indices were compared against two independent databases: MERRA (Modern Era Retrospective analysis for Research and Applications) and TRMM (Tropical Rainfall Measuring Mission). After validation, climate simulations were performed for the present climate (1985-2005) and short-term (2015-2035), mid-term (2045-2065) and long-term (2079 to 2099) future climates for two scenarios: RCP 4.5 and RCP 8.5, nesting RAMS into the HadGEM2-ES global model (a participant of CMIP5). Along with the indices, we also calculated Probability Distribution Functions (PDFs) to study the behavior of daily precipitation in the present and by the end of the 21st century (2079 to 2099) to assess possible changes under RCPs 4.5 and 8.5. The regional model is capable of representing relatively well the extreme precipitation indices for current climate, but there are some difficulties in performing a proper validation since the observed databases disagree significantly. Future projections show significant changes in most extreme indices. The Rnn indices generally tend to increase, especially under RCP8.5. More significant changes are projected for the long-term period, under RCP8.5, which shows a pronounced R30 enhancement over northern states. CDD tends
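
    The extreme precipitation indices used in this record follow standard ETCCDI-style definitions and are easy to compute from a daily series. The sketch below (Python with NumPy) is an illustrative implementation for a single station-year; threshold conventions such as the 1 mm wet-day limit are common choices, not necessarily the exact ones used in the study.

```python
import numpy as np

def extreme_precip_indices(daily_pr_mm, wet_day_mm=1.0):
    """R10/R20/R30, CDD, CWD and RX5 from a series of daily totals (mm)."""
    pr = np.asarray(daily_pr_mm, dtype=float)

    def longest_run(condition):
        best = run = 0
        for flag in condition:
            run = run + 1 if flag else 0
            best = max(best, run)
        return best

    rx5 = float(np.max(np.convolve(pr, np.ones(5), mode="valid"))) if pr.size >= 5 else float(pr.sum())
    return {
        "R10": int(np.sum(pr >= 10.0)),          # days with at least 10 mm
        "R20": int(np.sum(pr >= 20.0)),
        "R30": int(np.sum(pr >= 30.0)),
        "CDD": longest_run(pr < wet_day_mm),     # longest dry spell
        "CWD": longest_run(pr >= wet_day_mm),    # longest wet spell
        "RX5": rx5,                              # maximum 5-day total
    }
```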

  7. Sequential lineup laps and eyewitness accuracy.

    Science.gov (United States)

    Steblay, Nancy K; Dietrich, Hannah L; Ryan, Shannon L; Raczynski, Jeanette L; James, Kali A

    2011-08-01

    Police practice of double-blind sequential lineups prompts a question about the efficacy of repeated viewings (laps) of the sequential lineup. Two laboratory experiments confirmed the presence of a sequential lap effect: an increase in witness lineup picks from first to second lap, when the culprit was a stranger. The second lap produced more errors than correct identifications. In Experiment 2, lineup diagnosticity was significantly higher for sequential lineup procedures that employed a single versus double laps. Witnesses who elected to view a second lap made significantly more errors than witnesses who chose to stop after one lap or those who were required to view two laps. Witnesses with prior exposure to the culprit did not exhibit a sequential lap effect.

  8. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.

    2014-12-15

    This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.
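
    The multi-agent coordination layer is beyond a short example, but the sequential hypothesis test at the core of this framework can be illustrated with Wald's classical sequential probability ratio test for a single agent. The sketch below (Python; Gaussian observations with known variance are an assumption made for the example) accumulates the log-likelihood ratio sample by sample and stops at the first threshold crossing.

```python
import math

def sprt_gaussian_mean(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05):
    """Wald SPRT for H0: mean = mu0 versus H1: mean = mu1, known sigma.

    samples is a list of observations; alpha and beta are the target
    false-alarm and miss probabilities that set the stopping thresholds.
    """
    upper = math.log((1.0 - beta) / alpha)     # accept H1 above this
    lower = math.log(beta / (1.0 - alpha))     # accept H0 below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += ((mu1 - mu0) * x + (mu0 ** 2 - mu1 ** 2) / 2.0) / sigma ** 2
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "undecided", len(samples)
```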

  9. Sequential Product of Quantum Effects: An Overview

    Science.gov (United States)

    Gudder, Stan

    2010-12-01

    This article presents an overview for the theory of sequential products of quantum effects. We first summarize some of the highlights of this relatively recent field of investigation and then provide some new results. We begin by discussing sequential effect algebras which are effect algebras endowed with a sequential product satisfying certain basic conditions. We then consider sequential products of (discrete) quantum measurements. We next treat transition effect matrices (TEMs) and their associated sequential product. A TEM is a matrix whose entries are effects and whose rows form quantum measurements. We show that TEMs can be employed for the study of quantum Markov chains. Finally, we prove some new results concerning TEMs and vector densities.
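
    For concreteness, the sequential product referred to in this overview is usually defined on effects (Hermitian matrices with eigenvalues in [0, 1]) as E ∘ F = E^{1/2} F E^{1/2}. The NumPy sketch below computes it via an eigendecomposition; it is a minimal numerical illustration, not code from the article.

```python
import numpy as np

def sequential_product(E, F):
    """Sequential product E ∘ F = E^{1/2} F E^{1/2} of two quantum effects."""
    w, V = np.linalg.eigh(E)                          # E is Hermitian
    sqrt_E = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T
    return sqrt_E @ F @ sqrt_E

# For commuting effects the sequential product reduces to the ordinary product:
E = np.diag([0.5, 0.25])
F = np.diag([0.2, 0.8])
assert np.allclose(sequential_product(E, F), E @ F)
```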

  10. SCRLH-TL Based Sequential Rotation Feed Network for Broadband Circularly Polarized Antenna Array

    Directory of Open Access Journals (Sweden)

    B. F. Zong

    2016-04-01

    Full Text Available In this paper, a broadband circularly polarized (CP) microstrip antenna array using a composite right/left-handed transmission line (SCRLH-TL) based sequential rotation (SR) feed network is presented. The characteristics of a SCRLH-TL are initially investigated. Then, a broadband and low insertion loss 45º phase shifter is designed using the SCRLH-TL, and the phase shifter is employed in constructing an SR feed network for the CP antenna array. To validate the design method of the SR feed network, a 2×2 antenna array comprising sequentially rotated coupled stacked CP antenna elements is designed, fabricated and measured. Both the simulated and measured results indicate that the performances of the antenna element are further enhanced when the SR network is used. The antenna array exhibits a VSWR of less than 1.8 from 4 GHz to 7 GHz and a 3 dB axial ratio (AR) from 4.4 GHz to 6.8 GHz. Also, a high peak gain of 13.7 dBic is obtained. Besides, the normalized radiation patterns at the operating frequencies are symmetrical and the side lobe levels are low at φ=0º and φ=90º.

  11. Validation studies on indexed sequential modeling for the Colorado River Basin

    International Nuclear Information System (INIS)

    Labadie, J.W.; Fontane, D.G.; Salas, J.D.; Ouarda, T.

    1991-01-01

    This paper reports on a method called indexed sequential modeling (ISM) that has been developed by the Western Area Power Administration to estimate reliable levels of project dependable power capacity (PDC) and applied to several federal hydro systems in the Western U.S. The validity of ISM in relation to more commonly accepted stochastic modeling approaches is analyzed by applying it to the Colorado River Basin using the Colorado River Simulation System (CRSS) developed by the U.S. Bureau of Reclamation. Performance of ISM is compared with results from input of stochastically generated data using the LAST Applied Stochastic Techniques Package. Results indicate that output generated from ISM synthetically generated sequences displays an acceptable correspondence with results obtained from final convergent stochastically generated hydrology for the Colorado River Basin
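
    The record does not spell out the mechanics of ISM, but the method is commonly described as cycling through the historical hydrologic record: each synthetic trace starts at a different historical year and wraps around to the beginning, so every trace preserves the observed sequencing. The sketch below (Python) is a generic illustration of that indexing scheme under this assumption, not the CRSS implementation.

```python
def indexed_sequential_traces(historical, trace_length):
    """Generate ISM-style traces by starting the historical record at each year.

    historical : list of annual (or monthly) flow values.
    trace_length : number of values per trace; traces wrap around the record.
    """
    n = len(historical)
    traces = []
    for start in range(n):
        trace = [historical[(start + k) % n] for k in range(trace_length)]
        traces.append(trace)
    return traces
```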

  12. Simulation

    CERN Document Server

    Ross, Sheldon

    2006-01-01

    Ross's Simulation, Fourth Edition introduces aspiring and practicing actuaries, engineers, computer scientists and others to the practical aspects of constructing computerized simulation studies to analyze and interpret real phenomena. Readers learn to apply results of these analyses to problems in a wide variety of fields to obtain effective, accurate solutions and make predictions about future outcomes. This text explains how a computer can be used to generate random numbers, and how to use these random numbers to generate the behavior of a stochastic model over time. It presents the statist
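
    A minimal example of the book's theme of driving a stochastic model with generated random numbers: the sketch below (Python) uses the inverse-transform method to turn uniform random numbers into exponential interarrival times and simulates the arrival times of a Poisson process. It is an illustrative exercise in the spirit of the text, not an excerpt from it.

```python
import math
import random

def poisson_arrivals(rate, horizon, seed=42):
    """Arrival times of a Poisson process with the given rate on [0, horizon].

    Each uniform draw U in (0, 1] is mapped to an exponential interarrival
    time via the inverse transform T = -ln(U) / rate.
    """
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        u = 1.0 - rng.random()                 # in (0, 1], avoids log(0)
        t += -math.log(u) / rate
        if t > horizon:
            return arrivals
        arrivals.append(t)
```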

  13. The subtyping of primary aldosteronism by adrenal vein sampling: sequential blood sampling causes factitious lateralization.

    Science.gov (United States)

    Rossitto, Giacomo; Battistel, Michele; Barbiero, Giulio; Bisogni, Valeria; Maiolino, Giuseppe; Diego, Miotto; Seccia, Teresa M; Rossi, Gian Paolo

    2018-02-01

    The pulsatile secretion of adrenocortical hormones and a stress reaction occurring when starting adrenal vein sampling (AVS) can affect the selectivity and also the assessment of lateralization when sequential blood sampling is used. We therefore tested the hypothesis that a simulated sequential blood sampling could decrease the diagnostic accuracy of lateralization index for identification of aldosterone-producing adenoma (APA), as compared with bilaterally simultaneous AVS. In 138 consecutive patients who underwent subtyping of primary aldosteronism, we compared the results obtained simultaneously bilaterally when starting AVS (t-15) and 15 min after (t0), with those gained with a simulated sequential right-to-left AVS technique (R ⇒ L) created by combining hormonal values obtained at t-15 and at t0. The concordance between simultaneously obtained values at t-15 and t0, and between simultaneously obtained values and values gained with a sequential R ⇒ L technique, was also assessed. We found a marked interindividual variability of lateralization index values in the patients with bilaterally selective AVS at both time point. However, overall the lateralization index simultaneously determined at t0 provided a more accurate identification of APA than the simulated sequential lateralization indexR ⇒ L (P = 0.001). Moreover, regardless of which side was sampled first, the sequential AVS technique induced a sequence-dependent overestimation of lateralization index. While in APA patients the concordance between simultaneous AVS at t0 and t-15 and between simultaneous t0 and sequential technique was moderate-to-good (K = 0.55 and 0.66, respectively), in non-APA patients, it was poor (K = 0.12 and 0.13, respectively). Sequential AVS generates factitious between-sides gradients, which lower its diagnostic accuracy, likely because of the stress reaction arising upon starting AVS.
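
    The lateralization arithmetic discussed here is conventionally based on cortisol-corrected aldosterone ratios. The sketch below (Python) shows that bookkeeping for one AVS study; the diagnostic cut-offs for selectivity and lateralization vary between centres and are deliberately not hard-coded, and the function names are illustrative rather than taken from the paper.

```python
def selectivity_index(adrenal_cortisol, peripheral_cortisol):
    """Adrenal-to-peripheral cortisol ratio used to judge catheter selectivity."""
    return adrenal_cortisol / peripheral_cortisol

def lateralization_index(aldo_left, cortisol_left, aldo_right, cortisol_right):
    """Dominant over contralateral ratio of cortisol-corrected aldosterone."""
    left = aldo_left / cortisol_left
    right = aldo_right / cortisol_right
    dominant, contralateral = max(left, right), min(left, right)
    return dominant / contralateral
```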

  14. Assessment of the ozone-nitrogen oxide-volatile organic compound sensitivity of Mexico City through an indicator-based approach: measurements and numerical simulations comparison.

    Science.gov (United States)

    Torres-Jardón, Ricardo; García-Reynoso, J Agustín; Jazcilevich, Arón; Ruiz-Suárez, L Gerardo; Keener, Tim C

    2009-10-01

    The ozone (O3) sensitivity to nitrogen oxides (NOx, or nitric oxide [NO] + nitrogen dioxide [NO2]) versus volatile organic compounds (VOCs) in the Mexico City metropolitan area (MCMA) is a current issue of scientific controversy. To shed light on this issue, we compared measurements of the indicator species O3/NOy (where NOy represents the sum of NO + NO2 + nitric acid [HNO3] + peroxyacetyl nitrate [PAN] + others), NOy, and the semiempirically derived O3/NOz(surrogate) (where NOz(surrogate) is the derived surrogate NOz, and NOz represents NOx reaction products, or NOy - NOx) with results of numerical predictions reproducing the transition regimes between NOx and VOC sensitivities. Ambient air concentrations of O3, NOx, and NOy were measured from April 14 to 25, 2004 in one downwind receptor site of photochemically aged air masses within Mexico City. MCMA-derived transition values for an episode day occurring during the same monitoring period were obtained through a series of photochemical simulations using the Multiscale Climate and Chemistry Model (MCCM). The comparison between the measured indicator species and the simulated spatial distribution of the indicators O3/ NOy, O3/NOz(surrogate), and NOy in MCMA suggest that O3 in this megacity is likely VOC-sensitive. This is in opposition to past studies that, on the basis of the observed morning VOC/NOx ratios, have concluded that O3 in Mexico City is NOx-sensitive. Simulated MCMA-derived sensitive transition values for O3/NOy, hydrogen peroxide (H2O2)/HNO3, and NOy were found to be in agreement with threshold criteria proposed for other regions in North America and Europe, although the transition crossover for O3/NOz and O3/HNO3 was not consistent with values reported elsewhere. An additional empirical evaluation of weekend/weekday differences in average maximum O3 concentrations and 6:00- to 9:00-a.m. NOx and NO levels registered at the same site in April 2004 indirectly confirmed the above results. A preliminary

  15. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-01-01

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance finite element methods which depend on the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h0 > h1 > ⋯ > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context. That is, relative to exact sampling and Monte Carlo for the distribution at the finest level hL. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  16. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h0 > h1 > ⋯ > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context. That is, relative to exact sampling and Monte Carlo for the distribution at the finest level hL. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  17. Sequential Scintigraphy in Renal Transplantation

    Energy Technology Data Exchange (ETDEWEB)

    Winkel, K. zum; Harbst, H.; Schenck, P.; Franz, H. E.; Ritz, E.; Roehl, L.; Ziegler, M.; Ammann, W.; Maier-Borst, W. [Institut Fuer Nuklearmedizin, Deutsches Krebsforschungszentrum, Heidelberg, Federal Republic of Germany (Germany)

    1969-05-15

    Based on experience gained from more than 1600 patients with proved or suspected kidney diseases and on results of extended studies with dogs, sequential scintigraphy was performed after renal transplantation in dogs. After intravenous injection of 500 µCi of 131I-Hippuran, scintiphotos were taken during the first minute with an exposure time of 15 sec each and thereafter with an exposure of 2 min up to at least 16 min. Several examinations were evaluated digitally. 26 examinations were performed on 11 dogs with homotransplanted kidneys. Immediately after transplantation the renal function was almost normal and the bladder was filled in due time. At the beginning of rejection the initial uptake of radioactive Hippuran was reduced. The intrarenal transport became delayed; probably the renal extraction rate decreased. Corresponding to the development of an oedema in the transplant, the uptake area increased in size. In cases of thrombosis of the main artery there was no evidence of any uptake of radioactivity in the transplant. Similar results were obtained in 41 examinations on 15 persons. Patients with postoperative anuria due to acute tubular necrosis still showed some uptake of radioactivity, contrary to those with thrombosis of the renal artery, where no uptake was found. In cases of rejection the most frequent signs were a reduced initial uptake and a delayed intrarenal transport of radioactive Hippuran. Infarction could be detected by a reduced uptake in distinct areas of the transplant. (author)

  18. Sequential provisional implant prosthodontics therapy.

    Science.gov (United States)

    Zinner, Ira D; Markovits, Stanley; Jansen, Curtis E; Reid, Patrick E; Schnader, Yale E; Shapiro, Herbert J

    2012-01-01

    The fabrication and long-term use of first- and second-stage provisional implant prostheses is critical to create a favorable prognosis for function and esthetics of a fixed-implant supported prosthesis. The fixed metal and acrylic resin cemented first-stage prosthesis, as reviewed in Part I, is needed for prevention of adjacent and opposing tooth movement, pressure on the implant site as well as protection to avoid micromovement of the freshly placed implant body. The second-stage prosthesis, reviewed in Part II, should be used following implant uncovering and abutment installation. The patient wears this provisional prosthesis until maturation of the bone and healing of soft tissues. The second-stage provisional prosthesis is also a fail-safe mechanism for possible early implant failures and also can be used with late failures and/or for the necessity to repair the definitive prosthesis. In addition, the screw-retained provisional prosthesis is used if and when an implant requires removal or other implants are to be placed as in a sequential approach. The creation and use of both first- and second-stage provisional prostheses involve a restorative dentist, dental technician, surgeon, and patient to work as a team. If the dentist alone cannot do diagnosis and treatment planning, surgery, and laboratory techniques, he or she needs help by employing the expertise of a surgeon and a laboratory technician. This team approach is essential for optimum results.

  19. Stochastic simulation of large grids using free and public domain software

    NARCIS (Netherlands)

    Bruin, de S.; Wit, de A.J.W.

    2005-01-01

    This paper proposes a tiled map procedure enabling sequential indicator simulation on grids consisting of several tens of millions of cells, without imposing excessive memory requirements. Spatial continuity across map tiles is handled by conditioning adjacent tiles on their shared boundaries. Tiles
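
    A schematic sketch of the tiling idea follows, assuming a hypothetical single-tile simulator sis_simulate(cells, cond) (e.g. a wrapper around GSLIB's sisim); only the boundary bookkeeping described in the abstract is shown, not the indicator kriging itself.

        import numpy as np

        def simulate_tiled(nx, ny, tile, sis_simulate, hard_data):
            """Sweep a large grid tile by tile; each tile is conditioned on the
            hard data plus the already-simulated boundary cells of previously
            simulated neighbours, preserving spatial continuity across tile edges.
            `sis_simulate(cells, cond)` is a stand-in for a single-tile
            sequential indicator simulation given conditioning points.
            """
            grid = np.full((ny, nx), np.nan)
            cond = dict(hard_data)                       # {(ix, iy): indicator value}
            for j0 in range(0, ny, tile):
                for i0 in range(0, nx, tile):
                    cells = [(i, j)
                             for j in range(j0, min(j0 + tile, ny))
                             for i in range(i0, min(i0 + tile, nx))]
                    values = sis_simulate(cells, cond)   # one tile, conditioned
                    for (i, j), v in zip(cells, values):
                        grid[j, i] = v
                        # keep only the tile's outer ring as conditioning data
                        if i in (i0, min(i0 + tile, nx) - 1) or \
                           j in (j0, min(j0 + tile, ny) - 1):
                            cond[(i, j)] = v
            return grid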

  20. Managerial adjustment and its limits: sequential fault in comparative perspective

    Directory of Open Access Journals (Sweden)

    Flávio da Cunha Rezende

    2008-01-01

    This article focuses on explanations for sequential faults in administrative reform. It deals with the limits of managerial adjustment in an approach that attempts to connect theory and empirical data, articulating three levels of analysis. The first level presents comparative evidence of sequential fault within reforms in national governments through a set of indicators geared toward understanding changes in the role of the state. In light of analyses of a representative set of comparative studies on reform implementation, the second analytical level proceeds to identify four typical mechanisms that are present in explanations of managerial adjustment faults. In this way, we seek to configure an explanatory matrix for theories on sequential fault. Next we discuss the experience of management reform in the Brazilian context, conferring special attention on one of the mechanisms that creates fault: the control dilemma. The major hypotheses that guide our article are that reforms lead to sequential fault and that there are at least four causal mechanisms that produce it: (a) transaction costs involved in producing reforms; (b) performance legacy; (c) predominance of fiscal adjustment; and (d) the control dilemma. These mechanisms act separately or in concert, and act to decrease chances for a transformation of State managerial patterns. Major evidence analyzed in this article lends consistency to the general argument that reforms have failed in their attempts to reduce public expenses, alter patterns of resource allocation, reduce the labor force and change the role of the State. Our major conclusion is that reforms fail sequentially and managerial adjustment displays considerable limitations, particularly those of a political nature.

  1. Results of simultaneous and sequential pediatric liver and kidney transplantation.

    Science.gov (United States)

    Rogers, J; Bueno, J; Shapiro, R; Scantlebury, V; Mazariegos, G; Fung, J; Reyes, J

    2001-11-27

    The indications for simultaneous and sequential pediatric liver (LTx) and kidney (KTx) transplantation have not been well defined. We herein report the results of our experience with these procedures in children with end-stage liver disease and/or subsequent end-stage renal disease. Between 1984 and 1995, 12 LTx recipients received 15 kidney allografts. Eight simultaneous and seven sequential LTx/KTx were performed. There were six males and six females, with a mean age of 10.9 years (1.5-23.7). One of the eight simultaneous LTx/KTx was part of a multivisceral allograft. Five KTx were performed at varied intervals after successful LTx, one KTx was performed after a previous simultaneous LTx/KTx, and one KTx was performed after previous sequential LTx/KTx. Immunosuppression was with tacrolimus or cyclosporine and steroids. Indications for LTx were oxalosis (four), congenital hepatic fibrosis (two), cystinosis (one), polycystic liver disease (one), A-1-A deficiency (one), Total Parenteral Nutrition (TPN)-related (one), cryptogenic cirrhosis (one), and hepatoblastoma (one). Indications for KTx were oxalosis (four), drug-induced (four), polycystic kidney disease (three), cystinosis (one), and glomerulonephritis (1). With a mean follow-up of 58 months (0.9-130), the overall patient survival rate was 58% (7/12). One-year and 5-year actuarial patient survival rates were 66% and 58%, respectively. Patient survival rates at 1 year after KTx according to United Network of Organ Sharing (liver) status were 100% for status 3, 50% for status 2, and 0% for status 1. The overall renal allograft survival rate was 47%. Actuarial renal allograft survival rates were 53% at 1 and 5 years. The overall hepatic allograft survival rate was equivalent to the overall patient survival rate (58%). Six of seven surviving patients have normal renal allograft function, and one patient has moderate chronic allograft nephropathy. All surviving patients have normal hepatic allograft function. Six

  2. Evaluation of Nine Consensus Indices in Delphi Foresight Research and Their Dependency on Delphi Survey Characteristics: A Simulation Study and Debate on Delphi Design and Interpretation.

    Science.gov (United States)

    Birko, Stanislav; Dove, Edward S; Özdemir, Vural

    2015-01-01

    The extent of consensus (or the lack thereof) among experts in emerging fields of innovation can serve as antecedents of scientific, societal, investor and stakeholder synergy or conflict. Naturally, how we measure consensus is of great importance to science and technology strategic foresight. The Delphi methodology is a widely used anonymous survey technique to evaluate consensus among a panel of experts. Surprisingly, there is little guidance on how indices of consensus can be influenced by parameters of the Delphi survey itself. We simulated a classic three-round Delphi survey building on the concept of clustered consensus/dissensus. We evaluated three study characteristics that are pertinent for design of Delphi foresight research: (1) the number of survey questions, (2) the sample size, and (3) the extent to which experts conform to group opinion (the Group Conformity Index) in a Delphi study. Their impacts on the following nine Delphi consensus indices were then examined in 1000 simulations: Clustered Mode, Clustered Pairwise Agreement, Conger's Kappa, De Moivre index, Extremities Version of the Clustered Pairwise Agreement, Fleiss' Kappa, Mode, the Interquartile Range and Pairwise Agreement. The dependency of a consensus index on the Delphi survey characteristics was expressed from 0.000 (no dependency) to 1.000 (full dependency). The number of questions (range: 6 to 40) in a survey did not have a notable impact whereby the dependency values remained below 0.030. The variation in sample size (range: 6 to 50) displayed the top three impacts for the Interquartile Range, the Clustered Mode and the Mode (dependency = 0.396, 0.130, 0.116, respectively). The Group Conformity Index, a construct akin to measuring stubbornness/flexibility of experts' opinions, greatly impacted all nine Delphi consensus indices (dependency = 0.200 to 0.504), except the Extremity CPWA and the Interquartile Range that were impacted only beyond the first decimal point (dependency = 0
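
    Two of the simpler consensus indices named above (Pairwise Agreement and the Interquartile Range) can be computed on a simulated response matrix roughly as follows; the panel size, number of questions and rating scale below are arbitrary illustrations, not the settings used in the paper.

        import numpy as np
        from itertools import combinations

        def pairwise_agreement(ratings):
            """Share of expert pairs giving identical ratings, averaged over questions.
            `ratings` is an (experts x questions) array of Likert scores."""
            n, _ = ratings.shape
            pairs = list(combinations(range(n), 2))
            agree = [(ratings[i] == ratings[j]).mean() for i, j in pairs]
            return float(np.mean(agree))

        def interquartile_range(ratings):
            """Median IQR across questions; a smaller IQR indicates tighter consensus."""
            q75, q25 = np.percentile(ratings, [75, 25], axis=0)
            return float(np.median(q75 - q25))

        # Simulated single Delphi round: 20 experts, 10 questions, 1-5 scale.
        rng = np.random.default_rng(42)
        ratings = rng.integers(1, 6, size=(20, 10))
        print(pairwise_agreement(ratings), interquartile_range(ratings))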

  3. Tradable permit allocations and sequential choice

    Energy Technology Data Exchange (ETDEWEB)

    MacKenzie, Ian A. [Centre for Economic Research, ETH Zuerich, Zurichbergstrasse 18, 8092 Zuerich (Switzerland)

    2011-01-15

    This paper investigates initial allocation choices in an international tradable pollution permit market. For two sovereign governments, we compare allocation choices that are either simultaneously or sequentially announced. We show sequential allocation announcements result in higher (lower) aggregate emissions when announcements are strategic substitutes (complements). Whether allocation announcements are strategic substitutes or complements depends on the relationship between the follower's damage function and governments' abatement costs. When the marginal damage function is relatively steep (flat), allocation announcements are strategic substitutes (complements). For quadratic abatement costs and damages, sequential announcements provide a higher level of aggregate emissions. (author)

  4. Sequential Generalized Transforms on Function Space

    Directory of Open Access Journals (Sweden)

    Jae Gil Choi

    2013-01-01

    We define two sequential transforms on a function space C_{a,b}[0,T] induced by a generalized Brownian motion process. We then establish the existence of the sequential transforms for functionals in a Banach algebra of functionals on C_{a,b}[0,T]. We also establish that any one of these transforms acts like an inverse transform of the other transform. Finally, we give some remarks about certain relations between our sequential transforms and other well-known transforms on C_{a,b}[0,T].

  5. A Bayesian sequential design using alpha spending function to control type I error.

    Science.gov (United States)

    Zhu, Han; Yu, Qingzhao

    2017-10-01

    We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects from different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design that sets equal critical values for all interim analyses. When compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that at the same sample size the null hypothesis is the least likely to be rejected at an early stage of the trial. And finally, we show that adding a stop-for-futility step in the Bayesian sequential design can reduce the overall type I error and the actual sample sizes.
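
    As a rough illustration of the alpha spending functions being compared (not of the Bayesian design itself), the cumulative and per-look type I error allotted by O'Brien-Fleming-type and Pocock-type spending can be tabulated as below; converting the increments into critical values additionally requires the joint distribution of the interim statistics (Lan-DeMets recursion), which is omitted here.

        from math import log, sqrt, e
        from scipy.stats import norm

        def obrien_fleming_spend(t, alpha=0.05):
            """Lan-DeMets O'Brien-Fleming-type spending: cumulative type I error
            spent at information fraction t in (0, 1]."""
            return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / sqrt(t)))

        def pocock_spend(t, alpha=0.05):
            """Pocock-type spending function."""
            return alpha * log(1.0 + (e - 1.0) * t)

        looks = [0.25, 0.50, 0.75, 1.00]          # equally spaced interim analyses
        for spend in (obrien_fleming_spend, pocock_spend):
            cum = [spend(t) for t in looks]
            inc = [cum[0]] + [b - a for a, b in zip(cum, cum[1:])]
            print(spend.__name__, [round(x, 4) for x in inc])

    The very small first increment of the O'Brien-Fleming function is what makes it the most conservative at early looks, as stated in the abstract.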

  6. Double-blind photo lineups using actual eyewitnesses: an experimental test of a sequential versus simultaneous lineup procedure.

    Science.gov (United States)

    Wells, Gary L; Steblay, Nancy K; Dysart, Jennifer E

    2015-02-01

    Eyewitnesses (494) to actual crimes in 4 police jurisdictions were randomly assigned to view simultaneous or sequential photo lineups using laptop computers and double-blind administration. The sequential procedure used in the field experiment mimicked how it is conducted in actual practice (e.g., using a continuation rule, witness does not know how many photos are to be viewed, witnesses resolve any multiple identifications), which is not how most lab experiments have tested the sequential lineup. No significant differences emerged in rates of identifying lineup suspects (25% overall) but the sequential procedure produced a significantly lower rate (11%) of identifying known-innocent lineup fillers than did the simultaneous procedure (18%). The simultaneous/sequential pattern did not significantly interact with estimator variables and no lineup-position effects were observed for either the simultaneous or sequential procedures. Rates of nonidentification were not significantly different for simultaneous and sequential but nonidentifiers from the sequential procedure were more likely to use the "not sure" response option than were nonidentifiers from the simultaneous procedure. Among witnesses who made an identification, 36% (41% of simultaneous and 32% of sequential) identified a known-innocent filler rather than a suspect, indicating that eyewitness performance overall was very poor. The results suggest that the sequential procedure that is used in the field reduces the identification of known-innocent fillers, but the differences are relatively small.

  7. Efficacy of premixed versus sequential administration of ...

    African Journals Online (AJOL)

    sequential administration in separate syringes on block characteristics, haemodynamic parameters, side effect profile and postoperative analgesic requirement. Trial design: This was a prospective, randomised clinical study. Method: Sixty orthopaedic patients scheduled for elective lower limb surgery under spinal ...

  8. Structural Consistency, Consistency, and Sequential Rationality.

    OpenAIRE

    Kreps, David M; Ramey, Garey

    1987-01-01

    Sequential equilibria comprise consistent beliefs and a sequentially rational strategy profile. Consistent beliefs are limits of Bayes rational beliefs for sequences of strategies that approach the equilibrium strategy. Beliefs are structurally consistent if they are rationalized by some single conjecture concerning opponents' strategies. Consistent beliefs are not necessarily structurally consistent, notwithstanding a claim by Kreps and Robert Wilson (1982). Moreover, the spirit of stru...

  9. Sequential extraction applied to Peruibe black mud, SP, Brazil

    International Nuclear Information System (INIS)

    Torrecilha, Jefferson Koyaishi

    2014-01-01

    The Peruibe black mud is used in the therapeutic treatment of psoriasis, peripheral dermatitis, acne and seborrhoea, as well as of myalgia, arthritis, rheumatism and non-articular processes. Like other medicinal clays, it may not be free from adverse health effects, owing to hazardous minerals that can affect the respiratory system and to the presence of toxic elements. Since it is used for therapeutic purposes, the material should be fully characterized, and thus samples of Peruibe black mud were analyzed to determine physical and chemical properties: moisture content, organic matter and loss on ignition; pH, particle size, cation exchange capacity and swelling index. The elemental composition was determined by neutron activation analysis, graphite furnace atomic absorption spectrometry and X-ray fluorescence; the mineralogical composition was determined by X-ray diffraction. Another tool widely used to evaluate the behavior of trace elements in various environmental matrices is sequential extraction. Thus, a sequential extraction procedure was applied to fractionate the mud into specific geochemical forms and to verify how, and how much of, the elements may be contained in it. Among the several sequential extraction procedures, the BCR-701 (Community Bureau of Reference) method was used since it is considered the most reproducible of them. A simple extraction with artificial sweat was also applied in order to verify which components are potentially available for absorption by the patient's skin during topical treatment. The results indicated that the mud is basically composed of a silty-clay material, rich in organic matter and with good cation exchange capacity. There were no significant variations in the mineralogy and elemental composition of the in natura and mature mud forms. The analysis by sequential extraction and by simple extraction indicated that the elements possibly available in larger

  10. Partitioning washoff of manure-borne fecal indicators (Escherichia coli and stanols) into splash and hydraulic components: field rainfall simulations in a tropical agro-ecosystem.

    Science.gov (United States)

    Ribolzi, Olivier; Rochelle-Newall, Emma J.; Janeau, Jean-Louis; Viguier, Marion; Jardé, Emilie; Latsachack, Keooudone; Henri-Des-Tureaux, Thierry; Thammahacksac, Chanthamousone; Mugler, Claude; Valentin, Christian; Sengtaheuanghoung, Oloth

    2017-04-01

    Overland flow from manured fields and pastures is known to be an important mechanism by which organisms of faecal origin are transferred to streams in rural watersheds. In the tropical montane areas of South-East Asia, recent changes in land use have induced increased runoff, soil erosion and in-stream suspended sediment loads, resulting in increased microbial pathogen dissemination and contamination of stream waters. The majority of enteric and environmental bacteria in aquatic systems are associated with particles such as sediments, which can strongly influence their survival and transport characteristics. Escherichia coli (E. coli) has emerged as one of the most appropriate microbial indicators of faecal contamination of natural waters, with the presence of E. coli indicating that faecal contamination is present. In association with E. coli, faecal stanols can also be used as a microbial source tracking tool for the identification of the origin of the faecal contamination (e.g. livestock, human, etc.). Field rainfall simulations were used to examine how E. coli and stanols are exported from the surface of upland agricultural soils during overland flow events. The objectives were to characterize the loss dynamics of these indicators from agricultural soils contaminated with livestock waste, and to partition total detachment into the splash and hydraulic components. Nine 1-m2 microplots were divided into triplicate treatment groups: (a) controls with no amendments, (b) amended with pig manure or (c) amended with poultry manure. Each plot was divided into two 0.5-m2 rectangular subplots. For each simulation, one subplot was designated as a rain splash treatment; the other was covered with a 2-mm grid size wire screen 10 cm above the soil surface to break the raindrops into fine droplets, thus drastically reducing their kinetic energy. E. coli concentrations in overland flow were estimated for both the attached and free-living fractions, and stanols were measured on the particulate matter washed

  11. Kr-85m activity as burnup measurement indicator in a pebble bed reactor based on ORIGEN2.1 Computer Simulation

    Science.gov (United States)

    Husnayani, I.; Udiyani, P. M.; Bakhri, S.; Sunaryo, G. R.

    2018-02-01

    The Pebble Bed Reactor (PBR) is a high-temperature gas-cooled reactor which employs graphite as a moderator and helium as a coolant. In a multi-pass PBR, burnup of the fuel pebble must be measured in each cycle by online measurement in order to determine whether the fuel pebble should be reloaded into the core for another cycle or moved out of the core into spent fuel storage. One of the well-known methods for measuring burnup is based on the activity of radionuclide decay inside the fuel pebble. In this work, the activity and gamma emission of Kr-85m were studied in order to investigate the feasibility of Kr-85m as a burnup measurement indicator in a PBR. The activity and gamma emission of Kr-85m were estimated using the ORIGEN2.1 computer code. The parameters of HTR-10 were taken as a case study in performing the ORIGEN2.1 simulation. The results show that the activity evolution of Kr-85m has a good relationship with the burnup of the pebble fuel in each cycle. The Kr-85m activity reduction in each burnup step, in the range of 12% to 4%, is considered sufficient to show the burnup level in each cycle. The gamma emission of Kr-85m is also sufficiently high, in the order of 10^10 photons/second. From these results, it can be concluded that Kr-85m is suitable to be used as a burnup measurement indicator in a pebble bed reactor.

  12. Use of sequential extraction to assess metal partitioning in soils

    International Nuclear Information System (INIS)

    Kaasalainen, Marika; Yli-Halla, Markku

    2003-01-01

    The state of heavy metal pollution and the mobility of Cd, Cu, Ni, Cr, Pb and Zn were studied in three texturally different agricultural soil profiles near a Cu-Ni smelter in Harjavalta, Finland. The pseudo-total concentrations were determined by an aqua regia procedure. Metals were also determined after division into four fractions by sequential extraction with (1) acetic acid (exchangeable and specifically adsorbed metals), (2) a reducing agent (bound to Fe/Mn hydroxides), (3) an oxidizing agent (bound to soil organic matter) and (4) aqua regia (bound to mineral structures). Fallout from the smelter has increased the concentrations of Cd, Cu and Ni in the topsoil, where 75-90% of Cd, 49-72% of Cu and 22-52% of Ni occurred in the first two fractions. Slight Pb and Zn pollution was evident as well. High proportions of mobile Cd, Cu and Ni also deeper in the sandy soil, closest to the smelter, indicated some downward movement of metals. The hydroxide-bound fraction of Pb dominated in almost all soils and horizons, while Ni, Cr and Zn mostly occurred in mineral structures. Aqua regia extraction is usefully supplemented with sequential extraction, particularly in less polluted soils and in soils that exhibit substantial textural differences within the profiles. - Sequential extraction is most useful with soils with low metal pollutant levels

  13. Effect of sequential isoproturon pulse exposure on Scenedesmus vacuolatus.

    Science.gov (United States)

    Vallotton, Nathalie; Eggen, Rik Ilda Lambertus; Chèvre, Nathalie

    2009-04-01

    Aquatic organisms are typically exposed to fluctuating concentrations of herbicides in streams. To assess the effects on algae of repeated peak exposure to the herbicide isoproturon, we subjected the alga Scenedesmus vacuolatus to two sequential pulse exposure scenarios. Effects on growth and on the inhibition of the effective quantum yield of photosystem II (PSII) were measured. In the first scenario, algae were exposed to short, 5-h pulses at high isoproturon concentrations (400 and 1000 microg/l), each followed by a recovery period of 18 h, while the second scenario consisted of 22.5-h pulses at lower concentrations (60 and 120 microg/l), alternating with short recovery periods (1.5 h). In addition, any changes in the sensitivity of the algae to isoproturon following sequential pulses were examined by determining the growth rate-EC(50) prior to and following exposure. In both exposure scenarios, we found that algal growth and its effective quantum yield were systematically inhibited during the exposures and that these effects were reversible. Sequential pulses to isoproturon could be considered a sequence of independent events. Nevertheless, a consequence of inhibited growth during the repeated exposures is the cumulative decrease in biomass production. Furthermore, in the second scenario, when the sequence of long pulses began to approach a scenario of continuous exposure, a slight increase in the tolerance of the algae to isoproturon was observed. These findings indicated that sequential pulses do affect algae during each pulse exposure, even if algae recover between the exposures. These observations could support an improved risk assessment of fluctuating exposures to reversibly acting herbicides.

  14. Deciphering Intrinsic Inter-subunit Couplings that Lead to Sequential Hydrolysis of F1-ATPase Ring

    Science.gov (United States)

    Dai, Liqiang; Flechsig, Holger; Yu, Jin

    2017-10-01

    The rotary sequential hydrolysis of the metabolic machine F1-ATPase is a prominent feature revealing high coordination among multiple chemical sites on the stator F1 ring, which also contributes to tight coupling between the chemical reaction and central γ-shaft rotation. High-speed AFM experiments discovered that the sequential hydrolysis was maintained on the F1 ring even in the absence of the γ rotor. To explore how the intrinsic sequential performance arises, we computationally investigated essential inter-subunit couplings on the hexameric ring of mitochondrial and bacterial F1. We first reproduced the sequential hydrolysis schemes as experimentally detected, by simulating tri-site ATP hydrolysis cycles on the F1 ring upon kinetically imposing inter-subunit couplings to substantially promote the hydrolysis products release. We found that it is key for certain ATP binding and hydrolysis events to facilitate the neighbor-site ADP and Pi release to support the sequential hydrolysis. The kinetically feasible couplings were then scrutinized through atomistic molecular dynamics simulations as well as coarse-grained simulations, in which we enforced targeted conformational changes for the ATP binding or hydrolysis. Notably, we detected the asymmetrical neighbor-site opening that would facilitate the ADP release upon the enforced ATP binding, and computationally captured the complete Pi release through charge hopping upon the enforced neighbor-site ATP hydrolysis. The ATP-hydrolysis-triggered Pi release revealed in the current TMD simulation confirms a recent prediction made from statistical analyses of single-molecule experimental data in regard to the role ATP hydrolysis plays. Our studies, therefore, elucidate both the concerted chemical kinetics and underlying structural dynamics of the inter-subunit couplings that lead to the rotary sequential hydrolysis of the F1 ring.

  15. Sequential optimization and reliability assessment method for metal forming processes

    International Nuclear Information System (INIS)

    Sahai, Atul; Schramm, Uwe; Buranathiti, Thaweepat; Chen Wei; Cao Jian; Xia, Cedric Z.

    2004-01-01

    Uncertainty is inevitable in any design process. The uncertainty could be due to variations in the geometry of the part, material properties, or the lack of knowledge about the phenomena being modeled itself. Deterministic design optimization does not take uncertainty into account, and worst-case-scenario assumptions lead to vastly over-conservative designs. Probabilistic design, such as reliability-based design and robust design, offers tools for making robust and reliable decisions under the presence of uncertainty in the design process. Probabilistic design optimization often involves a double-loop procedure for optimization and iterative probabilistic assessment. This results in high computational demand, which can be reduced by replacing computationally intensive simulation models with less costly surrogate models and by employing the Sequential Optimization and Reliability Assessment (SORA) method. The SORA method uses a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment. The deterministic optimization and reliability assessment are decoupled in each cycle. This leads to quick improvement of the design from one cycle to the next and an increase in computational efficiency. This paper demonstrates the effectiveness of the Sequential Optimization and Reliability Assessment (SORA) method when applied to designing a sheet metal flanging process. Surrogate models are used as less costly approximations to the computationally expensive finite element simulations.
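
    The single-loop SORA cycle can be sketched compactly under strong simplifying assumptions (independent normal random variables centred on the design variables, a one-step linearized inverse-MPP update, and finite-difference gradients); the sketch is meant only to show the decoupling structure, not the authors' sheet-metal application.

        import numpy as np
        from scipy.optimize import minimize

        def sora(f, g, d0, sigma, beta_t, cycles=5):
            """Schematic SORA: alternate a deterministic optimization with a
            shifted constraint and a reliability assessment that updates the shift.
            f(d): objective;  g(x): limit-state, feasible when g(x) >= 0;
            X ~ N(d, sigma^2), independent components (simplifying assumption)."""
            d, shift = np.asarray(d0, float), np.zeros(len(d0))
            for _ in range(cycles):
                # 1. deterministic optimization with the constraint shifted to the MPP
                res = minimize(f, d, constraints={"type": "ineq",
                                                  "fun": lambda d: g(d - shift)})
                d = res.x
                # 2. reliability assessment: one-step inverse MPP for target index beta_t
                eps = 1e-6
                grad = np.array([(g(d + eps * e) - g(d - eps * e)) / (2 * eps)
                                 for e in np.eye(len(d))])
                u_star = -beta_t * grad * sigma / np.linalg.norm(grad * sigma)
                shift = -u_star * sigma        # shifting vector used in the next cycle
            return d

        # Toy usage: minimize d1 + d2 with limit state g(x) = x1*x2 - 2.
        f = lambda d: d[0] + d[1]
        g = lambda x: x[0] * x[1] - 2.0
        print(sora(f, g, d0=[2.0, 2.0], sigma=np.array([0.1, 0.1]), beta_t=3.0))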

  16. A Theory of Sequential Reciprocity

    NARCIS (Netherlands)

    Dufwenberg, M.; Kirchsteiger, G.

    1998-01-01

    Many experimental studies indicate that people are motivated by reciprocity. Rabin (1993) develops techniques for incorporating such concerns into game theory and economics. His model, however, does not fare well when applied to situations with an interesting dynamic structure (like many

  17. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    KAUST Repository

    Khaki, M.

    2017-07-06

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is done in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering all of Australia. To evaluate the filters' performances and analyze their impact on model simulations, their estimates are validated by independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, i.e. the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), respectively improving the model groundwater estimation errors by 34% and 31% compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
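
    For orientation, a generic stochastic EnKF analysis step of the kind compared in the study might look as follows (the deterministic square-root variants differ in how they avoid the observation perturbation); the observation operator, error covariance and ensemble layout are illustrative assumptions, not the W3RA/GRACE setup itself.

        import numpy as np

        def enkf_update(ensemble, y_obs, H, R, rng=np.random.default_rng(0)):
            """Stochastic ensemble Kalman filter analysis step.

            ensemble : (n_state, n_members) forecast ensemble
            y_obs    : (n_obs,) observation vector (e.g. gridded TWS)
            H        : (n_obs, n_state) linear observation operator
            R        : (n_obs, n_obs) observation error covariance
            Each member is updated with a perturbed observation, which is the
            step the deterministic (square-root) variants avoid."""
            n_state, n_mem = ensemble.shape
            X = ensemble - ensemble.mean(axis=1, keepdims=True)
            HX = H @ ensemble
            HXp = HX - HX.mean(axis=1, keepdims=True)
            P_hh = HXp @ HXp.T / (n_mem - 1) + R          # innovation covariance
            P_xh = X @ HXp.T / (n_mem - 1)                # state-observation cross covariance
            K = P_xh @ np.linalg.inv(P_hh)                # Kalman gain
            perturbed = y_obs[:, None] + rng.multivariate_normal(
                np.zeros(len(y_obs)), R, n_mem).T
            return ensemble + K @ (perturbed - HX)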

  18. Sequential dependencies in magnitude scaling of loudness

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Jesteadt, Walt

    2013-01-01

    Ten normally hearing listeners used a programmable sone-potentiometer knob to adjust the level of a 1000-Hz sinusoid to match the loudness of numbers presented to them in a magnitude production task. Three different power-law exponents (0.15, 0.30, and 0.60) and a log-law with equal steps in dB were used to program the sone-potentiometer. The knob settings systematically influenced the form of the loudness function. Time series analysis was used to assess the sequential dependencies in the data, which increased with increasing exponent and were greatest for the log-law. It would be possible, therefore, to choose knob properties that minimized these dependencies. When the sequential dependencies were removed from the data, the slope of the loudness functions did not change, but the variability decreased. Sequential dependencies were only present when the level of the tone on the previous trial

  19. Attenuation of bulk organic matter, nutrients (N and P), and pathogen indicators during soil passage: Effect of temperature and redox conditions in simulated soil aquifer treatment (SAT)

    KAUST Repository

    Abel, Chol D T

    2012-07-22

    Soil aquifer treatment (SAT) is a cost-effective natural wastewater treatment and reuse technology. It is an environmentally friendly technology that does not require chemical usage and is applicable to both developing and developed countries. However, the presence of organic matter, nutrients, and pathogens poses a major health threat to the population exposed to partially treated wastewater or reclaimed water through SAT. Laboratory-based soil column and batch experiments simulating SAT were conducted to examine the influence of temperature variation and oxidation-reduction (redox) conditions on removal of bulk organic matter, nutrients, and indicator microorganisms using primary effluent. While an average dissolved organic carbon (DOC) removal of 17.7 % was achieved in soil columns at 5 °C, removal at higher temperatures increased by 10 % increments with increase in temperature by 5 °C over the range of 15 to 25 °C. Furthermore, soil column and batch experiments conducted under different redox conditions revealed higher DOC removal in aerobic (oxic) experiments compared to anoxic experiments. Aerobic soil columns exhibited DOC removal 15 % higher than that achieved in the anoxic columns, while aerobic batch showed DOC removal 7.8 % higher than the corresponding anoxic batch experiments. Ammonium-nitrogen removal greater than 99 % was observed at 20 and 25 °C, while 89.7 % was removed at 15 °C, but the removal substantially decreased to 8.8 % at 5 °C. While ammonium-nitrogen was attenuated by 99.9 % in aerobic batch reactors carried out at room temperature, anoxic experiments under similar conditions revealed 12.1 % ammonium-nitrogen reduction, corresponding to an increase in nitrate-nitrogen and a decrease in sulfate concentration. © Springer Science+Business Media B.V. 2012.

  20. The finite sample performance of estimators for mediation analysis under sequential conditional independence

    DEFF Research Database (Denmark)

    Huber, Martin; Lechner, Michael; Mellace, Giovanni

    Using a comprehensive simulation study based on empirical data, this paper investigates the finite sample properties of different classes of parametric and semi-parametric estimators of (natural) direct and indirect causal effects used in mediation analysis under sequential conditional independence...

  1. The finite sample performance of estimators for mediation analysis under sequential conditional independence

    DEFF Research Database (Denmark)

    Huber, Martin; Lechner, Michael; Mellace, Giovanni

    2016-01-01

    Using a comprehensive simulation study based on empirical data, this paper investigates the finite sample properties of different classes of parametric and semi-parametric estimators of (natural) direct and indirect causal effects used in mediation analysis under sequential conditional independence... of the methods often (but not always) varies with the features of the data generating process.
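
    A minimal parametric baseline in this spirit is the linear product-of-coefficients estimator under sequential ignorability, sketched below on simulated data with known effects; it is one simple member of the class of estimators such studies compare, not the specific estimators evaluated in the paper.

        import numpy as np

        def mediation_ols(y, m, d, x):
            """Linear product-of-coefficients estimator of natural direct and
            indirect effects of a binary treatment d, assuming sequential
            conditional independence given covariates x and linear models."""
            X1 = np.column_stack([np.ones_like(d), d, x])          # mediator model
            a = np.linalg.lstsq(X1, m, rcond=None)[0][1]           # effect of d on m
            X2 = np.column_stack([np.ones_like(d), d, m, x])       # outcome model
            coefs = np.linalg.lstsq(X2, y, rcond=None)[0]
            direct, b = coefs[1], coefs[2]
            return direct, a * b                                   # (NDE, NIE)

        # Simulated data with known effects: NDE = 1.0, NIE = 0.5 * 0.8 = 0.4.
        rng = np.random.default_rng(11)
        n = 5000
        x = rng.normal(size=n)
        d = rng.integers(0, 2, size=n).astype(float)
        m = 0.5 * d + 0.3 * x + rng.normal(size=n)
        y = 1.0 * d + 0.8 * m + 0.2 * x + rng.normal(size=n)
        print(mediation_ols(y, m, d, x))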

  2. A Comparison of Ultimate Loads from Fully and Sequentially Coupled Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wendt, Fabian F [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Damiani, Rick R [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-11-14

    This poster summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between two modeling approaches (fully coupled and sequentially coupled) through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.

  3. Dihydroazulene photoswitch operating in sequential tunneling regime

    DEFF Research Database (Denmark)

    Broman, Søren Lindbæk; Lara-Avila, Samuel; Thisted, Christine Lindbjerg

    2012-01-01

    to electrodes so that the electron transport goes by sequential tunneling. To assure weak coupling, the DHA switching kernel is modified by incorporating p-MeSC6H4 end-groups. Molecules are prepared by Suzuki cross-couplings on suitable halogenated derivatives of DHA. The synthesis presents an expansion of our..., incorporating a p-MeSC6H4 anchoring group in one end, has been placed in a silver nanogap. Conductance measurements justify that transport through both DHA (high resistivity) and VHF (low resistivity) forms goes by sequential tunneling. The switching is fairly reversible and reenterable; after more than 20 ON...

  4. Asynchronous Operators of Sequential Logic Venjunction & Sequention

    CERN Document Server

    Vasyukevich, Vadim

    2011-01-01

    This book is dedicated to new mathematical instruments intended for the logical modeling of the memory of digital devices. The case in point is the logic-dynamical operation named venjunction and the venjunctive function, as well as sequention and the sequentional function. Venjunction and sequention operate within the framework of sequential logic. In the form of the corresponding equations, they organically fit the analytical expressions of Boolean algebra. Thus, a sort of symbiosis is formed using elements of asynchronous sequential logic on the one hand and combinational logic on the other hand. So, asynchronous

  5. The effects of sequential attention shifts within visual working memory

    Directory of Open Access Journals (Sweden)

    Qi Li

    2014-09-01

    Previous studies have shown conflicting data as to whether it is possible to sequentially shift spatial attention among visual working memory (VWM) representations. The present study investigated this issue by asynchronously presenting attentional cues during the retention interval of a change detection task. In particular, we focused on two types of sequential attention shifts: (1) orienting attention to one location and then withdrawing attention from it, and (2) switching the focus of attention from one location to another. In Experiment 1, a withdrawal cue was presented after a spatial retro-cue to measure the effect of withdrawing attention. The withdrawal cue significantly reduced the cost of invalid spatial cues, but surprisingly, did not attenuate the benefit of valid spatial cues. This indicates that the withdrawal cue only triggered the activation of facilitative components but not inhibitory components of attention. In Experiment 2, two spatial retro-cues were presented successively to examine the effect of switching the focus of attention. We observed benefits of both the first and second cues in sequential cueing, indicating that participants were able to reorient attention from one location to another within VWM, and that the reallocation of attention did not attenuate memory at the first cued location. In Experiment 3, we found that reducing the validity of the preceding spatial cue did lead to a significant reduction in its benefit. However, performance at the first-cued location was still better than the neutral baseline or performance at the uncued locations, indicating that the first cue benefit might have been preserved both partially under automatic control and partially under voluntary control. Our findings revealed new properties of dynamic attentional control in VWM maintenance.

  6. Eyewitness confidence in simultaneous and sequential lineups: a criterion shift account for sequential mistaken identification overconfidence.

    Science.gov (United States)

    Dobolyi, David G; Dodson, Chad S

    2013-12-01

    Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PsycINFO Database Record (c) 2013 APA, all rights reserved.
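
    The criterion-shift account can be illustrated with an equal-variance Gaussian signal-detection sketch: raising the identification criterion lowers false identifications more sharply than correct identifications. The d' and criterion values below are arbitrary illustrations, not estimates from the experiment.

        from scipy.stats import norm

        def rates(d_prime, criterion):
            """Equal-variance Gaussian signal-detection model: probability that a
            guilty suspect (mean d') or an innocent face (mean 0) exceeds the
            identification criterion."""
            hit = 1.0 - norm.cdf(criterion - d_prime)
            false_alarm = 1.0 - norm.cdf(criterion)
            return hit, false_alarm

        for label, c in [("simultaneous (lax)", 1.0), ("sequential (conservative)", 1.5)]:
            h, fa = rates(d_prime=2.0, criterion=c)
            print(f"{label:28s} hits={h:.2f}  false alarms={fa:.3f}")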

  7. Impact of sequential disorder on the scaling behavior of airplane boarding time

    Science.gov (United States)

    Baek, Yongjoo; Ha, Meesoon; Jeong, Hawoong

    2013-05-01

    Airplane boarding process is an example where disorder properties of the system are relevant to the emergence of universality classes. Based on a simple model, we present a systematic analysis of finite-size effects in boarding time, and propose a comprehensive view of the role of sequential disorder in the scaling behavior of boarding time against the plane size. Using numerical simulations and mathematical arguments, we find how the scaling behavior depends on the number of seat columns and the range of sequential disorder. Our results show that new scaling exponents can arise as disorder is localized to varying extents.
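
    A toy single-aisle boarding simulation in the same spirit (not the authors' model) shows how boarding time can be measured as a function of the queue order, which is where sequential disorder enters; the row count, seats per row and stowing time are arbitrary.

        import numpy as np

        def boarding_time(queue, stow_time=3):
            """Minimal single-aisle boarding model.  `queue[k]` is the assigned row
            of the k-th passenger in line.  Each tick a passenger advances one row
            along the aisle if the cell ahead is free, then spends `stow_time`
            ticks blocking the aisle at their row while stowing luggage.
            Returns the number of ticks until everyone is seated."""
            pos = [-1 - k for k in range(len(queue))]   # waiting positions behind the door
            stow = [stow_time] * len(queue)
            t = 0
            while any(s > 0 for s in stow):
                occupied = {p for p, s in zip(pos, stow) if s > 0}
                for k in range(len(queue)):             # front of the line moves first
                    if stow[k] <= 0:
                        continue                        # already seated
                    if pos[k] < queue[k]:
                        nxt = pos[k] + 1
                        if nxt not in occupied:
                            occupied.discard(pos[k])
                            pos[k] = nxt
                            occupied.add(nxt)
                    else:
                        stow[k] -= 1                    # stowing at own row
                t += 1
            return t

        rng = np.random.default_rng(0)
        rows = rng.permutation(np.repeat(np.arange(30), 6))   # 30 rows x 6 seats, random queue
        print(boarding_time(list(rows)))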

  8. Bootstrap Sequential Determination of the Co-integration Rank in VAR Models

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Rahbæk, Anders; Taylor, A.M. Robert

    with empirical rejection frequencies often very much in excess of the nominal level. As a consequence, bootstrap versions of these tests have been developed. To be useful, however, sequential procedures for determining the co-integrating rank based on these bootstrap tests need to be consistent, in the sense... in the literature by proposing a bootstrap sequential algorithm which we demonstrate delivers consistent co-integration rank estimation for general I(1) processes. Finite sample Monte Carlo simulations show that the proposed procedure performs well in practice.
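
    The sequential rank-selection rule itself is simple and can be sketched as below; p_value(r) stands in for a bootstrap p-value of the rank-r trace test, whose computation (resampling under H(r)) is the substantive part of the paper and is not reproduced here.

        def select_rank(p_value, k, level=0.05):
            """Sequential (bottom-up) determination of the co-integration rank:
            test H(r) for r = 0, 1, ... and return the first rank not rejected.
            `p_value(r)` is a stand-in for a (bootstrap) p-value of the rank-r
            trace test in a k-variate VAR."""
            for r in range(k):
                if p_value(r) >= level:
                    return r
            return k

        # Toy usage with made-up p-values for ranks 0..3: the rule returns 2.
        print(select_rank(lambda r: [0.001, 0.02, 0.30, 0.80][r], k=4))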

  9. Sequential analysis of materials balances. Application to a prospective reprocessing facility

    International Nuclear Information System (INIS)

    Picard, R.

    1986-01-01

    This paper discusses near-real-time accounting in the context of the prospective DWK reprocessing plant. Sensitivity of a standard sequential testing procedure, applied to unfalsified operator data only, is examined with respect to a variety of loss scenarios. It is seen that large inventories preclude high-probability detection of certain protracted losses of material. In Sec. 2, a rough error propagation for the MBA of interest is outlined. Mathematical development for the analysis is given in Sec. 3, and generic aspects of sequential testing are reviewed in Sec. 4. In Sec. 5, results from a simulation to quantify performance of the accounting system are presented
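
    The abstract does not spell out the sequential test used; a common choice in near-real-time accounting is a one-sided CUSUM applied to the standardized material-balance sequence, sketched below as a generic stand-in (the reference value k and threshold h are conventional, illustrative settings, not values from the report).

        import numpy as np

        def cusum(balances, sigma, k=0.5, h=5.0):
            """One-sided Page CUSUM on standardized material balances (MUF values).
            k and h are the reference value and decision threshold in units of the
            measurement standard deviation `sigma`.  Returns the period at which an
            alarm is raised, or None if no alarm occurs."""
            s = 0.0
            for t, mb in enumerate(balances, start=1):
                s = max(0.0, s + mb / sigma - k)
                if s > h:
                    return t
            return None

        # Example: a protracted loss of 0.3*sigma per period starting at period 10.
        rng = np.random.default_rng(3)
        mb = rng.normal(0.0, 1.0, 40)
        mb[10:] += 0.3
        print(cusum(mb, sigma=1.0))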

  10. Interpretability degrees of finitely axiomatized sequential theories

    NARCIS (Netherlands)

    Visser, Albert

    In this paper we show that the degrees of interpretability of finitely axiomatized extensions-in-the-same-language of a finitely axiomatized sequential theory-like Elementary Arithmetic EA, IΣ1, or the Gödel-Bernays theory of sets and classes GB-have suprema. This partially answers a question posed

  11. Interpretability Degrees of Finitely Axiomatized Sequential Theories

    NARCIS (Netherlands)

    Visser, Albert

    2012-01-01

    In this paper we show that the degrees of interpretability of finitely axiomatized extensions-in-the-same-language of a finitely axiomatized sequential theory —like Elementary Arithmetic EA, IΣ1, or the Gödel-Bernays theory of sets and classes GB— have suprema. This partially answers a question

  12. S.M.P. SEQUENTIAL MATHEMATICS PROGRAM.

    Science.gov (United States)

    Ciciarelli, V.; Leonard, Joseph

    A sequential mathematics program beginning with the basic fundamentals on the fourth-grade level is presented. Included are an understanding of our number system, and the basic operations of working with whole numbers--addition, subtraction, multiplication, and division. Common fractions are taught in the fifth, sixth, and seventh grades. A…

  13. Sequential and Simultaneous Logit: A Nested Model.

    NARCIS (Netherlands)

    van Ophem, J.C.M.; Schram, A.J.H.C.

    1997-01-01

    A nested model is presented which has both the sequential and the multinomial logit model as special cases. This model provides a simple test to investigate the validity of these specifications. Some theoretical properties of the model are discussed. In the analysis a distribution function is

  14. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
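
    The multivariate idea (sample the uncertain parameters, re-solve the MDP, and record how often the base-case policy remains optimal) can be sketched on a toy two-state problem as follows; the states, actions, rewards and the Beta prior are invented for illustration and are not from the case study.

        import numpy as np

        def value_iteration(P, r, gamma=0.95, tol=1e-6):
            """P[a][s, s'] transition matrices, r[a][s] rewards; returns the optimal
            deterministic policy of a small discounted MDP."""
            V = np.zeros(r[0].shape[0])
            while True:
                Q = np.array([r[a] + gamma * P[a] @ V for a in range(len(P))])
                V_new = Q.max(axis=0)
                if np.max(np.abs(V_new - V)) < tol:
                    return Q.argmax(axis=0)
                V = V_new

        def build(p):
            # Uncertain parameter p: probability that "treat" moves the patient
            # from the sick state (0) to the healthy, absorbing state (1).
            P = [np.array([[1 - p, p], [0.0, 1.0]]),        # action 0: treat (cost -1)
                 np.array([[0.9, 0.1], [0.0, 1.0]])]        # action 1: wait
            r = [np.array([-1.0, 10.0]), np.array([0.0, 10.0])]
            return P, r

        rng = np.random.default_rng(7)
        base = value_iteration(*build(0.3))                  # base-case policy at p = 0.3
        agree = [np.array_equal(value_iteration(*build(p)), base)
                 for p in rng.beta(3, 7, size=500)]          # p ~ Beta(3, 7), mean 0.3
        print("P(base policy optimal) ≈", np.mean(agree))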

  15. Sequential models for coarsening and missingness

    NARCIS (Netherlands)

    Gill, R.D.; Robins, J.M.

    1997-01-01

    In a companion paper we described what intuitively would seem to be the most general possible way to generate Coarsening at Random mechanisms: a sequential procedure called randomized monotone coarsening. Counterexamples showed that CAR mechanisms exist which cannot be represented in this way. Here we

  16. Sequential motor skill: cognition, perception and action

    NARCIS (Netherlands)

    Ruitenberg, M.F.L.

    2013-01-01

    Discrete movement sequences are assumed to be the building blocks of more complex sequential actions that are present in our everyday behavior. The studies presented in this dissertation address the (neuro)cognitive underpinnings of such movement sequences, in particular in relationship to the role

  17. Sequential decoders for large MIMO systems

    KAUST Repository

    Ali, Konpal S.; Abediseid, Walid; Alouini, Mohamed-Slim

    2014-01-01

    the Sequential Decoder using the Fano Algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity

  18. A framework for sequential multiblock component methods

    NARCIS (Netherlands)

    Smilde, A.K.; Westerhuis, J.A.; Jong, S.de

    2003-01-01

    Multiblock or multiset methods are starting to be used in chemistry and biology to study complex data sets. In chemometrics, sequential multiblock methods are popular; that is, methods that calculate one component at a time and use deflation for finding the next component. In this paper a framework

  19. Classical and sequential limit analysis revisited

    Science.gov (United States)

    Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi

    2018-04-01

    Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity - in the absence of hardening and within a linearized geometrical framework -, sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity - although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.

  20. Sequential Analysis: Hypothesis Testing and Changepoint Detection

    Science.gov (United States)

    2014-07-11

    maintains the flexibility of deciding sooner than the fixed sample size procedure at the price of some lower power [13, 514]. The sequential probability... markets, detection of signals with unknown arrival time in seismology, navigation, radar and sonar signal processing, speech segmentation, and the... skimming cruise missile can yield a significant increase in the probability of raid annihilation. Furthermore, usually detection systems are
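
    The sequential probability ratio test alluded to above can be sketched for Gaussian observations as follows; the means, variance and error rates are illustrative, and the example data are simulated under H1.

        import numpy as np
        from math import log
        from scipy.stats import norm

        def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.05):
            """Wald's sequential probability ratio test for the mean of Gaussian
            observations: H0 mean mu0 vs H1 mean mu1.  Returns the decision and the
            number of observations used, or (None, n) if the sample is exhausted."""
            A, B = log((1 - beta) / alpha), log(beta / (1 - alpha))
            llr = 0.0
            for n, x in enumerate(samples, start=1):
                llr += norm.logpdf(x, mu1, sigma) - norm.logpdf(x, mu0, sigma)
                if llr >= A:
                    return "accept H1", n
                if llr <= B:
                    return "accept H0", n
            return None, len(samples)

        rng = np.random.default_rng(5)
        print(sprt(rng.normal(0.5, 1.0, 200), mu0=0.0, mu1=0.5, sigma=1.0))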

  1. STABILIZED SEQUENTIAL QUADRATIC PROGRAMMING: A SURVEY

    Directory of Open Access Journals (Sweden)

    Damián Fernández

    2014-12-01

    We review the motivation for, the current state of the art in convergence results, and some open questions concerning the stabilized version of the sequential quadratic programming algorithm for constrained optimization. We also discuss the tools required for its local convergence analysis, globalization challenges, and extensions of the method to more general variational problems.

  2. Truly costly sequential search and oligopolistic pricing

    NARCIS (Netherlands)

    Janssen, Maarten C W; Moraga-González, José Luis; Wildenbeest, Matthijs R.

    We modify the paper of Stahl (1989) [Stahl, D.O., 1989. Oligopolistic pricing with sequential consumer search. American Economic Review 79, 700-12] by relaxing the assumption that consumers obtain the first price quotation for free. When all price quotations are costly to obtain, the unique

  3. Zips : mining compressing sequential patterns in streams

    NARCIS (Netherlands)

    Hoang, T.L.; Calders, T.G.K.; Yang, J.; Mörchen, F.; Fradkin, D.; Chau, D.H.; Vreeken, J.; Leeuwen, van M.; Faloutsos, C.

    2013-01-01

    We propose a streaming algorithm, based on the minimal description length (MDL) principle, for extracting non-redundant sequential patterns. For static databases, the MDL-based approach, which selects patterns based on their capacity to compress data rather than their frequency, was shown to be

  4. Adult Word Recognition and Visual Sequential Memory

    Science.gov (United States)

    Holmes, V. M.

    2012-01-01

    Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…

  5. Terminating Sequential Delphi Survey Data Collection

    Science.gov (United States)

    Kalaian, Sema A.; Kasim, Rafa M.

    2012-01-01

    The Delphi survey technique is an iterative mail or electronic (e-mail or web-based) survey method used to obtain agreement or consensus among a group of experts in a specific field on a particular issue through well-designed and systematic multiple sequential rounds of survey administrations. Each of the multiple rounds of the Delphi survey…

  6. The target-to-foils shift in simultaneous and sequential lineups.

    Science.gov (United States)

    Clark, Steven E; Davey, Sherrie L

    2005-04-01

    A theoretical cornerstone in eyewitness identification research is the proposition that witnesses, in making decisions from standard simultaneous lineups, make relative judgments. The present research considers two sources of support for this proposal. An experiment by G. L. Wells (1993) showed that if the target is removed from a lineup, witnesses shift their responses to pick foils, rather than rejecting the lineups, a result we will term a target-to-foils shift. Additional empirical support is provided by results from sequential lineups which typically show higher accuracy than simultaneous lineups, presumably because of a decrease in the use of relative judgments in making identification decisions. The combination of these two lines of research suggests that the target-to-foils shift should be reduced in sequential lineups relative to simultaneous lineups. Results of two experiments showed an overall advantage for sequential lineups, but also showed a target-to-foils shift equal in size for simultaneous and sequential lineups. Additional analyses indicated that the target-to-foils shift in sequential lineups was moderated in part by an order effect and was produced with (Experiment 2) or without (Experiment 1) a shift in decision criterion. This complex pattern of results suggests that more work is needed to understand the processes which underlie decisions in simultaneous and sequential lineups.

  7. Effects of neostriatal 6-OHDA lesion on performance in a rat sequential reaction time task.

    Science.gov (United States)

    Domenger, D; Schwarting, R K W

    2008-10-31

    Work in humans and monkeys has provided evidence that the basal ganglia, and the neurotransmitter dopamine therein, play an important role in sequential learning and performance. Compared to primates, experimental work in rodents is rather sparse, largely due to the fact that tasks comparable to the human ones, especially serial reaction time tasks (SRTT), had been lacking until recently. We have developed a rat model of the SRTT, which allows the study of neural correlates of sequential performance and motor sequence execution. Here, we report the effects of dopaminergic neostriatal lesions, performed using bilateral 6-hydroxydopamine injections, on performance of well-trained rats tested in our SRTT. Sequential behavior was measured in two ways: for one, the effects of small violations of otherwise well trained sequences were examined as a measure of attention and automation. Secondly, sequential versus random performance was compared as a measure of sequential learning. Neurochemically, the lesions led to sub-total dopamine depletions in the neostriatum, which ranged around 60% in the lateral, and around 40% in the medial neostriatum. These lesions led to a general instrumental impairment in terms of reduced speed (response latencies) and response rate, and these deficits were correlated with the degree of striatal dopamine loss. Furthermore, the violation test indicated that the lesion group's responses were less automated. The comparison of random versus sequential responding showed that the lesion group did not retain its superior sequential performance in terms of speed, whereas they did in terms of accuracy. Also, rats with lesions did not improve further in overall performance as compared to pre-lesion values, whereas controls did. These results support previous results that neostriatal dopamine is involved in instrumental behaviour in general. Also, these lesions are not sufficient to completely abolish sequential performance, at least when acquired

  8. Simultaneous sequential monitoring of efficacy and safety led to masking of effects.

    Science.gov (United States)

    van Eekelen, Rik; de Hoop, Esther; van der Tweel, Ingeborg

    2016-08-01

    Usually, sequential designs for clinical trials are applied on the primary (=efficacy) outcome. In practice, other outcomes (e.g., safety) will also be monitored and influence the decision whether to stop a trial early. Implications of simultaneous monitoring on trial decision making are yet unclear. This study examines what happens to the type I error, power, and required sample sizes when one efficacy outcome and one correlated safety outcome are monitored simultaneously using sequential designs. We conducted a simulation study in the framework of a two-arm parallel clinical trial. Interim analyses on two outcomes were performed independently and simultaneously on the same data sets using four sequential monitoring designs, including O'Brien-Fleming and Triangular Test boundaries. Simulations differed in values for correlations and true effect sizes. When an effect was present in both outcomes, competition was introduced, which decreased power (e.g., from 80% to 60%). Futility boundaries for the efficacy outcome reduced overall type I errors as well as power for the safety outcome. Monitoring two correlated outcomes, given that both are essential for early trial termination, leads to masking of true effects. Careful consideration of scenarios must be taken into account when designing sequential trials. Simulation results can help guide trial design. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Simultaneous Versus Sequential Presentation in Testing Recognition Memory for Faces.

    Science.gov (United States)

    Finley, Jason R; Roediger, Henry L; Hughes, Andrea D; Wahlheim, Christopher N; Jacoby, Larry L

    2015-01-01

    Three experiments examined the issue of whether faces could be better recognized in a simultaneous test format (2-alternative forced choice [2AFC]) or a sequential test format (yes-no). All experiments showed that when target faces were present in the test, the simultaneous procedure led to superior performance (area under the ROC curve), whether lures were high or low in similarity to the targets. However, when a target-absent condition was used in which no lures resembled the targets but the lures were similar to each other, the simultaneous procedure yielded higher false alarm rates (Experiments 2 and 3) and worse overall performance (Experiment 3). This pattern persisted even when we excluded responses that participants opted to withhold rather than volunteer. We conclude that for the basic recognition procedures used in these experiments, simultaneous presentation of alternatives (2AFC) generally leads to better discriminability than does sequential presentation (yes-no) when a target is among the alternatives. However, our results also show that the opposite can occur when there is no target among the alternatives. An important future step is to see whether these patterns extend to more realistic eyewitness lineup procedures. The pictures used in the experiment are available online at http://www.press.uillinois.edu/journals/ajp/media/testing_recognition/.

  10. Optimal Energy Management of Multi-Microgrids with Sequentially Coordinated Operations

    Directory of Open Access Journals (Sweden)

    Nah-Oak Song

    2015-08-01

    Full Text Available We propose an optimal electric energy management of a cooperative multi-microgrid community with sequentially coordinated operations. The sequentially coordinated operations are suggested to distribute the computational burden and yet make the optimal 24-hour energy management of multi-microgrids possible. The sequential operations are mathematically modeled to find the optimal operation conditions and illustrated with physical interpretation of how to achieve optimal energy management in the cooperative multi-microgrid community. This global electric energy optimization of the cooperative community is realized by the ancillary internal trading between the microgrids in the cooperative community, which reduces the extra cost from unnecessary external trading by adjusting the electric energy production amounts of combined heat and power (CHP) generators and the amounts of both internal and external electric energy trading of the cooperative community. A simulation study is also conducted to validate the proposed mathematical energy management models.

  11. Sequential decoding of intramuscular EMG signals via estimation of a Markov model.

    Science.gov (United States)

    Monsifrot, Jonathan; Le Carpentier, Eric; Aoustin, Yannick; Farina, Dario

    2014-09-01

    This paper addresses the sequential decoding of intramuscular single-channel electromyographic (EMG) signals to extract the activity of individual motor neurons. A hidden Markov model is derived from the physiological generation of the EMG signal. The EMG signal is described as a sum of several action potential (wavelet) trains, embedded in noise. For each train, the time interval between wavelets is modeled by a process whose parameters are linked to the muscular activity. The parameters of this process are estimated sequentially by a Bayes filter, along with the firing instants. The method was tested on simulated signals and an experimental one, for which the rates of detection and classification of action potentials were above 95% with respect to the reference decomposition. The method works sequentially in time, and is the first to address the problem of intramuscular EMG decomposition online. It has potential applications for man-machine interfacing based on motor neuron activities.

  12. Sequential sampling: a novel method in farm animal welfare assessment.

    Science.gov (United States)

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall
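
    To make the 'basic' two-stage scheme described above concrete, the following Python sketch scores half of a nominal Welfare Quality sample, stops early only when the interim lameness prevalence is clearly below or above two decision thresholds, and otherwise scores the second half before classifying the farm. The thresholds, sample size and pass/fail rule are hypothetical placeholders, not the values used in the study.

        import random

        def two_stage_classification(herd, wq_sample_size,
                                     pass_threshold=0.05, fail_threshold=0.15,
                                     seed=None):
            """Hypothetical two-stage sequential scheme.

            herd -- list of 0/1 lameness scores, one entry per cow (1 = lame).
            Returns the classification and the number of cows actually scored."""
            rng = random.Random(seed)
            half = wq_sample_size // 2
            indices = rng.sample(range(len(herd)), wq_sample_size)
            stage1 = [herd[i] for i in indices[:half]]
            p1 = sum(stage1) / half
            if p1 <= pass_threshold:            # clearly acceptable: stop early
                return "pass", half
            if p1 >= fail_threshold:            # clearly unacceptable: stop early
                return "fail", half
            stage2 = [herd[i] for i in indices[half:]]   # inconclusive: continue
            p_total = (sum(stage1) + sum(stage2)) / wq_sample_size
            return ("fail" if p_total >= fail_threshold else "pass"), wq_sample_size

        # Example: a 200-cow herd with a true lameness prevalence of 10 %
        herd = [1] * 20 + [0] * 180
        print(two_stage_classification(herd, wq_sample_size=60, seed=1))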

  13. Mixed mode and sequential oscillations in the cerium-bromate-4-aminophenol photoreaction

    Energy Technology Data Exchange (ETDEWEB)

    Bell, Jeffrey G.; Wang Jichang [Department of Chemistry and Biochemistry, University of Windsor, Windsor, Ontario N9B 3P4 (Canada)]

    2013-09-15

    Cerium was introduced to the bromate-aminophenol photochemical oscillator to implement coupled autocatalytic feedbacks. Mixed mode and sequential oscillations emerged in the studied system, making it one of the few chemical oscillators known to support consecutive bifurcations in a batch system. The complex reaction behavior showed a strong dependence on the intensity of illumination supplied to the system. Removal of illumination during an oscillatory window affected both the frequency and amplitude of the oscillation but did not fully extinguish it, indicating that the cerium-bromate-4-aminophenol oscillator was photosensitive rather than photo-controlled. A moderate light intensity allowed for a slow evolution of the system, which proved to be critical for the emergence of transient complex oscillations. Variation of individual reaction parameters indicated that complex oscillations develop only within a narrow region, as demonstrated by a phase diagram in the 4-aminophenol/sulfuric acid plane. Simulations provide strong support that the transient complex oscillations observed experimentally arise from the coupling of two autocatalytic cycles.

  14. Empirical Productivity Indices and Indicators

    NARCIS (Netherlands)

    B.M. Balk (Bert)

    2016-01-01

    The empirical measurement of productivity change (or difference) by means of indices and indicators starts with the ex post profit/loss accounts of a production unit. Key concepts are profit, leading to indicators, and profitability, leading to indices. The main task for the productivity

  15. Mobility of radionuclides based on sequential extraction of soils

    International Nuclear Information System (INIS)

    Salbu, B.; Oughton, D.H.; Lien, H.N.; Oestby, G.; Strand, P.

    1992-01-01

    Since 1989, core samples of soil and vegetation from semi-natural pastures have been collected at selected sites in Norway during the growing season. The activity concentrations in soil and vegetation as well as transfer coefficients vary significantly between regions, within regions and even within sampling plot areas. In order to differentiate between mobile and inert fractions of radioactive and stable isotopes of Cs and Sr in soils, samples were extracted sequentially using agents with increasing dissolution power. The reproducibility of the sequential extraction technique is good and the data obtained seem most informative. As the distribution patterns for radioactive and stable isotopes of Cs and Sr are similar, a high degree of isotopic exchange is indicated. Based on easily leachable fractions, mobility factors are calculated. In general the mobility of 90Sr is higher than that of 137Cs. Mobility factors are not significantly influenced by seasonal variations, but a decrease in the mobile fraction in soil with time is indicated. Mobility factors should be considered useful for modelling purposes. (au)

  16. Sequential method for the assessment of innovations in computer assisted industrial processes; Metodo secuencial para evaluacion de innovaciones en procesos industriales asistido por computadora

    Energy Technology Data Exchange (ETDEWEB)

    Suarez Antola, R [Universidad Catolica del Uruguay, Montevideo (Uruguay)]; Artucio, G [Ministerio de Industria Energia y Mineria. Direccion Nacional de Tecnologia Nuclear, Montevideo (Uruguay)]

    1995-08-01

    A sequential method for the assessment of innovations in industrial processes is proposed, using suitable combinations of mathematical modelling and numerical simulation of dynamics. Some advantages and limitations of the proposed method are discussed. tabs.

  17. Sequential Ground Motion Effects on the Behavior of a Base-Isolated RCC Building

    Directory of Open Access Journals (Sweden)

    Zhi Zheng

    2017-01-01

    Full Text Available The sequential ground motion effects on the dynamic responses of reinforced concrete containment (RCC) buildings with typical isolators are studied in this paper. Although the base isolation technique is developed to guarantee the security and integrity of RCC buildings under single earthquakes, knowledge of the seismic behavior of base-isolated RCC buildings under sequential ground motions is deficient. Hence, an ensemble of as-recorded sequential ground motions is employed to study the effect of including aftershocks on the seismic evaluation of base-isolated RCC buildings. The results indicate that base isolation can significantly attenuate the earthquake shaking of the RCC building under not only single earthquakes but also seismic sequences. It is also found that the adverse aftershock effect on the RCC can be reduced due to the base isolation applied to the RCC. More importantly, the study indicates that disregarding aftershocks can induce significant underestimation of the isolator displacement for base-isolated RCC buildings.

  18. Impact of Diagrams on Recalling Sequential Elements in Expository Texts.

    Science.gov (United States)

    Guri-Rozenblit, Sarah

    1988-01-01

    Examines the instructional effectiveness of abstract diagrams on recall of sequential relations in social science textbooks. Concludes that diagrams assist significantly the recall of sequential relations in a text and decrease significantly the rate of order mistakes. (RS)

  19. Updated and standardized genome-scale reconstruction of Mycobacterium tuberculosis H37Rv, iEK1011, simulates flux states indicative of physiological conditions

    DEFF Research Database (Denmark)

    Kavvas, Erol S.; Seif, Yara; Yurkovich, James T.

    2018-01-01

    previous M. tuberculosis H37Rv genome-scale reconstructions. We functionally assess iEK1011 against previous models and show that the model increases correct gene essentiality predictions on two different experimental datasets by 6% (53% to 60%) and 18% (60% to 71%), respectively. We compared simulations...

  20. Fetal Echocardiography and Indications

    Directory of Open Access Journals (Sweden)

    Melih Atahan Güven

    2008-09-01

    Full Text Available Congenital heart diseases are encountered in 0.8% of live births and are among the most frequently diagnosed malformations. At least half of these anomalies end up with death or require surgical interventions and are responsible for 30% of the perinatal mortality. Fetal echocardiography is the sum of knowledge, skill and orientation rather than knowing the embryologic details of the fetal heart. The purpose of fetal echocardiography is to document the presence of normal fetal cardiac anatomy and rhythm in the high-risk group and to define the anomaly and arrhythmia if present. A certain sequence should be followed during the evaluation of the fetal heart. Sequential segmental analysis (SSA) and basic definition terminology made it possible to determine many complex cardiac anomalies during the prenatal period. By the end of the 1970s, Shinebourne started using sequential segmental analysis for fetal cardiac evaluation and today, prenatal diagnosis of congenital heart disease is possible without any confusion. In this manner, the whole fetal heart can be evaluated as the relation of three segments (atria, ventricles and the great arteries) with each other, irrespective of the complexity of a possible cardiac anomaly. Presence of increased nuchal thickness during early gestation and an abnormal four-chamber view during ultrasonography by the obstetrician presents a clear indication for fetal echocardiography; however, one should keep in mind that 80-90% of the babies born with a congenital heart disease do not have a familial or maternal risk factor. In addition, it should be remembered that expectant mothers with diabetes mellitus pose an indication for fetal echocardiography.

  1. Transaction costs and sequential bargaining in transferable discharge permit markets.

    Science.gov (United States)

    Netusil, N R; Braden, J B

    2001-03-01

    Market-type mechanisms have been introduced and are being explored for various environmental programs. Several existing programs, however, have not attained the cost savings that were initially projected. Modeling that acknowledges the role of transaction costs and the discrete, bilateral, and sequential manner in which trades are executed should provide a more realistic basis for calculating potential cost savings. This paper presents empirical evidence on potential cost savings by examining a market for the abatement of sediment from farmland. Empirical results based on a market simulation model find no statistically significant change in mean abatement costs under several transaction cost levels when contracts are randomly executed. An alternative method of contract execution, gain-ranked, yields similar results. At the highest transaction cost level studied, trading reduces the total cost of compliance relative to a uniform standard that reflects current regulations.

  2. A one-sided sequential test

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A.; Lux, I. [Hungarian Academy of Sciences, Budapest (Hungary). Atomic Energy Research Inst.]

    1996-04-16

    The applicability of the classical sequential probability ratio test (SPRT) for early failure detection problems is limited by the fact that there is an extra time delay between the occurrence of the failure and its first recognition. Chien and Adams developed a method to minimize this time for the case when the problem can be formulated as testing the mean value of a Gaussian signal. In our paper we propose a procedure that can be applied to both mean and variance testing and that minimizes the time delay. The method is based on a special parametrization of the classical SPRT. The one-sided sequential tests (OSST) can reproduce the results of the Chien-Adams test when applied to mean values. (author).
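
    For readers who want to see the mechanics, the sketch below implements a plain Wald SPRT for the mean of a Gaussian signal with known variance. The special parametrization that yields the one-sided test of the paper is not reproduced here; the thresholds are the standard Wald approximations and all numerical values are illustrative.

        import math
        import random

        def sprt_gaussian_mean(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
            """Wald SPRT for H0: mean = mu0 versus H1: mean = mu1 (known sigma).
            Returns the accepted hypothesis and the number of samples used."""
            lower = math.log(beta / (1.0 - alpha))     # accept H0 below this
            upper = math.log((1.0 - beta) / alpha)     # accept H1 above this
            llr, n = 0.0, 0
            for x in samples:
                n += 1
                # log-likelihood-ratio increment for one Gaussian observation
                llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma ** 2
                if llr <= lower:
                    return "H0", n
                if llr >= upper:
                    return "H1", n
            return "undecided", n

        # Illustrative run: the monitored signal mean has shifted to 0.5
        rng = random.Random(0)
        stream = (rng.gauss(0.5, 1.0) for _ in range(10000))
        print(sprt_gaussian_mean(stream, mu0=0.0, mu1=0.5, sigma=1.0))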

  3. Documentscape: Intertextuality, Sequentiality & Autonomy at Work

    DEFF Research Database (Denmark)

    Christensen, Lars Rune; Bjørn, Pernille

    2014-01-01

    On the basis of an ethnographic field study, this article introduces the concept of documentscape to the analysis of document-centric work practices. The concept of documentscape refers to the entire ensemble of documents in their mutual intertextual interlocking. Providing empirical data from ... a global software development case, we show how hierarchical structures and sequentiality across the interlocked documents are critical to how actors make sense of the work of others and what to do next in a geographically distributed setting. Furthermore, we found that while each document is created ... as part of a quasi-sequential order, this characteristic does not make the document, as a single entity, into a stable object. Instead, we found that the documents were malleable and dynamic while suspended in intertextual structures. Our concept of documentscape points to how the hierarchical structure

  4. A minimax procedure in the context of sequential mastery testing

    NARCIS (Netherlands)

    Vos, Hendrik J.

    1999-01-01

    The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master or a nonmaster, or to continue sampling and administering another random test item. The framework of minimax sequential decision theory

  5. Applying the minimax principle to sequential mastery testing

    NARCIS (Netherlands)

    Vos, Hendrik J.

    2002-01-01

    The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master, a nonmaster, or to continue sampling and administering another random item. The framework of minimax sequential decision theory (minimum

  6. Optimal Sequential Rules for Computer-Based Instruction.

    Science.gov (United States)

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  7. On Locally Most Powerful Sequential Rank Tests

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 36, č. 1 (2017), s. 111-125 ISSN 0747-4946 R&D Projects: GA ČR GA17-07384S Grant - others: Nadační fond na podporu vědy (CZ) Neuron Institutional support: RVO:67985807 Keywords: nonparametric tests * sequential ranks * stopping variable Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.339, year: 2016

  8. Sequential pattern recognition by maximum conditional informativity

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří

    2014-01-01

    Roč. 45, č. 1 (2014), s. 39-45 ISSN 0167-8655 R&D Projects: GA ČR(CZ) GA14-02652S; GA ČR(CZ) GA14-10911S Keywords: Multivariate statistics * Statistical pattern recognition * Sequential decision making * Product mixtures * EM algorithm * Shannon information Subject RIV: IN - Informatics, Computer Science Impact factor: 1.551, year: 2014 http://library.utia.cas.cz/separaty/2014/RO/grim-0428565.pdf

  9. Comparing two Poisson populations sequentially: an application

    International Nuclear Information System (INIS)

    Halteman, E.J.

    1986-01-01

    Rocky Flats Plant in Golden, Colorado monitors each of its employees for radiation exposure. Excess exposure is detected by comparing the means of two Poisson populations. A sequential probability ratio test (SPRT) is proposed as a replacement for the fixed-sample normal approximation test. A uniformly most efficient SPRT exists; however, logistics suggest using a truncated SPRT. The truncated SPRT is evaluated in detail and shown to possess large potential savings in average time spent by employees in the monitoring process
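
    A standard way to run such a comparison, and presumably the flavour of test meant here, is to condition on the total count, under which the count from the monitored population is binomial. The sketch below implements that conditional SPRT with a simple truncation rule; the design parameters are illustrative and not those of the plant's actual procedure.

        import math

        def poisson_comparison_sprt(pairs, rho=2.0, alpha=0.05, beta=0.05, max_pairs=50):
            """Conditional (binomial) SPRT for H0: lambda1 = lambda2 versus
            H1: lambda1 = rho * lambda2, assuming equal monitoring periods.
            Given running totals x1 and x2, x1 | (x1 + x2) is binomial with
            success probability 0.5 under H0 and rho / (1 + rho) under H1.

            pairs -- iterable of (count1, count2), one pair per period."""
            p0, p1 = 0.5, rho / (1.0 + rho)
            a = math.log(beta / (1.0 - alpha))
            b = math.log((1.0 - beta) / alpha)
            x1 = x2 = 0
            for k, (c1, c2) in enumerate(pairs, start=1):
                x1 += c1
                x2 += c2
                llr = x1 * math.log(p1 / p0) + x2 * math.log((1.0 - p1) / (1.0 - p0))
                if llr <= a:
                    return "no excess exposure (H0)", k
                if llr >= b:
                    return "excess exposure (H1)", k
                if k >= max_pairs:              # truncation: decide by the sign
                    return ("excess exposure (H1)" if llr > 0
                            else "no excess exposure (H0)"), k
            return "undecided", k

        # Illustrative data: monitored counts vs. control counts per period
        print(poisson_comparison_sprt([(3, 1), (4, 2), (5, 1), (6, 2)]))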

  10. Heat accumulation during sequential cortical bone drilling.

    Science.gov (United States)

    Palmisano, Andrew C; Tai, Bruce L; Belmont, Barry; Irwin, Todd A; Shih, Albert; Holmes, James R

    2016-03-01

    Significant research exists regarding heat production during single-hole bone drilling. No published data exist regarding repetitive sequential drilling. This study elucidates the phenomenon of heat accumulation for sequential drilling with both Kirschner wires (K wires) and standard two-flute twist drills. It was hypothesized that cumulative heat would result in a higher temperature with each subsequent drill pass. Nine holes in a 3 × 3 array were drilled sequentially on moistened cadaveric tibia bone kept at body temperature (about 37 °C). Four thermocouples were placed at the center of four adjacent holes and 2 mm below the surface. A battery-driven hand drill guided by a servo-controlled motion system was used. Six samples were drilled with each tool (2.0 mm K wire and 2.0 and 2.5 mm standard drills). K wire drilling increased temperature from 5 °C at the first hole to 20 °C at holes 6 through 9. A similar trend was found in standard drills with less significant increments. The maximum temperatures of both tools increased over successive holes, while the difference between drill sizes was found to be insignificant (P > 0.05). In conclusion, heat accumulated during sequential drilling, with size difference being insignificant. K wire produced more heat than its twist-drill counterparts. This study has demonstrated the heat accumulation phenomenon and its significant effect on temperature. Maximizing the drilling field and reducing the number of drill passes may decrease bone injury. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  11. Sequential neural models with stochastic layers

    DEFF Research Database (Denmark)

    Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

    2016-01-01

    How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural...... generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...

  12. Waste indicators

    Energy Technology Data Exchange (ETDEWEB)

    Dall, O.; Lassen, C.; Hansen, E. [Cowi A/S, Lyngby (Denmark)]

    2003-07-01

    The Waste Indicator Project focuses on methods to evaluate the efficiency of waste management. The project proposes the use of three indicators for resource consumption, primary energy and landfill requirements, based on the life-cycle principles applied in the EDIP Project. Trial runs are made with the indicators on paper, glass packaging and aluminium, and two models are identified for mapping the Danish waste management, of which the least extensive focuses on real and potential savings. (au)

  13. Waste indicators

    International Nuclear Information System (INIS)

    Dall, O.; Lassen, C.; Hansen, E.

    2003-01-01

    The Waste Indicator Project focuses on methods to evaluate the efficiency of waste management. The project proposes the use of three indicators for resource consumption, primary energy and landfill requirements, based on the life-cycle principles applied in the EDIP Project. Trial runs are made with the indicators on paper, glass packaging and aluminium, and two models are identified for mapping the Danish waste management, of which the least extensive focuses on real and potential savings. (au)

  14. Quality indicators

    DEFF Research Database (Denmark)

    Hjorth-Andersen, Christian

    1991-01-01

    In recent literature it has been suggested that consumers need have no knowledge of product quality as a number of quality indicators (or signals) may be used as substitutes. Very little attention has been paid to the empirical verification of these studies. The present paper is devoted ... to the issue of how well these indicators perform, using market data provided by consumer magazines from 3 countries. The results strongly indicate that price is a poor quality indicator. The paper also presents some evidence which suggests that seller reputation and easily observable characteristics are also...

  15. Sequential processing deficits in schizophrenia: relationship to neuropsychology and genetics.

    Science.gov (United States)

    Hill, S Kristian; Bjorkquist, Olivia; Carrathers, Tarra; Roseberry, Jarett E; Hochberger, William C; Bishop, Jeffrey R

    2013-12-01

    Utilizing a combination of neuropsychological and cognitive neuroscience approaches may be essential for characterizing cognitive deficits in schizophrenia and eventually assessing cognitive outcomes. This study was designed to compare the stability of select exemplars for these approaches and their correlations in schizophrenia patients with stable treatment and clinical profiles. Reliability estimates for serial order processing were comparable to neuropsychological measures and indicate that experimental serial order processing measures may be less susceptible to practice effects than traditional neuropsychological measures. Correlations were moderate and consistent with a global cognitive factor. Exploratory analyses indicated a potentially critical role of the Met allele of the Catechol-O-methyltransferase (COMT) Val158Met polymorphism in externally paced sequential recall. Experimental measures of serial order processing may reflect frontostriatal dysfunction and be a useful supplement to large neuropsychological batteries. © 2013.

  16. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    Science.gov (United States)

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently allocate newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence of changing the prior distributions on the design. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when total sample size is fixed, the proposed design can obtain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
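
    As a rough illustration of the randomization-rate idea, the snippet below computes the allocation fraction that approximately minimizes the variance of a difference-in-means test statistic (the classical Neyman allocation, used here as a stand-in for the paper's algorithm) and can be re-evaluated each time new outcomes arrive.

        import statistics

        def neyman_allocation(outcomes_arm1, outcomes_arm2):
            """Fraction of the next patients to allocate to arm 1 so that the
            variance of the difference-in-means statistic is (approximately)
            minimised: pi = s1 / (s1 + s2). Falls back to 0.5 with sparse data."""
            if len(outcomes_arm1) < 2 or len(outcomes_arm2) < 2:
                return 0.5
            s1 = statistics.stdev(outcomes_arm1)
            s2 = statistics.stdev(outcomes_arm2)
            if s1 + s2 == 0:
                return 0.5
            return s1 / (s1 + s2)

        # Example: arm 1 responses are more variable, so it receives a larger share
        arm1 = [4.1, 7.9, 2.5, 9.0, 5.5]
        arm2 = [5.0, 5.2, 4.9, 5.1, 5.0]
        print(f"allocate {neyman_allocation(arm1, arm2):.2f} of new patients to arm 1")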

  17. General indicators

    International Nuclear Information System (INIS)

    2003-01-01

    This document summarizes the main 2002 energy indicators for France. A first table lists the evolution of general indicators between 1973 and 2002: energy bill, price of imported crude oil, energy independence, primary and final energy consumption. The main 2002 results are detailed separately for natural gas, petroleum and coal (consumption, imports, exports, production, stocks, prices). (J.S.)

  18. Involving young people in decision making about sequential cochlear implantation.

    Science.gov (United States)

    Ion, Rebecca; Cropper, Jenny; Walters, Hazel

    2013-11-01

    The National Institute for Health and Clinical Excellence guidelines recommended young people who currently have one cochlear implant be offered assessment for a second, sequential implant, due to the reported improvements in sound localization and speech perception in noise. The possibility and benefits of group information and counselling assessments were considered. Previous research has shown advantages of group sessions involving young people and their families and such groups which also allow young people opportunity to discuss their concerns separately to their parents/guardians are found to be 'hugely important'. Such research highlights the importance of involving children in decision-making processes. Families considering a sequential cochlear implant were invited to a group information/counselling session, which included time for parents and children to meet separately. Fourteen groups were held with approximately four to five families in each session, totalling 62 patients. The sessions were facilitated by the multi-disciplinary team, with a particular psychological focus in the young people's session. Feedback from families has demonstrated positive support for this format. Questionnaire feedback, to which nine families responded, indicated that seven preferred the group session to an individual session and all approved of separate groups for the child and parents/guardians. Overall the group format and psychological focus were well received in this typically surgical setting and emphasized the importance of involving the young person in the decision-making process. This positive feedback also opens up the opportunity to use a group format in other assessment processes.

  19. Sequentially solution-processed, nanostructured polymer photovoltaics using selective solvents

    KAUST Repository

    Kim, Do Hwan; Mei, Jianguo; Ayzner, Alexander L.; Schmidt, Kristin; Giri, Gaurav; Appleton, Anthony L.; Toney, Michael F.; Bao, Zhenan

    2014-01-01

    We demonstrate high-performance sequentially solution-processed organic photovoltaics (OPVs) with a power conversion efficiency (PCE) of 5% for blend films using a donor polymer based on the isoindigo-bithiophene repeat unit (PII2T-C10C8) and a fullerene derivative [6,6]-phenyl-C[71]-butyric acid methyl ester (PC71BM). This has been accomplished by systematically controlling the swelling and intermixing processes of the layer with various processing solvents during deposition of the fullerene. We find that among the solvents used for fullerene deposition that primarily swell but do not re-dissolve the polymer underlayer, there were significant microstructural differences between chlorobenzene and o-dichlorobenzene solvents (CB and ODCB, respectively). Specifically, we show that the polymer crystallite orientation distribution in films where ODCB was used to cast the fullerene is broad. This indicates that out-of-plane charge transport through a tortuous transport network is relatively efficient due to a large density of inter-grain connections. In contrast, using CB results in primarily edge-on oriented polymer crystallites, which leads to diminished out-of-plane charge transport. We correlate these microstructural differences with photocurrent measurements, which clearly show that casting the fullerene out of ODCB leads to significantly enhanced power conversion efficiencies. Thus, we believe that tuning the processing solvents used to cast the electron acceptor in sequentially-processed devices is a viable way to controllably tune the blend film microstructure. © 2014 The Royal Society of Chemistry.

  20. Sequential-Simultaneous Processing and Reading Skills in Primary Grade Children.

    Science.gov (United States)

    McRae, Sandra G.

    1986-01-01

    The study examined relationships between two modes of information processing, simultaneous and sequential, and two sets of reading skills, word recognition and comprehension, among 40 second and third grade students. Results indicated there is a relationship between simultaneous processing and reading comprehension. (Author)

  1. Solar Indices

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Collection includes a variety of indices related to solar activity contributed by a number of national and private solar observatories located worldwide. This...

  2. MUSCLE OR MOTIVATION? A STOP SIGNAL STUDY ON THE EFFECTS OF SEQUENTIAL COGNITIVE CONTROL

    Directory of Open Access Journals (Sweden)

    Hilde M. Huizenga

    2012-05-01

    Full Text Available Performance in cognitive control tasks deteriorates when these tasks are performed together with other tasks that also require cognitive control, that is, if simultaneous cognitive control is required. Surprisingly, this decrease in performance is also observed if tasks are preceded by other cognitive control tasks, that is, if sequential cognitive control is required. The common explanation for the latter finding is that previous acts of cognitive control deplete a common resource, just like a muscle becomes fatigued after repeated use. An alternative explanation, however, has also been put forward, namely that repeated acts of cognitive control reduce the motivation to match allocated resources to required resources. In this paper we formalize these two accounts, the muscle and the motivation account, and show that they yield differential predictions on the interaction between simultaneous and sequential cognitive control. Such an interaction is not predicted by the muscle account, whereas it is predicted by the motivation account. These predictions were tested in a paradigm where participants had to perform a series of stop-signal tasks that varied both in their demands on simultaneous control and in their demands on sequential control. This paradigm, combined with a multilevel analysis, offered the possibility to test the differential predictions directly. Results of two studies indicate that an interaction between simultaneous and sequential cognitive control is present. Therefore it is concluded that effects of sequential cognitive control are best explained by the motivation account.

  3. Sequential Extraction Versus Comprehensive Characterization of Heavy Metal Species in Brownfield Soils

    Energy Technology Data Exchange (ETDEWEB)

    Dahlin, Cheryl L.; Williamson, Connie A.; Collins, W. Keith; Dahlin, David C.

    2002-06-01

    The applicability of sequential extraction as a means to determine species of heavy metals was examined by a study on soil samples from two Superfund sites: the National Lead Company site in Pedricktown, NJ, and the Roebling Steel, Inc., site in Florence, NJ. Data from a standard sequential extraction procedure were compared to those from a comprehensive study that combined optical- and scanning-electron microscopy, X-ray diffraction, and chemical analyses. The study shows that larger particles of contaminants, encapsulated contaminants, and/or man-made materials such as slags, coke, metals, and plastics are subject to encasement, non-selectivity, and redistribution in the sequential extraction process. The results indicate that standard sequential extraction procedures that were developed for characterizing species of contaminants in river sediments may be unsuitable for stand-alone determinative evaluations of contaminant species in industrial-site materials. However, if employed as part of a comprehensive, site-specific characterization study, sequential extraction could be a very useful tool.

  4. Spatial updating grand canonical Monte Carlo algorithms for fluid simulation: generalization to continuous potentials and parallel implementation.

    Science.gov (United States)

    O'Keeffe, C J; Ren, Ruichao; Orkoulas, G

    2007-11-21

    Spatial updating grand canonical Monte Carlo algorithms are generalizations of random and sequential updating algorithms for lattice systems to continuum fluid models. The elementary steps, insertions or removals, are constructed by generating points in space either at random (random updating) or in a prescribed order (sequential updating). These algorithms have previously been developed only for systems of impenetrable spheres for which no particle overlap occurs. In this work, spatial updating grand canonical algorithms are generalized to continuous, soft-core potentials to account for overlapping configurations. Results on two- and three-dimensional Lennard-Jones fluids indicate that spatial updating grand canonical algorithms, both random and sequential, converge faster than standard grand canonical algorithms. Spatial algorithms based on sequential updating not only exhibit the fastest convergence but also are ideal for parallel implementation due to the absence of strict detailed balance and the nature of the updating that minimizes interprocessor communication. Parallel simulation results for three-dimensional Lennard-Jones fluids show a substantial reduction of simulation time for systems of moderate and large size. The efficiency improvement by parallel processing through domain decomposition is always in addition to the efficiency improvement by sequential updating.
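
    The contrast between random and sequential spatial updating can be sketched in a few dozen lines. The toy 2-D Lennard-Jones grand canonical simulation below visits prescribed grid points in raster order and attempts an insertion there or a removal of the nearest particle; it uses the textbook acceptance ratios, does not reproduce the cell bookkeeping or balance conditions of the published algorithms, and all parameters are illustrative.

        import math
        import random

        L, T, MU = 8.0, 2.0, -3.0                # box length, temperature, chemical potential
        BETA, Z = 1.0 / T, math.exp(MU / T)      # inverse temperature, activity (Lambda = 1)
        RCUT2 = 2.5 ** 2                         # squared interaction cutoff

        def min_image(d):
            return (d + L / 2.0) % L - L / 2.0   # periodic minimum-image displacement

        def dist2(p, q):
            return min_image(p[0] - q[0]) ** 2 + min_image(p[1] - q[1]) ** 2

        def lj(r2):
            if r2 == 0.0 or r2 > RCUT2:          # guard coincident points and apply cutoff
                return 0.0
            inv6 = (1.0 / r2) ** 3
            return 4.0 * (inv6 * inv6 - inv6)

        def interaction(pos, particles, skip=None):
            return sum(lj(dist2(pos, q)) for i, q in enumerate(particles) if i != skip)

        def sequential_sweep(particles, grid_n=16):
            """Visit grid cells in raster order (sequential spatial updating)."""
            cell = L / grid_n
            for ix in range(grid_n):
                for iy in range(grid_n):
                    pos = ((ix + random.random()) * cell, (iy + random.random()) * cell)
                    if random.random() < 0.5 or not particles:      # insertion attempt
                        du = interaction(pos, particles)
                        acc = Z * L * L / (len(particles) + 1) * math.exp(-BETA * du)
                        if random.random() < acc:
                            particles.append(pos)
                    else:                                           # removal attempt
                        k = min(range(len(particles)),
                                key=lambda i: dist2(pos, particles[i]))
                        uk = interaction(particles[k], particles, skip=k)
                        acc = len(particles) / (Z * L * L) * math.exp(BETA * uk)
                        if random.random() < acc:
                            particles.pop(k)

        particles = []
        for _ in range(200):
            sequential_sweep(particles)
        print("particle number after 200 sweeps:", len(particles))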

  5. Influence of Sequential vs. Simultaneous Dual-Task Exercise Training on Cognitive Function in Older Adults.

    Science.gov (United States)

    Tait, Jamie L; Duckham, Rachel L; Milte, Catherine M; Main, Luana C; Daly, Robin M

    2017-01-01

    Emerging research indicates that exercise combined with cognitive training may improve cognitive function in older adults. Typically these programs have incorporated sequential training, where exercise and cognitive training are undertaken separately. However, simultaneous or dual-task training, where cognitive and/or motor training are performed simultaneously with exercise, may offer greater benefits. This review summary provides an overview of the effects of combined simultaneous vs. sequential training on cognitive function in older adults. Based on the available evidence, there are inconsistent findings with regard to the cognitive benefits of sequential training in comparison to cognitive or exercise training alone. In contrast, simultaneous training interventions, particularly multimodal exercise programs in combination with secondary tasks regulated by sensory cues, have significantly improved cognition in both healthy older and clinical populations. However, further research is needed to determine the optimal characteristics of a successful simultaneous training program for optimizing cognitive function in older people.

  6. Consistency of self-reported alcohol consumption on randomized and sequential alcohol purchase tasks

    Directory of Open Access Journals (Sweden)

    Michael Amlung

    2012-07-01

    Full Text Available Behavioral economic demand for addictive substances is commonly assessed via purchase tasks that measure estimated drug consumption at a range of prices. Purchase tasks typically use escalating prices in sequential order, which may influence performance by providing explicit price reference points. This study investigated the consistency of value preferences on two alcohol purchase tasks (APTs) that used either a randomized or sequential price order (price range: free to $30 per drink) in a sample of ninety-one young adult monthly drinkers. Randomization of prices significantly reduced relative response consistency (p < .01), although absolute consistency was high for both versions (>95%). Self-reported alcohol consumption across prices and indices of demand were highly similar across versions, although a few notable exceptions were found. These results suggest generally high consistency and overlapping performance between randomized and sequential price assessment. Implications for the behavioral economics literature and priorities for future research are discussed.

  7. Hemodynamic analysis of sequential graft from right coronary system to left coronary system.

    Science.gov (United States)

    Wang, Wenxin; Mao, Boyan; Wang, Haoran; Geng, Xueying; Zhao, Xi; Zhang, Huixia; Xie, Jinsheng; Zhao, Zhou; Lian, Bo; Liu, Youjun

    2016-12-28

    Sequential and single grafting are two surgical procedures of coronary artery bypass grafting. However, it remains unclear if the sequential graft can be used between the right and left coronary artery system. The purpose of this paper is to clarify the possibility of right coronary artery system anastomosis to the left coronary system. A patient-specific 3D model was first reconstructed based on coronary computed tomography angiography (CCTA) images. Two different grafts, the normal multi-graft (Model 1) and the novel multi-graft (Model 2), were then implemented on this patient-specific model using virtual surgery techniques. In Model 1, the single graft was anastomosed to the right coronary artery (RCA) and the sequential graft was adopted to anastomose the left anterior descending (LAD) and left circumflex artery (LCX). In Model 2, by contrast, the single graft was anastomosed to the LAD and the sequential graft was adopted to anastomose the RCA and LCX. A zero-dimensional/three-dimensional (0D/3D) coupling method was used to realize the multi-scale simulation of both the pre-operative and two post-operative models. Flow rates in the coronary artery and grafts were obtained. The hemodynamic parameters were also shown, including wall shear stress (WSS) and oscillatory shear index (OSI). The area of low WSS and OSI in Model 1 was much less than that in Model 2. Model 1 shows favorable hemodynamic modifications which may enhance the long-term patency of grafts. The anterior segments of the sequential graft have better long-term patency than the posterior segments. With a rational spatial position of the heart vessels, the last anastomosis of the sequential graft should be connected to the main branch.

  8. On Locally Most Powerful Sequential Rank Tests

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 36, č. 1 (2017), s. 111-125 ISSN 0747-4946 R&D Projects: GA ČR GA17-07384S Grant - others: Nadační fond na podporu vědy (CZ) Neuron Institutional support: RVO:67985556 Keywords: nonparametric tests * sequential ranks * stopping variable Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.339, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/kalina-0474065.pdf

  9. Decoding restricted participation in sequential electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Knaut, Andreas; Paschmann, Martin

    2017-06-15

    Restricted participation in sequential markets may cause high price volatility and welfare losses. In this paper we therefore analyze the drivers of restricted participation in the German intraday auction, which is a short-term electricity market with quarter-hourly products. Applying a fundamental electricity market model with 15-minute temporal resolution, we identify the lack of sub-hourly market coupling as the most relevant driver of restricted participation. We derive a proxy for price volatility and find that full market coupling may cause quarter-hourly price volatility to decrease by a factor of close to four.

  10. THE DEVELOPMENT OF SPECIAL SEQUENTIALLY-TIMED

    Directory of Open Access Journals (Sweden)

    Stanislav LICHOROBIEC

    2016-06-01

    Full Text Available This article documents the development of the noninvasive use of explosives during the destruction of ice mass in river flows. The system of special sequentially-timed charges utilizes the increase in efficiency of cutting charges by covering them with bags filled with water, while simultaneously increasing the effect of the entire system of timed charges. Timing, spatial combinations during placement, and the linking of these charges results in the loosening of ice barriers on a frozen waterway, while at the same time regulating the size of the ice fragments. The developed charges will increase the operability and safety of IRS units.

  11. Pass-transistor asynchronous sequential circuits

    Science.gov (United States)

    Whitaker, Sterling R.; Maki, Gary K.

    1989-01-01

    Design methods for asynchronous sequential pass-transistor circuits, which result in circuits that are hazard- and critical-race-free and which have added degrees of freedom for the input signals, are discussed. The design procedures are straightforward and easy to implement. Two single-transition-time state assignment methods are presented, and hardware bounds for each are established. A surprising result is that the hardware realization for each next-state variable and output variable is identical for a given flow table. Thus, a state machine with N states and M outputs can be constructed using a single layout replicated N + M times.

  12. A sequential/parallel track selector

    CERN Document Server

    Bertolino, F; Bressani, Tullio; Chiavassa, E; Costa, S; Dellacasa, G; Gallio, M; Musso, A

    1980-01-01

    A medium-speed (approximately 1 μs) hardware pre-analyzer for the selection of events detected in four planes of drift chambers in the magnetic field of the Omicron Spectrometer at the CERN SC is described. Specific geometrical criteria determine patterns of hits in the four planes of vertical wires that have to be recognized and that are stored as patterns of '1's in random access memories. Pairs of good hits are found sequentially; the RAMs are then used as look-up tables. (6 refs).

  13. Boundary conditions in random sequential adsorption

    Science.gov (United States)

    Cieśla, Michał; Ziff, Robert M.

    2018-04-01

    The influence of different boundary conditions on the density of random packings of disks is studied. Packings are generated using the random sequential adsorption algorithm with three different types of boundary conditions: periodic, open, and wall. It is found that the finite size effects are smallest for periodic boundary conditions, as expected. On the other hand, in the case of open and wall boundaries it is possible to introduce an effective packing size and a constant correction term to significantly improve the packing densities.
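
    For orientation, here is a minimal Python sketch of random sequential adsorption of equal disks in a square box with periodic (minimum-image) boundaries; the stopping rule and parameter values are illustrative only, and for large systems the reported packing fraction should approach the known RSA jamming limit of roughly 0.547.

        import math
        import random

        def rsa_disks(box=1.0, radius=0.05, max_failures=5000, periodic=True, seed=0):
            """Random sequential adsorption of equal disks: candidate centres are
            drawn uniformly and kept only if they overlap no previously adsorbed
            disk; the run stops after max_failures consecutive rejections."""
            rng = random.Random(seed)
            disks, failures = [], 0
            min_d2 = (2.0 * radius) ** 2
            while failures < max_failures:
                x, y = rng.random() * box, rng.random() * box
                ok = True
                for px, py in disks:
                    dx, dy = x - px, y - py
                    if periodic:                      # minimum-image convention
                        dx -= box * round(dx / box)
                        dy -= box * round(dy / box)
                    if dx * dx + dy * dy < min_d2:
                        ok = False
                        break
                if ok:
                    disks.append((x, y))
                    failures = 0
                else:
                    failures += 1
            coverage = len(disks) * math.pi * radius ** 2 / box ** 2
            return disks, coverage

        _, theta = rsa_disks()
        print(f"packing fraction with periodic boundaries: {theta:.3f}")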

  14. Automatic synthesis of sequential control schemes

    International Nuclear Information System (INIS)

    Klein, I.

    1993-01-01

    Of all hard- and software developed for industrial control purposes, the majority is devoted to sequential, or binary valued, control and only a minor part to classical linear control. Typically, the sequential parts of the controller are invoked during startup and shut-down to bring the system into its normal operating region and into some safe standby region, respectively. Despite its importance, fairly little theoretical research has been devoted to this area, and sequential control programs are therefore still created manually without much theoretical support to obtain a systematic approach. We propose a method to create sequential control programs automatically. The main idea is to spend some effort off-line modelling the plant, and from this model generate the control strategy, that is, the plan. The plant is modelled using action structures, thereby concentrating on the actions instead of the states of the plant. In general the planning problem shows exponential complexity in the number of state variables. However, by focusing on the actions, we can identify problem classes as well as algorithms such that the planning complexity is reduced to polynomial complexity. We prove that these algorithms are sound, i.e., the generated solution will solve the stated problem, and complete, i.e., if the algorithms fail, then no solution exists. The algorithms generate a plan as a set of actions and a partial order on this set specifying the execution order. The generated plan is proven to be minimal and maximally parallel. For a larger class of problems we propose a method to split the original problem into a number of simple problems that can each be solved using one of the presented algorithms. It is also shown how a plan can be translated into a GRAFCET chart, and to illustrate these ideas we have implemented a planning tool, i.e., a system that is able to automatically create control schemes. Such a tool can of course also be used on-line if it is fast enough. This

  15. From sequential to parallel programming with patterns

    CERN Document Server

    CERN. Geneva

    2018-01-01

    To increase both performance and efficiency, our programming models need to adapt to better exploit modern processors. The classic idioms and patterns for programming such as loops, branches or recursion are the pillars of almost every code and are well known among all programmers. These patterns all have in common that they are sequential in nature. Embracing parallel programming patterns, which allow us to program for multi- and many-core hardware in a natural way, greatly simplifies the task of designing a program that scales and performs on modern hardware, independently of the used programming language, and in a generic way.
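
    As a small illustration of the point, in Python rather than the languages used in the lecture, the same computation is expressed below first as a classic sequential loop and then with the parallel 'map' pattern; the prime-counting workload is arbitrary.

        from concurrent.futures import ProcessPoolExecutor

        def is_prime(n: int) -> bool:
            if n < 2:
                return False
            return all(n % d for d in range(2, int(n ** 0.5) + 1))

        NUMBERS = range(2, 200_000)

        def count_primes_sequential():
            # classic sequential idiom: an explicit loop
            total = 0
            for n in NUMBERS:
                total += is_prime(n)
            return total

        def count_primes_parallel():
            # the "map" parallel pattern: the same work, scheduled across cores
            with ProcessPoolExecutor() as pool:
                return sum(pool.map(is_prime, NUMBERS, chunksize=5_000))

        if __name__ == "__main__":
            print(count_primes_sequential(), count_primes_parallel())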

  16. Sequential extraction of uranium metal contamination

    International Nuclear Information System (INIS)

    Murry, M.M.; Spitz, H.B.; Connick, W.B.

    2016-01-01

    Samples of uranium-contaminated dirt collected from the dirt floor of an abandoned metal rolling mill were analyzed for uranium using a sequential extraction protocol involving a series of five increasingly aggressive solvents. The quantity of uranium extracted from the contaminated dirt by each reagent can aid in predicting the fate and transport of the uranium contamination in the environment. Uranium was separated from each fraction using anion exchange and electrodeposition, and analyzed by alpha spectroscopy. Results demonstrate that approximately 77 % of the uranium was extracted using NH4Ac in 25 % acetic acid. (author)

  17. Simultaneous optimization of sequential IMRT plans

    International Nuclear Information System (INIS)

    Popple, Richard A.; Prellop, Perri B.; Spencer, Sharon A.; Santos, Jennifer F. de los; Duan, Jun; Fiveash, John B.; Brezovich, Ivan A.

    2005-01-01

    Radiotherapy often comprises two phases, in which irradiation of a volume at risk for microscopic disease is followed by a sequential dose escalation to a smaller volume either at a higher risk for microscopic disease or containing only gross disease. This technique is difficult to implement with intensity modulated radiotherapy, as the tolerance doses of critical structures must be respected over the sum of the two plans. Techniques that include an integrated boost have been proposed to address this problem. However, clinical experience with such techniques is limited, and many clinicians are uncomfortable prescribing nonconventional fractionation schemes. To solve this problem, we developed an optimization technique that simultaneously generates sequential initial and boost IMRT plans. We have developed an optimization tool that uses a commercial treatment planning system (TPS) and a high level programming language for technical computing. The tool uses the TPS to calculate the dose deposition coefficients (DDCs) for optimization. The DDCs were imported into external software and the treatment ports duplicated to create the boost plan. The initial, boost, and tolerance doses were specified and used to construct cost functions. The initial and boost plans were optimized simultaneously using a gradient search technique. Following optimization, the fluence maps were exported to the TPS for dose calculation. Seven patients treated using sequential techniques were selected from our clinical database. The initial and boost plans used to treat these patients were developed independently of each other by dividing the tolerance doses proportionally between the initial and boost plans and then iteratively optimizing the plans until a summation that met the treatment goals was obtained. We used the simultaneous optimization technique to generate plans that met the original planning goals. The coverage of the initial and boost target volumes in the simultaneously optimized
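
    The record describes building cost functions from the initial, boost, and tolerance doses and optimizing both fluence sets at once with a gradient search. The toy sketch below illustrates that idea with a penalty on the summed critical-structure dose; the dose deposition coefficients, prescription levels, and optimizer settings are all invented, and a real implementation would import DDCs from the treatment planning system.

```python
# Toy sketch: jointly optimize an "initial" and a "boost" fluence vector so the
# two target dose levels are met while the *summed* dose to a critical structure
# stays under tolerance. DDCs, dose levels and optimizer settings are fabricated.
import numpy as np

rng = np.random.default_rng(0)
n_beamlets = 20
D_target = rng.uniform(0.5, 1.0, size=(5, n_beamlets))   # DDCs for target voxels
D_oar = rng.uniform(0.0, 0.3, size=(5, n_beamlets))      # DDCs for critical-structure voxels

dose_initial, dose_boost, dose_tol = 50.0, 20.0, 45.0    # prescribed and tolerance doses (Gy)
lr, penalty = 1e-3, 10.0

def cost_and_grads(w1, w2):
    t1, t2 = D_target @ w1, D_target @ w2
    oar_sum = D_oar @ (w1 + w2)                           # summed dose over both phases
    excess = np.maximum(oar_sum - dose_tol, 0.0)
    cost = (np.mean((t1 - dose_initial) ** 2)
            + np.mean((t2 - dose_boost) ** 2)
            + penalty * np.mean(excess ** 2))
    g_pen = 2 * penalty * D_oar.T @ excess / excess.size  # penalty gradient shared by both plans
    g1 = 2 * D_target.T @ (t1 - dose_initial) / t1.size + g_pen
    g2 = 2 * D_target.T @ (t2 - dose_boost) / t2.size + g_pen
    return cost, g1, g2

w_init = np.ones(n_beamlets)     # fluence weights, initial plan
w_boost = np.ones(n_beamlets)    # fluence weights, boost plan
for _ in range(5000):            # simple projected gradient descent
    _, g1, g2 = cost_and_grads(w_init, w_boost)
    w_init = np.maximum(w_init - lr * g1, 0.0)
    w_boost = np.maximum(w_boost - lr * g2, 0.0)

cost, _, _ = cost_and_grads(w_init, w_boost)
print("final cost:", round(float(cost), 3))
print("max summed OAR dose:", round(float((D_oar @ (w_init + w_boost)).max()), 2))
```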

  18. Indicators for traffic safety assessment and prediction and their application in micro-simulation modelling : a study of urban and suburban intersections

    OpenAIRE

    Archer, Jeffery

    2005-01-01

    In order to achieve sustainable long-term transport infrastructure development, there is a growing need for fast, reliable and effective methods to evaluate and predict the impact of traffic safety measures. Recognising this need, and the need for an active traffic safety approach, this thesis focuses on traffic safety assessment and prediction based on the use of safety indicators that measure the spatial and/or temporal proximity of safety critical events. The main advantage of such measure...

  19. Operational indicators

    International Nuclear Information System (INIS)

    2010-01-01

    The chapter presents the operational indicators related to budget, travel costs and tickets, the evolution of the annual program for regulatory inspection, scientific production, patent applications, and the figures related to the services offered by the Institution.

  20. On the effect of response transformations in sequential parameter optimization.

    Science.gov (United States)

    Wagner, Tobias; Wessing, Simon

    2012-01-01

    Parameter tuning of evolutionary algorithms (EAs) is attracting more and more interest. In particular, the sequential parameter optimization (SPO) framework for the model-assisted tuning of stochastic optimizers has resulted in established parameter tuning algorithms. In this paper, we enhance the SPO framework by introducing transformation steps before the response aggregation and before the actual modeling. Based on design-of-experiments techniques, we empirically analyze the effect of integrating different transformations. We show that, in particular, a rank transformation of the responses provides significant improvements. A deeper analysis of the resulting models and additional experiments with adaptive procedures indicate that the rank and the Box-Cox transformation are able to improve the properties of the resultant distributions with respect to symmetry and normality of the residuals. Moreover, model-based effect plots document a higher discriminatory power obtained by the rank transformation.
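
    The rank-transformation step itself is easy to sketch; the example below uses plain NumPy and illustrates only the transformation of aggregated responses, not the full SPO framework.

```python
# Rank-transform aggregated responses before fitting a surrogate model
# (illustration of the transformation step only).
import numpy as np

def rank_transform(y):
    """Map responses to average ranks in [1, n]; tied values receive their mean rank."""
    y = np.asarray(y, dtype=float)
    order = np.argsort(y)
    ranks = np.empty_like(y)
    ranks[order] = np.arange(1, len(y) + 1)
    # average the ranks over tied values
    for value in np.unique(y):
        mask = y == value
        ranks[mask] = ranks[mask].mean()
    return ranks

responses = np.array([12.1, 0.5, 0.5, 3.7, 250.0])   # heavily skewed raw responses
print(rank_transform(responses))                      # -> [4.  1.5 1.5 3.  5. ]
```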

  1. Pros and cons of immediately sequential bilateral cataract surgery (ISBCS).

    Science.gov (United States)

    Grzybowski, Andrzej; Wasinska-Borowiec, Weronika; Claoué, Charles

    2016-01-01

    Immediately sequential bilateral cataract surgery (ISBCS) is currently a "hot topic" in ophthalmology. There are well-documented advantages in terms of quicker visual rehabilitation and reduced costs. The risk of bilateral simultaneous endophthalmitis and bilateral blindness is now recognized to be minuscule with the advent of intracameral antibiotics and modern management of endophthalmitis. Refractive surprises are rare for normal eyes and with the use of optical biometry. Where a general anesthetic is indicated for cataract surgery, the risk of death from a second anesthetic is much higher than the risk of blindness. A widely recognized protocol from the International Society of Bilateral Cataract Surgeons needs to be adhered to if surgeons wish to start practicing ISBCS.

  2. Exogenous indirect photoinactivation of bacterial pathogens and indicators in water with natural and synthetic photosensitizers in simulated sunlight with reduced UVB.

    Science.gov (United States)

    Maraccini, P A; Wenk, J; Boehm, A B

    2016-08-01

    To investigate the UVB-independent and exogenous indirect photoinactivation of eight human health-relevant bacterial species in the presence of photosensitizers. Eight bacterial species were exposed to simulated sunlight with greatly reduced UVB light intensity in the presence of three synthetic photosensitizers and two natural photosensitizers. Inactivation curves were fit with shoulder log-linear or first-order kinetic models, from which the presence of a shoulder and magnitude of inactivation rate constants were compared. Eighty-four percent reduction in the UVB light intensity roughly matched a 72-95% reduction in the overall bacterial photoinactivation rate constants in sensitizer-free water. With the UVB light mostly reduced, the exogenous indirect mechanism contribution was evident for most bacteria and photosensitizers tested, although most prominently with the Gram-positive bacteria. Results confirm the importance of UVB light in bacterial photoinactivation and, with the reduction of the UVB light intensity, that the Gram-positive bacteria are more vulnerable to the exogenous indirect mechanism than Gram-negative bacteria. UVB is the most important range of the sunlight spectrum for bacterial photoinactivation. In aquatic environments where photosensitizers are present and there is high UVB light attenuation, UVA and visible wavelengths can contribute to exogenous indirect photoinactivation. © 2016 The Society for Applied Microbiology.

  3. A sequential Monte Carlo model of the combined GB gas and electricity network

    International Nuclear Information System (INIS)

    Chaudry, Modassar; Wu, Jianzhong; Jenkins, Nick

    2013-01-01

    A Monte Carlo model of the combined GB gas and electricity network was developed to determine the reliability of the energy infrastructure. The model integrates the gas and electricity network into a single sequential Monte Carlo simulation. The model minimises the combined costs of the gas and electricity network; these include gas supplies, gas storage operation and electricity generation. The Monte Carlo model calculates reliability indices such as loss of load probability and expected energy unserved for the combined gas and electricity network. The intention of this tool is to facilitate reliability analysis of integrated energy systems. Applications of this tool are demonstrated through a case study that quantifies the impact on the reliability of the GB gas and electricity network given uncertainties such as wind variability, gas supply availability and outages to energy infrastructure assets. Analysis is performed over a typical midwinter week on a hypothesised GB gas and electricity network in 2020 that meets European renewable energy targets. The efficacy of doubling GB gas storage capacity on the reliability of the energy system is assessed. The results highlight the value of greater gas storage facilities in enhancing the reliability of the GB energy system given various energy uncertainties. -- Highlights: •A Monte Carlo model of the combined GB gas and electricity network was developed. •Reliability indices are calculated for the combined GB gas and electricity system. •The efficacy of doubling GB gas storage capacity on reliability of the energy system is assessed. •Integrated reliability indices could be used to assess the impact of investment in energy assets
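
    A toy sketch of how a sequential Monte Carlo simulation yields reliability indices such as loss-of-load probability (LOLP) and expected energy not served (EENS) is shown below; the generator fleet, outage rates, and demand profile are invented, and a full sequential model would sample failure and repair durations and represent the gas network, storage, and wind as in the record.

```python
# Toy sequential Monte Carlo reliability sketch: simulate hourly generator
# availability and demand over many sample weeks and accumulate LOLP and EENS.
# All numbers are invented placeholders.
import numpy as np

rng = np.random.default_rng(1)
capacities = np.array([400.0, 400.0, 300.0, 200.0, 100.0])   # MW per unit
outage_prob = np.array([0.05, 0.05, 0.08, 0.10, 0.12])       # forced-outage rates
hours = 24 * 7
base_demand = 900 + 200 * np.sin(np.linspace(0, 14 * np.pi, hours))  # MW, weekly profile

n_samples = 20_000
loss_hours = 0
energy_unserved = 0.0

for _ in range(n_samples):
    # one sampled week, hour by hour (independent hourly states for simplicity)
    available = rng.random((hours, capacities.size)) > outage_prob
    supply = available @ capacities
    demand = base_demand * rng.normal(1.0, 0.03, size=hours)
    shortfall = np.maximum(demand - supply, 0.0)
    loss_hours += np.count_nonzero(shortfall)
    energy_unserved += shortfall.sum()

lolp = loss_hours / (n_samples * hours)
eens = energy_unserved / n_samples            # MWh not served per simulated week
print(f"LOLP = {lolp:.4f}, EENS = {eens:.1f} MWh/week")
```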

  4. Hybrid Computerized Adaptive Testing: From Group Sequential Design to Fully Sequential Design

    Science.gov (United States)

    Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff

    2016-01-01

    Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Though most designs of CAT and MST exhibit strengths and weaknesses in recent large-scale implementations, there is no simple answer to the question of which design is better because different…

  5. Sequential and simultaneous choices: testing the diet selection and sequential choice models.

    Science.gov (United States)

    Freidin, Esteban; Aw, Justine; Kacelnik, Alex

    2009-03-01

    We investigate simultaneous and sequential choices in starlings, using Charnov's Diet Choice Model (DCM) and Shapiro, Siller and Kacelnik's Sequential Choice Model (SCM) to integrate function and mechanism. During a training phase, starlings encountered one food-related option per trial (A, B or R) in random sequence and with equal probability. A and B delivered food rewards after programmed delays (shorter for A), while R ('rejection') moved directly to the next trial without reward. In this phase we measured latencies to respond. In a later, choice, phase, birds encountered the pairs A-B, A-R and B-R, the first implementing a simultaneous choice and the second and third sequential choices. The DCM predicts when R should be chosen to maximize intake rate, and SCM uses latencies of the training phase to predict choices between any pair of options in the choice phase. The predictions of both models coincided, and both successfully predicted the birds' preferences. The DCM does not deal with partial preferences, while the SCM does, and experimental results were strongly correlated to this model's predictions. We believe that the SCM may expose a very general mechanism of animal choice, and that its wider domain of success reflects the greater ecological significance of sequential over simultaneous choices.

  6. The effects of sequential attention shifts within visual working memory.

    Science.gov (United States)

    Li, Qi; Saiki, Jun

    2014-01-01

    Previous studies have shown conflicting data as to whether it is possible to sequentially shift spatial attention among visual working memory (VWM) representations. The present study investigated this issue by asynchronously presenting attentional cues during the retention interval of a change detection task. In particular, we focused on two types of sequential attention shifts: (1) orienting attention to one location, and then withdrawing attention from it, and (2) switching the focus of attention from one location to another. In Experiment 1, a withdrawal cue was presented after a spatial retro-cue to measure the effect of withdrawing attention. The withdrawal cue significantly reduced the cost of invalid spatial cues, but surprisingly, did not attenuate the benefit of valid spatial cues. This indicates that the withdrawal cue only triggered the activation of facilitative components but not inhibitory components of attention. In Experiment 2, two spatial retro-cues were presented successively to examine the effect of switching the focus of attention. We observed equivalent benefits of the first and second spatial cues, suggesting that participants were able to reorient attention from one location to another within VWM, and the reallocation of attention did not attenuate memory at the first-cued location. In Experiment 3, we found that reducing the validity of the preceding spatial cue did lead to a significant reduction in its benefit. However, performance was still better at first-cued locations than at uncued and neutral locations, indicating that the first cue benefit might have been preserved both partially under automatic control and partially under voluntary control. Our findings revealed new properties of dynamic attentional control in VWM maintenance.

  7. The effects of sequential attention shifts within visual working memory

    Science.gov (United States)

    Li, Qi; Saiki, Jun

    2014-01-01

    Previous studies have shown conflicting data as to whether it is possible to sequentially shift spatial attention among visual working memory (VWM) representations. The present study investigated this issue by asynchronously presenting attentional cues during the retention interval of a change detection task. In particular, we focused on two types of sequential attention shifts: (1) orienting attention to one location, and then withdrawing attention from it, and (2) switching the focus of attention from one location to another. In Experiment 1, a withdrawal cue was presented after a spatial retro-cue to measure the effect of withdrawing attention. The withdrawal cue significantly reduced the cost of invalid spatial cues, but surprisingly, did not attenuate the benefit of valid spatial cues. This indicates that the withdrawal cue only triggered the activation of facilitative components but not inhibitory components of attention. In Experiment 2, two spatial retro-cues were presented successively to examine the effect of switching the focus of attention. We observed equivalent benefits of the first and second spatial cues, suggesting that participants were able to reorient attention from one location to another within VWM, and the reallocation of attention did not attenuate memory at the first-cued location. In Experiment 3, we found that reducing the validity of the preceding spatial cue did lead to a significant reduction in its benefit. However, performance was still better at first-cued locations than at uncued and neutral locations, indicating that the first cue benefit might have been preserved both partially under automatic control and partially under voluntary control. Our findings revealed new properties of dynamic attentional control in VWM maintenance. PMID:25237306

  8. Characterization of a sequential pipeline approach to automatic tissue segmentation from brain MR Images

    International Nuclear Information System (INIS)

    Hou, Zujun; Huang, Su

    2008-01-01

    Quantitative analysis of gray matter and white matter in brain magnetic resonance imaging (MRI) is valuable for neuroradiology and clinical practice. Submission of large collections of MRI scans to pipeline processing is increasingly important. We characterized this process and suggest several improvements. To investigate tissue segmentation from brain MR images through a sequential approach, a pipeline that consecutively executes denoising, skull/scalp removal, intensity inhomogeneity correction and intensity-based classification was developed. The denoising phase employs a 3D extension of the Bayes-Shrink method. The inhomogeneity is corrected by an improvement of Dawant et al.'s method with automatic generation of reference points. The N3 method has also been evaluated. Subsequently the brain tissue is segmented into cerebrospinal fluid, gray matter and white matter by a generalized Otsu thresholding technique. Intensive comparisons with other sequential or iterative methods have been carried out using simulated and real images. The sequential approach, with judicious algorithm selection in each stage, is not only faster but can attain segmentation at least as accurate as iterative methods under a variety of noise or inhomogeneity levels. A sequential approach to tissue segmentation, which consecutively executes wavelet shrinkage denoising, scalp/skull removal, inhomogeneity correction and intensity-based classification, was developed to automatically segment the brain tissue into CSF, GM and WM from brain MR images. This approach is advantageous in several common applications, compared with other pipeline methods. (orig.)

  9. Optimal Sequential Diagnostic Strategy Generation Considering Test Placement Cost for Multimode Systems

    Directory of Open Access Journals (Sweden)

    Shigang Zhang

    2015-10-01

    Full Text Available Sequential fault diagnosis is an approach that realizes fault isolation by executing the optimal test step by step. The strategy used, i.e., the sequential diagnostic strategy, has great influence on diagnostic accuracy and cost. Optimal sequential diagnostic strategy generation is an important step in the process of diagnosis system construction, which has been studied extensively in the literature. However, previous algorithms either are designed for single mode systems or do not consider test placement cost. They are not suitable to solve the sequential diagnostic strategy generation problem considering test placement cost for multimode systems. Therefore, this problem is studied in this paper. A formulation is presented. Two algorithms are proposed, one of which is realized by system transformation and the other is newly designed. Extensive simulations are carried out to test the effectiveness of the algorithms. A real-world system is also presented. All the results show that both of them have the ability to solve the diagnostic strategy generation problem, and they have different characteristics.

  10. Optimal Sequential Diagnostic Strategy Generation Considering Test Placement Cost for Multimode Systems

    Science.gov (United States)

    Zhang, Shigang; Song, Lijun; Zhang, Wei; Hu, Zheng; Yang, Yongmin

    2015-01-01

    Sequential fault diagnosis is an approach that realizes fault isolation by executing the optimal test step by step. The strategy used, i.e., the sequential diagnostic strategy, has great influence on diagnostic accuracy and cost. Optimal sequential diagnostic strategy generation is an important step in the process of diagnosis system construction, which has been studied extensively in the literature. However, previous algorithms either are designed for single mode systems or do not consider test placement cost. They are not suitable to solve the sequential diagnostic strategy generation problem considering test placement cost for multimode systems. Therefore, this problem is studied in this paper. A formulation is presented. Two algorithms are proposed, one of which is realized by system transformation and the other is newly designed. Extensive simulations are carried out to test the effectiveness of the algorithms. A real-world system is also presented. All the results show that both of them have the ability to solve the diagnostic strategy generation problem, and they have different characteristics. PMID:26457709

  11. Angular anisotropy parameters for sequential two-photon double ionization of helium

    International Nuclear Information System (INIS)

    Ivanov, I A; Kheifets, A S

    2009-01-01

    We evaluate photoelectron angular anisotropy β-parameters for the process of sequential two-photon double electron ionization of helium within the time-independent lowest order perturbation theory (LOPT). Our results indicate that for photoelectron energies outside the interval (E_slow, E_fast), where E_slow = ω − IP(He⁺) and E_fast = ω − IP(He), there is a considerable deviation from the dipole angular distribution, thus indicating the effect of electron correlation.

  12. Sequential probability ratio controllers for safeguards radiation monitors

    International Nuclear Information System (INIS)

    Fehlau, P.E.; Coop, K.L.; Nixon, K.V.

    1984-01-01

    Sequential hypothesis tests applied to nuclear safeguards accounting methods make the methods more sensitive to detecting diversion. The sequential tests also improve transient signal detection in safeguards radiation monitors. This paper describes three microprocessor control units with sequential probability-ratio tests for detecting transient increases in radiation intensity. The control units are designed for three specific applications: low-intensity monitoring with Poisson probability ratios, higher intensity gamma-ray monitoring where fixed counting intervals are shortened by sequential testing, and monitoring moving traffic where the sequential technique responds to variable-duration signals. The fixed-interval controller shortens a customary 50-s monitoring time to an average of 18 s, making the monitoring delay less bothersome. The controller for monitoring moving vehicles benefits from the sequential technique by maintaining more than half its sensitivity when the normal passage speed doubles
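
    The Poisson sequential probability ratio test behind the low-intensity controller is simple to illustrate; the count rates and error probabilities below are invented, and the sketch follows Wald's classical formulation rather than the specific controller firmware described in the record.

```python
# Wald sequential probability ratio test (SPRT) for detecting an increase in a
# Poisson count rate. Background and alarm rates, alpha and beta are invented.
import math
import numpy as np

lam_bg = 5.0        # expected background counts per interval (H0)
lam_alarm = 9.0     # elevated rate to be detected (H1)
alpha, beta = 0.001, 0.05              # false-alarm and missed-detection targets

upper = math.log((1 - beta) / alpha)   # declare alarm when the LLR exceeds this
lower = math.log(beta / (1 - alpha))   # declare background when the LLR drops below this

def sprt(count_stream):
    """Return ('alarm', 'background' or 'undecided', number of intervals used)."""
    llr, n = 0.0, 0
    for n, counts in enumerate(count_stream, 1):
        # log-likelihood-ratio increment for one Poisson observation
        llr += counts * math.log(lam_alarm / lam_bg) - (lam_alarm - lam_bg)
        if llr >= upper:
            return "alarm", n
        if llr <= lower:
            return "background", n
    return "undecided", n

rng = np.random.default_rng(0)
print(sprt(rng.poisson(lam_bg, size=200)))      # typically decides 'background' quickly
print(sprt(rng.poisson(lam_alarm, size=200)))   # typically decides 'alarm' quickly
```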

  13. Equivalence between quantum simultaneous games and quantum sequential games

    OpenAIRE

    Kobayashi, Naoki

    2007-01-01

    A framework for discussing relationships between different types of games is proposed. Within the framework, quantum simultaneous games, finite quantum simultaneous games, quantum sequential games, and finite quantum sequential games are defined. In addition, a notion of equivalence between two games is defined. Finally, the following three theorems are shown: (1) For any quantum simultaneous game G, there exists a quantum sequential game equivalent to G. (2) For any finite quantum simultaneo...

  14. Discrimination between sequential and simultaneous virtual channels with electrical hearing

    OpenAIRE

    Landsberger, David; Galvin, John J.

    2011-01-01

    In cochlear implants (CIs), simultaneous or sequential stimulation of adjacent electrodes can produce intermediate pitch percepts between those of the component electrodes. However, it is unclear whether simultaneous and sequential virtual channels (VCs) can be discriminated. In this study, CI users were asked to discriminate simultaneous and sequential VCs; discrimination was measured for monopolar (MP) and bipolar + 1 stimulation (BP + 1), i.e., relatively broad and focused stimulation mode...

  15. C-quence: a tool for analyzing qualitative sequential data.

    Science.gov (United States)

    Duncan, Starkey; Collier, Nicholson T

    2002-02-01

    C-quence is a software application that matches sequential patterns of qualitative data specified by the user and calculates the rate of occurrence of these patterns in a data set. Although it was designed to facilitate analyses of face-to-face interaction, it is applicable to any data set involving categorical data and sequential information. C-quence queries are constructed using a graphical user interface. The program does not limit the complexity of the sequential patterns specified by the user.

  16. Discrimination between sequential and simultaneous virtual channels with electrical hearing.

    Science.gov (United States)

    Landsberger, David; Galvin, John J

    2011-09-01

    In cochlear implants (CIs), simultaneous or sequential stimulation of adjacent electrodes can produce intermediate pitch percepts between those of the component electrodes. However, it is unclear whether simultaneous and sequential virtual channels (VCs) can be discriminated. In this study, CI users were asked to discriminate simultaneous and sequential VCs; discrimination was measured for monopolar (MP) and bipolar + 1 stimulation (BP + 1), i.e., relatively broad and focused stimulation modes. For sequential VCs, the interpulse interval (IPI) varied between 0.0 and 1.8 ms. All stimuli were presented at comfortably loud, loudness-balanced levels at a 250 pulse per second per electrode (ppse) stimulation rate. On average, CI subjects were able to reliably discriminate between sequential and simultaneous VCs. While there was no significant effect of IPI or stimulation mode on VC discrimination, some subjects exhibited better VC discrimination with BP + 1 stimulation. Subjects' discrimination between sequential and simultaneous VCs was correlated with electrode discrimination, suggesting that spatial selectivity may influence perception of sequential VCs. To maintain equal loudness, sequential VC amplitudes were nearly double those of simultaneous VCs, presumably resulting in a broader spread of excitation. These results suggest that perceptual differences between simultaneous and sequential VCs might be explained by differences in the spread of excitation. © 2011 Acoustical Society of America

  17. Lineup composition, suspect position, and the sequential lineup advantage.

    Science.gov (United States)

    Carlson, Curt A; Gronlund, Scott D; Clark, Steven E

    2008-06-01

    N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate in the simultaneous lineup, and no sequential lineup advantage was found. This led the authors to hypothesize that protection from a sequential lineup might emerge only when an innocent suspect stands out from the other lineup members. In Experiment 2, participants viewed a simultaneous or sequential lineup with either the guilty suspect or 1 of 3 innocent suspects. Lineup fairness was varied to influence the degree to which a suspect stood out. A sequential lineup advantage was found only for the unfair lineups. Additional analyses of suspect position in the sequential lineups showed an increase in the diagnosticity of suspect identifications as the suspect was placed later in the sequential lineup. These results suggest that the sequential lineup advantage is dependent on lineup composition and suspect position. (c) 2008 APA, all rights reserved

  18. Group-sequential analysis may allow for early trial termination

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich

    2017-01-01

    BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic, but rarely in diagnostic research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT mea...

  19. Sequential infiltration synthesis for advanced lithography

    Energy Technology Data Exchange (ETDEWEB)

    Darling, Seth B.; Elam, Jeffrey W.; Tseng, Yu-Chih; Peng, Qing

    2017-10-10

    A plasma etch resist material modified by an inorganic protective component via sequential infiltration synthesis (SIS) and methods of preparing the modified resist material. The modified resist material is characterized by an improved resistance to a plasma etching or related process relative to the unmodified resist material, thereby allowing formation of patterned features into a substrate material, which may be high-aspect ratio features. The SIS process forms the protective component within the bulk resist material through a plurality of alternating exposures to gas phase precursors which infiltrate the resist material. The plasma etch resist material may be initially patterned using photolithography, electron-beam lithography or a block copolymer self-assembly process.

  20. Clinical evaluation of synthetic aperture sequential beamforming

    DEFF Research Database (Denmark)

    Hansen, Peter Møller; Hemmsen, Martin Christian; Lange, Theis

    2012-01-01

    In this study clinically relevant ultrasound images generated with synthetic aperture sequential beamforming (SASB) are compared to images generated with a conventional technique. The advantage of SASB is the ability to produce high resolution ultrasound images with a high frame rate and at the same time massively reduce the amount of generated data. SASB was implemented in a system consisting of a conventional ultrasound scanner connected to a PC via a research interface. This setup enables simultaneous recording with both SASB and conventional technique. Eighteen volunteers were ultrasound scanned abdominally, and 84 sequence pairs were recorded. Each sequence pair consists of two simultaneous recordings of the same anatomical location with SASB and conventional B-mode imaging. The images were evaluated in terms of spatial resolution, contrast, unwanted artifacts, and penetration depth...

  1. Sequential cooling insert for turbine stator vane

    Science.gov (United States)

    Jones, Russel B

    2017-04-04

    A sequential flow cooling insert for a turbine stator vane of a small gas turbine engine, where the impingement cooling insert is formed as a single piece from a metal additive manufacturing process such as 3D metal printing, and where the insert includes a plurality of rows of radial extending impingement cooling air holes alternating with rows of radial extending return air holes on a pressure side wall, and where the insert includes a plurality of rows of chordwise extending second impingement cooling air holes on a suction side wall. The insert includes alternating rows of radial extending cooling air supply channels and return air channels that form a series of impingement cooling on the pressure side followed by the suction side of the insert.

  2. Gleason-Busch theorem for sequential measurements

    Science.gov (United States)

    Flatt, Kieran; Barnett, Stephen M.; Croke, Sarah

    2017-12-01

    Gleason's theorem is a statement that, given some reasonable assumptions, the Born rule used to calculate probabilities in quantum mechanics is essentially unique [A. M. Gleason, Indiana Univ. Math. J. 6, 885 (1957), 10.1512/iumj.1957.6.56050]. We show that Gleason's theorem contains within it also the structure of sequential measurements, and along with this the state update rule. We give a small set of axioms, which are physically motivated and analogous to those in Busch's proof of Gleason's theorem [P. Busch, Phys. Rev. Lett. 91, 120403 (2003), 10.1103/PhysRevLett.91.120403], from which the familiar Kraus operator form follows. An axiomatic approach has practical relevance as well as fundamental interest, in making clear those assumptions which underlie the security of quantum communication protocols. Interestingly, the two-time formalism is seen to arise naturally in this approach.

  3. Multilevel sequential Monte-Carlo samplers

    KAUST Repository

    Jasra, Ajay

    2016-01-01

    Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given computational effort. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.
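
    The multilevel idea itself, before the sequential Monte Carlo coupling, can be sketched with a plain telescoping estimator: E[f_L] = E[f_0] + sum over l of E[f_l − f_{l−1}], where coupled differences between successive resolutions have small variance. The toy integrand, resolution parameter, and sample allocation below are invented, and the sketch deliberately omits the SMC machinery that is the record's actual contribution.

```python
# Minimal sketch of the multilevel telescoping estimator
#   E[f_L] = E[f_0] + sum_{l=1..L} E[f_l - f_{l-1}],
# where f_l approximates the quantity of interest at resolution 2**-l and the
# coupled differences have small variance, so coarse levels carry most samples.
import numpy as np

rng = np.random.default_rng(0)

def f_level(x, level):
    """Midpoint-rule approximation of g(x) = integral_0^x sin(t) dt = 1 - cos(x)
    on 2**level subintervals; a higher level means a smaller discretization bias."""
    n = 2 ** level
    t = (np.arange(n) + 0.5) / n * x[:, None]
    return (x / n) * np.sin(t).sum(axis=1)

L = 6
samples_per_level = [40_000 // 2 ** l + 100 for l in range(L + 1)]  # more samples on coarse levels

estimate = 0.0
for l in range(L + 1):
    x = rng.uniform(0.0, np.pi, size=samples_per_level[l])          # X ~ U(0, pi)
    if l == 0:
        estimate += f_level(x, 0).mean()
    else:
        # the same samples drive both resolutions -> coupled, low-variance difference
        estimate += (f_level(x, l) - f_level(x, l - 1)).mean()

print("multilevel estimate:", round(float(estimate), 4))
print("reference value: 1.0  (E[1 - cos X] for X ~ U(0, pi))")
```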

  4. Sequential Stereotype Priming: A Meta-Analysis.

    Science.gov (United States)

    Kidder, Ciara K; White, Katherine R; Hinojos, Michelle R; Sandoval, Mayra; Crites, Stephen L

    2017-08-01

    Psychological interest in stereotype measurement has spanned nearly a century, with researchers adopting implicit measures in the 1980s to complement explicit measures. One of the most frequently used implicit measures of stereotypes is the sequential priming paradigm. The current meta-analysis examines stereotype priming, focusing specifically on this paradigm. To contribute to ongoing discussions regarding methodological rigor in social psychology, one primary goal was to identify methodological moderators of the stereotype priming effect: whether priming is due to a relation between the prime and target stimuli, the prime and target response, participant task, stereotype dimension, stimulus onset asynchrony (SOA), and stimuli type. Data from 39 studies yielded 87 individual effect sizes from 5,497 participants. Analyses revealed that stereotype priming is significantly moderated by the presence of prime-response relations, participant task, stereotype dimension, target stimulus type, SOA, and prime repetition. These results carry both practical and theoretical implications for future research on stereotype priming.

  5. Sequential Acral Lentiginous Melanomas of the Foot

    Directory of Open Access Journals (Sweden)

    Jiro Uehara

    2010-12-01

    Full Text Available A 64-year-old Japanese woman had a lightly brown-blackish pigmented macule (1.2 cm in diameter) on the left sole of her foot. She received surgical excision following a diagnosis of acral lentiginous melanoma (ALM), which was confirmed histopathologically. One month after the operation, a second melanoma lesion was noticed adjacent to the grafted site. Histopathologically, the two lesions had no continuity, but HMB-45 and cyclin D1 double-positive cells were detected not only on aggregates of atypical melanocytes but also on single cells near the cutting edge of the first lesion. The unique occurrence of a sequential lesion of a primary melanoma might be caused by stimulated subclinical field cells during the wound healing process following the initial operation. This case warrants further investigation to establish the appropriate surgical margin of ALM lesions.

  6. Dancing Twins: Stellar Hierarchies That Formed Sequentially?

    Science.gov (United States)

    Tokovinin, Andrei

    2018-04-01

    This paper draws attention to the class of resolved triple stars with moderate ratios of inner and outer periods (possibly in a mean motion resonance) and nearly circular, mutually aligned orbits. Moreover, stars in the inner pair are twins with almost identical masses, while the mass sum of the inner pair is comparable to the mass of the outer component. Such systems could be formed either sequentially (inside-out) by disk fragmentation with subsequent accretion and migration, or by a cascade hierarchical fragmentation of a rotating cloud. Orbits of the outer and inner subsystems are computed or updated in four such hierarchies: LHS 1070 (GJ 2005, periods 77.6 and 17.25 years), HIP 9497 (80 and 14.4 years), HIP 25240 (1200 and 47.0 years), and HIP 78842 (131 and 10.5 years).

  7. Multilevel sequential Monte-Carlo samplers

    KAUST Repository

    Jasra, Ajay

    2016-01-05

    Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given computational effort. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.

  8. Sequential Therapy in Metastatic Renal Cell Carcinoma

    Directory of Open Access Journals (Sweden)

    Bradford R Hirsch

    2016-04-01

    Full Text Available The treatment of metastatic renal cell carcinoma (mRCC) has changed dramatically in the past decade. As the number of available agents, and related volume of research, has grown, it is increasingly complex to know how to optimally treat patients. The authors are practicing medical oncologists at the US Oncology Network, the largest community-based network of oncology providers in the country, and represent the leadership of the Network's Genitourinary Research Committee. We outline our thought process in approaching sequential therapy of mRCC and the use of real-world data to inform our approach. We also highlight the evolving literature that will impact practicing oncologists in the near future.

  9. Microstructure history effect during sequential thermomechanical processing

    International Nuclear Information System (INIS)

    Yassar, Reza S.; Murphy, John; Burton, Christina; Horstemeyer, Mark F.; El kadiri, Haitham; Shokuhfar, Tolou

    2008-01-01

    The key to modeling the material processing behavior is the linking of the microstructure evolution to its processing history. This paper quantifies various microstructural features of an aluminum automotive alloy that undergoes sequential thermomechanical processing, which comprises hot rolling of a 150-mm billet to a 75-mm billet, rolling to 3 mm, annealing, and then cold rolling to a 0.8-mm-thick sheet. The microstructural content was characterized by means of electron backscatter diffraction, scanning electron microscopy, and transmission electron microscopy. The results clearly demonstrate the evolution of precipitate morphologies, dislocation structures, and grain orientation distributions. These data can be used to improve material models that claim to capture the history effects of the processing materials

  10. Prosody and alignment: a sequential perspective

    Science.gov (United States)

    Szczepek Reed, Beatrice

    2010-12-01

    In their analysis of a corpus of classroom interactions in an inner city high school, Roth and Tobin describe how teachers and students accomplish interactional alignment by prosodically matching each other's turns. Prosodic matching, and specific prosodic patterns are interpreted as signs of, and contributions to successful interactional outcomes and positive emotions. Lack of prosodic matching, and other specific prosodic patterns are interpreted as features of unsuccessful interactions, and negative emotions. This forum focuses on the article's analysis of the relation between interpersonal alignment, emotion and prosody. It argues that prosodic matching, and other prosodic linking practices, play a primarily sequential role, i.e. one that displays the way in which participants place and design their turns in relation to other participants' turns. Prosodic matching, rather than being a conversational action in itself, is argued to be an interactional practice (Schegloff 1997), which is not always employed for the accomplishment of `positive', or aligning actions.

  11. Monitoring sequential electron transfer with EPR

    International Nuclear Information System (INIS)

    Thurnauer, M.C.; Feezel, L.L.; Snyder, S.W.; Tang, J.; Norris, J.R.; Morris, A.L.; Rustandi, R.R.

    1989-01-01

    A completely general model which treats electron spin polarization (ESP) found in a system in which radical pairs with different magnetic interactions are formed sequentially has been described. This treatment has been applied specifically to the ESP found in the bacterial reaction center. Test cases show clearly how parameters such as structure, lifetime, and magnetic interactions within the successive radical pairs affect the ESP, and demonstrate that previous treatments of this problem have been incomplete. The photosynthetic bacterial reaction center protein is an ideal system for testing the general model of ESP. The radical pair which exhibits ESP, P870⁺Q⁻ (P870⁺ is the oxidized, primary electron donor, a bacteriochlorophyll special pair, and Q⁻ is the reduced, primary quinone acceptor), is formed via sequential electron transport through the intermediary radical pair P870⁺I⁻ (I⁻ is the reduced, intermediary electron acceptor, a bacteriopheophytin). In addition, it is possible to experimentally vary most of the important parameters, such as the lifetime of the intermediary radical pair and the magnetic interactions in each pair. It has been shown how selective isotopic substitution (¹H or ²H) on P870, I and Q affects the ESP of the EPR spectrum of P870⁺Q⁻, observed at two different microwave frequencies, in Fe²⁺-depleted bacterial reaction centers of Rhodobacter sphaeroides R26. Thus, the relative magnitudes of the magnetic properties (nuclear hyperfine and g-factor differences) which influence ESP development were varied. The results support the general model of ESP in that they suggest that the P870⁺Q⁻ radical pair interactions are the dominant source of ESP production in ²H bacterial reaction centers

  12. Modeling sequential context effects in diagnostic interpretation of screening mammograms.

    Science.gov (United States)

    Alamudun, Folami; Paulus, Paige; Yoon, Hong-Jun; Tourassi, Georgia

    2018-07-01

    Prior research has shown that physicians' medical decisions can be influenced by sequential context, particularly in cases where successive stimuli exhibit similar characteristics when analyzing medical images. This type of systematic error is known to psychophysicists as sequential context effect as it indicates that judgments are influenced by features of and decisions about the preceding case in the sequence of examined cases, rather than being based solely on the peculiarities unique to the present case. We determine if radiologists experience some form of context bias, using screening mammography as the use case. To this end, we explore correlations between previous perceptual behavior and diagnostic decisions and current decisions. We hypothesize that a radiologist's visual search pattern and diagnostic decisions in previous cases are predictive of the radiologist's current diagnostic decisions. To test our hypothesis, we tasked 10 radiologists of varied experience to conduct blind reviews of 100 four-view screening mammograms. Eye-tracking data and diagnostic decisions were collected from each radiologist under conditions mimicking clinical practice. Perceptual behavior was quantified using the fractal dimension of gaze scanpath, which was computed using the Minkowski-Bouligand box-counting method. To test the effect of previous behavior and decisions, we conducted a multifactor fixed-effects ANOVA. Further, to examine the predictive value of previous perceptual behavior and decisions, we trained and evaluated a predictive model for radiologists' current diagnostic decisions. ANOVA tests showed that previous visual behavior, characterized by fractal analysis, previous diagnostic decisions, and image characteristics of previous cases are significant predictors of current diagnostic decisions. Additionally, predictive modeling of diagnostic decisions showed an overall improvement in prediction error when the model is trained on additional information about
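
    The Minkowski-Bouligand (box-counting) dimension used to quantify the gaze scanpaths can be estimated roughly as follows; the synthetic random-walk scanpath and the box-size schedule are invented stand-ins for real eye-tracking data.

```python
# Box-counting (Minkowski-Bouligand) estimate of the fractal dimension of a
# 2-D gaze scanpath: count occupied boxes at several box sizes and fit the
# slope of log(count) against log(1/size). The scanpath here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
scanpath = np.cumsum(rng.normal(size=(2000, 2)), axis=0)   # synthetic gaze trace
scanpath -= scanpath.min(axis=0)
scanpath /= scanpath.max()                                 # normalize into the unit square

def box_count_dimension(points, sizes):
    counts = []
    for s in sizes:
        boxes = np.floor(points / s).astype(int)
        counts.append(len({tuple(b) for b in boxes}))       # number of occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

sizes = [1.0 / 2 ** k for k in range(2, 8)]                 # box edge lengths
print("estimated fractal dimension:", round(float(box_count_dimension(scanpath, sizes)), 2))
```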

  13. Simulation reframed.

    Science.gov (United States)

    Kneebone, Roger L

    2016-01-01

    Simulation is firmly established as a mainstay of clinical education, and extensive research has demonstrated its value. Current practice uses inanimate simulators (with a range of complexity, sophistication and cost) to address the patient 'as body' and trained actors or lay people (Simulated Patients) to address the patient 'as person'. These approaches are often separate. Healthcare simulation to date has been largely for the training and assessment of clinical 'insiders', simulating current practices. A close coupling with the clinical world restricts access to the facilities and practices of simulation, often excluding patients, families and publics. Yet such perspectives are an essential component of clinical practice. This paper argues that simulation offers opportunities to move outside a clinical 'insider' frame and create connections with other individuals and groups. Simulation becomes a bridge between experts whose worlds do not usually intersect, inviting an exchange of insights around embodied practices (the 'doing' of medicine) without jeopardising the safety of actual patients. Healthcare practice and education take place within a clinical frame that often conceals parallels with other domains of expert practice. Valuable insights emerge by viewing clinical practice not only as the application of medical science but also as performance and craftsmanship. Such connections require a redefinition of simulation. Its essence is not expensive elaborate facilities. Developments such as hybrid, distributed and sequential simulation offer examples of how simulation can combine 'patient as body' with 'patient as person' at relatively low cost, democratising simulation and exerting traction beyond the clinical sphere. The essence of simulation is a purposeful design, based on an active process of selection from an originary world, abstraction of what is criterial and re-presentation in another setting for a particular purpose or audience. This may be done within

  14. DYNAMIC ANALYSIS OF THE BULK TRITIUM SHIPPING PACKAGE SUBJECTED TO CLOSURE TORQUES AND SEQUENTIAL IMPACTS

    International Nuclear Information System (INIS)

    Wu, T; Paul Blanton, P; Kurt Eberl, K

    2007-01-01

    This paper presents a finite-element technique to simulate the structural responses and to evaluate the cumulative damage of a radioactive material packaging requiring bolt closure-tightening torque and subjected to the scenarios of the Hypothetical Accident Conditions (HAC) defined in the Code of Federal Regulations Title 10 part 71 (10CFR71). Existing finite-element methods for modeling closure stresses from bolt pre-load are not readily adaptable to dynamic analyses. The HAC events are required to occur sequentially per 10CFR71 and thus the evaluation of the cumulative damage is desirable. Generally, each HAC event is analyzed separately and the cumulative damage is partially addressed by superposition. This results in relying on additional physical testing to comply with 10CFR71 requirements for assessment of cumulative damage. The proposed technique utilizes the combination of kinematic constraints, rigid-body motions and structural deformations to overcome some of the difficulties encountered in modeling the effect of cumulative damage. This methodology provides improved numerical solutions in compliance with the 10CFR71 requirements for sequential HAC tests. Analyses were performed for the Bulk Tritium Shipping Package (BTSP) designed by Savannah River National Laboratory to demonstrate the applications of the technique. The methodology proposed simulates the closure bolt torque preload followed by the sequential HAC events, the 30-foot drop and the 30-foot dynamic crush. The analytical results will be compared to the package test data

  15. Campbell and moment measures for finite sequential spatial processes

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette)

    2006-01-01

    textabstractWe define moment and Campbell measures for sequential spatial processes, prove a Campbell-Mecke theorem, and relate the results to their counterparts in the theory of point processes. In particular, we show that any finite sequential spatial process model can be derived as the vector

  16. Reading Remediation Based on Sequential and Simultaneous Processing.

    Science.gov (United States)

    Gunnison, Judy; And Others

    1982-01-01

    The theory postulating a dichotomy between sequential and simultaneous processing is reviewed and its implications for remediating reading problems are reviewed. Research is cited on sequential-simultaneous processing for early and advanced reading. A list of remedial strategies based on the processing dichotomy addresses decoding and lexical…

  17. Induction of simultaneous and sequential malolactic fermentation in durian wine.

    Science.gov (United States)

    Taniasuri, Fransisca; Lee, Pin-Rou; Liu, Shao-Quan

    2016-08-02

    This study represented for the first time the impact of malolactic fermentation (MLF) induced by Oenococcus oeni and its inoculation strategies (simultaneous vs. sequential) on the fermentation performance as well as aroma compound profile of durian wine. There was no negative impact of simultaneous inoculation of O. oeni and Saccharomyces cerevisiae on the growth and fermentation kinetics of S. cerevisiae as compared to sequential fermentation. Simultaneous MLF did not lead to an excessive increase in volatile acidity as compared to sequential MLF. The kinetic changes of organic acids (i.e. malic, lactic, succinic, acetic and α-ketoglutaric acids) varied with simultaneous and sequential MLF relative to yeast alone. MLF, regardless of inoculation mode, resulted in higher production of fermentation-derived volatiles as compared to control (alcoholic fermentation only), including esters, volatile fatty acids, and terpenes, except for higher alcohols. Most indigenous volatile sulphur compounds in durian were decreased to trace levels with little differences among the control, simultaneous and sequential MLF. Among the different wines, the wine with simultaneous MLF had higher concentrations of terpenes and acetate esters while sequential MLF had increased concentrations of medium- and long-chain ethyl esters. Relative to alcoholic fermentation only, both simultaneous and sequential MLF reduced acetaldehyde substantially with sequential MLF being more effective. These findings illustrate that MLF is an effective and novel way of modulating the volatile and aroma compound profile of durian wine. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. A Survey of Multi-Objective Sequential Decision-Making

    NARCIS (Netherlands)

    Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.

    2013-01-01

    Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential

  19. Dynamics-based sequential memory: Winnerless competition of patterns

    International Nuclear Information System (INIS)

    Seliger, Philip; Tsimring, Lev S.; Rabinovich, Mikhail I.

    2003-01-01

    We introduce a biologically motivated dynamical principle of sequential memory which is based on winnerless competition (WLC) of event images. This mechanism is implemented in a two-layer neural model of sequential spatial memory. We present the learning dynamics which leads to the formation of a WLC network. After learning, the system is capable of associative retrieval of prerecorded sequences of patterns
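
    Winnerless competition is commonly formalized with generalized Lotka-Volterra equations whose asymmetric inhibition produces a sequence of transiently winning units; the three-unit example below is a generic illustration of that mechanism under invented connection strengths, not the authors' two-layer spatial-memory network.

```python
# Generic winnerless-competition sketch: three units with asymmetric
# Lotka-Volterra inhibition pass activity around in a fixed order, so the
# "winner" switches sequentially. Connection strengths are illustrative only.
import numpy as np

# rho[i, j]: inhibition of unit i by unit j; the cyclic asymmetry drives switching
rho = np.array([[1.0, 0.5, 2.0],
                [2.0, 1.0, 0.5],
                [0.5, 2.0, 1.0]])

def simulate(a0, dt=0.01, steps=20_000):
    a = np.array(a0, dtype=float)
    trace = np.empty((steps, 3))
    for t in range(steps):
        a += dt * a * (1.0 - rho @ a)     # generalized Lotka-Volterra update (Euler step)
        trace[t] = a
    return trace

trace = simulate([1.0, 0.11, 0.09])
winners = trace.argmax(axis=1)            # which unit is transiently dominant
switches = np.where(np.diff(winners) != 0)[0]
print("sequence of transient winners:", winners[np.concatenate(([0], switches + 1))])
```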

  20. Sequential, progressive, equal-power, reflective beam-splitter arrays

    Science.gov (United States)

    Manhart, Paul K.

    2017-11-01

    The equations to calculate equal-power reflectivity of a sequential series of beam splitters are presented. Non-sequential optical design examples are offered for uniform illumination using diode lasers. Objects created using Boolean operators and Swept Surfaces can reflect light into predefined elevation and azimuth angles. Analysis of the illumination patterns for the array is also presented.
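
    A commonly used lossless-splitter relation for N sequential pick-offs with equal output power is R_k = 1/(N − k + 1); the record does not spell out the paper's exact equations, so the sketch below is offered only as an illustration of this standard form.

```python
# Equal-power reflectivities for a sequential chain of N beam splitters,
# assuming lossless splitters: the k-th splitter reflects 1/(N - k + 1) of the
# power that reaches it, so every tapped output carries P_in / N.
# (Illustrative relation; not necessarily the paper's exact formulation.)

def equal_power_reflectivities(n_splitters):
    return [1.0 / (n_splitters - k + 1) for k in range(1, n_splitters + 1)]

n = 5
reflectivities = equal_power_reflectivities(n)
remaining = 1.0
outputs = []
for r in reflectivities:
    outputs.append(remaining * r)   # power tapped off at this splitter
    remaining *= (1.0 - r)          # power transmitted to the next splitter

print([round(r, 3) for r in reflectivities])  # [0.2, 0.25, 0.333, 0.5, 1.0]
print([round(p, 3) for p in outputs])         # each output = 0.2 of the input power
```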

  1. The sequential price of anarchy for atomic congestion games

    NARCIS (Netherlands)

    de Jong, Jasper; Uetz, Marc Jochen; Liu, Tie-Yan; Qi, Qi; Ye, Yinyu

    2014-01-01

    In situations without central coordination, the price of anarchy relates the quality of any Nash equilibrium to the quality of a global optimum. Instead of assuming that all players choose their actions simultaneously, we consider games where players choose their actions sequentially. The sequential

  2. Quantum Probability Zero-One Law for Sequential Terminal Events

    Science.gov (United States)

    Rehder, Wulf

    1980-07-01

    On the basis of the Jauch-Piron quantum probability calculus a zero-one law for sequential terminal events is proven, and the significance of certain crucial axioms in the quantum probability calculus is discussed. The result shows that the Jauch-Piron set of axioms is appropriate for the non-Boolean algebra of sequential events.

  3. Lineup Composition, Suspect Position, and the Sequential Lineup Advantage

    Science.gov (United States)

    Carlson, Curt A.; Gronlund, Scott D.; Clark, Steven E.

    2008-01-01

    N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate…

  4. Accounting for Heterogeneous Returns in Sequential Schooling Decisions

    NARCIS (Netherlands)

    Zamarro, G.

    2006-01-01

    This paper presents a method for estimating returns to schooling that takes into account that returns may be heterogeneous among agents and that educational decisions are made sequentially. A sequential decision model is interesting because it explicitly considers that the level of education of each

  5. Sequential designs for sensitivity analysis of functional inputs in computer experiments

    International Nuclear Information System (INIS)

    Fruth, J.; Roustant, O.; Kuhnt, S.

    2015-01-01

    Computer experiments are nowadays commonly used to analyze industrial processes aiming at achieving a wanted outcome. Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on the response variable. In this work we focus on sensitivity analysis of a scalar-valued output of a time-consuming computer code depending on scalar and functional input parameters. We investigate a sequential methodology, based on piecewise constant functions and sequential bifurcation, which is both economical and fully interpretable. The new approach is applied to a sheet metal forming problem in three sequential steps, resulting in new insights into the behavior of the forming process over time. - Highlights: • Sensitivity analysis method for functional and scalar inputs is presented. • We focus on the discovery of most influential parts of the functional domain. • We investigate economical sequential methodology based on piecewise constant functions. • Normalized sensitivity indices are introduced and investigated theoretically. • Successful application to sheet metal forming on two functional inputs

  6. Sequential blind identification of underdetermined mixtures using a novel deflation scheme.

    Science.gov (United States)

    Zhang, Mingjian; Yu, Simin; Wei, Gang

    2013-09-01

    In this brief, we consider the problem of blind identification in underdetermined instantaneous mixture cases, where there are more sources than sensors. A new blind identification algorithm, which estimates the mixing matrix in a sequential fashion, is proposed. By using the rank-1 detecting device, blind identification is reformulated as a constrained optimization problem. The identification of one column of the mixing matrix hence reduces to an optimization task for which an efficient iterative algorithm is proposed. The identification of the other columns of the mixing matrix is then carried out by a generalized eigenvalue decomposition-based deflation method. The key merit of the proposed deflation method is that it does not suffer from error accumulation. The proposed sequential blind identification algorithm provides more flexibility and better robustness than its simultaneous counterpart. Comparative simulation results demonstrate the superior performance of the proposed algorithm over the simultaneous blind identification algorithm.

  7. Development and sensitivity analysis of a fully-kinetic model of sequential reductive dechlorination in subsurface

    DEFF Research Database (Denmark)

    Malaguerra, Flavio; Chambon, Julie Claire Claudia; Albrechtsen, Hans-Jørgen

    2010-01-01

    Chlorinated hydrocarbons originating from point sources are amongst the most prevalent contaminants of ground water and often represent a serious threat to groundwater-based drinking water resources. Natural attenuation of contaminant plumes can play a major role in contaminated site management, and natural degradation of chlorinated solvents frequently occurs in the subsurface through sequential reductive dechlorination. However, the occurrence and the performance of natural sequential reductive dechlorination strongly depend on environmental factors such as redox conditions, presence of fermenting organic matter / electron donors, presence of specific biomass, etc. Here we develop a new fully-kinetic biogeochemical reactive model able to simulate chlorinated solvents degradation as well as production and consumption of molecular hydrogen. The model is validated using batch experiment data...

  8. PARTICLE FILTERING WITH SEQUENTIAL PARAMETER LEARNING FOR NONLINEAR BOLD fMRI SIGNALS.

    Science.gov (United States)

    Xia, Jing; Wang, Michelle Yongmei

    Analyzing the blood oxygenation level dependent (BOLD) effect in the functional magnetic resonance imaging (fMRI) is typically based on recent ground-breaking time series analysis techniques. This work represents a significant improvement over existing approaches to system identification using nonlinear hemodynamic models. It is important for three reasons. First, instead of using linearized approximations of the dynamics, we present a nonlinear filtering based on the sequential Monte Carlo method to capture the inherent nonlinearities in the physiological system. Second, we simultaneously estimate the hidden physiological states and the system parameters through particle filtering with sequential parameter learning to fully take advantage of the dynamic information of the BOLD signals. Third, during the unknown static parameter learning, we employ the low-dimensional sufficient statistics for efficiency and avoiding potential degeneration of the parameters. The performance of the proposed method is validated using both the simulated data and real BOLD fMRI data.
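
    As a rough illustration of joint state and parameter tracking with a sequential Monte Carlo filter, the sketch below uses a bootstrap filter with per-particle parameter jitter (a simplification of the sufficient-statistics learning described in the abstract) and a hypothetical one-dimensional nonlinear model in place of the hemodynamic BOLD model.

```python
import numpy as np

rng = np.random.default_rng(0)

def transition(x, theta):
    # Hypothetical nonlinear state dynamics; stands in for the hemodynamic model.
    return theta * x + 0.5 * np.sin(x)

def particle_filter(y, n_particles=2000, obs_std=0.4, proc_std=0.2):
    """Bootstrap particle filter that jointly tracks the hidden state and a
    static parameter theta (one copy per particle, with small jitter to avoid
    sample impoverishment)."""
    x = rng.normal(0.0, 1.0, n_particles)          # state particles
    theta = rng.uniform(0.5, 1.0, n_particles)     # parameter particles
    x_est, theta_est = [], []
    for yt in y:
        # Propagate states through the nonlinear dynamics.
        x = transition(x, theta) + rng.normal(0.0, proc_std, n_particles)
        # Weight by the Gaussian observation likelihood.
        logw = -0.5 * ((yt - x) / obs_std) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Resample states and parameters together, then jitter the parameters.
        idx = rng.choice(n_particles, n_particles, p=w)
        x, theta = x[idx], theta[idx] + rng.normal(0.0, 0.01, n_particles)
        x_est.append(np.average(x))
        theta_est.append(np.average(theta))
    return np.array(x_est), np.array(theta_est)

# Simulate data from the model with theta = 0.8 and estimate it.
true_theta, x_true, ys = 0.8, 0.0, []
for _ in range(200):
    x_true = transition(x_true, true_theta) + rng.normal(0.0, 0.2)
    ys.append(x_true + rng.normal(0.0, 0.4))
states, thetas = particle_filter(np.array(ys))
print("final theta estimate:", thetas[-1])
```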

  9. Anatomy of a defective barrier: sequential glove leak detection in a surgical and dental environment.

    Science.gov (United States)

    Albin, M S; Bunegin, L; Duke, E S; Ritter, R R; Page, C P

    1992-02-01

    a) To determine the frequency of perforations in latex surgical gloves before, during, and after surgical and dental procedures; b) to evaluate the topographical distribution of perforations in latex surgical gloves after surgical and dental procedures; and c) to validate methods of testing for latex surgical glove patency. Multitrial tests under in vitro conditions and a prospective sequential patient study using consecutive testing. An outpatient dental clinic at a university dental school, the operating suite in a medical school affiliated with the Veteran's Hospital, and a biomechanics laboratory. Surgeons, scrub nurses, and dental technicians participating in 50 surgical and 50 dental procedures. We collected 679 latex surgical gloves after surgical procedures and tested them for patency by using a water pressure test. We also employed an electronic glove leak detector before donning, after sequential time intervals, and upon termination of 47 surgical (sequential surgical), 50 dental (sequential dental), and in three orthopedic cases where double gloving was used. The electronic glove leak detector was validated by using electronic point-by-point surface probing, fluorescein dye diffusion, as well as detecting glove punctures made with a 27-gauge needle. The random study indicated a leak rate of 33.0% (224 out of 679) in latex surgical gloves; the sequential surgical study demonstrated patency in 203 out of 347 gloves (58.5%); the sequential dental study showed 34 leaks in the 106 gloves used (32.1%); and with double gloving, the leak rate decreased to 25.0% (13 of 52 gloves tested). While the allowable FDA defect rate for unused latex surgical gloves is 1.5%, we noted defect rates in unused gloves of 5.5% in the sequential surgical, 1.9% in the sequential dental, and 4.0% in our electronic glove leak detector validating study. In the sequential surgical study, 52% of the leaks had occurred by 75 mins, and in the sequential dental study, 75% of the leaks

  10. Sequential double photodetachment of He- in elliptically polarized laser fields

    Science.gov (United States)

    Génévriez, Matthieu; Dunseath, Kevin M.; Terao-Dunseath, Mariko; Urbain, Xavier

    2018-02-01

    Four-photon double detachment of the helium negative ion is investigated experimentally and theoretically for photon energies where the transient helium atom is in the 1s2s ³S or 1s2p ³P° states, which subsequently ionize by absorption of three photons. Ionization is enhanced by intermediate resonances, giving rise to series of peaks in the He+ spectrum, which we study in detail. The He+ yield is measured in the wavelength ranges from 530 to 560 nm and from 685 to 730 nm and for various polarizations of the laser light. Double detachment is treated theoretically as a sequential process, within the framework of R-matrix theory for the first step and effective Hamiltonian theory for the second step. Experimental conditions are accurately modeled, and the measured and simulated yields are in good qualitative and, in some cases, quantitative agreement. Resonances in the double detachment spectra can be attributed to well-defined Rydberg states of the transient atom. The double detachment yield exhibits a strong dependence on the laser polarization which can be related to the magnetic quantum number of the intermediate atomic state. We also investigate the possibility of nonsequential double detachment with a two-color experiment but observe no evidence for it.

  11. Adaptive Control Using Fully Online Sequential-Extreme Learning Machine and a Case Study on Engine Air-Fuel Ratio Regulation

    Directory of Open Access Journals (Sweden)

    Pak Kin Wong

    2014-01-01

    Full Text Available Most adaptive neural control schemes are based on stochastic gradient-descent backpropagation (SGBP), which suffers from the local minima problem. Although the recently proposed regularized online sequential-extreme learning machine (ReOS-ELM) can overcome this issue, it requires a batch of representative initial training data to construct a base model before online learning. The initial data is usually difficult to collect in adaptive control applications. Therefore, this paper proposes an improved version of ReOS-ELM, entitled fully online sequential-extreme learning machine (FOS-ELM). While retaining the advantages of ReOS-ELM, FOS-ELM discards the initial training phase, and hence becomes suitable for adaptive control applications. To demonstrate its effectiveness, FOS-ELM was applied to the adaptive control of engine air-fuel ratio based on a simulated engine model. In addition, the controller parameters were analyzed, and it was found that a large number of hidden nodes with a small regularization parameter leads to the best performance. A comparison between FOS-ELM and SGBP was also conducted. The result indicates that FOS-ELM achieves better tracking and convergence performance than SGBP, since FOS-ELM tends to learn the unknown engine model globally whereas SGBP tends to “forget” what it has learnt. This implies that FOS-ELM is preferable for adaptive control applications.
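
    A minimal sketch of the kind of fully online sequential ELM update described above (my reading of the idea, not the authors' implementation): a fixed random hidden layer whose output weights are adapted chunk by chunk with a ridge-initialised recursive least-squares recursion, so no initial training batch is needed. The toy plant and all hyperparameters are illustrative.

```python
import numpy as np

class FOSELM:
    """Sketch of a fully online sequential ELM regressor: random hidden layer,
    ridge-initialised recursive least-squares update of the output weights."""

    def __init__(self, n_inputs, n_hidden=50, reg=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_inputs, n_hidden))   # fixed random input weights
        self.b = rng.normal(size=n_hidden)               # fixed random biases
        self.beta = np.zeros((n_hidden, 1))              # output weights (learned)
        self.P = np.eye(n_hidden) / reg                  # inverse correlation matrix

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)              # hidden-layer activations

    def partial_fit(self, X, y):
        """Recursive least-squares update on one chunk of data (X: n x d, y: n,)."""
        H = self._hidden(X)
        y = y.reshape(-1, 1)
        # Woodbury-style update of P, then of beta (the standard OS-ELM recursion).
        K = self.P @ H.T @ np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P -= K @ H @ self.P
        self.beta += self.P @ H.T @ (y - H @ self.beta)
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).ravel()

# Online identification of a toy nonlinear plant, chunk by chunk.
rng = np.random.default_rng(1)
model = FOSELM(n_inputs=2, n_hidden=60, reg=1e-2)
for _ in range(200):
    X = rng.uniform(-1, 1, size=(10, 2))
    y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2
    model.partial_fit(X, y)
print(model.predict(np.array([[0.3, -0.5]])))
```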

  12. The effect of lineup member similarity on recognition accuracy in simultaneous and sequential lineups.

    Science.gov (United States)

    Flowe, Heather D; Ebbesen, Ebbe B

    2007-02-01

    Two experiments investigated whether remembering is affected by the similarity of the study face relative to the alternatives in a lineup. In simultaneous and sequential lineups, choice rates and false alarms were larger in low compared to high similarity lineups, indicating criterion placement was affected by lineup similarity structure (Experiment 1). In Experiment 2, foil choices and similarity ranking data for target present lineups were compared to responses made when the target was removed from the lineup (only the 5 foils were presented). The results indicated that although foils were selected more often in target-removed lineups in the simultaneous compared to the sequential condition, responses shifted from the target to one of the foils at equal rates across lineup procedures.

  13. Constrained treatment planning using sequential beam selection

    International Nuclear Information System (INIS)

    Woudstra, E.; Storchi, P.R.M.

    2000-01-01

    In this paper an algorithm is described for automated treatment plan generation. The algorithm aims at delivery of the prescribed dose to the target volume without violation of constraints for target, organs at risk and the surrounding normal tissue. Pre-calculated dose distributions for all candidate orientations are used as input. Treatment beams are selected in a sequential way. A score function designed for beam selection is used for the simultaneous selection of beam orientations and weights. In order to determine the optimum choice for the orientation and the corresponding weight of each new beam, the score function is first redefined to account for the dose distribution of the previously selected beams. Addition of more beams to the plan is stopped when the target dose is reached or when no additional dose can be delivered without violating a constraint. In the latter case the score function is modified by importance factor changes to enforce better sparing of the organ with the limiting constraint and the algorithm is run again. (author)
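
    The greedy loop below is a hedged sketch of the sequential selection idea (pre-calculated unit-weight dose distributions, a score for adding each candidate beam, and a stop when the prescription is met or no beam can be added without violating a constraint). The score function and data layout are simplified stand-ins, not the authors' formulation.

```python
import numpy as np

def select_beams(beam_doses, target_dose, limits, max_beams=9):
    """Greedy sequential beam selection (illustrative): beam_doses[i] is the
    pre-calculated dose per unit weight of candidate orientation i in every
    voxel, target_dose the prescription for the target voxels, and limits the
    maximum allowed dose per voxel (finite for organs at risk and normal
    tissue, +inf for target voxels)."""
    n_beams = beam_doses.shape[0]
    total = np.zeros(beam_doses.shape[1])
    plan = []                                    # selected (beam index, weight) pairs
    target = np.isinf(limits)
    for _ in range(max_beams):
        best = None
        for i in range(n_beams):
            d_oar, d_tgt = beam_doses[i][~target], beam_doses[i][target]
            room = (limits - total)[~target]
            # Largest weight that keeps every constrained voxel within its limit.
            with np.errstate(divide="ignore", invalid="ignore"):
                w_max = np.min(np.where(d_oar > 0, room / d_oar, np.inf))
            deficit = target_dose - total[target].mean()
            w = min(w_max, deficit / max(d_tgt.mean(), 1e-9))
            if w <= 0:
                continue
            score = w * d_tgt.mean()             # crude score: target dose this beam can add
            if best is None or score > best[2]:
                best = (i, w, score)
        if best is None:
            break                                # nothing can be added without a violation
        i, w, _ = best
        plan.append((i, w))
        total += w * beam_doses[i]
        if total[target].mean() >= target_dose - 1e-6:
            break                                # prescription reached
    return plan, total

# Synthetic example: 4 candidate orientations, 20 voxels, the first 5 are target.
rng = np.random.default_rng(2)
doses = rng.uniform(0.0, 1.0, size=(4, 20))
lims = np.full(20, 30.0)
lims[:5] = np.inf
plan, achieved = select_beams(doses, target_dose=60.0, limits=lims)
print(plan, achieved[:5].mean())
```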

  14. Phenomenology of the next sequential lepton

    International Nuclear Information System (INIS)

    Rizzo, T.G.

    1980-01-01

    We consider the phenomenology of a sequential, charged lepton in the mass range 6-13 GeV. We find the semileptonic branching ratio of such a lepton to be approx. 13%; the dominant two-body modes are found to include the decay L → ν_L F* with a branching ratio approx. 6%. In this analysis we assume that the mass of the lepton under consideration is lighter than the t quark such that decays such as L → ν_L t̄q, where q = (d, s, or b), are kinematically forbidden. We also find that decays such as L → ν_L B* (c̄b) can also be as large as approx. 6% depending on the mixing angles; the lifetime of such a lepton is found to be approx. 2.6 x 10^-12 M_L^-5 sec, where M_L is in GeV

  15. The Origin of Sequential Chromospheric Brightenings

    Science.gov (United States)

    Kirk, M. S.; Balasubramaniam, K. S.; Jackiewicz, J.; Gilbert, H. R.

    2017-06-01

    Sequential chromospheric brightenings (SCBs) are often observed in the immediate vicinity of erupting flares and are associated with coronal mass ejections. Since their initial discovery in 2005, there have been several subsequent investigations of SCBs. These studies have used differing detection and analysis techniques, making it difficult to compare results between studies. This work employs the automated detection algorithm of Kirk et al. (Solar Phys. 283, 97, 2013) to extract the physical characteristics of SCBs in 11 flares of varying size and intensity. We demonstrate that the magnetic substructure within the SCB appears to have a significantly smaller area than the corresponding Hα emission. We conclude that SCBs originate in the lower corona around 0.1 R⊙ above the photosphere, propagate away from the flare center at speeds of 35-85 km s⁻¹, and have peak photosphere magnetic intensities of 148 ± 2.9 G. In light of these measurements, we infer SCBs to be distinctive chromospheric signatures of erupting coronal mass ejections.

  16. Sequential decoders for large MIMO systems

    KAUST Repository

    Ali, Konpal S.

    2014-05-01

    Due to their ability to provide high data rates, multiple-input multiple-output (MIMO) systems have become increasingly popular. Decoding of these systems with acceptable error performance is computationally very demanding. In this paper, we employ the Sequential Decoder using the Fano Algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity and vice versa for higher bias values. Numerical results show that moderate bias values yield a good performance-complexity trade-off. We also attempt to bound the error by bounding the bias, using the minimum distance of a lattice. The variations in complexity with SNR have an interesting trend that shows room for considerable improvement. Our work is compared against linear decoders (LDs) aided with Element-based Lattice Reduction (ELR) and Complex Lenstra-Lenstra-Lovasz (CLLL) reduction. © 2014 IFIP.

  17. Social Influences in Sequential Decision Making.

    Directory of Open Access Journals (Sweden)

    Markus Schöbel

    Full Text Available People often make decisions in a social environment. The present work examines social influence on people's decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others' authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority's decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions.

  18. Social Influences in Sequential Decision Making

    Science.gov (United States)

    Schöbel, Markus; Rieskamp, Jörg; Huber, Rafael

    2016-01-01

    People often make decisions in a social environment. The present work examines social influence on people’s decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others’ authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority's decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions. PMID:26784448

  19. Sequential acquisition of mutations in myelodysplastic syndromes.

    Science.gov (United States)

    Makishima, Hideki

    2017-01-01

    Recent progress in next-generation sequencing technologies allows us to discover frequent mutations throughout the coding regions of myelodysplastic syndromes (MDS), potentially providing us with virtually a complete spectrum of driver mutations in this disease. As shown by many study groups these days, such driver mutations are acquired in a gene-specific fashion. For instance, DDX41 mutations are observed in germline cells long before MDS presentation. In blood samples from healthy elderly individuals, somatic DNMT3A and TET2 mutations are detected as age-related clonal hematopoiesis and are believed to be a risk factor for hematological neoplasms. In MDS, mutations of genes such as NRAS and FLT3, designated as Type-1 genes, may be significantly associated with leukemic evolution. Another type (Type-2) of genes, including RUNX1 and GATA2, are related to progression from low-risk to high-risk MDS. Overall, various driver mutations are sequentially acquired in MDS, at a specific time, in either germline cells, normal hematopoietic cells, or clonal MDS cells.

  20. Building a Lego wall: Sequential action selection.

    Science.gov (United States)

    Arnold, Amy; Wing, Alan M; Rotshtein, Pia

    2017-05-01

    The present study draws together two distinct lines of enquiry into the selection and control of sequential action: motor sequence production and action selection in everyday tasks. Participants were asked to build 2 different Lego walls. The walls were designed to have hierarchical structures with shared and dissociated colors and spatial components. Participants built 1 wall at a time, under low and high load cognitive states. Selection times for correctly completed trials were measured using 3-dimensional motion tracking. The paradigm enabled precise measurement of the timing of actions, while using real objects to create an end product. The experiment demonstrated that action selection was slowed at decision boundary points, relative to boundaries where no between-wall decision was required. Decision points also affected selection time prior to the actual selection window. Dual-task conditions increased selection errors. Errors mostly occurred at boundaries between chunks and especially when these required decisions. The data support hierarchical control of sequenced behavior. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Sequential experimental design based generalised ANOVA

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    2016-07-15

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure of three structural mechanics problems. It is observed that the proposed approach yields an accurate and computationally efficient estimate of the failure probability.

  2. Social Influences in Sequential Decision Making.

    Science.gov (United States)

    Schöbel, Markus; Rieskamp, Jörg; Huber, Rafael

    2016-01-01

    People often make decisions in a social environment. The present work examines social influence on people's decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others' authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority's decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions.

  3. Impact of disguise on identification decision and confidence with simultaneous and sequential lineups

    OpenAIRE

    Mansour, Jamal K; Beaudry, J L; Bertrand, M I; Kalmet, N; Melsom, E; Lindsay, R C L

    2012-01-01

    Prior research indicates that disguise negatively affects lineup identifications, but the mechanisms by which disguise works have not been explored, and different disguises have not been compared. In two experiments (Ns = 87 and 91) we manipulated degree of coverage by two different types of disguise: a stocking mask or sunglasses and toque (i.e., knitted hat). Participants viewed mock-crime videos followed by simultaneous or sequential lineups. Disguise and lineup type did not interact. In s...

  4. Parameter sampling capabilities of sequential and simultaneous data assimilation: I. Analytical comparison

    International Nuclear Information System (INIS)

    Fossum, Kristian; Mannseth, Trond

    2014-01-01

    We assess the parameter sampling capabilities of some Bayesian, ensemble-based, joint state-parameter (JS) estimation methods. The forward model is assumed to be non-chaotic and have nonlinear components, and the emphasis is on results obtained for the parameters in the state-parameter vector. A variety of approximate sampling methods exist, and a number of numerical comparisons between such methods have been performed. Often, more than one of the defining characteristics vary from one method to another, so it can be difficult to point out which characteristic of the more successful method in such a comparison was decisive. In this study, we single out one defining characteristic for comparison; whether or not data are assimilated sequentially or simultaneously. The current paper is concerned with analytical investigations into this issue. We carefully select one sequential and one simultaneous JS method for the comparison. We also design a corresponding pair of pure parameter estimation methods, and we show how the JS methods and the parameter estimation methods are pairwise related. It is shown that the sequential and the simultaneous parameter estimation methods are equivalent for one particular combination of observations with different degrees of nonlinearity. Strong indications are presented for why one may expect the sequential parameter estimation method to outperform the simultaneous parameter estimation method for all other combinations of observations. Finally, the conditions for when similar relations can be expected to hold between the corresponding JS methods are discussed. A companion paper, part II (Fossum and Mannseth 2014 Inverse Problems 30 114003), is concerned with statistical analysis of results from a range of numerical experiments involving sequential and simultaneous JS estimation, where the design of the numerical investigation is motivated by our findings in the current paper. (paper)

  5. Comparison of Sequential Regimen and Standard Therapy for Helicobacter pylori Eradication in Patients with Dyspepsia

    Directory of Open Access Journals (Sweden)

    Gh. Roshanaei

    2013-10-01

    Full Text Available Introduction & Objective: Some studies have reported successful eradication rates using sequential therapy, but more recent studies performed in Asia did not find a similar benefit. Due to inconsistencies in previous comparisons of standard triple-drug therapy and the sequential regimen, we decided to compare these treatments in Persian patients. Materials & Methods: This study is a randomized clinical trial performed in one hundred and forty patients suffering from dyspepsia with an indication for H. pylori eradication between November 2010 and March 2012. Patients were randomized into two equal groups. The patients in the first group (standard) were treated with omeprazole capsule 20 mg BID, amoxicillin capsule 1 g BID, and clarithromycin tablet 500 mg BID for 14 days, while the patients in the second group (sequential) were treated with omeprazole capsule 20 mg for 10 days, amoxicillin capsule 1 g BID for 5 days, then clarithromycin tablet 500 mg and tinidazole tablet 500 mg BID for the other 5 days. 4-6 weeks after the treatment, we compared the eradication of H. pylori between the two groups by the C14 urea breath test. Results: H. pylori infection was successfully cured in 57/70 (81.43%) with the 10-day sequential therapy and in 60/70 (85.75%) with the standard fourteen-day triple therapy, respectively. Conclusion: We detected no significant differences between the 10-day sequential eradication therapy for H. pylori and the 14-day standard triple treatment among the patients. (Sci J Hamadan Univ Med Sci 2013; 20(3): 184-193

  6. Sequential egocentric navigation and reliance on landmarks in Williams syndrome and typical development

    Directory of Open Access Journals (Sweden)

    Hannah eBroadbent

    2015-02-01

    Full Text Available Visuospatial difficulties in Williams syndrome (WS) are well documented. Recently, research has shown that spatial difficulties in WS extend to large-scale space, particularly in coding space using an allocentric frame of reference. Typically developing (TD) children and adults predominantly rely on the use of a sequential egocentric strategy to navigate a large-scale route (retracing a sequence of left-right body turns). The aim of this study was to examine whether individuals with WS are able to employ a sequential egocentric strategy to guide learning and the retracing of a route. Forty-eight TD children, aged 5, 7 and 9 years, and 18 participants with WS were examined on their ability to learn and retrace routes in two (6-turn) virtual environment mazes (with and without landmarks). The ability to successfully retrace a route following the removal of landmarks (use of sequential egocentric coding) was also examined. Although in line with TD 5-year-olds when learning a route with landmarks, individuals with WS showed significantly greater detriment when these landmarks were removed, relative to all TD groups. Moreover, the WS group made significantly more errors than all TD groups when learning a route that never contained landmarks. On a perceptual view-matching task, results revealed a high level of performance across groups, indicative of an ability to use this visual information to potentially aid navigation. These findings suggest that individuals with WS rely on landmarks to a greater extent than TD children, both for learning a route and for retracing a recently learned route. TD children, but not individuals with WS, were able to fall back on the use of a sequential egocentric strategy to navigate when landmarks were not present. Only TD children therefore coded sequential route information simultaneously with landmark information. The results are discussed in relation to known atypical cortical development and perceptual-matching abilities

  7. Gstat: a program for geostatistical modelling, prediction and simulation

    Science.gov (United States)

    Pebesma, Edzer J.; Wesseling, Cees G.

    1998-01-01

    Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), that uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ascii and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.
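
    To make the indicator-simulation option concrete, here is a toy one-dimensional sequential indicator simulation for a single threshold, written as an illustration of the algorithm rather than of gstat's command language: nodes are visited along a random path, the local probability of the indicator being 1 is estimated by simple kriging of the indicator data, and a value is drawn and added to the conditioning set. The covariance model, well data and global proportion are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def cov(h, sill=0.25, rng_a=10.0):
    """Exponential indicator covariance model (sill 0.25 = max variance of a 0/1 variable)."""
    return sill * np.exp(-np.abs(h) / rng_a)

def sis_1d(grid_x, data_x, data_ind, global_prop, max_neigh=12):
    """Toy 1-D sequential indicator simulation for a single threshold."""
    known_x = list(data_x)
    known_i = list(data_ind)
    sim = np.full(len(grid_x), -1, dtype=int)
    for j in rng.permutation(len(grid_x)):        # random visiting path
        x0 = grid_x[j]
        # Use the closest conditioning points (data + previously simulated nodes).
        idx = np.argsort(np.abs(np.array(known_x) - x0))[:max_neigh]
        xs = np.array(known_x)[idx]
        ys = np.array(known_i)[idx]
        C = cov(xs[:, None] - xs[None, :]) + 1e-9 * np.eye(len(xs))
        c0 = cov(xs - x0)
        w = np.linalg.solve(C, c0)                # simple kriging weights
        p = global_prop + w @ (ys - global_prop)  # estimated Prob[I(x0) = 1]
        p = min(max(p, 0.0), 1.0)                 # order-relation correction
        sim[j] = int(rng.random() < p)            # draw from the local ccdf
        known_x.append(x0)
        known_i.append(sim[j])
    return sim

grid = np.arange(0.0, 100.0, 1.0)
wells_x = np.array([5.0, 40.0, 80.0])
wells_i = np.array([1, 0, 1])                     # indicator data, e.g. sand (1) vs shale (0)
print(sis_1d(grid, wells_x, wells_i, global_prop=0.5))
```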

  8. Comparison of ablation centration after bilateral sequential versus simultaneous LASIK.

    Science.gov (United States)

    Lin, Jane-Ming; Tsai, Yi-Yu

    2005-01-01

    To compare ablation centration after bilateral sequential and simultaneous myopic LASIK. A retrospective randomized case series was performed of 670 eyes of 335 consecutive patients who had undergone either bilateral sequential (group 1) or simultaneous (group 2) myopic LASIK between July 2000 and July 2001 at the China Medical University Hospital, Taichung, Taiwan. The ablation centrations of the first and second eyes in the two groups were compared 3 months postoperatively. Of 670 eyes, 274 eyes (137 patients) comprised the sequential group and 396 eyes (198 patients) comprised the simultaneous group. Three months post-operatively, 220 eyes of 110 patients (80%) in the sequential group and 236 eyes of 118 patients (60%) in the simultaneous group provided topographic data for centration analysis. For the first eyes, mean decentration was 0.39 +/- 0.26 mm in the sequential group and 0.41 +/- 0.19 mm in the simultaneous group (P = .30). For the second eyes, mean decentration was 0.28 +/- 0.23 mm in the sequential group and 0.30 +/- 0.21 mm in the simultaneous group (P = .36). Decentration in the second eyes significantly improved in both groups (group 1, P = .02; group 2, P sequential group and 0.32 +/- 0.18 mm in the simultaneous group (P = .33). The difference of ablation center angles between the first and second eyes was 43.2 sequential group and 45.1 +/- 50.8 degrees in the simultaneous group (P = .42). Simultaneous bilateral LASIK is comparable to sequential surgery in ablation centration.

  9. The sequential trauma score - a new instrument for the sequential mortality prediction in major trauma*

    Directory of Open Access Journals (Sweden)

    Huber-Wagner S

    2010-05-01

    Full Text Available Abstract Background There are several well established scores for the assessment of the prognosis of major trauma patients that all have in common that they can be calculated at the earliest during intensive care unit stay. We intended to develop a sequential trauma score (STS that allows prognosis at several early stages based on the information that is available at a particular time. Study design In a retrospective, multicenter study using data derived from the Trauma Registry of the German Trauma Society (2002-2006, we identified the most relevant prognostic factors from the patients basic data (P, prehospital phase (A, early (B1, and late (B2 trauma room phase. Univariate and logistic regression models as well as score quality criteria and the explanatory power have been calculated. Results A total of 2,354 patients with complete data were identified. From the patients basic data (P, logistic regression showed that age was a significant predictor of survival (AUCmodel p, area under the curve = 0.63. Logistic regression of the prehospital data (A showed that blood pressure, pulse rate, Glasgow coma scale (GCS, and anisocoria were significant predictors (AUCmodel A = 0.76; AUCmodel P + A = 0.82. Logistic regression of the early trauma room phase (B1 showed that peripheral oxygen saturation, GCS, anisocoria, base excess, and thromboplastin time to be significant predictors of survival (AUCmodel B1 = 0.78; AUCmodel P +A + B1 = 0.85. Multivariate analysis of the late trauma room phase (B2 detected cardiac massage, abbreviated injury score (AIS of the head ≥ 3, the maximum AIS, the need for transfusion or massive blood transfusion, to be the most important predictors (AUCmodel B2 = 0.84; AUCfinal model P + A + B1 + B2 = 0.90. The explanatory power - a tool for the assessment of the relative impact of each segment to mortality - is 25% for P, 7% for A, 17% for B1 and 51% for B2. A spreadsheet for the easy calculation of the sequential trauma

  10. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  11. Program For Parallel Discrete-Event Simulation

    Science.gov (United States)

    Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.

    1991-01-01

    User does not have to add any special logic to aid in synchronization. Time Warp Operating System (TWOS) computer program is special-purpose operating system designed to support parallel discrete-event simulation. Complete implementation of Time Warp mechanism. Supports only simulations and other computations designed for virtual time. Time Warp Simulator (TWSIM) subdirectory contains sequential simulation engine interface-compatible with TWOS. TWOS and TWSIM written in, and support simulations in, C programming language.
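
    For contrast with the optimistic Time Warp mechanism, a conventional sequential discrete-event engine of the kind TWSIM provides can be sketched in a few lines: events are kept in a priority queue and processed strictly in virtual-time order. This is a generic illustration, not TWOS/TWSIM code.

```python
import heapq

class SequentialDES:
    """Minimal sequential discrete-event engine: events are processed strictly
    in virtual-time order from a priority queue."""

    def __init__(self):
        self.clock = 0.0
        self._queue = []        # heap of (time, sequence number, handler, payload)
        self._seq = 0

    def schedule(self, delay, handler, payload=None):
        heapq.heappush(self._queue, (self.clock + delay, self._seq, handler, payload))
        self._seq += 1

    def run(self, until=float("inf")):
        while self._queue and self._queue[0][0] <= until:
            self.clock, _, handler, payload = heapq.heappop(self._queue)
            handler(self, payload)

# Example: jobs arrive every 2 time units and each departs 3 time units later.
def arrival(sim, n):
    print(f"t={sim.clock:4.1f}  job {n} arrives")
    sim.schedule(3.0, departure, n)
    if n < 4:
        sim.schedule(2.0, arrival, n + 1)

def departure(sim, n):
    print(f"t={sim.clock:4.1f}  job {n} departs")

sim = SequentialDES()
sim.schedule(0.0, arrival, 1)
sim.run(until=20.0)
```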

  12. Sequential formation of subgroups in OB associations

    International Nuclear Information System (INIS)

    Elmegreen, B.G.; Lada, C.J.

    1977-01-01

    We reconsider the structure and formation of OB associations in view of recent radio and infrared observations of the adjacent molecular clouds. As a result of this reexamination, we propose that OB subgroups are formed in a step-by-step process which involves the propagation of ionization (I) and shock (S) fronts through a molecular cloud complex. OB stars formed at the edge of a molecular cloud drive these I-S fronts into the cloud. A layer of dense neutral material accumulates between the I and S fronts and eventually becomes gravitationally unstable. This process is analyzed in detail. Several arguments concerning the temperature and mass of this layer suggest that a new OB subgroup will form. After approximately one-half million years, these stars will emerge from and disrupt the star-forming layer. A new shock will be driven into the remaining molecular cloud and will initiate another cycle of star formation. Several observed properties of OB associations are shown to follow from a sequential star-forming mechanism. These include the spatial separation and systematic differences in age of OB subgroups in a given association, the regularity of subgroup masses, the alignment of subgroups along the galactic plane, and their physical expansion. Detailed observations of ionization fronts, masers, IR sources, and molecular clouds are also in agreement with this model. Finally, this mechanism provides a means of dissipating a molecular cloud and exposing less massive stars (e.g., T Tauri stars) which may have formed ahead of the shock as part of the original cloud collapsed and fragmented

  13. District heating in sequential energy supply

    International Nuclear Information System (INIS)

    Persson, Urban; Werner, Sven

    2012-01-01

    Highlights: ► European excess heat recovery and utilisation by district heat distribution. ► Heat recovery in district heating systems – a structural energy efficiency measure. ► Introduction of new theoretical concepts to express excess heat recovery. ► Fourfold potential for excess heat utilisation in EU27 compared to current levels. ► Large scale excess heat recovery – a collaborative challenge for future Europe. -- Abstract: Increased recovery of excess heat from thermal power generation and industrial processes has great potential to reduce primary energy demands in EU27. In this study, current excess heat utilisation levels by means of district heat distribution are assessed and expressed by concepts such as recovery efficiency, heat recovery rate, and heat utilisation rate. For two chosen excess heat activities, current average EU27 heat recovery levels are compared to currently best Member State practices, whereby future potentials of European excess heat recovery and utilisation are estimated. The principle of sequential energy supply is elaborated to capture the conceptual idea of excess heat recovery in district heating systems as a structural and organisational energy efficiency measure. The general conditions discussed concerning expansion of heat recovery into district heating systems include infrastructure investments in district heating networks, collaboration agreements, maintained value chains, policy support, world market energy prices, allocation of synergy benefits, and local initiatives. The main conclusion from this study is that a future fourfold increase of current EU27 excess heat utilisation by means of district heat distribution to residential and service sectors is conceived as plausible if applying best Member State practice. This estimation is higher than the threefold increase with respect to direct feasible distribution costs estimated by the same authors in a previous study. Hence, no direct barriers appear with

  14. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
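
    As a small, runnable illustration of the problem class (a nonlinear objective with mixed nonlinear equality and inequality constraints), the snippet below solves a toy instance with SciPy's sequential least-squares QP routine; it stands in for, and is much simpler than, the large-scale reduced-Hessian SQP algorithm of the report. The objective and constraints are made up for the example.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Toy smooth nonlinear objective.
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

constraints = [
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3.0},               # x0 + x1 = 3
    {"type": "ineq", "fun": lambda x: x[0] - 0.5},                      # x0 >= 0.5
    {"type": "ineq", "fun": lambda x: 9.0 - x[0] ** 2 - x[1] ** 2},     # inside a circle of radius 3
]

res = minimize(objective, x0=np.array([0.0, 0.0]), method="SLSQP",
               constraints=constraints)
print(res.x, res.fun)   # roughly [0.75, 2.25] with objective 0.125
```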

  15. An Efficient System Based On Closed Sequential Patterns for Web Recommendations

    OpenAIRE

    Utpala Niranjan; R.B.V. Subramanyam; V-Khana

    2010-01-01

    Sequential pattern mining has, since its introduction, received considerable attention among researchers, with broad applications. Sequential pattern algorithms generally face problems when mining long sequential patterns or when using a very low support threshold. One possible solution to such problems is to mine closed sequential patterns, which are a condensed representation of sequential patterns. Recently, several researchers have utilized sequential pattern discovery for d...

  16. Adrenal vein sampling in primary aldosteronism: concordance of simultaneous vs sequential sampling.

    Science.gov (United States)

    Almarzooqi, Mohamed-Karji; Chagnon, Miguel; Soulez, Gilles; Giroux, Marie-France; Gilbert, Patrick; Oliva, Vincent L; Perreault, Pierre; Bouchard, Louis; Bourdeau, Isabelle; Lacroix, André; Therasse, Eric

    2017-02-01

    Many investigators believe that basal adrenal venous sampling (AVS) should be done simultaneously, whereas others opt for sequential AVS for simplicity and reduced cost. This study aimed to evaluate the concordance of sequential and simultaneous AVS methods. Between 1989 and 2015, bilateral simultaneous sets of basal AVS were obtained twice within 5 min, in 188 consecutive patients (59 women and 129 men; mean age: 53.4 years). Selectivity was defined by adrenal-to-peripheral cortisol ratio ≥2, and lateralization was defined as an adrenal aldosterone-to-cortisol ratio ≥2 times that of the contralateral side. Sequential AVS was simulated using right sampling at -5 min (t = -5) and left sampling at 0 min (t = 0). There was no significant difference in mean selectivity ratio (P = 0.12 and P = 0.42 for the right and left sides respectively) and in mean lateralization ratio (P = 0.93) between t = -5 and t = 0. Kappa for selectivity between 2 simultaneous AVS was 0.71 (95% CI: 0.60-0.82), whereas it was 0.84 (95% CI: 0.76-0.92) and 0.85 (95% CI: 0.77-0.93) between sequential and simultaneous AVS at -5 min and at 0 min, respectively. Kappa for lateralization between 2 simultaneous AVS was 0.84 (95% CI: 0.75-0.93), whereas it was 0.86 (95% CI: 0.78-0.94) and 0.80 (95% CI: 0.71-0.90) between sequential AVS and simultaneous AVS at -5 min and at 0 min, respectively. Concordance between simultaneous and sequential AVS was not different from that between 2 repeated simultaneous AVS in the same patient. Therefore, a better diagnostic performance is not a good argument to select the AVS method. © 2017 European Society of Endocrinology.
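
    The selectivity and lateralization criteria quoted above reduce to two simple ratios; the helper below computes them for a single, entirely hypothetical set of hormone values, just to make the definitions concrete.

```python
def avs_ratios(cortisol_R, cortisol_L, cortisol_P, aldo_R, aldo_L):
    """Adrenal venous sampling indices as defined in the study:
    selectivity = adrenal-to-peripheral cortisol ratio (>= 2 required on each side);
    lateralization = higher aldosterone/cortisol ratio divided by the contralateral
    one (>= 2 suggests a unilateral source). Input values are illustrative only."""
    sel_R = cortisol_R / cortisol_P
    sel_L = cortisol_L / cortisol_P
    ac_R = aldo_R / cortisol_R
    ac_L = aldo_L / cortisol_L
    high, low = max(ac_R, ac_L), min(ac_R, ac_L)
    return {"selectivity_R": sel_R, "selectivity_L": sel_L,
            "lateralization": high / low}

# Hypothetical numbers (not patient data), same units on both sides.
print(avs_ratios(cortisol_R=520, cortisol_L=480, cortisol_P=20,
                 aldo_R=9500, aldo_L=700))
```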

  17. Lithofacies and associated reservoir properties co-simulations constraint by seismic data; Cosimulations de lithofacies et de proprietes reservoirs associees contraintes par les donnees sismiques

    Energy Technology Data Exchange (ETDEWEB)

    Fichtl, P.

    1998-01-19

    Integration of data from different sources and of different nature leads to more accurate reservoir models, useful for controlling fluid flow and assessing final uncertainties. In this framework, this thesis presents a new technique for co-simulating in 3D two high resolution properties - one categorical, one continuous - conditional on well information and under the constraint of seismic data. This technique can be applied to simulate lithofacies and related reservoir properties like acoustic impedances or porosities. The proposed algorithm combines a non-parametric approach for the categorical variable and a parametric approach for the continuous variable through a sequential co-simulation. The co-simulation process is divided into two steps: in the first step, the lithofacies is co-simulated with the seismic information by a sequential indicator co-simulation with co-kriging and, in the second step, the reservoir property of interest is simulated from the previously co-simulated lithofacies using sequential Gaussian (co-)simulation or P-field simulation. A validation study on a synthetic but realistic model shows that this technique provides alternative models of lithofacies and associated high resolution acoustic impedances consistent with the seismic data. The seismic information constraining the co-simulations contributes to reducing the uncertainties in the lithofacies distribution at the reservoir level. In some cases, a Markov co-regionalization model can be used to simplify the inference and modelling of the cross-covariances; finally, the co-simulation algorithm was applied to a 3D real case study with the objective of jointly modelling lithofacies and porosity in a fluvial channel reservoir. (author) 88 refs.

  18. Sequential ensemble-based optimal design for parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
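
    The estimation engine underlying the design method is the ensemble Kalman filter; a generic stochastic EnKF analysis step for parameter estimation can be sketched as follows. This is a simplified illustration with a made-up two-parameter forward model, not the authors' code or test cases.

```python
import numpy as np

rng = np.random.default_rng(4)

def enkf_update(params, forward, y_obs, obs_std):
    """One stochastic EnKF analysis step for parameter estimation.
    params: ensemble matrix (n_ens x n_par); forward: maps a parameter vector
    to predicted observations; y_obs: observed data vector."""
    n_ens = params.shape[0]
    preds = np.array([forward(p) for p in params])          # n_ens x n_obs
    P_anom = params - params.mean(axis=0)
    Y_anom = preds - preds.mean(axis=0)
    C_py = P_anom.T @ Y_anom / (n_ens - 1)                  # param-prediction covariance
    C_yy = Y_anom.T @ Y_anom / (n_ens - 1) + obs_std**2 * np.eye(len(y_obs))
    K = C_py @ np.linalg.inv(C_yy)                          # Kalman gain
    # Perturbed observations, one set per ensemble member.
    y_pert = y_obs + rng.normal(0.0, obs_std, size=(n_ens, len(y_obs)))
    return params + (y_pert - preds) @ K.T

# Toy example: recover two parameters of a model that is linear in its parameters.
true = np.array([1.5, -0.7])
t = np.linspace(0, 1, 20)
fwd = lambda p: p[0] * t + p[1] * t**2
data = fwd(true) + rng.normal(0.0, 0.05, t.size)

ens = rng.normal(0.0, 1.0, size=(200, 2))                   # prior parameter ensemble
ens = enkf_update(ens, fwd, data, obs_std=0.05)
print("posterior mean:", ens.mean(axis=0))                  # close to [1.5, -0.7]
```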

  19. Adaptive Online Sequential ELM for Concept Drift Tackling

    Directory of Open Access Journals (Sweden)

    Arif Budiman

    2016-01-01

    Full Text Available A machine learning method needs to adapt to changes in the environment over time; such changes are known as concept drift. In this paper, we propose a concept drift tackling method as an enhancement of the Online Sequential Extreme Learning Machine (OS-ELM) and Constructive Enhancement OS-ELM (CEOS-ELM) by adding adaptive capability for classification and regression problems. The scheme is named adaptive OS-ELM (AOS-ELM). It is a single-classifier scheme that works well to handle real drift, virtual drift, and hybrid drift. The AOS-ELM also works well for sudden drift and recurrent context change types. The scheme is a simple unified method implemented in a few lines of code. We evaluated AOS-ELM on regression and classification problems using concept drift public data sets (SEA and STAGGER) and other public data sets such as MNIST, USPS, and IDS. Experiments show that our method gives a higher kappa value compared to a multiclassifier ELM ensemble. Even though AOS-ELM in practice does not need an increase in hidden nodes, we address some issues related to increasing the hidden nodes, such as error condition and rank values. We propose taking the rank of the pseudoinverse matrix as an indicator parameter to detect the “underfitting” condition.

  20. The role of sequential chemoradiation for local advanced oropharyngeal carcinoma

    International Nuclear Information System (INIS)

    Masterson, Liam; Tanweer, Faiz

    2013-01-01

    This study aims to assess survival, prognostic indicators, and pattern of failure for advanced oropharyngeal cancer treated by induction chemotherapy followed by concomitant chemoradiation (sequential CRT). A retrospective review of 80 consecutive patients who underwent chemoradiation [doublet cisplatin and 5-fluorouracil (PF)] for locally advanced oropharyngeal carcinoma at a tertiary center from March 2003 to July 2008 is reported. Seven studies utilizing a similar protocol were reviewed, and all outcomes are collated. At a median follow-up of 32 months, the 3-year overall survival was 75%. Tumor size (p<0.001), age at presentation (p<0.002), and failure to complete the full course of induction chemotherapy (p<0.01) were all found to be significant factors affecting survival. Induction chemotherapy followed by concomitant chemoradiation utilizing doublet PF is an effective treatment for locally advanced oropharyngeal carcinoma. At present, the addition of a taxane to the PF regimen cannot be assumed to provide benefit until further evidence emerges from a representative controlled trial. (author)

  1. Further comments on the sequential probability ratio testing methods

    Energy Technology Data Exchange (ETDEWEB)

    Kulacsy, K. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics

    1997-05-23

    The Bayesian method for belief updating proposed in Racz (1996) is examined. The interpretation of the belief function introduced therein is found, and the method is compared to the classical binary Sequential Probability Ratio Testing method (SPRT). (author).
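
    For reference, the classical binary SPRT that the note compares against can be written in a few lines: the cumulative log-likelihood ratio of the observations is tracked until it crosses one of Wald's two thresholds. The Bernoulli rates, error levels and data stream below are illustrative.

```python
import math
import random

def sprt(samples, p0=0.1, p1=0.3, alpha=0.05, beta=0.05):
    """Classical binary SPRT for Bernoulli observations: accept H0 (p = p0) or
    H1 (p = p1) as soon as the cumulative log-likelihood ratio crosses the
    Wald thresholds; returns the decision and the number of samples used."""
    a = math.log(beta / (1 - alpha))        # lower threshold (accept H0)
    b = math.log((1 - beta) / alpha)        # upper threshold (accept H1)
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr <= a:
            return "accept H0", n
        if llr >= b:
            return "accept H1", n
    return "undecided", len(samples)

random.seed(0)
stream = (random.random() < 0.28 for _ in range(10_000))   # true rate close to p1
print(sprt(stream))
```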

  2. Sequential lineups: shift in criterion or decision strategy?

    Science.gov (United States)

    Gronlund, Scott D

    2004-04-01

    R. C. L. Lindsay and G. L. Wells (1985) argued that a sequential lineup enhanced discriminability because it elicited use of an absolute decision strategy. E. B. Ebbesen and H. D. Flowe (2002) argued that a sequential lineup led witnesses to adopt a more conservative response criterion, thereby affecting bias, not discriminability. Height was encoded as absolute (e.g., 6 ft [1.83 m] tall) or relative (e.g., taller than). If a sequential lineup elicited an absolute decision strategy, the principle of transfer-appropriate processing predicted that performance should be best when height was encoded absolutely. Conversely, if a simultaneous lineup elicited a relative decision strategy, performance should be best when height was encoded relatively. The predicted interaction was observed, providing direct evidence for the decision strategies explanation of what happens when witnesses view a sequential lineup.

  3. Relations between the simultaneous and sequential transfer of two nucleons

    International Nuclear Information System (INIS)

    Satchler, G.R.

    1982-01-01

    The author discusses the perturbative treatment of simultaneous and sequential two-nucleon transfer reactions with special regard to the DWBA. As examples, the (t,p), (p,t), and (α,d) reactions are considered. (HSI)

  4. Retrieval of sea surface velocities using sequential Ocean Colour ...

    Indian Academy of Sciences (India)


    ...suspended sediment dispersion patterns in two sequential time-lapsed images. ... surface advective velocities consists essentially of identifying the ... matrix is time consuming, a significant reduction ... Chauhan, P. 2002, Personal Communication.

  5. Process tomography via sequential measurements on a single quantum system

    CSIR Research Space (South Africa)

    Bassa, H

    2015-09-01

    Full Text Available The authors utilize a discrete (sequential) measurement protocol to investigate quantum process tomography of a single two-level quantum system, with an unknown initial state, undergoing Rabi oscillations. The ignorance of the dynamical parameters...

  6. Generalized infimum and sequential product of quantum effects

    International Nuclear Information System (INIS)

    Li Yuan; Sun Xiuhong; Chen Zhengli

    2007-01-01

    The quantum effects for a physical system can be described by the set E(H) of positive operators on a complex Hilbert space H that are bounded above by the identity operator I. For A, B ∈ E(H), the operation of sequential product A∘B = A^{1/2}BA^{1/2} was proposed as a model for sequential quantum measurements. A nice investigation of properties of the sequential product has been carried out [Gudder, S. and Nagy, G., 'Sequential quantum measurements', J. Math. Phys. 42, 5212 (2001)]. In this note, we extend some results of this reference. In particular, a gap in the proof of Theorem 3.2 in this reference is overcome. In addition, some properties of the generalized infimum A ⊓ B are studied
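
    The sequential product is easy to experiment with numerically; the short check below forms A∘B = A^{1/2}BA^{1/2} for two hypothetical 2×2 effects and verifies that the result is again an effect (positive and bounded above by the identity), while the operation is generally not commutative.

```python
import numpy as np
from scipy.linalg import sqrtm

def sequential_product(A, B):
    """Sequential product A o B = A^(1/2) B A^(1/2) of two quantum effects."""
    rA = sqrtm(A)
    return rA @ B @ rA

def is_effect(E, tol=1e-9):
    """An effect is a positive operator bounded above by the identity."""
    w = np.linalg.eigvalsh((E + E.conj().T) / 2)   # hermitize to absorb round-off
    return w.min() >= -tol and w.max() <= 1 + tol

# Two non-commuting 2x2 effects (eigenvalues of each lie in [0, 1]).
A = np.array([[0.7, 0.2], [0.2, 0.4]])
B = np.array([[0.5, 0.1j], [-0.1j, 0.9]])
AB = sequential_product(A, B)
print(is_effect(A), is_effect(B), is_effect(AB))   # all True
print(np.allclose(AB, sequential_product(B, A)))   # generally False: not commutative
```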

  7. Sequential Low Cost Interventions Double Hand Hygiene Rates ...

    African Journals Online (AJOL)

    Sequential Low Cost Interventions Double Hand Hygiene Rates Among Medical Teams in a Resource Limited Setting. Results of a Hand Hygiene Quality Improvement Project Conducted At University Teaching Hospital of Kigali (Chuk), Kigali, Rwanda.

  8. The impact of eyewitness identifications from simultaneous and sequential lineups.

    Science.gov (United States)

    Wright, Daniel B

    2007-10-01

    Recent guidelines in the US allow either simultaneous or sequential lineups to be used for eyewitness identification. This paper investigates how potential jurors weight the probative value of the different outcomes from both of these types of lineups. Participants (n=340) were given a description of a case that included some exonerating and some incriminating evidence. There was either a simultaneous or a sequential lineup. Depending on the condition, an eyewitness chose the suspect, chose a filler, or made no identification. The participant had to judge the guilt of the suspect and decide whether to render a guilty verdict. For both simultaneous and sequential lineups an identification had a large effect, increasing the probability of a guilty verdict. There were no reliable effects detected between making no identification and identifying a filler. The effect sizes were similar for simultaneous and sequential lineups. These findings are important for judges and other legal professionals to know for trials involving lineup identifications.

  9. A sequential mixed methods research approach to investigating HIV ...

    African Journals Online (AJOL)

    2016-09-03

    Sep 3, 2016 ... Sequential mixed methods research is an effective approach for ... show the effectiveness of the research method. ... qualitative data before quantitative datasets ..... whereby both types of data are collected simultaneously.

  10. Biaxially mechanical tuning of 2-D reversible and irreversible surface topologies through simultaneous and sequential wrinkling.

    Science.gov (United States)

    Yin, Jie; Yagüe, Jose Luis; Boyce, Mary C; Gleason, Karen K

    2014-02-26

    Controlled buckling is a facile means of structuring surfaces. The resulting ordered wrinkling topologies provide surface properties and features desired for multifunctional applications. Here, we study the biaxially dynamic tuning of two-dimensional wrinkled micropatterns under cyclic mechanical stretching/releasing/restretching simultaneously or sequentially. A biaxially prestretched PDMS substrate is coated with a stiff polymer deposited by initiated chemical vapor deposition (iCVD). Applying a mechanical release/restretch cycle in two directions loaded simultaneously or sequentially to the wrinkled system results in a variety of dynamic and tunable wrinkled geometries, the evolution of which is investigated using in situ optical profilometry, numerical simulations, and theoretical modeling. Results show that restretching ordered herringbone micropatterns, created through sequential release of biaxial prestrain, leads to reversible and repeatable surface topography. The initial flat surface and the same wrinkled herringbone pattern are obtained alternatively after cyclic release/restretch processes, owing to the highly ordered structure leaving no avenue for trapping irregular topological regions during cycling as further evidenced by the uniformity of strains distributions and negligible residual strain. Conversely, restretching disordered labyrinth micropatterns created through simultaneous release shows an irreversible surface topology whether after sequential or simultaneous restretching due to creation of irregular surface topologies with regions of highly concentrated strain upon formation of the labyrinth which then lead to residual strains and trapped topologies upon cycling; furthermore, these trapped topologies depend upon the subsequent strain histories as well as the cycle. The disordered labyrinth pattern varies after each cyclic release/restretch process, presenting residual shallow patterns instead of achieving a flat state. The ability to

  11. Concatenated coding system with iterated sequential inner decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1995-01-01

    We describe a concatenated coding system with iterated sequential inner decoding. The system uses convolutional codes of very long constraint length and operates on iterations between an inner Fano decoder and an outer Reed-Solomon decoder.

  12. Multichannel, sequential or combined X-ray spectrometry

    International Nuclear Information System (INIS)

    Florestan, J.

    1979-01-01

    X-ray spectrometer qualities and defects are evaluated for the sequential and multichannel categories. A multichannel X-ray spectrometer has the advantage of time coherency and its results can be more reproducible; on the other hand, some spatial incoherency limits low-percentage and trace applications, especially when backgrounds are highly variable. In this last case, a sequential X-ray spectrometer would again prove very useful [fr]

  13. A Survey of Multi-Objective Sequential Decision-Making

    OpenAIRE

    Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.

    2013-01-01

    Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-obj...

  14. Configural and component processing in simultaneous and sequential lineup procedures

    OpenAIRE

    Flowe, HD; Smith, HMJ; Karoğlu, N; Onwuegbusi, TO; Rai, L

    2015-01-01

    Configural processing supports accurate face recognition, yet it has never been examined within the context of criminal identification lineups. We tested, using the inversion paradigm, the role of configural processing in lineups. Recent research has found that face discrimination accuracy in lineups is better in a simultaneous compared to a sequential lineup procedure. Therefore, we compared configural processing in simultaneous and sequential lineups to examine whether there are differences...

  15. Sequential weak continuity of null Lagrangians at the boundary

    Czech Academy of Sciences Publication Activity Database

    Kalamajska, A.; Kraemer, S.; Kružík, Martin

    2014-01-01

    Roč. 49, 3/4 (2014), s. 1263-1278 ISSN 0944-2669 R&D Projects: GA ČR GAP201/10/0357 Institutional support: RVO:67985556 Keywords : null Lagrangians * nonhomogeneous nonlinear mappings * sequential weak/in measure continuity Subject RIV: BA - General Mathematics Impact factor: 1.518, year: 2014 http://library.utia.cas.cz/separaty/2013/MTR/kruzik-sequential weak continuity of null lagrangians at the boundary.pdf

  16. A Trust-region-based Sequential Quadratic Programming Algorithm

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    This technical note documents the trust-region-based sequential quadratic programming algorithm used in other works by the authors. The algorithm seeks to minimize a convex nonlinear cost function subject to linear inequality constraints and nonlinear equality constraints.
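
    The problem class described in the note (convex cost, linear inequality constraints, nonlinear equality constraints) can be posed to an off-the-shelf trust-region constrained solver; the sketch below is not the authors' algorithm, just the same kind of problem handed to SciPy's trust-constr method with made-up cost and constraints.

```python
# Illustrative sketch of the problem class above: a convex cost, one linear
# inequality constraint and one nonlinear equality constraint, solved with
# SciPy's trust-region constrained optimizer. Not the authors' implementation;
# the cost and constraints are arbitrary assumptions for demonstration.
import numpy as np
from scipy.optimize import minimize, LinearConstraint, NonlinearConstraint

def cost(x):                                   # convex quadratic cost
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

lin_ineq = LinearConstraint([[1.0, 1.0]], -np.inf, 1.5)                      # x0 + x1 <= 1.5
nonlin_eq = NonlinearConstraint(lambda x: x[0] ** 2 + x[1] ** 2, 1.0, 1.0)   # ||x||^2 = 1

res = minimize(cost, x0=np.array([0.5, 0.5]), method="trust-constr",
               constraints=[lin_ineq, nonlin_eq])
print("minimizer:", np.round(res.x, 4), " cost:", round(res.fun, 4))
```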

  17. Sequential bayes estimation algorithm with cubic splines on uniform meshes

    International Nuclear Information System (INIS)

    Hossfeld, F.; Mika, K.; Plesser-Walk, E.

    1975-11-01

    After outlining the principles of some recent developments in parameter estimation, a sequential numerical algorithm for generalized curve-fitting applications is presented, combining results from statistical estimation concepts and spline analysis. Due to its recursive nature, the algorithm can be used most efficiently in online experimentation. Using computer-simulated and experimental data, the efficiency and the flexibility of this sequential estimation procedure are extensively demonstrated. (orig.) [de]
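
    The report itself is only summarized above; as a generic illustration of sequential (recursive) curve fitting with a cubic-spline basis on a uniform mesh, the sketch below uses the standard recursive least-squares update with a truncated-power spline basis. The basis, data and parameters are assumptions for illustration, not the algorithm of the cited report.

```python
# Generic sketch of sequential curve fitting: recursive least squares (RLS)
# applied to a cubic-spline (truncated-power) basis on a uniform knot mesh.
# Data are synthetic; this only illustrates the online-estimation idea.
import numpy as np

def spline_basis(x, knots):
    """Truncated-power basis for a cubic spline with the given interior knots."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
knots = np.linspace(0.2, 0.8, 4)            # uniform interior knots on [0, 1]
n_par = 4 + len(knots)

theta = np.zeros(n_par)                     # coefficient estimate
P = 1e4 * np.eye(n_par)                     # scaled coefficient covariance

for _ in range(500):                        # data points arrive one at a time
    x = rng.uniform(0.0, 1.0)
    y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal()
    phi = spline_basis(np.array([x]), knots)[0]          # regressor vector
    K = P @ phi / (1.0 + phi @ P @ phi)                  # RLS gain
    theta += K * (y - phi @ theta)                       # innovation update
    P -= np.outer(K, phi @ P)                            # covariance update

print("fitted value at x = 0.25:", spline_basis(np.array([0.25]), knots)[0] @ theta)
print("true value at x = 0.25  :", np.sin(2 * np.pi * 0.25))
```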

  18. Sequential contrast-enhanced MR imaging of the penis.

    Science.gov (United States)

    Kaneko, K; De Mouy, E H; Lee, B E

    1994-04-01

    To determine the enhancement patterns of the penis at magnetic resonance (MR) imaging. Sequential contrast material-enhanced MR images of the penis in a flaccid state were obtained in 16 volunteers (12 with normal penile function and four with erectile dysfunction). Subjects with normal erectile function showed gradual and centrifugal enhancement of the corpora cavernosa, while those with erectile dysfunction showed poor enhancement with abnormal progression. Sequential contrast-enhanced MR imaging provides additional morphologic information for the evaluation of erectile dysfunction.

  19. Measurement of sequential change of regional ventilation by new developed Kr-81m method in asthmatics

    International Nuclear Information System (INIS)

    Shimada, Takao; Narita, Hiroto; Ishida, Hirohide; Terashima, Yoichi; Hirasawa, Korenori; Mori, Yutaka; Kawakami, Kenji

    1991-01-01

    Fazio has reported that the distribution of Kr-81m obtained by the continuous inhalation method indicates the distribution of ventilation. To estimate sequential ventilation changes with the continuous Kr-81m inhalation method, it is necessary to keep the concentration of Kr-81m constant; however, this requirement is frequently ignored. We have therefore developed a new method to maintain a constant concentration of Kr-81m and compared its reliability to that of the conventional method. The results of a phantom study showed that the concentration of Kr-81m is kept constant, and sequential changes of ventilation can be estimated, only with our new method. On applying this method in asthmatics, we discovered the existence of a region where ventilation was reduced by inhalation of a bronchodilator. (author)

  20. Sequentially-crosslinked biomimetic bioactive glass/gelatin methacryloyl composites hydrogels for bone regeneration.

    Science.gov (United States)

    Zheng, Jiafu; Zhao, Fujian; Zhang, Wen; Mo, Yunfei; Zeng, Lei; Li, Xian; Chen, Xiaofeng

    2018-08-01

    In recent years, gelatin-based composite hydrogels have been intensively investigated because of their inherent bioactivity, biocompatibility and biodegradability. Herein, we fabricated photocrosslinkable biomimetic composite hydrogels from bioactive glass (BG) and gelatin methacryloyl (GelMA) by a sequential physical and chemical crosslinking (gelation + UV) approach. The results showed that the compressive modulus of the composite hydrogels increased significantly through the sequential crosslinking approach. The addition of BG resulted in a significant increase in physiological stability and apatite-forming ability. In vitro data indicated that the BG/GelMA composite hydrogels promoted cell attachment, proliferation and differentiation. Overall, the BG/GelMA composite hydrogels combined the advantages of good biocompatibility and bioactivity, and had potential applications in bone regeneration. Copyright © 2018. Published by Elsevier B.V.

  1. Sequential analysis of biochemical markers of bone resorption and bone densitometry in multiple myeloma

    DEFF Research Database (Denmark)

    Abildgaard, Niels; Brixen, K; Eriksen, E.F

    2004-01-01

    BACKGROUND AND OBJECTIVES: Bone lesions often occur in multiple myeloma (MM), but no tests have proven useful in identifying patients with increased risk. Bone marker assays and bone densitometry are non-invasive methods that can be used repeatedly at low cost. This study was performed to evaluate...... 6 weeks, DEXA-scans performed every 3 months, and skeletal radiographs were done every 6 months as well as when indicated. RESULTS: Serum ICTP and urinary NTx were predictive of progressive bone events. Markers of bone formation, bone mineral density assessments, and M component measurements were...... changes, and our data do not support routine use of sequential DEXA-scans. However, lumbar DEXA-scans at diagnosis can identify patients with increased risk of early vertebral collapses. Sequential analyses of serum ICTP and urinary NTx are useful for monitoring bone damage....

  2. Reliability Evaluation of Distribution System Considering Sequential Characteristics of Distributed Generation

    Directory of Open Access Journals (Sweden)

    Sheng Wanxing

    2016-01-01

    Full Text Available To account for the randomness of the output power of distributed generation (DG), a reliability evaluation model based on sequential Monte Carlo simulation (SMCS) for distribution systems with DG is proposed. Operating states of the distribution system can be sampled by SMCS in chronological order, and the corresponding output power of DG can be generated accordingly. The proposed method has been tested on feeder F4 of the IEEE-RBTS Bus 6 system. The results show that reliability evaluation of a distribution system considering the uncertainty of DG output power can be effectively implemented by SMCS.
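
    The chronological sampling idea can be sketched on a single component: draw exponentially distributed times to failure and repair year by year, then read reliability indices off the sampled history (the same clock-driven framework is what allows time-varying DG output to be overlaid). All rates below are illustrative assumptions, not values from the paper.

```python
# Toy chronological (sequential) Monte Carlo sketch: sample the up/down history
# of one component with exponential time-to-failure and time-to-repair, then
# estimate the expected annual outage duration. Rates are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
failure_rate = 0.5        # failures per year (assumed)
mean_repair = 8.0         # mean repair duration in hours (assumed)
hours_per_year = 8760.0
n_years = 20000

annual_downtime = []
for _ in range(n_years):
    t, down = 0.0, 0.0
    while True:
        t += rng.exponential(hours_per_year / failure_rate)   # time to next failure
        if t >= hours_per_year:
            break
        r = rng.exponential(mean_repair)                      # repair duration
        down += min(r, hours_per_year - t)                    # clip at year end
        t += r
    annual_downtime.append(down)

print("expected annual downtime: %.2f h/yr" % np.mean(annual_downtime))
print("availability            : %.5f" % (1 - np.mean(annual_downtime) / hours_per_year))
```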

  3. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, extrema points of the metamodels and minimum points of a density function; more accurate metamodels are then constructed by repeating this procedure. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206

  4. Sequential unconstrained minimization algorithms for constrained optimization

    International Nuclear Information System (INIS)

    Byrne, Charles

    2008-01-01

    The problem of minimizing a function f(x): R^J → R, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the kth step we minimize the function G_k(x) = f(x) + g_k(x) to obtain x^k. The auxiliary functions g_k(x): D ⊂ R^J → R_+ are nonnegative on the set D, each x^k is assumed to lie within D, and the objective is to minimize the continuous function f: R^J → R over x in the set C = D̄, the closure of D. We assume that such minimizers exist, and denote one such by x̂. We assume that the functions g_k(x) satisfy the inequalities 0 ≤ g_k(x) ≤ G_{k-1}(x) − G_{k-1}(x^{k-1}), for k = 2, 3, .... Using this assumption, we show that the sequence {f(x^k)} is decreasing and converges to f(x̂). If the restriction of f(x) to D has bounded level sets, which happens if x̂ is unique and f(x) is closed, proper and convex, then the sequence {x^k} is bounded, and f(x*) = f(x̂) for any cluster point x*. Therefore, if x̂ is unique, x* = x̂ and {x^k} → x̂. When x̂ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton–Raphson method. The proof techniques used for SUMMA can be extended to obtain related results
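
    One of the particular cases listed above, the classical barrier-function method, can be sketched in a few lines: at step k the unconstrained subproblem G_k(x) = f(x) + g_k(x) is minimized with a shrinking logarithmic barrier, and the iterates approach the constrained minimizer from inside the feasible set. The objective and constraint below are illustrative assumptions, not taken from the paper.

```python
# Sketch of the barrier-function method (a particular case of SUMMA mentioned
# above): at step k minimize G_k(x) = f(x) + (1/k) * (-log(1 - x)) over the
# open set D = (-inf, 1). The iterates x^k stay inside D and f(x^k) decreases
# toward f(x-hat), with x-hat = 1 for this toy problem.
import numpy as np
from scipy.optimize import minimize_scalar

def f(x):
    return (x - 2.0) ** 2            # constrained minimizer over x <= 1 is x-hat = 1

x_hist = []
for k in range(1, 31):
    G_k = lambda x, k=k: f(x) + (1.0 / k) * (-np.log(1.0 - x))
    res = minimize_scalar(G_k, bounds=(-5.0, 1.0 - 1e-12), method="bounded")
    x_hist.append(res.x)

print("iterates x^k approach x-hat = 1:", np.round(x_hist[::10], 4))
print("f(x^k) is decreasing:", all(np.diff([f(x) for x in x_hist]) <= 1e-6))
```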

  5. Group-sequential analysis may allow for early trial termination

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich

    2017-01-01

    BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic, but rarely in diagnostic research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT measurements. ... assumed to be normally distributed, and sequential one-sided hypothesis tests on the population standard deviation of the differences against a hypothesised value of 1.5 were performed, employing an alpha spending function. The fixed-sample analysis (N = 45) was compared with the group-sequential analysis strategies comprising one (at N = 23), two (at N = 15, 30), or three interim analyses (at N = 11, 23, 34), respectively, which were defined post hoc. RESULTS: When performing interim analyses with one third and two thirds of patients, sufficient agreement could be concluded after the first interim analysis...
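
    An alpha spending function of the O'Brien-Fleming type (Lan-DeMets) is a common choice for such interim-look designs; the record only states that an alpha spending function was employed, so the specific function, the overall alpha of 0.05 and the information fractions derived from the three-look strategy (11, 23, 34 of 45 patients) are assumptions made for the sketch below.

```python
# Sketch of an O'Brien-Fleming-type (Lan-DeMets) alpha spending function,
# alpha*(t) = 2 * (1 - Phi(z_{alpha/2} / sqrt(t))), evaluated at assumed
# information fractions taken from the three-interim-look strategy above.
import numpy as np
from scipy.stats import norm

def obf_spending(t, alpha=0.05):
    """Cumulative type-I error spent at information fraction t (0 < t <= 1)."""
    z = norm.ppf(1.0 - alpha / 2.0)
    return 2.0 * (1.0 - norm.cdf(z / np.sqrt(t)))

looks = np.array([11, 23, 34, 45]) / 45.0                  # information fractions
cum_alpha = obf_spending(looks)
inc_alpha = np.diff(np.concatenate(([0.0], cum_alpha)))    # alpha spent at each look

for t, c, i in zip(looks, cum_alpha, inc_alpha):
    print(f"t = {t:.2f}: cumulative alpha = {c:.4f}, spent at this look = {i:.4f}")
```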

  6. Sequential reconstruction of driving-forces from nonlinear nonstationary dynamics

    Science.gov (United States)

    Güntürkün, Ulaş

    2010-07-01

    This paper describes a functional analysis-based method for the estimation of driving-forces from nonlinear dynamic systems. The driving-forces account for the perturbation inputs induced by the external environment or the secular variations in the internal variables of the system. The proposed algorithm is applicable to the problems for which there is too little or no prior knowledge to build a rigorous mathematical model of the unknown dynamics. We derive the estimator conditioned on the differentiability of the unknown system’s mapping, and smoothness of the driving-force. The proposed algorithm is an adaptive sequential realization of the blind prediction error method, where the basic idea is to predict the observables, and retrieve the driving-force from the prediction error. Our realization of this idea is embodied by predicting the observables one-step into the future using a bank of echo state networks (ESN) in an online fashion, and then extracting the raw estimates from the prediction error and smoothing these estimates in two adaptive filtering stages. The adaptive nature of the algorithm enables to retrieve both slowly and rapidly varying driving-forces accurately, which are illustrated by simulations. Logistic and Moran-Ricker maps are studied in controlled experiments, exemplifying chaotic state and stochastic measurement models. The algorithm is also applied to the estimation of a driving-force from another nonlinear dynamic system that is stochastic in both state and measurement equations. The results are judged by the posterior Cramer-Rao lower bounds. The method is finally put into test on a real-world application; extracting sun’s magnetic flux from the sunspot time series.

  7. The influence of spatial congruency and movement preparation time on saccade curvature in simultaneous and sequential dual-tasks.

    Science.gov (United States)

    Moehler, Tobias; Fiehler, Katja

    2015-11-01

    Saccade curvature represents a sensitive measure of oculomotor inhibition with saccades curving away from covertly attended locations. Here we investigated whether and how saccade curvature depends on movement preparation time when a perceptual task is performed during or before saccade preparation. Participants performed a dual-task including a visual discrimination task at a cued location and a saccade task to the same location (congruent) or to a different location (incongruent). Additionally, we varied saccade preparation time (time between saccade cue and Go-signal) and the occurrence of the discrimination task (during saccade preparation=simultaneous vs. before saccade preparation=sequential). We found deteriorated perceptual performance in incongruent trials during simultaneous task performance while perceptual performance was unaffected during sequential task performance. Saccade accuracy and precision were deteriorated in incongruent trials during simultaneous and, to a lesser extent, also during sequential task performance. Saccades consistently curved away from covertly attended non-saccade locations. Saccade curvature was unaffected by movement preparation time during simultaneous task performance but decreased and finally vanished with increasing movement preparation time during sequential task performance. Our results indicate that the competing saccade plan to the covertly attended non-saccade location is maintained during simultaneous task performance until the perceptual task is solved while in the sequential condition, in which the discrimination task is solved prior to the saccade task, oculomotor inhibition decays gradually with movement preparation time. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Evaluación de la infiltración como indicador de calidad de suelo mediante un microsimulador de lluvias Evaluation of infiltration as soil quality indicator by a micro rainfall simulator

    Directory of Open Access Journals (Sweden)

    A. M. Aoki

    2006-06-01

    Full Text Available Rainfall simulators have long been used in research on erosion and runoff. This work had two objectives: 1) to comparatively evaluate infiltration, measured with a micro rainfall simulator, as a soil quality indicator, and 2) to compare and select equations that adequately describe the infiltration process. The assays were carried out on a Typic Haplustoll of silty loam texture located in the central region of Córdoba Province, Argentina. Three test sites were selected: a control corresponding to soil under native forest, and two corresponding to soil under soybean monoculture with conventional tillage. Different simulated rainfall intensities were applied. The statistical fit of the experimental data to two equations, Philip and Horton, was compared. It was observed that: 1) the final infiltration rate behaves as a soil quality indicator capable of detecting significant differences in the properties of the surface horizon of a Typic Haplustoll under native forest and under an agroecosystem managed with conventional tillage; and 2) the Horton equation describes the water infiltration process better than the Philip equation for the soil and conditions under study.
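
    The two equations compared in the study have standard closed forms, Horton f(t) = f_c + (f_0 − f_c)e^(−kt) and Philip f(t) = ½ S t^(−1/2) + A, and fitting them to measured infiltration rates is a short curve-fit exercise; the data and starting values below are synthetic placeholders, not the Córdoba measurements.

```python
# Fitting the Horton and Philip infiltration-rate equations to (synthetic)
# infiltration data with scipy.optimize.curve_fit. Parameter names follow the
# standard forms; all numbers here are placeholders, not the study's data.
import numpy as np
from scipy.optimize import curve_fit

def horton(t, f0, fc, k):
    return fc + (f0 - fc) * np.exp(-k * t)          # f0: initial, fc: final rate

def philip(t, S, A):
    return 0.5 * S * t ** -0.5 + A                  # S: sorptivity term, A: steady term

t = np.arange(1.0, 61.0, 2.0)                       # minutes since start of rainfall
f_obs = horton(t, f0=90.0, fc=20.0, k=0.12) + np.random.default_rng(3).normal(0, 2, t.size)

ph, _ = curve_fit(horton, t, f_obs, p0=[80.0, 15.0, 0.1])
pp, _ = curve_fit(philip, t, f_obs, p0=[50.0, 15.0])

rss = lambda model, p: float(np.sum((f_obs - model(t, *p)) ** 2))
print("Horton fit (f0, fc, k):", np.round(ph, 2), " RSS:", round(rss(horton, ph), 1))
print("Philip fit (S, A)     :", np.round(pp, 2), " RSS:", round(rss(philip, pp), 1))
```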

  9. Measuring sustainability by Energy Efficiency Analysis for Korean Power Companies: A Sequential Slacks-Based Efficiency Measure

    Directory of Open Access Journals (Sweden)

    Ning Zhang

    2014-03-01

    Full Text Available Improving energy efficiency has been widely regarded as one of the most cost-effective ways to improve sustainability and mitigate climate change. This paper presents a sequential slack-based efficiency measure (SSBM application to model total-factor energy efficiency with undesirable outputs. This approach simultaneously takes into account the sequential environmental technology, total input slacks, and undesirable outputs for energy efficiency analysis. We conduct an empirical analysis of energy efficiency incorporating greenhouse gas emissions of Korean power companies during 2007–2011. The results indicate that most of the power companies are not performing at high energy efficiency. Sequential technology has a significant effect on the energy efficiency measurements. Some policy suggestions based on the empirical results are also presented.

  10. On Modeling Large-Scale Multi-Agent Systems with Parallel, Sequential and Genuinely Asynchronous Cellular Automata

    International Nuclear Information System (INIS)

    Tosic, P.T.

    2011-01-01

    We study certain types of Cellular Automata (CA) viewed as an abstraction of large-scale Multi-Agent Systems (MAS). We argue that the classical CA model needs to be modified in several important respects, in order to become a relevant and sufficiently general model for the large-scale MAS, and so that thus generalized model can capture many important MAS properties at the level of agent ensembles and their long-term collective behavior patterns. We specifically focus on the issue of inter-agent communication in CA, and propose sequential cellular automata (SCA) as the first step, and genuinely Asynchronous Cellular Automata (ACA) as the ultimate deterministic CA-based abstract models for large-scale MAS made of simple reactive agents. We first formulate deterministic and nondeterministic versions of sequential CA, and then summarize some interesting configuration space properties (i.e., possible behaviors) of a restricted class of sequential CA. In particular, we compare and contrast those properties of sequential CA with the corresponding properties of the classical (that is, parallel and perfectly synchronous) CA with the same restricted class of update rules. We analytically demonstrate failure of the studied sequential CA models to simulate all possible behaviors of perfectly synchronous parallel CA, even for a very restricted class of non-linear totalistic node update rules. The lesson learned is that the interleaving semantics of concurrency, when applied to sequential CA, is not refined enough to adequately capture the perfect synchrony of parallel CA updates. Last but not least, we outline what would be an appropriate CA-like abstraction for large-scale distributed computing insofar as the inter-agent communication model is concerned, and in that context we propose genuinely asynchronous CA. (author)
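
    The point about interleaving semantics versus perfect synchrony can be illustrated minimally: the same totalistic rule on the same initial configuration can reach different configurations under parallel (all cells at once) and sequential (fixed sweep order) updates. The rule, lattice size and initial state below are arbitrary choices for illustration only.

```python
# Minimal illustration of the parallel-vs-sequential CA update distinction
# discussed above: one totalistic rule on a 1-D ring, evolved with synchronous
# updates and with a left-to-right sequential sweep, generally diverges.
def rule(left, centre, right):
    """Totalistic rule: new state is 1 iff exactly one of the three cells is 1."""
    return 1 if (left + centre + right) == 1 else 0

def step_parallel(cells):
    n = len(cells)
    return [rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n]) for i in range(n)]

def step_sequential(cells):
    cells = list(cells)                  # cells are overwritten in sweep order
    n = len(cells)
    for i in range(n):
        cells[i] = rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
    return cells

init = [0, 1, 1, 0, 0, 1, 0, 0]
par, seq = init, init
for _ in range(5):
    par, seq = step_parallel(par), step_sequential(seq)

print("parallel  :", par)
print("sequential:", seq)
print("trajectories diverge:", par != seq)
```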

  11. Three-point method for measuring the geometric error components of linear and rotary axes based on sequential multilateration

    International Nuclear Information System (INIS)

    Zhang, Zhenjiu; Hu, Hong

    2013-01-01

    The linear and rotary axes are fundamental parts of multi-axis machine tools. The geometric error components of the axes must be measured for motion error compensation to improve the accuracy of the machine tools. In this paper, a simple method named the three point method is proposed to measure the geometric error of the linear and rotary axes of the machine tools using a laser tracker. A sequential multilateration method, where uncertainty is verified through simulation, is applied to measure the 3D coordinates. Three noncollinear points fixed on the stage of each axis are selected. The coordinates of these points are simultaneously measured using a laser tracker to obtain their volumetric errors by comparing these coordinates with ideal values. Numerous equations can be established using the geometric error models of each axis. The geometric error components can be obtained by solving these equations. The validity of the proposed method is verified through a series of experiments. The results indicate that the proposed method can measure the geometric error of the axes to compensate for the errors in multi-axis machine tools.

  12. Simultaneous Versus Sequential Ptosis and Strabismus Surgery in Children.

    Science.gov (United States)

    Revere, Karen E; Binenbaum, Gil; Li, Jonathan; Mills, Monte D; Katowitz, William R; Katowitz, James A

    The authors sought to compare the clinical outcomes of simultaneous versus sequential ptosis and strabismus surgery in children. Retrospective, single-center cohort study of children requiring both ptosis and strabismus surgery on the same eye. Simultaneous surgeries were performed during a single anesthetic event; sequential surgeries were performed at least 7 weeks apart. Outcomes were ptosis surgery success (margin reflex distance 1 ≥ 2 mm, good eyelid contour, and good eyelid crease); strabismus surgery success (ocular alignment within 10 prism diopters of orthophoria and/or improved head position); surgical complications; and reoperations. Fifty-six children were studied, 38 had simultaneous surgery and 18 sequential. Strabismus surgery was performed first in 38/38 simultaneous and 6/18 sequential cases. Mean age at first surgery was 64 months, with mean follow up 27 months. A total of 75% of children had congenital ptosis; 64% had comitant strabismus. A majority of ptosis surgeries were frontalis sling (59%) or Fasanella-Servat (30%) procedures. There were no significant differences between simultaneous and sequential groups with regards to surgical success rates, complications, or reoperations (all p > 0.28). In the first comparative study of simultaneous versus sequential ptosis and strabismus surgery, no advantage for sequential surgery was seen. Despite a theoretical risk of postoperative eyelid malposition or complications when surgeries were performed in a combined manner, the rate of such outcomes was not increased with simultaneous surgeries. Performing ptosis and strabismus surgery together appears to be clinically effective and safe, and reduces anesthesia exposure during childhood.

  13. Human visual system automatically encodes sequential regularities of discrete events.

    Science.gov (United States)

    Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki

    2010-06-01

    For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential

  14. Sequential reduction of external networks for the security- and short circuit monitor in power system control centers

    Energy Technology Data Exchange (ETDEWEB)

    Dietze, P [Siemens A.G., Erlangen (Germany, F.R.). Abt. ESTE

    1978-01-01

    For the evaluation of the effects of switching operations, or for the simulation of line, transformer, and generator outages, the influence of interconnected neighboring networks is modelled by network equivalents in the process computer. The basic passive conductivity model is produced by sequential reduction and adapted to fit the active network behavior. The reduction routine uses the admittance matrix, sparse-matrix techniques, and optimal ordering; it is suitable for process computer applications.

  15. Effects of sequential streaming on auditory masking using psychoacoustics and auditory evoked potentials.

    Science.gov (United States)

    Verhey, Jesko L; Ernst, Stephan M A; Yasin, Ifat

    2012-03-01

    The present study was aimed at investigating the relationship between the mismatch negativity (MMN) and psychoacoustical effects of sequential streaming on comodulation masking release (CMR). The influence of sequential streaming on CMR was investigated using a psychoacoustical alternative forced-choice procedure and electroencephalography (EEG) for the same group of subjects. The psychoacoustical data showed that adding precursors comprising only off-signal-frequency maskers abolished the CMR. Complementary EEG data showed an MMN irrespective of the masker envelope correlation across frequency when only the off-signal-frequency masker components were present. The addition of such precursors promotes a separation of the on- and off-frequency masker components into distinct auditory objects, preventing the auditory system from using comodulation as an additional cue. A frequency-specific adaptation changing the representation of the flanking bands in the streaming conditions may also contribute to the reduction of CMR in the stream conditions; however, it is unlikely that adaptation is the primary reason for the streaming effect. A neurophysiological correlate of sequential streaming was found in the EEG data using the MMN, but the magnitude of the MMN was not correlated with the audibility of the signal in the CMR experiments. Dipole source analysis indicated that different cortical regions are involved in processing auditory streaming and modulation detection. In particular, neural sources for processing auditory streaming include cortical regions involved in decision-making. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Accumulation of evidence during sequential decision making: the importance of top-down factors.

    Science.gov (United States)

    de Lange, Floris P; Jensen, Ole; Dehaene, Stanislas

    2010-01-13

    In the last decade, great progress has been made in characterizing the accumulation of neural information during simple unitary perceptual decisions. However, much less is known about how sequentially presented evidence is integrated over time for successful decision making. The aim of this study was to study the mechanisms of sequential decision making in humans. In a magnetoencephalography (MEG) study, we presented healthy volunteers with sequences of centrally presented arrows. Sequence length varied between one and five arrows, and the accumulated directions of the arrows informed the subject about which hand to use for a button press at the end of the sequence (e.g., LRLRR should result in a right-hand press). Mathematical modeling suggested that nonlinear accumulation was the rational strategy for performing this task in the presence of no or little noise, whereas quasilinear accumulation was optimal in the presence of substantial noise. MEG recordings showed a correlate of evidence integration over parietal and central cortex that was inversely related to the amount of accumulated evidence (i.e., when more evidence was accumulated, neural activity for new stimuli was attenuated). This modulation of activity likely reflects a top-down influence on sensory processing, effectively constraining the influence of sensory information on the decision variable over time. The results indicate that, when making decisions on the basis of sequential information, the human nervous system integrates evidence in a nonlinear manner, using the amount of previously accumulated information to constrain the accumulation of additional evidence.

  17. Strong-field non-sequential ionization: The vector momentum distribution of multiply charged Ne ions

    International Nuclear Information System (INIS)

    Rottke, H.; Trump, C.; Wittmann, M.; Korn, G.; Becker, W.; Hoffmann, K.; Sandner, W.; Moshammer, R.; Feuerstein, B.; Dorn, A.; Schroeter, C.D.; Ullrich, J.; Schmitt, W.

    2000-01-01

    COLTRIMS (COLd Target Recoil-Ion Momentum Spectroscopy) was used to measure the vector momentum distribution of Ne^n+ (n = 1, 2, 3) ions formed in ultrashort (30 fs) high-intensity (≅10^15 W/cm^2) laser pulses with center wavelength at 795 nm. To a high degree of accuracy the length of the Ne^n+ ion momentum vector is equal to the length of the total momentum vector of the n photoelectrons released, with both vectors pointing in opposite directions. At a light intensity where non-sequential ionization of the atom dominates, the Ne^2+ and Ne^3+ momentum distributions show distinct maxima at 4.0 a.u. and 7.5 a.u. along the polarization axis of the linearly polarized light beam. First, this is a clear signature of non-sequential multiple ionization. Second, it indicates that instantaneous emission of two (or more) electrons at electric field strength maxima of the light wave can be ruled out as the main mechanism of non-sequential strong-field multiple ionization. In contrast, this experimental result is in accordance with the kinematical constraints of the 'rescattering model'

  18. Sequential reduction–oxidation for photocatalytic degradation of tetrabromobisphenol A: Kinetics and intermediates

    International Nuclear Information System (INIS)

    Guo, Yaoguang; Lou, Xiaoyi; Xiao, Dongxue; Xu, Lei; Wang, Zhaohui; Liu, Jianshe

    2012-01-01

    Highlights: ► Sequential photocatalytic reduction–oxidation degradation of TBBPA was examined for the first time. ► Different atmospheres were found to have a significant effect on the debromination reaction. ► A possible sequential photocatalytic reduction–oxidation pathway was proposed. - Abstract: C–Br bond cleavage is considered a key step in reducing the toxicity and increasing the degradation rate of most brominated organic pollutants. Here a sequential reduction/oxidation strategy (i.e. debromination followed by photocatalytic oxidation) for photocatalytic degradation of tetrabromobisphenol A (TBBPA), one of the most frequently used brominated flame retardants, was proposed on the basis of kinetic analysis and intermediates identification. The results demonstrated that the rates of debromination, and even of photodegradation of TBBPA, strongly depended on the atmosphere, the initial TBBPA concentration, the pH of the reaction solution, hydrogen donors, and electron acceptors. These kinetic data and the byproducts identified by GC–MS measurements indicated that reductive debromination by photo-induced electrons dominated under N₂-saturated conditions, while oxidation by photoexcited holes or hydroxyl radicals played the leading role when the solution was air-saturated. It also suggested that the reaction might be further optimized for pretreatment of TBBPA-contaminated wastewater by a two-stage reductive debromination/subsequent oxidative decomposition process in the UV–TiO₂ system by changing the reaction atmosphere.

  19. Hypotension Risk Prediction via Sequential Contrast Patterns of ICU Blood Pressure.

    Science.gov (United States)

    Ghosh, Shameek; Feng, Mengling; Nguyen, Hung; Li, Jinyan

    2016-09-01

    Acute hypotension is a significant risk factor for in-hospital mortality at intensive care units. Prolonged hypotension can cause tissue hypoperfusion, leading to cellular dysfunction and severe injuries to multiple organs. Prompt medical interventions are thus extremely important for dealing with acute hypotensive episodes (AHE). Population level prognostic scoring systems for risk stratification of patients are suboptimal in such scenarios. However, the design of an efficient risk prediction system can significantly help in the identification of critical care patients, who are at risk of developing an AHE within a future time span. Toward this objective, a pattern mining algorithm is employed to extract informative sequential contrast patterns from hemodynamic data, for the prediction of hypotensive episodes. The hypotensive and normotensive patient groups are extracted from the MIMIC-II critical care research database, following appropriate clinical inclusion criteria. The proposed method consists of a data preprocessing step to convert the blood pressure time series into symbolic sequences, using a symbolic aggregate approximation algorithm. Then, distinguishing subsequences are identified using the sequential contrast mining algorithm. These subsequences are used to predict the occurrence of an AHE in a future time window separated by a user-defined gap interval. Results indicate that the method performs well in terms of the prediction performance as well as in the generation of sequential patterns of clinical significance. Hence, the novelty of sequential patterns is in their usefulness as potential physiological biomarkers for building optimal patient risk stratification systems and for further clinical investigation of interesting patterns in critical care patients.
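
    The preprocessing step named above, symbolic aggregate approximation (SAX), converts a blood-pressure window into a short symbol string via z-normalization, piecewise aggregate approximation and equiprobable Gaussian breakpoints; the sketch below is a minimal SAX, with window length, alphabet size and the example series all being illustrative assumptions rather than values from the paper.

```python
# Minimal SAX sketch: z-normalise a blood-pressure window, reduce it with
# piecewise aggregate approximation (PAA), then map each segment to a letter
# using equiprobable breakpoints of the standard normal distribution.
import numpy as np
from scipy.stats import norm

def sax(series, n_segments=8, alphabet="abcd"):
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)                    # z-normalisation
    paa = x.reshape(n_segments, -1).mean(axis=1)              # PAA (length divisible)
    breakpoints = norm.ppf(np.linspace(0, 1, len(alphabet) + 1)[1:-1])
    symbols = np.searchsorted(breakpoints, paa)
    return "".join(alphabet[s] for s in symbols)

# e.g. 32 mean-arterial-pressure samples from one observation window (made up)
mabp = [78, 80, 82, 85, 84, 83, 81, 79, 76, 74, 71, 69, 68, 66, 65, 64,
        63, 62, 61, 60, 59, 59, 58, 58, 57, 57, 56, 56, 55, 55, 54, 54]
print(sax(mabp))   # a falling-pressure window maps to a descending symbol string
```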

  20. Selective condensation drives partitioning and sequential secretion of cyst wall proteins in differentiating Giardia lamblia.

    Directory of Open Access Journals (Sweden)

    Christian Konrad

    2010-04-01

    Full Text Available Controlled secretion of a protective extracellular matrix is required for transmission of the infective stage of a large number of protozoan and metazoan parasites. Differentiating trophozoites of the highly minimized protozoan parasite Giardia lamblia secrete the proteinaceous portion of the cyst wall material (CWM consisting of three paralogous cyst wall proteins (CWP1-3 via organelles termed encystation-specific vesicles (ESVs. Phylogenetic and molecular data indicate that Diplomonads have lost a classical Golgi during reductive evolution. However, neogenesis of ESVs in encysting Giardia trophozoites transiently provides basic Golgi functions by accumulating presorted CWM exported from the ER for maturation. Based on this "minimal Golgi" hypothesis we predicted maturation of ESVs to a trans Golgi-like stage, which would manifest as a sorting event before regulated secretion of the CWM. Here we show that proteolytic processing of pro-CWP2 in maturing ESVs coincides with partitioning of CWM into two fractions, which are sorted and secreted sequentially with different kinetics. This novel sorting function leads to rapid assembly of a structurally defined outer cyst wall, followed by slow secretion of the remaining components. Using live cell microscopy we find direct evidence for condensed core formation in maturing ESVs. Core formation suggests that a mechanism controlled by phase transitions of the CWM from fluid to condensed and back likely drives CWM partitioning and makes sorting and sequential secretion possible. Blocking of CWP2 processing by a protease inhibitor leads to mis-sorting of a CWP2 reporter. Nevertheless, partitioning and sequential secretion of two portions of the CWM are unaffected in these cells. Although these cysts have a normal appearance they are not water resistant and therefore not infective. Our findings suggest that sequential assembly is a basic architectural principle of protective wall formation and requires

  1. Sequential Sampling Plan of Anthonomus grandis (Coleoptera: Curculionidae) in Cotton Plants.

    Science.gov (United States)

    Grigolli, J F J; Souza, L A; Mota, T A; Fernandes, M G; Busoli, A C

    2017-04-01

    The boll weevil, Anthonomus grandis grandis Boheman (Coleoptera: Curculionidae), is one of the most important pests of cotton production worldwide. The objective of this work was to develop a sequential sampling plan for the boll weevil. The studies were conducted in Maracaju, MS, Brazil, in two seasons with the cotton cultivar FM 993. A 10,000-m² area of cotton was subdivided into 100 plots of 10 by 10 m, and five plants per plot were evaluated weekly, recording the number of squares with feeding + oviposition punctures of A. grandis on each plant. A sequential sampling plan based on the maximum likelihood ratio test was developed, using a 10% threshold level of attacked squares. A 5% security level was adopted for the elaboration of the sequential sampling plan. The type I and type II error rates used were 0.05, as recommended for studies with insects. The adjustment of the frequency distributions was divided into two phases: the model that best fit the data was the negative binomial distribution up to 85 DAE (Phase I), and from then on the best fit was the Poisson distribution (Phase II). The equations that define the decision-making for Phase I are S0 = -5.1743 + 0.5730N and S1 = 5.1743 + 0.5730N, and for Phase II they are S0 = -4.2479 + 0.5771N and S1 = 4.2479 + 0.5771N. The sequential sampling plan developed indicated that the maximum number of sample units expected for decision-making is ∼39 and 31 samples for Phases I and II, respectively. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
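
    The two decision lines reported above translate directly into a stopping rule: after N sample units, the cumulative count S of attacked squares is compared against the lower and upper lines. The sketch below transcribes those equations; the example counts fed to it are hypothetical.

```python
# Direct use of the decision lines reported above. The counts in the example
# calls are hypothetical; the coefficients come from the record (Phases I/II).
def boll_weevil_decision(N, S, phase="I"):
    """Sequential sampling decision for A. grandis (10% threshold, alpha = beta = 0.05)."""
    a, b = (5.1743, 0.5730) if phase == "I" else (4.2479, 0.5771)
    lower = -a + b * N          # S0: stop, infestation below threshold
    upper = a + b * N           # S1: stop, control measures indicated
    if S <= lower:
        return "stop - below threshold"
    if S >= upper:
        return "stop - control indicated"
    return "continue sampling"

print(boll_weevil_decision(N=10, S=0))    # no attacked squares yet
print(boll_weevil_decision(N=20, S=18))   # many attacked squares
print(boll_weevil_decision(N=15, S=9))    # still between the two lines
```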

  2. Can standard sequential extraction determinations effectively define heavy metal species in superfund site soils?

    Energy Technology Data Exchange (ETDEWEB)

    Dahlin, Cheryl L.; Williamson, Connie A.; Collins, Wesley K.; Dahlin, David C.

    2001-01-01

    Speciation and distribution of heavy metals in soils controls the degree to which metals and their compounds are mobile, extractable, and plant-available. Consequently, speciation impacts the success of remediation efforts both by defining the relationship of the contaminants with their environment and by guiding development and evaluation of workable remediation strategies. The U.S. Department of Energy, Albany Research Center (Albany, OR), under a two-year interagency project with the U.S. Environmental Protection Agency (EPA), examined the suitability of sequential extraction as a definitive means to determine species of heavy metals in soil samples. Representative soil samples, contaminated with lead, arsenic, and/or chromium, were collected by EPA personnel from two Superfund sites, the National Lead Company site in Pedricktown, NJ, and the Roebling Steel, Inc., site in Florence, NJ. Data derived from Tessier's standard three-stage sequential-extraction procedure were compared to data from a comprehensive characterization study that combined optical- and scanning-electron microscopy (with energy-dispersive x-ray and wavelength-dispersive x-ray analyses), x-ray diffraction, and chemical analyses. The results show that standard sequential-extraction procedures that were developed for characterizing species of contaminants in river sediments may be unsuitable for sole evaluation of contaminant species in industrial-site materials (particularly those that contain larger particles of the contaminants, encapsulated contaminants, and/or man-made materials such as slags, metals, and plastics). However, each sequential extraction or comprehensive characterization procedure has its own strengths and weaknesses. Findings of this study indicate that the use of both approaches, during the early stages of site studies, would be a best practice. The investigation also highlights the fact that an effective speciation study does not simply identify metal contaminants as

  3. Native Frames: Disentangling Sequential from Concerted Three-Body Fragmentation

    Science.gov (United States)

    Rajput, Jyoti; Severt, T.; Berry, Ben; Jochim, Bethany; Feizollah, Peyman; Kaderiya, Balram; Zohrabi, M.; Ablikim, U.; Ziaee, Farzaneh; Raju P., Kanaka; Rolles, D.; Rudenko, A.; Carnes, K. D.; Esry, B. D.; Ben-Itzhak, I.

    2018-03-01

    A key question concerning the three-body fragmentation of polyatomic molecules is the distinction of sequential and concerted mechanisms, i.e., the stepwise or simultaneous cleavage of bonds. Using laser-driven fragmentation of OCS into O⁺ + C⁺ + S⁺ and employing coincidence momentum imaging, we demonstrate a novel method that enables the clear separation of sequential and concerted breakup. The separation is accomplished by analyzing the three-body fragmentation in the native frame associated with each step and taking advantage of the rotation of the intermediate molecular fragment, CO²⁺ or CS²⁺, before its unimolecular dissociation. This native-frame method works for any projectile (electrons, ions, or photons), provides details on each step of the sequential breakup, and enables the retrieval of the relevant spectra for sequential and concerted breakup separately. Specifically, this allows the determination of the branching ratio of all these processes in OCS³⁺ breakup. Moreover, we find that the first step of sequential breakup is tightly aligned along the laser polarization and identify the likely electronic states of the intermediate dication that undergo unimolecular dissociation in the second step. Finally, the separated concerted breakup spectra show clearly that the central carbon atom is preferentially ejected perpendicular to the laser field.

  4. Synthesizing genetic sequential logic circuit with clock pulse generator.

    Science.gov (United States)

    Chuang, Chia-Hua; Lin, Chun-Liang

    2014-05-28

    Rhythmic clocks occur widely in biological systems and control several aspects of cell physiology; different cell types operate at various rhythmic frequencies. How to synthesize a specific clock signal is a preliminary but necessary step toward the further development of a biological computer. This paper presents a genetic sequential logic circuit with a clock pulse generator based on a synthesized genetic oscillator, which generates a consecutive clock signal whose frequency is an inverse integer multiple of that of the genetic oscillator. An analogous electronic waveform-shaping circuit is constructed from a series of genetic buffers that shape the logic high/low levels of an oscillation input in a basic sinusoidal cycle and generate a pulse-width-modulated (PWM) output with various duty cycles. By controlling the threshold level of the genetic buffer, a genetic clock pulse signal with a frequency consistent with that of the genetic oscillator is synthesized. A synchronous genetic counter circuit based on the topology of digital sequential logic circuits is triggered by the clock pulse to synthesize a clock signal whose frequency is an inverse integer multiple of that of the genetic oscillator. The function acts like a frequency divider in electronic circuits, which plays a key role in sequential logic circuits with specific operational frequencies. A cascaded genetic logic circuit generating clock pulse signals is proposed. Based on the analogous implementation of digital sequential logic circuits, genetic sequential logic circuits can be constructed by the proposed approach to generate various clock signals from an oscillation signal.
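
    The waveform-shaping plus frequency-division idea can be mimicked in software: threshold a sinusoidal oscillation into a pulse train, then divide its frequency with cascaded toggle stages. This is only an analogy to the circuit described above; all parameters are arbitrary choices.

```python
# Software mimic of the circuit idea above: a sinusoidal "oscillator" output is
# shaped into a clock pulse train by a threshold (buffer), and a cascade of
# toggle stages divides the pulse frequency by 2**n_stages (frequency divider).
import numpy as np

t = np.linspace(0.0, 10.0, 10001)                           # time, arbitrary units
oscillator = 0.5 * (1.0 + np.sin(2.0 * np.pi * 1.0 * t))    # 1 Hz oscillation in [0, 1]

threshold = 0.5
clock = (oscillator >= threshold).astype(int)               # shaped clock pulses (~50% duty)

def divide(signal, n_stages=2):
    """Divide the pulse frequency by 2**n_stages with cascaded toggle stages."""
    for _ in range(n_stages):
        state, prev, out = 0, 0, []
        for bit in signal:
            if bit == 1 and prev == 0:                      # rising edge toggles the stage
                state ^= 1
            out.append(state)
            prev = bit
        signal = np.array(out)
    return signal

divided = divide(clock, n_stages=2)
edges = lambda s: int(np.sum((s[1:] == 1) & (s[:-1] == 0)))  # count rising edges
print("clock cycles   :", edges(clock))                      # ~10 over the window
print("divided cycles :", edges(divided))                    # ~10 / 4
```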

  5. Energy-separated sequential irradiation for ripple pattern tailoring on silicon surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Tanuj [Department of Physics, Central University of Haryana, Jant-Pali, Mahendergarh 1123029 (India); Inter University Accelerator Centre, Aruna Asaf Ali Marg, New Delhi 110067 (India); Kumar, Manish, E-mail: manishbharadwaj@gmail.com [Department of Physics, Central University of Rajasthan, Kishangarh 305801 (India); Panchal, Vandana [Department of Physics, National Institute of Technology, Kurukshetra 136119 (India); Sahoo, P.K. [School of Physical Sciences, National Institute of Science Education and Research, Bhubaneswar 751005 (India); Kanjilal, D. [Inter University Accelerator Centre, Aruna Asaf Ali Marg, New Delhi 110067 (India)

    2015-12-01

    Highlights: • A new process for controlling the near-surface amorphization of ripples on Si surfaces. • Ripple generation by 100 keV Ar⁺ and amorphization control by 60 keV Ar⁺ irradiation. • Advantage of energy-separated irradiation demonstrated by detailed RBS and AFM studies. • Relevant mechanism is presented on the basis of DAMAGE and SIMNRA simulations. • Key role of solid flow towards the amorphous/crystalline interface is demonstrated. - Abstract: Nanoscale ripples on semiconductor surfaces have potential applications in biosensing and optoelectronics, but suffer from uncontrolled surface amorphization when prepared by conventional ion-irradiation methods. A two-step, energy-separated sequential irradiation enables simultaneous control of surface amorphization and ripple dimensions on Si(1 0 0). The evolution of ripples under 100 keV Ar⁺ bombardment and further tuning of the patterns using a sequential irradiation by 60 keV Ar⁺ at different fluences are demonstrated. The advantage of this approach, as opposed to increased fluence at the same energy, is clarified by atomic force microscopy and Rutherford backscattering spectroscopy investigations. The explanation of our findings is presented through DAMAGE simulation.

  6. Safety test No. S-6, launch pad abort sequential test Phase II: solid propellant fire

    International Nuclear Information System (INIS)

    Snow, E.C.

    1975-08-01

    In preparation for the Lincoln Laboratory's LES 8/9 space mission, a series of tests was performed to evaluate the nuclear safety capability of the Multi-Hundred Watt (MHW) Radioisotope Thermoelectric Generator (RTG) to be used to supply power for the satellite. One such safety test is Test No. S-6, Launch Pad Abort Sequential Test. The objective of this test was to subject the RTG and its components to the sequential environments characteristic of a catastrophic launch pad accident to evaluate their capability to contain the ²³⁸PuO₂ fuel. This sequence of environments was to have consisted of the blast overpressure and fragments, followed by the fireball, low velocity impact on the launch pad, and solid propellant fire. The blast overpressure and fragments were subsequently eliminated from this sequence. The procedures and results of Phase II of Test S-6, Solid Propellant Fire are presented. In this phase of the test, a simulant Fuel Sphere Assembly (FSA) and a mockup of a damaged Heat Source Assembly (HSA) were subjected to single proximity solid propellant fires of approximately 10-min duration. Steel was introduced into both tests to simulate the effects of launch pad debris and the solid rocket motor (SRM) casing that might be present in the fire zone. (TFD)

  7. Comparison of sequential and single extraction in order to estimate environmental impact of metals from fly ash

    Directory of Open Access Journals (Sweden)

    Tasić Aleksandra M.

    2016-01-01

    Full Text Available The aim of this paper was to simulate the leaching of metals from fly ash under different environmental conditions using ultrasound- and microwave-assisted extraction techniques. Single-agent extraction and sequential extraction procedures were used to determine the levels of metal leaching. The concentrations of metals (Al, Fe, Mn, Cd, Co, Cr, Ni, Pb, Cu, As, Be) in fly ash extracts were measured by inductively coupled plasma-atomic emission spectrometry. Single-agent extractions of metals were conducted with sonication times of 10, 20, 30, 40 and 50 min. Single-agent extraction with deionized water was also undertaken by exposing samples to microwave radiation at a temperature of 50°C. The sequential extraction was undertaken according to the BCR procedure, which was modified and applied to study the partitioning of metals in coal fly ash. The microwave-assisted sequential extraction was performed at different extraction temperatures: 50, 100 and 150°C. The partitioning of metals between the individual fractions was investigated and discussed. The efficiency of the extraction process for each step was examined. In addition, the results of the microwave-assisted sequential extraction are compared to the results obtained by the standard ASTM method. The mobility of most elements contained in fly ash is markedly pH sensitive. [Projects of the Ministry of Science of the Republic of Serbia, nos. 172030, 176006 and III43009]

  8. Tornado missile simulation and risk analysis

    International Nuclear Information System (INIS)

    Twisdale, L.A.; Dunn, W.L.; Chu, J.

    1978-01-01

    Mathematical models of the contributing events to the tornado missile hazard at nuclear power plants have been developed in which the major sources of uncertainty have been considered in a probabilistic framework. These models have been structured into a sequential event formalism which permits the treatment of both single and multiple missile generation events. A simulation computer code utilizing these models has been developed to obtain estimates of tornado missile event likelihoods. Two case studies have been analyzed; the results indicate that the probability of a single missile from the sampling population impacting any of the plant's targets is less than about 10⁻⁷ per reactor-year. Additional work is needed for verification and sensitivity study
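
    A toy sequential-event Monte Carlo in the spirit of the formalism described above is sketched below (tornado strike, then missile generation, then target impact, sampled per reactor-year). Every rate and probability is an invented placeholder, deliberately chosen large enough that a plain simulation registers hits; estimating frequencies near the study's 10⁻⁷ level would require variance reduction or far more samples.

```python
# Toy sequential-event Monte Carlo: per simulated reactor-year, sample whether a
# tornado strikes the site, how many missiles it generates, and whether any
# missile impacts a target. All parameters are invented placeholders.
import numpy as np

rng = np.random.default_rng(7)
p_strike = 1e-3            # tornado strike probability per site-year (assumed)
mean_missiles = 5.0        # mean missiles generated per strike (assumed)
p_impact = 1e-3            # per-missile probability of hitting any target (assumed)

n_years = 2_000_000                                            # simulated reactor-years
strike = rng.random(n_years) < p_strike                        # event 1: tornado strike
n_missiles = rng.poisson(mean_missiles, n_years) * strike      # event 2: missile generation
p_any_hit = 1.0 - (1.0 - p_impact) ** n_missiles               # event 3: >= 1 target impact
hit = rng.random(n_years) < p_any_hit

print("estimated impact frequency: %.1e per reactor-year" % hit.mean())
```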

  9. Risk Assessment of Sediment Pollution Using Geostatistical Simulations

    Science.gov (United States)

    Golay, J.; Kanevski, M.

    2012-04-01

    Environmental monitoring networks (EMN) discretely measure the intensities of continuous phenomena (e.g. pollution, temperature, etc.). Spatial prediction models, like kriging, are then used for modeling. However, they give rise to smooth representations of the phenomena, which leads to overestimation or underestimation of extreme values. Moreover, they do not reproduce the spatial variability of the original data and the corresponding uncertainties. When dealing with risk assessment, this is unacceptable, since extreme values must be retrieved and probabilities of exceeding given thresholds must be computed [Kanevski et al., 2009]. In order to overcome these obstacles, geostatistics provides another approach: conditional stochastic simulations. Here, the basic idea is to generate multiple estimates of variable values (e.g. pollution concentration) at every location of interest which are calculated as stochastic realizations of an unknown random function (see, for example, [Kanevski, 2008], where both theoretical concepts and real data case studies are presented in detail). Many algorithms implement this approach. The most widely used in spatial modeling are sequential Gaussian simulations/cosimulations, sequential indicator simulations/cosimulations and direct simulations. In the present study, several algorithms of geostatistical conditional simulations were applied on real data collected from Lake Geneva. The main objectives were to compare their effectiveness in reproducing global statistics (histograms, variograms) and the way they characterize the variability and uncertainty of the contamination patterns. The dataset is composed of 200 measurements of the contamination of the lake sediments by heavy metals (i.e. Cadmium, Mercury, Zinc, Copper, Titanium and Chromium). The results obtained show some differences highlighting that risk assessment can be influenced by the algorithm it relies on. Moreover, hybrid models based on machine learning algorithms and
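
    The threshold-exceedance idea described above (counting, cell by cell, how many realizations exceed a given level) can be sketched in a few lines. The sketch below is illustrative only and is not the authors' code; the grid size, the lognormal stand-in for the realizations, and the threshold value are assumptions.

```python
import numpy as np

# Illustrative sketch: given a stack of conditional-simulation realizations on a
# grid, estimate the per-cell probability of exceeding a threshold for risk mapping.
# 'realizations' has shape (n_realizations, ny, nx).

def exceedance_probability(realizations: np.ndarray, threshold: float) -> np.ndarray:
    """Fraction of realizations in which each cell exceeds the threshold."""
    return (realizations > threshold).mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical stand-in for 100 sequential-simulation realizations on a 50x50 grid.
    realizations = rng.lognormal(mean=0.0, sigma=0.5, size=(100, 50, 50))
    prob_map = exceedance_probability(realizations, threshold=2.0)
    print("max exceedance probability:", prob_map.max())
```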

  10. In vivo comparison of simultaneous versus sequential injection technique for thermochemical ablation in a porcine model.

    Science.gov (United States)

    Cressman, Erik N K; Shenoi, Mithun M; Edelman, Theresa L; Geeslin, Matthew G; Hennings, Leah J; Zhang, Yan; Iaizzo, Paul A; Bischof, John C

    2012-01-01

    To investigate simultaneous and sequential injection thermochemical ablation in a porcine model, and compare them to sham and acid-only ablation. This IACUC-approved study involved 11 pigs in an acute setting. Ultrasound was used to guide placement of a thermocouple probe and coaxial device designed for thermochemical ablation. Solutions of 10 M acetic acid and NaOH were used in the study. Four injections per pig were performed in identical order at a total rate of 4 mL/min: saline sham, simultaneous, sequential, and acid only. Volume and sphericity of zones of coagulation were measured. Fixed specimens were examined by H&E stain. Average coagulation volumes were 11.2 mL (simultaneous), 19.0 mL (sequential) and 4.4 mL (acid). The highest temperature, 81.3°C, was obtained with simultaneous injection. Average temperatures were 61.1°C (simultaneous), 47.7°C (sequential) and 39.5°C (acid only). Sphericity coefficients (0.83-0.89) had no statistically significant difference among conditions. Thermochemical ablation produced substantial volumes of coagulated tissues relative to the amounts of reagents injected, considerably greater than acid alone in either technique employed. The largest volumes were obtained with sequential injection, yet this came at a price in one case of cardiac arrest. Simultaneous injection yielded the highest recorded temperatures and may be tolerated as well as or better than acid injection alone. Although this pilot study did not show a clear advantage for either sequential or simultaneous methods, the results indicate that thermochemical ablation is attractive for further investigation with regard to both safety and efficacy.

  11. Power distribution system reliability evaluation using dagger-sampling Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Y.; Zhao, S.; Ma, Y. [North China Electric Power Univ., Hebei (China). Dept. of Electrical Engineering

    2009-03-11

    A dagger-sampling Monte Carlo simulation method was used to evaluate power distribution system reliability. The dagger-sampling technique was used to record the failure of a component as an incident and to determine its occurrence probability by generating incident samples using random numbers. The dagger sampling technique was combined with the direct sequential Monte Carlo method to calculate average values of load point indices and system indices. Results of the 2 methods with simulation times of up to 100,000 years were then compared. The comparative evaluation showed that less computing time was required using the dagger-sampling technique due to its higher convergence speed. When simulation times were 1000 years, the dagger-sampling method required 0.05 seconds to accomplish an evaluation, while the direct method required 0.27 seconds. 12 refs., 3 tabs., 4 figs.
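
    For readers unfamiliar with the technique, the sketch below illustrates the dagger-sampling idea as it is commonly described in reliability Monte Carlo texts: one random number decides which (if any) of a block of ⌊1/p⌋ consecutive trials records a failure for a component with per-trial failure probability p. This is not the authors' code, and the failure probability and trial count are assumptions.

```python
import random

def dagger_failure_schedule(p: float, n_trials: int, rng: random.Random) -> list:
    """Dagger sampling for one component: one random draw per block of
    k = floor(1/p) trials marks at most one trial in the block as a failure."""
    k = int(1.0 / p)
    fails = [False] * n_trials
    for start in range(0, n_trials, k):
        u = rng.random()
        slot = int(u / p)  # slot < k -> that trial fails; slot >= k -> no failure in this block
        if slot < k and start + slot < n_trials:
            fails[start + slot] = True
    return fails

if __name__ == "__main__":
    rng = random.Random(42)
    p = 0.05  # hypothetical per-trial failure probability of one component
    schedule = dagger_failure_schedule(p, n_trials=10_000, rng=rng)
    print("observed failure frequency:", sum(schedule) / len(schedule))  # close to p
```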

  12. Lexical decoder for continuous speech recognition: sequential neural network approach

    International Nuclear Information System (INIS)

    Iooss, Christine

    1991-01-01

    The work presented in this dissertation concerns the study of a connectionist architecture to treat sequential inputs. In this context, the model proposed by J.L. Elman, a recurrent multilayer network, is used. Its abilities and its limits are evaluated. Modifications are made in order to treat erroneous or noisy sequential inputs and to classify patterns. The application context of this study concerns the realisation of a lexical decoder for analytical multi-speaker continuous speech recognition. Lexical decoding is performed on lattices of phonemes which are obtained after an acoustic-phonetic decoding stage relying on a K Nearest Neighbors search technique. Tests are done on sentences formed from a lexicon of 20 words. The results obtained show the ability of the proposed connectionist model to take into account the sequentiality at the input level, to memorize the context and to treat noisy or erroneous inputs. (author) [fr

  13. Computing Sequential Equilibria for Two-Player Games

    DEFF Research Database (Denmark)

    Miltersen, Peter Bro; Sørensen, Troels Bjerre

    2006-01-01

    Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall using linear programming and avoiding conversion to normal form. Koller and Pfeffer pointed out that the strategies...... obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for the practical applications. We show how to remove this deficiency by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming...... a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial time. In addition, the equilibrium we find is normal-form perfect. Our technique generalizes to general-sum games...

  14. Computing sequential equilibria for two-player games

    DEFF Research Database (Denmark)

    Miltersen, Peter Bro

    2006-01-01

    Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall using linear programming and avoiding conversion to normal form. Their algorithm has been used by AI researchers...... for constructing prescriptive strategies for concrete, often fairly large games. Koller and Pfeffer pointed out that the strategies obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for the practical applications. We show how to remove this deficiency...... by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial...

  15. A Bayesian sequential processor approach to spectroscopic portal system decisions

    Energy Technology Data Exchange (ETDEWEB)

    Sale, K; Candy, J; Breitfeller, E; Guidry, B; Manatt, D; Gosnell, T; Chambers, D

    2007-07-31

    The development of faster more reliable techniques to detect radioactive contraband in a portal type scenario is an extremely important problem especially in this era of constant terrorist threats. Towards this goal the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, physics and signal processing models and decision functions are discussed along with the first results of our research.
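
    A toy sketch of the sequential-update idea follows; it is not the authors' model. It treats each per-interval detector count as a Poisson datum, updates the posterior odds that a source is present, and stops as soon as the posterior crosses a decision threshold. The background rate, source rate, prior, threshold, and example counts are all illustrative assumptions.

```python
import math

def sequential_detection(counts, bkg_rate, src_rate, prior=0.5, threshold=0.99):
    """Update P(source present) after each per-interval count; stop early if
    the posterior crosses the decision threshold (toy binary-hypothesis model)."""
    log_odds = math.log(prior / (1.0 - prior))
    posterior = prior
    for i, n in enumerate(counts, start=1):
        # Poisson log-likelihood ratio for this datum: H1 rate = bkg + src, H0 rate = bkg.
        lam0, lam1 = bkg_rate, bkg_rate + src_rate
        log_odds += n * math.log(lam1 / lam0) - (lam1 - lam0)
        posterior = 1.0 / (1.0 + math.exp(-log_odds))
        if posterior >= threshold:
            return i, posterior   # detection as soon as it is statistically justified
    return None, posterior        # no decision within the data stream

if __name__ == "__main__":
    # Hypothetical counts from successive 1-second intervals at the portal.
    counts = [4, 6, 5, 9, 11, 12, 14]
    print(sequential_detection(counts, bkg_rate=5.0, src_rate=4.0))
```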

  16. Visual short-term memory for sequential arrays.

    Science.gov (United States)

    Kumar, Arjun; Jiang, Yuhong

    2005-04-01

    The capacity of visual short-term memory (VSTM) for a single visual display has been investigated in past research, but VSTM for multiple sequential arrays has been explored only recently. In this study, we investigate the capacity of VSTM across two sequential arrays separated by a variable stimulus onset asynchrony (SOA). VSTM for spatial locations (Experiment 1), colors (Experiments 2-4), orientations (Experiments 3 and 4), and conjunction of color and orientation (Experiment 4) were tested, with the SOA across the two sequential arrays varying from 100 to 1,500 msec. We find that VSTM for the trailing array is much better than VSTM for the leading array, but when averaged across the two arrays VSTM has a constant capacity independent of the SOA. We suggest that multiple displays compete for retention in VSTM and that separating information into two temporally discrete groups does not enhance the overall capacity of VSTM.

  17. Sequential determination of important ecotoxic radionuclides in nuclear waste samples

    International Nuclear Information System (INIS)

    Bilohuscin, J.

    2016-01-01

    In the dissertation thesis we focused on the development and optimization of a sequential determination method for the radionuclides ⁹³Zr, ⁹⁴Nb, ⁹⁹Tc and ¹²⁶Sn, employing the extraction chromatography sorbents TEVA® Resin and Anion Exchange Resin, supplied by Eichrom Industries. Before validating the sequential separation of these radionuclides from radioactive waste samples, a unique sequential procedure for the separation of ⁹⁰Sr, ²³⁹Pu and ²⁴¹Am from urine matrices was first tested, using molecular recognition sorbents of the AnaLig® series and the extraction chromatography sorbent DGA® Resin. In these experiments, four different sorbents were used in sequence, including the PreFilter Resin sorbent, which removes interfering organic materials present in raw urine. After positive results were obtained with this sequential procedure, experiments followed on ¹²⁶Sn separation using the TEVA® Resin and Anion Exchange Resin sorbents. Radiochemical recoveries obtained from samples of radioactive evaporator concentrates and sludge showed high separation efficiency, while the ¹²⁶Sn values were below the minimum detectable activities (MDA). The activity of ¹²⁶Sn was determined after ingrowth of the daughter nuclide ¹²⁶ᵐSb on an HPGe gamma detector, with minimal contamination by gamma-interfering radionuclides and decontamination factors (D_f) higher than 1400 for ⁶⁰Co and 47000 for ¹³⁷Cs. Based on these experiments and the results of the separation procedures, a comprehensive method for the sequential separation of ⁹³Zr, ⁹⁴Nb, ⁹⁹Tc and ¹²⁶Sn was proposed, which included optimization steps similar to those used in previous parts of the dissertation work. Application of the sequential separation method with the TEVA® Resin and Anion Exchange Resin sorbents to real radioactive waste samples provided satisfactory results and an economical, time-saving, efficient method. (author)

  18. Sequential analysis in neonatal research-systematic review.

    Science.gov (United States)

    Lava, Sebastiano A G; Elie, Valéry; Ha, Phuong Thi Viet; Jacqz-Aigrain, Evelyne

    2018-05-01

    As more new drugs are discovered, traditional trial designs reach their limits. Ten years after the adoption of the European Paediatric Regulation, we performed a systematic review on the US National Library of Medicine and Excerpta Medica databases of sequential trials involving newborns. Out of 326 identified scientific reports, 21 trials were included. They enrolled 2832 patients, of whom 2099 were analyzed: the median number of neonates included per trial was 48 (IQR 22-87), and the median gestational age was 28.7 (IQR 27.9-30.9) weeks. Eighteen trials used sequential techniques to determine sample size, while 3 used continual reassessment methods for dose-finding. In 16 studies reporting sufficient data, the sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674) with respect to a traditional trial. When the number of neonates finally included in the analysis was considered, the difference became significant: 35 (57%) patients (IQR 10 to 136.5, p = 0.0033). Sequential trial designs have not been frequently used in neonatology. They might potentially reduce the number of patients in drug trials, although this is not always the case. What is known: • In evaluating rare diseases in fragile populations, traditional designs reach their limits. About 20% of pediatric trials are discontinued, mainly because of recruitment problems. What is new: • Sequential trials involving newborns have been used infrequently and only a few (n = 21) are available for analysis. • The sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674).

  19. TELEGRAPHS TO INCANDESCENT LAMPS: A SEQUENTIAL PROCESS OF INNOVATION

    Directory of Open Access Journals (Sweden)

    Laurence J. Malone

    2000-01-01

    Full Text Available This paper outlines a sequential process of technological innovation in the emergence of the electrical industry in the United States from 1830 to 1880. Successive inventions that realized the commercial possibilities of electricity provided the foundation for an industry where technical knowledge, invention and diffusion were ultimately consolidated within the managerial structure of new firms. The genesis of the industry is traced, sequentially, through the development of the telegraph, arc light and incandescent lamp. Exploring the origins of the telegraph and incandescent lamp reveals a process where a series of inventions and firms resulted from successful efforts to use scientific principles to create new commodities and markets.

  20. Properties of simultaneous and sequential two-nucleon transfer

    International Nuclear Information System (INIS)

    Pinkston, W.T.; Satchler, G.R.

    1982-01-01

    Approximate forms of the first- and second-order distorted-wave Born amplitudes are used to study the overall structure, particularly the selection rules, of the amplitudes for simultaneous and sequential transfer of two nucleons. The role of the spin-state assumed for the intermediate deuterons in sequential (t, p) reactions is stressed. The similarity of one-step and two-step amplitudes for (α, d) reactions is exhibited, and the consequent absence of any obvious J-dependence in their interference is noted. (orig.)

  1. Sequential approach to Colombeau's theory of generalized functions

    International Nuclear Information System (INIS)

    Todorov, T.D.

    1987-07-01

    J.F. Colombeau's generalized functions are constructed as equivalence classes of the elements of a specially chosen ultrapower of the class of the C∞-functions. The elements of this ultrapower are considered as sequences of C∞-functions, so in a sense, the sequential construction presented here refers to the original Colombeau theory just as, for example, the Mikusinski sequential approach to the distribution theory refers to the original Schwartz theory of distributions. The paper could be used as an elementary introduction to the Colombeau theory in which recently a solution was found to the problem of multiplication of Schwartz distributions. (author). Refs

  2. Instrumental Landing Using Audio Indication

    Science.gov (United States)

    Burlak, E. A.; Nabatchikov, A. M.; Korsun, O. N.

    2018-02-01

    The paper proposes an audio indication method for presenting to a pilot the information regarding the relative position of an aircraft in precision piloting tasks. The implementation of the method is presented, and the use of such audio signal parameters as loudness, frequency and modulation is discussed. To confirm the operability of the audio indication channel, experiments using a modern aircraft simulation facility were carried out. The simulated instrument landings were performed using the proposed audio method to indicate the aircraft deviations in relation to the glide path. The results proved comparable with simulated instrument landings using the traditional glideslope pointers. This encourages further development of the method to solve other precision piloting tasks.

  3. A comparison of sequential and information-based methods for determining the co-integration rank in heteroskedastic VAR MODELS

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Angelis, Luca De; Rahbek, Anders

    2015-01-01

    In this article, we investigate the behaviour of a number of methods for estimating the co-integration rank in VAR systems characterized by heteroskedastic innovation processes. In particular, we compare the efficacy of the most widely used information criteria, such as Akaike Information Criterion....... The relative finite-sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC-based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms......-based method to over-estimate the co-integration rank in relatively small sample sizes....

  4. Fractionation of metals by sequential extraction procedures (BCR and Tessier) in soil exposed to fire of wide temperature range

    Science.gov (United States)

    Fajkovic, Hana; Rončević, Sanda; Nemet, Ivan; Prohić, Esad; Leontić-Vazdar, Dana

    2017-04-01

    Forest fires present a serious problem, especially in the Mediterranean Region. The effects of fire are numerous, from climate change and deforestation to loss of soil organic matter and changes in soil properties. One effect that is not well documented is the possible redistribution and/or remobilisation of pollutants previously deposited in the soil, due to the new physical and chemical soil properties and changes in equilibrium conditions. To understand and predict the possible redistribution and/or remobilisation of potential pollutants from soil affected by fires of different temperatures, several laboratory investigations were carried out. To evaluate the influence of organic matter on soil under fire, three soil samples were analysed and compared: (a) soil with added coniferous organic matter; (b) soil with added deciduous organic matter; and (c) soil without additional organic matter. The type of organic matter is closely related to soil pH, and pH influences the mobility of some pollutants, e.g. metals; for that reason pH was also measured throughout all experimental steps. Each of the soil samples (a, b and c) was heated at one control and three elevated temperatures (25°C, 200°C, 500°C and 850°C). After heating, whereby the effect of fire on soil was simulated, the samples were analysed by the BCR protocol, extended with the first step of the Tessier sequential extraction procedure and analysis of the residue with aqua regia. Element fractionation of heavy metals by this procedure was used to determine the amounts of selected elements (Al, Cd, Cr, Co, Cu, Fe, Mn, Ni, Pb and Zn). The selected metal concentrations were determined using an inductively coupled plasma atomic emission spectrometer. Furthermore, the loss of organic matter was calculated after each heating procedure, as well as the mineral composition, which was determined using X-ray diffraction. From the obtained results, it can be concluded that temperature has an influence on concentration of elements in specific step of

  5. Sequential hepato-splenic scintigraphy for measurement of hepatic blood flow

    International Nuclear Information System (INIS)

    Biersack, H.J.; Knopp, R.; Dahlem, R.; Winkler, C.; Thelen, M.; Schulz, D.; Schmidt, R.

    1977-01-01

    The arterial and portal components of total liver blood flow were determined quantitatively in 31 patients by means of a new, non-invasive method. Sequential hepato-splenic scintigraphy has been employed, using a scintillation camera linked to a computer system. In normals, the proportion of portal flow was 71%, whereas in patients with portal hypertension it averaged 21%. Our experience indicates that the procedure can be of considerable value in the pre-operative diagnosis and postoperative follow-up of portal hypertension. (orig.) [de

  6. Sequential hepato-splenic scintigraphy for measurement of hepatic blood flow

    Energy Technology Data Exchange (ETDEWEB)

    Biersack, H J; Knopp, R; Dahlem, R; Winkler, C [Bonn Univ. (Germany, F.R.). Inst. fuer Klinische und Experimentelle Nuklearmedizin; Thelen, M [Bonn Univ. (Germany, F.R.). Radiologische Klinik; Schulz, D; Schmidt, R [Bonn Univ. (Germany, F.R.). Chirurgische Klinik und Poliklinik

    1977-01-01

    The arterial and portal components of total liver blood flow were determined quantitatively in 31 patients by means of a new, non-invasive method. Sequential hepato-splenic scintigraphy has been employed, using a scintillation camera linked to a computer system. In normals, the proportion of portal flow was 71%, whereas in patients with portal hypertension it averaged 21%. Our experience indicates that the procedure can be of considerable value in the pre-operative diagnosis and postoperative follow-up of portal hypertension.

  7. Sinonasal carcinoma presenting as chronic sinusitis and sequential bilateral visual loss

    Directory of Open Access Journals (Sweden)

    Wei-Yu Chiang

    2015-01-01

    Full Text Available Sinonasal undifferentiated carcinoma-related rhinogenic optic neuropathy is rare and may lead to visual loss. To the best of our knowledge, this is the first report of bilateral sequential visual loss induced by this etiology. It is important to differentiate between chronic sinusitis and malignancy on the basis of specific findings on magnetic resonance images. Surgical decompression with multidisciplinary therapy, including steroids, chemotherapy, and radiotherapy, is indicated. However, no visual improvement was noted in this case, emphasizing the rapid disease progression and importance of early diagnosis and treatment.

  8. The reproducibility and variability of sequential left ventricular ejection fraction measurements by the nuclear stethoscope

    International Nuclear Information System (INIS)

    Kurata, Chinori; Hayashi, Hideharu; Kobayashi, Akira; Yamazaki, Noboru

    1986-01-01

    We evaluated the reproducibility and variability of sequential left ventricular ejection fraction (LVEF) measurements by the nuclear stethoscope in 72 patients. The group as a whole demonstrated excellent reproducibility (r = 0.96). However, repeat LVEF measurements by the nuclear stethoscope at 5-minute intervals showed an absolute difference of around 9%, at the 95% confidence level, from one measurement to the next. This finding indicates that a change in LVEF greater than 9% is necessary for determining an acute effect of an intervention in individual cases. (author)

  9. Comparative study of lesions created by high-intensity focused ultrasound using sequential discrete and continuous scanning strategies.

    Science.gov (United States)

    Fan, Tingbo; Liu, Zhenbo; Zhang, Dong; Tang, Mengxing

    2013-03-01

    Lesion formation and temperature distribution induced by high-intensity focused ultrasound (HIFU) were investigated both numerically and experimentally via two energy-delivering strategies, i.e., sequential discrete and continuous scanning modes. Simulations were presented based on the combination of Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation and bioheat equation. Measurements were performed on tissue-mimicking phantoms sonicated by a 1.12-MHz single-element focused transducer working at an acoustic power of 75 W. Both the simulated and experimental results show that, in the sequential discrete mode, obvious saw-tooth-like contours could be observed for the peak temperature distribution and the lesion boundaries, with the increasing interval space between two adjacent exposure points. In the continuous scanning mode, more uniform peak temperature distributions and lesion boundaries would be produced, and the peak temperature values would decrease significantly with the increasing scanning speed. In addition, compared to the sequential discrete mode, the continuous scanning mode could achieve higher treatment efficiency (lesion area generated per second) with a lower peak temperature. The present studies suggest that the peak temperature and tissue lesion resulting from the HIFU exposure could be controlled by adjusting the transducer scanning speed, which is important for improving the HIFU treatment efficiency.
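
    The simulations above couple the KZK equation with a bioheat equation; the sketch below shows only the simplest ingredient of such a model, an explicit finite-difference update of a 1-D Pennes bioheat equation with a fixed focal heating term. All tissue parameters, the heat source, and the boundary conditions are illustrative assumptions and are not the values used in the paper.

```python
import numpy as np

# Toy 1-D explicit finite-difference update of the Pennes bioheat equation:
# rho*c*dT/dt = k*d2T/dx2 + w_b*c_b*(T_a - T) + Q(x).
# Parameters and the Gaussian heat source are illustrative, not the paper's values.

def bioheat_1d(nx=201, dx=1e-4, dt=1e-3, t_end=1.0,
               rho=1000.0, c=3600.0, k=0.5, w_b=0.5, c_b=3600.0, T_a=37.0):
    x = np.arange(nx) * dx
    T = np.full(nx, 37.0)
    Q = 5e6 * np.exp(-((x - x.mean()) / 1e-3) ** 2)   # assumed focal heating, W/m^3
    alpha = k / (rho * c)
    assert alpha * dt / dx**2 < 0.5, "explicit scheme stability condition"
    for _ in range(int(t_end / dt)):
        lap = np.zeros_like(T)
        lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
        T += dt * (alpha * lap + (w_b * c_b * (T_a - T) + Q) / (rho * c))
        T[0] = T[-1] = 37.0                            # fixed boundary temperature
    return x, T

if __name__ == "__main__":
    x, T = bioheat_1d()
    print("peak temperature after 1 s of heating: %.1f degC" % T.max())
```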

  10. Construction of computational program of aging in insulating materials for searching reversed sequential test conditions to give damage equivalent to simultaneous exposure of heat and radiation

    International Nuclear Information System (INIS)

    Fuse, Norikazu; Homma, Hiroya; Okamoto, Tatsuki

    2013-01-01

    Two consecutive numerical calculations on the degradation of polymeric insulations under a thermal and radiation environment are carried out to simulate the so-called reversed sequential acceleration test. The aim of the calculation is to search for testing conditions which provide material damage equivalent to that caused by simultaneous exposure to heat and radiation. At least the following four parameters need to be considered in the sequential method: dose rate and exposure time for the irradiation, as well as temperature and aging time for the heating. The present paper discusses the handling of these parameters and shows some trial calculation results. (author)

  11. Fast regularizing sequential subspace optimization in Banach spaces

    International Nuclear Information System (INIS)

    Schöpfer, F; Schuster, T

    2009-01-01

    We are concerned with fast computations of regularized solutions of linear operator equations in Banach spaces in case only noisy data are available. To this end we modify recently developed sequential subspace optimization methods in such a way that the therein employed Bregman projections onto hyperplanes are replaced by Bregman projections onto stripes whose width is in the order of the noise level

  12. A sequential hypothesis test based on a generalized Azuma inequality

    NARCIS (Netherlands)

    Reijsbergen, D.P.; Scheinhardt, Willem R.W.; de Boer, Pieter-Tjerk

    We present a new power-one sequential hypothesis test based on a bound for the probability that a bounded zero-mean martingale ever crosses a curve of the form $a(n+k)^b$. The proof of the bound is of independent interest.

  13. Sequential and simultaneous revascularization in adult orthotopic piggyback liver transplantation

    NARCIS (Netherlands)

    Polak, WG; Miyamoto, S; Nemes, BA; Peeters, PMJG; de Jong, KP; Porte, RJ; Slooff, MJH

    The aim of the study was to assess whether there is a difference in outcome after sequential or simultaneous revascularization during orthotopic liver transplantation (OLT) in terms of patient and graft survival, mortality, morbidity, and liver function. The study population consisted of 102 adult

  14. A generally applicable sequential alkaline phosphatase immunohistochemical double staining

    NARCIS (Netherlands)

    van der Loos, Chris M.; Teeling, Peter

    2008-01-01

    A universal type of sequential double alkaline phosphatase immunohistochemical staining is described that can be used for formalin-fixed, paraffin-embedded and cryostat tissue sections from human and mouse origin. It consists of two alkaline phosphatase detection systems including enzymatic

  15. Excessive pressure in multichambered cuffs used for sequential compression therapy

    NARCIS (Netherlands)

    Segers, P; Belgrado, JP; Leduc, A; Leduc, O; Verdonck, P

    2002-01-01

    Background and Purpose. Pneumatic compression devices, used as part of the therapeutic strategy for lymphatic drainage, often have cuffs with multiple chambers that are inflated sequentially. The purpose of this study was to investigate (1) the relationship between cuff chamber pressure

  16. Retrieval of sea surface velocities using sequential Ocean Colour

    Indian Academy of Sciences (India)

    The Indian remote sensing satellite, IRS-P4 (Oceansat-I) launched on May 26th, 1999 carried two sensors on board, i.e., the Ocean Colour Monitor (OCM) and the Multi-frequency Scanning Microwave Radiometer (MSMR) dedicated for oceanographic research. Sequential data of IRS-P4 OCM has been analysed over parts ...

  17. Sequential and Biomechanical Factors Constrain Timing and Motion in Tapping

    NARCIS (Netherlands)

    Loehr, J.D.; Palmer, C.

    2009-01-01

    The authors examined how timing accuracy in tapping sequences is influenced by sequential effects of preceding finger movements and biomechanical interdependencies among fingers. Skilled pianists tapped sequences at 3 rates; in each sequence, a finger whose motion was more or less independent of

  18. What determines the impact of context on sequential action?

    NARCIS (Netherlands)

    Ruitenberg, M.F.L.; Verwey, Willem B.; Abrahamse, E.L.

    2015-01-01

    In the current study we build on earlier observations that memory-based sequential action is better in the original learning context than in other contexts. We examined whether changes in the perceptual context have differential impact across distinct processing phases (preparation versus execution

  19. The Efficacy of Sequential Therapy in Eradication of Helicobacter ...

    African Journals Online (AJOL)

    2017-05-22

    May 22, 2017 ... pylori (H. pylori) eradication rates of standard triple, sequential and quadruple therapies including clarithromycin regimens in this study. Materials and Methods: A total of 160 patients with dyspeptic symptoms were enrolled in the study. The patients were randomized to four groups of treatment protocols.

  20. The efficacy of sequential therapy in eradication of Helicobacter ...

    African Journals Online (AJOL)

    ... the Helicobacter pylori (H. pylori) eradication rates of standard triple, sequential and quadruple therapies including clarithromycin regimens in this study. Materials and Methods: A total of 160 patients with dyspeptic symptoms were enrolled in the study. The patients were randomized to four groups of treatment protocols.

  1. In Vivo Evaluation of Synthetic Aperture Sequential Beamforming

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian; Hansen, Peter Møller; Lange, Theis

    2012-01-01

    Ultrasound in vivo imaging using synthetic aperture sequential beamformation (SASB) is compared with conventional imaging in a double blinded study using side-by-side comparisons. The objective is to evaluate if the image quality in terms of penetration depth, spatial resolution, contrast...

  2. Quantum chromodynamics as the sequential fragmenting with inactivation

    International Nuclear Information System (INIS)

    Botet, R.

    1996-01-01

    We investigate the relation between the modified leading log approximation of the perturbative QCD and the sequential binary fragmentation process. We will show that in the absence of inactivation, this process is equivalent to the QCD gluodynamics. The inactivation term yields a precise prescription of how to include the hadronization in the QCD equations. (authors)

  3. Quantum chromodynamics as the sequential fragmenting with inactivation

    Energy Technology Data Exchange (ETDEWEB)

    Botet, R. [Paris-11 Univ., 91 - Orsay (France). Lab. de Physique des Solides; Ploszajczak, M. [Grand Accelerateur National d`Ions Lourds (GANIL), 14 - Caen (France)

    1996-12-31

    We investigate the relation between the modified leading log approximation of the perturbative QCD and the sequential binary fragmentation process. We will show that in the absence of inactivation, this process is equivalent to the QCD gluodynamics. The inactivation term yields a precise prescription of how to include the hadronization in the QCD equations. (authors). 15 refs.

  4. The Motivating Language of Principals: A Sequential Transformative Strategy

    Science.gov (United States)

    Holmes, William Tobias

    2012-01-01

    This study implemented a Sequential Transformative Mixed Methods design with teachers (as recipients) and principals (to give voice) in the examination of principal talk in two different school accountability contexts (Continuously Improving and Continuously Zigzag) using the conceptual framework of Motivating Language Theory. In phase one,…

  5. The sequential price of anarchy for atomic congestion games

    NARCIS (Netherlands)

    de Jong, Jasper; Uetz, Marc Jochen

    2014-01-01

    In situations without central coordination, the price of anarchy relates the quality of any Nash equilibrium to the quality of a global optimum. Instead of assuming that all players choose their actions simultaneously, here we consider games where players choose their actions sequentially. The

  6. Sequential infiltration synthesis for enhancing multiple-patterning lithography

    Science.gov (United States)

    Darling, Seth B.; Elam, Jeffrey W.; Tseng, Yu-Chih

    2017-06-20

    Simplified methods of multiple-patterning photolithography using sequential infiltration synthesis to modify the photoresist such that it withstands plasma etching better than unmodified resist and replaces one or more hard masks and/or a freezing step in MPL processes including litho-etch-litho-etch photolithography or litho-freeze-litho-etch photolithography.

  7. A sequential mixed methods research approach to investigating HIV ...

    African Journals Online (AJOL)

    Sequential mixed methods research is an effective approach for investigating complex problems, but it has not been extensively used in construction management research. In South Africa, the HIV/AIDS pandemic has seen construction management taking on a vital responsibility since the government called upon the ...

  8. Algorithm for Non-proportional Loading in Sequentially Linear Analysis

    NARCIS (Netherlands)

    Yu, C.; Hoogenboom, P.C.J.; Rots, J.G.; Saouma, V.; Bolander, J.; Landis, E.

    2016-01-01

    Sequentially linear analysis (SLA) is an alternative to the Newton-Raphson method for analyzing the nonlinear behavior of reinforced concrete and masonry structures. In this paper SLA is extended to load cases that are applied one after the other, for example first dead load and then wind load. It

  9. Concurrent Learning of Control in Multi agent Sequential Decision Tasks

    Science.gov (United States)

    2018-04-17

    Concurrent Learning of Control in Multi-agent Sequential Decision Tasks The overall objective of this project was to develop multi-agent reinforcement... learning (MARL) approaches for intelligent agents to autonomously learn distributed control policies in decentralized partially observable... learning of policies in Dec-POMDPs, established performance bounds, evaluated these algorithms both theoretically and empirically, The views

  10. Sequential stenotic strictures of the small bowel leading to obstruction

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Small bowel obstructions (SBOs) are primarily caused by adhesions, hernias, neoplasms, or inflammatory strictures. Intraluminal strictures are an uncommon cause of SBO. This report describes our findings in a unique case of sequential, stenotic intraluminal strictures of the small intestine, discusses the differential diagnosis of intraluminal intestinal strictures, and reviews the literature regarding intraluminal pathology.

  11. Decomposition of Copper (II) Sulfate Pentahydrate: A Sequential Gravimetric Analysis.

    Science.gov (United States)

    Harris, Arlo D.; Kalbus, Lee H.

    1979-01-01

    Describes an improved experiment of the thermal dehydration of copper (II) sulfate pentahydrate. The improvements described here are control of the temperature environment and a quantitative study of the decomposition reaction to a thermally stable oxide. Data will suffice to show sequential gravimetric analysis. (Author/SA)

  12. Standardized method for reproducing the sequential X-rays flap

    International Nuclear Information System (INIS)

    Brenes, Alejandra; Molina, Katherine; Gudino, Sylvia

    2009-01-01

    A method is validated to standardize the taking, developing and analysis of bite-wing radiographs obtained sequentially, in order to compare and evaluate detectable changes in the evolution of interproximal lesions through time. A radiographic positioner called XCP® was modified by means of a rigid acrylic guide to achieve proper positioning of the X-ray equipment relative to the XCP® ring and its reorientation during the sequential radiographic process. Sixteen subjects aged 4 to 40 years were studied, for a total of 32 registries. Two radiographs of the same block of teeth were taken from each subject in a sequential way, with a minimum interval of 30 minutes between them, before the placement of the radiographic attachment. The images were digitized with a Super Cam® scanner and imported into software. Measurements along the X and Y axes were performed for both radiographs and then compared. The intraclass correlation index (ICI) showed that the proposed method is statistically related to the measurements (mm) obtained along the X and Y axes for both sequential series of radiographs (p=0.01). The measures of central tendency and dispersion showed that the usual occurrence is indifferent between the two measurements (Mode 0.000 and S = 0.083 and 0.109) and that the probability of occurrence of different values is lower than expected. (author) [es

  13. Sequential Analysis of Metals in Municipal Dumpsite Composts of ...

    African Journals Online (AJOL)

    ... Ni) in Municipal dumpsite compost were determined by the sequential extraction method. Chemical parameters such as pH, conductivity, and organic carbon contents of the samples were also determined. Analysis of the extracts was carried out by atomic absorption spectrophotometer machine (Buck Scientific VPG 210).

  14. Investigation of the sequential validity of quality improvement team ...

    African Journals Online (AJOL)

    Background: Self-assessment is widely used by quality improvement (QI) teams in health care improvement collaboratives to assess their own performance. There is mixed evidence on the validity of this approach. This study investigated the sequential validity of self-assessments in a QI HIV collaborative in Tanzania.

  15. The one-shot deviation principle for sequential rationality

    DEFF Research Database (Denmark)

    Hendon, Ebbe; Whitta-Jacobsen, Hans Jørgen; Sloth, Birgitte

    1996-01-01

    We present a decentralization result which is useful for practical and theoretical work with sequential equilibrium, perfect Bayesian equilibrium, and related equilibrium concepts for extensive form games. A weak consistency condition is sufficient to obtain an analogy to the well-known One-Stage-Deviation Principle for subgame perfect equilibrium...

  16. A solution for automatic parallelization of sequential assembly code

    Directory of Open Access Journals (Sweden)

    Kovačević Đorđe

    2013-01-01

    Full Text Available Since modern multicore processors can execute existing sequential programs only on a single core, there is a strong need for automatic parallelization of program code. Relying on existing algorithms, this paper describes a new software tool for the parallelization of sequential assembly code. The main goal of this paper is to develop a parallelizer which reads sequential assembler code and outputs parallelized code for a MIPS processor with multiple cores. The idea is the following: the parser translates the assembler input file into program objects suitable for further processing. After that, static single assignment is performed. Based on the data flow graph, the parallelization algorithm separates instructions onto different cores. Once the sequential code has been parallelized, registers are allocated with a linear allocation algorithm, and the final result is distributed assembler code for each of the cores. In the paper we evaluate the speedup on a matrix multiplication example processed by the parallelizer. The result is an almost linear speedup of code execution, which increases with the number of cores: the speedup on two cores is 1.99, while on 16 cores it is 13.88.
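
    The dependence-based partitioning step described above (building a data-flow graph and separating independent instructions onto different cores) can be illustrated with a toy sketch. The instruction format, the true-dependence-only test, and the per-level round-robin assignment are simplifying assumptions and do not reproduce the tool's actual algorithm.

```python
from collections import defaultdict

def partition(instructions, n_cores):
    """instructions: list of (dest_reg, [src_regs]) in program order.
    Returns a mapping core_id -> list of instruction indices."""
    level = {}
    last_writer = {}  # register -> index of its most recent writer (true dependences only)
    for i, (dest, srcs) in enumerate(instructions):
        deps = [last_writer[s] for s in srcs if s in last_writer]
        level[i] = 1 + max((level[d] for d in deps), default=-1)
        last_writer[dest] = i
    by_level = defaultdict(list)
    for i, lv in level.items():
        by_level[lv].append(i)
    cores = defaultdict(list)
    for lv in sorted(by_level):
        for j, i in enumerate(by_level[lv]):
            cores[j % n_cores].append(i)  # independent instructions of one level spread across cores
    return dict(cores)

if __name__ == "__main__":
    # Hypothetical three-address pseudo-assembly: (destination, sources).
    prog = [("r1", []), ("r2", []), ("r3", ["r1", "r2"]),
            ("r4", ["r1"]), ("r5", ["r3", "r4"])]
    print(partition(prog, n_cores=2))
```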

  17. Making Career Decisions--A Sequential Elimination Approach.

    Science.gov (United States)

    Gati, Itamar

    1986-01-01

    Presents a model for career decision making based on the sequential elimination of occupational alternatives, an adaptation for career decisions of Tversky's (1972) elimination-by-aspects theory of choice. The expected utility approach is reviewed as a representative compensatory model for career decisions. Advantages, disadvantages, and…
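
    The sequential-elimination idea, in the spirit of the elimination-by-aspects model mentioned above, can be sketched as follows. The occupations, aspects, and their ordering are invented for illustration and do not come from the article.

```python
def sequential_elimination(alternatives, required_aspects):
    """Eliminate alternatives aspect by aspect. 'alternatives' maps a name to the
    set of aspects it offers; 'required_aspects' is considered in priority order."""
    remaining = dict(alternatives)
    for aspect in required_aspects:
        surviving = {name: asp for name, asp in remaining.items() if aspect in asp}
        if not surviving:        # do not eliminate everything on a single aspect
            continue
        remaining = surviving
        if len(remaining) == 1:
            break
    return list(remaining)

if __name__ == "__main__":
    # Hypothetical occupational alternatives and the aspects they offer.
    occupations = {
        "teacher":  {"job security", "working with people", "summer break"},
        "engineer": {"high salary", "problem solving", "job security"},
        "nurse":    {"working with people", "job security", "shift work"},
    }
    print(sequential_elimination(occupations,
                                 ["job security", "working with people", "summer break"]))
```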

  18. Biohydrogen production from beet molasses by sequential dark and photofermentation

    NARCIS (Netherlands)

    Özgür, E.; Mars, A.E.; Peksel, B.; Louwerse, A.; Yücel, M.; Gündüz, U.; Claassen, P.A.M.; Eroglu, I.

    2010-01-01

    Biological hydrogen production using renewable resources is a promising possibility to generate hydrogen in a sustainable way. In this study, a sequential dark and photofermentation has been employed for biohydrogen production using sugar beet molasses as a feedstock. An extreme thermophile

  19. Influence of synchronous and sequential stimulation on muscle fatigue

    NARCIS (Netherlands)

    Thomsen, M.; Thomsen, M.; Veltink, Petrus H.

    1997-01-01

    In acute experiments the sciatic nerve of the rat is electrically stimulated to induce fatigue in the medial Gastrocnemius muscle. Fatigue tests are carried out using intermittent stimulation of different compartments (sequential) or a single compartment (synchronous) of the sciatic nerve. The

  20. A Relational Account of Call-by-Value Sequentiality

    DEFF Research Database (Denmark)

    Riecke, Jon Gary; Sandholm, Anders Bo

    2002-01-01

    We construct a model for FPC, a purely functional, sequential, call-by-value language. The model is built from partial continuous functions, in the style of Plotkin, further constrained to be uniform with respect to a class of logical relations. We prove that the model is fully abstract....

  1. Sequential kidney scintiscanning before and after vascular reconstruction

    International Nuclear Information System (INIS)

    Siems, H.H.; Allenberg, J.R.; Hupp, T.; Clorius, J.H.

    1985-01-01

    In this follow-up study sequential scintigraphy was performed on 20 of selected patients up to 3.4 years after operation, the results are compared with the pre-operative examinations and with the surgical effect on the increased blood pressure. (orig./MG) [de

  2. Trial Sequential Analysis in systematic reviews with meta-analysis

    Directory of Open Access Journals (Sweden)

    Jørn Wetterslev

    2017-03-01

    Full Text Available Abstract Background Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size, accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). Methods We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. Results The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentistic approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis, and the diversity (D2) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in
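
    As a reading aid, a commonly cited form of the required information size and its diversity adjustment is sketched below for a two-group comparison of means; the exact expressions used in Trial Sequential Analysis may differ, so this should be read as an assumption-laden illustration rather than the authors' formula.

```latex
% Sketch only: fixed-effect required information size for detecting a minimally
% relevant difference delta with variance sigma^2, and the diversity (D^2) adjustment.
\[
  \mathrm{RIS}_{\mathrm{fixed}}
    = \frac{4\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}\sigma^{2}}{\delta^{2}},
  \qquad
  \mathrm{RIS}_{\mathrm{adjusted}}
    = \frac{\mathrm{RIS}_{\mathrm{fixed}}}{1 - D^{2}}.
\]
```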

  3. Sequential growth factor application in bone marrow stromal cell ligament engineering.

    Science.gov (United States)

    Moreau, Jodie E; Chen, Jingsong; Horan, Rebecca L; Kaplan, David L; Altman, Gregory H

    2005-01-01

    In vitro bone marrow stromal cell (BMSC) growth may be enhanced through culture medium supplementation, mimicking the biochemical environment in which cells optimally proliferate and differentiate. We hypothesize that the sequential administration of growth factors to first proliferate and then differentiate BMSCs cultured on silk fiber matrices will support the enhanced development of ligament tissue in vitro. Confluent second passage (P2) BMSCs obtained from purified bone marrow aspirates were seeded on RGD-modified silk matrices. Seeded matrices were divided into three groups for 5 days of static culture, with medium supplement of basic fibroblast growth factor (B) (1 ng/mL), epidermal growth factor (E; 1 ng/mL), or growth factor-free control (C). After day 5, medium supplementation was changed to transforming growth factor-beta1 (T; 5 ng/mL) or C for an additional 9 days of culture. Real-time RT-PCR, SEM, MTT, histology, and ELISA for collagen type I of all sample groups were performed. Results indicated that BT supported the greatest cell ingrowth after 14 days of culture in addition to the greatest cumulative collagen type I expression measured by ELISA. Sequential growth factor application promoted significant increases in collagen type I transcript expression from day 5 of culture to day 14, for five of six groups tested. All T-supplemented samples surpassed their respective control samples in both cell ingrowth and collagen deposition. All samples supported spindle-shaped, fibroblast cell morphology, aligning with the direction of silk fibers. These findings indicate significant in vitro ligament development after only 14 days of culture when using a sequential growth factor approach.

  4. Program completion of a web-based tailored lifestyle intervention for adults: differences between a sequential and a simultaneous approach.

    Science.gov (United States)

    Schulz, Daniela N; Schneider, Francine; de Vries, Hein; van Osch, Liesbeth A D M; van Nierop, Peter W M; Kremers, Stef P J

    2012-03-08

    Unhealthy lifestyle behaviors often co-occur and are related to chronic diseases. One effective method to change multiple lifestyle behaviors is web-based computer tailoring. Dropout from Internet interventions, however, is rather high, and it is challenging to retain participants in web-based tailored programs, especially programs targeting multiple behaviors. To date, it is unknown how much information people can handle in one session while taking part in a multiple behavior change intervention, which could be presented either sequentially (one behavior at a time) or simultaneously (all behaviors at once). The first objective was to compare dropout rates of 2 computer-tailored interventions: a sequential and a simultaneous strategy. The second objective was to assess which personal characteristics are associated with completion rates of the 2 interventions. Using an RCT design, demographics, health status, physical activity, vegetable consumption, fruit consumption, alcohol intake, and smoking were self-assessed through web-based questionnaires among 3473 adults, recruited through Regional Health Authorities in the Netherlands in the autumn of 2009. First, a health risk appraisal was offered, indicating whether respondents were meeting the 5 national health guidelines. Second, psychosocial determinants of the lifestyle behaviors were assessed and personal advice was provided, about one or more lifestyle behaviors. Our findings indicate a high non-completion rate for both types of intervention (71.0%; n = 2167), with more incompletes in the simultaneous intervention (77.1%; n = 1169) than in the sequential intervention (65.0%; n = 998). In both conditions, discontinuation was predicted by a lower age (sequential condition: OR = 1.04; P simultaneous condition: OR = 1.04; P sequential condition: OR = 0.86; P = .01; CI = 0.76-0.97; simultaneous condition: OR = 0.49; P sequential intervention, being male (OR = 1.27; P = .04; CI = 1.01-1.59) also predicted dropout

  5. Work–Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress

    Science.gov (United States)

    Zhou, Shiyi; Da, Shu; Guo, Heng; Zhang, Xichao

    2018-01-01

    After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work–family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work–family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work–family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women’s perceptions of both work-to-family conflict and family-to-work conflict were significantly negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated that the sequential mediating effect of negative affect and stress in the relationship between work–family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work–family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress. PMID:29719522

  6. Work-Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress.

    Science.gov (United States)

    Zhou, Shiyi; Da, Shu; Guo, Heng; Zhang, Xichao

    2018-01-01

    After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work-family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work-family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work-family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women's perceptions of both work-to-family conflict and family-to-work conflict were significantly negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated that the sequential mediating effect of negative affect and stress in the relationship between work-family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work-family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress.

  7. Work–Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress

    Directory of Open Access Journals (Sweden)

    Shiyi Zhou

    2018-04-01

    Full Text Available After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work–family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work–family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work–family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women’s perceptions of both work-to-family conflict and family-to-work conflict were significantly negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated that the sequential mediating effect of negative affect and stress in the relationship between work–family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work–family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress.

  8. Accuracy of respiratory motion measurement of 4D-MRI: A comparison between cine and sequential acquisition.

    Science.gov (United States)

    Liu, Yilin; Yin, Fang-Fang; Rhee, DongJoo; Cai, Jing

    2016-01-01

    The authors have recently developed a cine-mode T2*/T1-weighted 4D-MRI technique and a sequential-mode T2-weighted 4D-MRI technique for imaging respiratory motion. This study aims at investigating which 4D-MRI image acquisition mode, cine or sequential, provides more accurate measurement of organ motion during respiration. A 4D digital extended cardiac-torso (XCAT) human phantom with a hypothesized tumor was used to simulate the image acquisition and the 4D-MRI reconstruction. The respiratory motion was controlled by the given breathing signal profiles. The tumor was manipulated to move continuously with the surrounding tissue. The motion trajectories were measured from both sequential- and cine-mode 4D-MRI images. The measured trajectories were compared with the average trajectory calculated from the input profiles, which was used as the reference. The error in the 4D-MRI tumor motion trajectory (E) was determined. In addition, the corresponding respiratory motion amplitudes of all the selected 2D images for 4D reconstruction were recorded. Each amplitude was compared with the amplitude of its associated bin on the average breathing curve. The mean differences from the average breathing curve across all slice positions (D) were calculated. A total of 500 simulated respiratory profiles with a wide range of irregularity (Ir) were used to investigate the relationship between D and Ir. Furthermore, statistical analysis of E and D using the XCAT phantom controlled by 20 cancer patients' breathing profiles was conducted, and a Wilcoxon signed-rank test was used to compare the two modes. D increased faster for cine-mode (D = 1.17 × Ir + 0.23) than for sequential-mode (D = 0.47 × Ir + 0.23) as irregularity increased. For the XCAT study using 20 cancer patients' breathing profiles, the median E values were significantly different: 0.12 and 0.10 cm for cine- and sequential-modes, respectively, with a p-value of 0.02. The median D values were also significantly different: 0.47 and 0.24 cm for cine- and sequential-modes, respectively.

  9. Removal of toluene by sequential adsorption-plasma oxidation: Mixed support and catalyst deactivation.

    Science.gov (United States)

    Qin, Caihong; Huang, Xuemin; Zhao, Junjie; Huang, Jiayu; Kang, Zhongli; Dang, Xiaoqing

    2017-07-15

    A sequential adsorption-plasma oxidation system was used to remove toluene from simulated dry air using γ-Al2O3, HZSM-5, a mixture of the two materials or their supported Mn-Ag catalyst as adsorbents under atmospheric pressure and room temperature. After 120 min of plasma oxidation, γ-Al2O3 had a better carbon balance (∼75%) than HZSM-5, but the CO2 yield of γ-Al2O3 was only ∼50%; and there was some desorption of toluene when γ-Al2O3 was used. When a mixture of HZSM-5 and γ-Al2O3 with a mass ratio of 1/2 was used, the carbon balance was up to 90% and 82% of this was CO2. The adsorption performance and electric discharge characteristics of the mixed supports were tested in order to rationalize this high COx yield. After seven cycles of sequential adsorption-plasma oxidation, support and Mn-Ag catalyst deactivation occurred. The support and catalyst were characterized before and after deactivation by SEM, the BET method, XRD, XPS and GC-MS in order to probe the mechanism of their deactivation. 97.6% of the deactivated supports and 76% of the deactivated catalysts could be recovered by O2 temperature-programmed oxidation. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Markov decision processes: a tool for sequential decision making under uncertainty.

    Science.gov (United States)

    Alagoz, Oguzhan; Hsu, Heather; Schaefer, Andrew J; Roberts, Mark S

    2010-01-01

    We provide a tutorial on the construction and evaluation of Markov decision processes (MDPs), powerful analytical tools for sequential decision making under uncertainty that are widely used in industrial and manufacturing applications but underutilized in medical decision making (MDM). We demonstrate the use of an MDP to solve a sequential clinical treatment problem under uncertainty. Markov decision processes generalize standard Markov models in that a decision process is embedded in the model and multiple decisions are made over time. Furthermore, they have significant advantages over standard decision analysis. We compare MDPs to standard Markov-based simulation models by solving the problem of the optimal timing of living-donor liver transplantation using both methods. Both models result in the same optimal transplantation policy and the same total life expectancies for the same patient and living donor. The computation time for solving the MDP model is significantly smaller than that for solving the Markov model. We briefly describe the growing literature of MDPs applied to medical decisions.
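
    The following minimal value-iteration sketch (Python) illustrates the kind of model the tutorial describes; the states, actions, transition probabilities, rewards and discount factor below are invented for illustration and are not the liver-transplantation model solved in the paper.

    import numpy as np

    # Toy MDP: a "wait vs. act" structure loosely inspired by the treatment-timing setting.
    # All numbers are illustrative assumptions, not the paper's clinical model.
    n_states, n_actions = 3, 2          # 0 = good health, 1 = poor health, 2 = absorbing (post-action)
    gamma = 0.97                        # discount factor

    # P[a, s, s'] = transition probabilities; R[s, a] = expected immediate reward
    P = np.zeros((n_actions, n_states, n_states))
    P[0] = [[0.9, 0.1, 0.0],            # action 0 = wait: health may deteriorate
            [0.2, 0.8, 0.0],
            [0.0, 0.0, 1.0]]
    P[1] = [[0.0, 0.0, 1.0],            # action 1 = act now: jump to the absorbing state
            [0.0, 0.0, 1.0],
            [0.0, 0.0, 1.0]]
    R = np.array([[1.0, 0.7],
                  [0.4, 0.6],
                  [0.8, 0.8]])

    V = np.zeros(n_states)
    for _ in range(2000):                                   # value iteration
        Q = R + gamma * np.einsum('ast,t->sa', P, V)        # Q[s, a] = R[s, a] + gamma * sum_t P[a, s, t] * V[t]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-9:
            V = V_new
            break
        V = V_new

    print("optimal values:", np.round(V, 3))
    print("optimal policy (0 = wait, 1 = act):", Q.argmax(axis=1))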

  11. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    Science.gov (United States)

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme, to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis.
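
    A rough Monte Carlo sketch of how the classification error of a 67x3 cluster design could be checked is given below; the prevalence thresholds, decision rule and intracluster correlation are illustrative assumptions, not the values used in the study.

    import numpy as np

    rng = np.random.default_rng(0)

    def prob_call_high(p_true, n_clusters=67, m=3, d=20, icc=0.1, n_sim=5000):
        """Probability that total GAM cases >= d under a beta-binomial cluster model
        whose intracluster correlation equals icc (d is an illustrative decision rule)."""
        k = (1 - icc) / icc                       # beta parameters: mean p_true, ICC = icc
        a, b = p_true * k, (1 - p_true) * k
        cluster_p = rng.beta(a, b, size=(n_sim, n_clusters))
        cases = rng.binomial(m, cluster_p).sum(axis=1)
        return np.mean(cases >= d)

    # Hypothetical LQAS thresholds: "low" GAM prevalence 5%, "high" 15%
    print("P(classify high | true 5%)  =", prob_call_high(0.05))      # false-alarm type error
    print("P(classify low  | true 15%) =", 1 - prob_call_high(0.15))  # miss type error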

  12. DNA moves sequentially towards the nuclear matrix during DNA replication in vivo

    Directory of Open Access Journals (Sweden)

    Aranda-Anzaldo Armando

    2011-01-01

    Full Text Available Abstract Background In the interphase nucleus of metazoan cells DNA is organized in supercoiled loops anchored to a nuclear matrix (NM. There is varied evidence indicating that DNA replication occurs in replication factories organized upon the NM and that DNA loops may correspond to the actual replicons in vivo. In normal rat liver the hepatocytes are arrested in G0 but they synchronously re-enter the cell cycle after partial-hepatectomy leading to liver regeneration in vivo. We have previously determined in quiescent rat hepatocytes that a 162 kbp genomic region containing members of the albumin gene family is organized into five structural DNA loops. Results In the present work we tracked down the movement relative to the NM of DNA sequences located at different points within such five structural DNA loops during the S phase and after the return to cellular quiescence during liver regeneration. Our results indicate that looped DNA moves sequentially towards the NM during replication and then returns to its original position in newly quiescent cells, once the liver regeneration has been achieved. Conclusions Looped DNA moves in a sequential fashion, as if reeled in, towards the NM during DNA replication in vivo thus supporting the notion that the DNA template is pulled progressively towards the replication factories on the NM so as to be replicated. These results provide further evidence that the structural DNA loops correspond to the actual replicons in vivo.

  13. Individual differences in eyewitness identification accuracy between sequential and simultaneous line-ups: consequences for police practice and jury decisions

    OpenAIRE

    Dominic Willmott; Nicole Sherretts

    2016-01-01

    Background Although previous research has indicated that sequential line-up procedures result in fewer mistaken identifications, this was found to be at the expense of accurate identifications more typical within simultaneous procedures. Hence, there remains a lack of agreement about which procedure is superior, and the interaction such procedures have with eyewitness confidence. The interaction between witness demographics and identification accuracy also remains unclear. Part...

  14. Selenium speciation in phosphate mine soils and evaluation of a sequential extraction procedure using XAFS

    International Nuclear Information System (INIS)

    Favorito, Jessica E.; Luxton, Todd P.; Eick, Matthew J.; Grossl, Paul R.

    2017-01-01

    Selenium is a trace element found in western US soils, where ingestion of Se-accumulating plants has resulted in livestock fatalities. Therefore, a reliable understanding of Se speciation and bioavailability is critical for effective mitigation. Sequential extraction procedures (SEP) are often employed to examine Se phases and speciation in contaminated soils but may be limited by experimental conditions. We examined the validity of a SEP using X-ray absorption spectroscopy (XAS) for both whole and a sequence of extracted soils. The sequence included removal of soluble, PO4-extractable, carbonate, amorphous Fe-oxide, crystalline Fe-oxide, organic, and residual Se forms. For whole soils, XANES analyses indicated Se(0) and Se(-II) predominated, with lower amounts of Se(IV) present, related to carbonates and Fe-oxides. Oxidized Se species were more elevated and residual/elemental Se was lower than previous SEP results from ICP-AES suggested. For soils from the SEP sequence, XANES results indicated only partial recovery of carbonate, Fe-oxide and organic Se. This suggests Se was incompletely removed during designated extractions, possibly due to lack of mineral solubilization or reagent specificity. Selenium fractions associated with Fe-oxides were reduced in amount or removed after using hydroxylamine HCl for most soils examined. XANES results indicate partial dissolution of solid-phases may occur during extraction processes. This study demonstrates why precautions should be taken to improve the validity of SEPs. Mineralogical and chemical characterizations should be completed prior to SEP implementation to identify extractable phases or mineral components that may influence extraction effectiveness. Sequential extraction procedures can be appropriately tailored for reliable quantification of speciation in contaminated soils. - Highlights: • XANES spectra indicated whole soils consisted of mostly elemental and organic Se and lower amounts of sorbed oxidized Se.

  15. Retrieval of sea surface velocities using sequential ocean colour monitor (OCM) data

    Digital Repository Service at National Institute of Oceanography (India)

    Prasad, J.S.; Rajawat, A.S.; Pradhan, Y.; Chauhan, O.S.; Nayak, S.R.

    velocities has been developed. The method is based on matching suspended sediment dispersion patterns, in sequential two time lapsed images. The pattern matching is performed on atmospherically corrected and geo-referenced sequential pair of images by Maximum...

  16. MC/DC and Toggle Coverage Measurement Tool for FBD Program Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Eui Sub; Jung, Se Jin; Kim, Jae Yeob; Yoo, Jun Beom [Konkuk University, Seoul (Korea, Republic of)

    2016-05-15

    The functional verification of an FBD program can be implemented with various techniques such as testing and simulation. Simulation is preferable for verifying an FBD program because it also replicates the operation of the PLC. The PLC executes repeatedly, based on its scan time, as long as the controlled system is running; likewise, the simulation technique operates continuously and sequentially. Although engineers try to verify the functionality thoroughly, it is difficult to find residual errors in the design. Even if 100% functional coverage is accomplished, code coverage may be only 50%, which might indicate that the scenario is missing some key features of the design. Unfortunately, errors and bugs are often found at the missing points. To assure a high quality of functional verification, code coverage is important as well as functional coverage. We developed a pair of tools, 'FBDSim' and 'FBDCover', for FBD simulation and coverage measurement. 'FBDSim' automatically simulates a set of FBD simulation scenarios. While it simulates the FBD program, it calculates the MC/DC and Toggle coverage and identifies unstimulated points. After the FBD simulation is done, 'FBDCover' reads the coverage results and shows the coverage with a graphical feature and the uncovered points with a tree feature. The coverages and uncovered points can help engineers improve the quality of the simulation. We deal with both coverages only briefly here, but the coverage measurement itself is handled in a concrete and rigorous manner.

  17. Sequential multi-nuclide emission rate estimation method based on gamma dose rate measurement for nuclear emergency management

    International Nuclear Information System (INIS)

    Zhang, Xiaole; Raskob, Wolfgang; Landman, Claudia; Trybushnyi, Dmytro; Li, Yu

    2017-01-01

    Highlights: • Sequentially reconstruct multi-nuclide emission using gamma dose rate measurements. • Incorporate a priori ratio of nuclides into the background error covariance matrix. • Sequentially augment and update the estimation and the background error covariance. • Suppress the generation of negative estimations for the sequential method. • Evaluate the new method with twin experiments based on the JRODOS system. - Abstract: In case of a nuclear accident, the source term is typically not known but extremely important for the assessment of the consequences to the affected population. Therefore the assessment of the potential source term is of utmost importance for emergency response. A fully sequential method, derived from a regularized weighted least square problem, is proposed to reconstruct the emission and composition of a multiple-nuclide release using gamma dose rate measurements. The a priori nuclide ratios are incorporated into the background error covariance (BEC) matrix, which is dynamically augmented and sequentially updated. Negative estimations in the mathematical algorithm are suppressed by utilizing artificial zero-observations (with large uncertainties) to simultaneously update the state vector and BEC. The method is evaluated by twin experiments based on the JRodos system. The results indicate that the new method successfully reconstructs the emission and its uncertainties. An accurate a priori ratio accelerates the analysis process, which then obtains satisfactory results with only a limited number of measurements; otherwise, more measurements are needed to generate reasonable estimations. The suppression of negative estimations effectively improves the performance, especially for situations with poor a priori information, where the method is more prone to generating negative values.

  18. Sequential multi-nuclide emission rate estimation method based on gamma dose rate measurement for nuclear emergency management

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiaole, E-mail: zhangxiaole10@outlook.com [Institute for Nuclear and Energy Technologies, Karlsruhe Institute of Technology, Karlsruhe, D-76021 (Germany); Institute of Public Safety Research, Department of Engineering Physics, Tsinghua University, Beijing, 100084 (China); Raskob, Wolfgang; Landman, Claudia; Trybushnyi, Dmytro; Li, Yu [Institute for Nuclear and Energy Technologies, Karlsruhe Institute of Technology, Karlsruhe, D-76021 (Germany)

    2017-03-05

    Highlights: • Sequentially reconstruct multi-nuclide emission using gamma dose rate measurements. • Incorporate a priori ratio of nuclides into the background error covariance matrix. • Sequentially augment and update the estimation and the background error covariance. • Suppress the generation of negative estimations for the sequential method. • Evaluate the new method with twin experiments based on the JRODOS system. - Abstract: In case of a nuclear accident, the source term is typically not known but extremely important for the assessment of the consequences to the affected population. Therefore the assessment of the potential source term is of utmost importance for emergency response. A fully sequential method, derived from a regularized weighted least square problem, is proposed to reconstruct the emission and composition of a multiple-nuclide release using gamma dose rate measurements. The a priori nuclide ratios are incorporated into the background error covariance (BEC) matrix, which is dynamically augmented and sequentially updated. Negative estimations in the mathematical algorithm are suppressed by utilizing artificial zero-observations (with large uncertainties) to simultaneously update the state vector and BEC. The method is evaluated by twin experiments based on the JRodos system. The results indicate that the new method successfully reconstructs the emission and its uncertainties. An accurate a priori ratio accelerates the analysis process, which then obtains satisfactory results with only a limited number of measurements; otherwise, more measurements are needed to generate reasonable estimations. The suppression of negative estimations effectively improves the performance, especially for situations with poor a priori information, where the method is more prone to generating negative values.
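
    The core mechanism, sequentially updating an emission-rate vector and its error covariance from scalar dose-rate observations while damping negative estimates with artificial zero-observations, can be sketched generically as follows; the dose-rate kernels, a priori ratios and noise levels are placeholders rather than values from the paper or from JRodos.

    import numpy as np

    def scalar_update(x, P, h, y, r):
        """One recursive least-squares / Kalman update for a scalar observation
        y = h @ x_true + noise, with noise variance r."""
        s = h @ P @ h + r                      # innovation variance
        k = P @ h / s                          # gain vector
        x = x + k * (y - h @ x)
        P = P - np.outer(k, h @ P)
        return x, P

    rng = np.random.default_rng(1)
    n_nuclides = 3
    true_q = np.array([5.0, 2.0, 1.0])         # "true" emission rates (arbitrary units)

    # A priori estimate and covariance loosely encoding an assumed 4:2:1 nuclide ratio
    x = np.array([4.0, 2.0, 1.0])
    P = np.diag([4.0, 2.0, 1.0]) ** 2

    for _ in range(50):
        h = rng.uniform(0.1, 1.0, n_nuclides)  # placeholder dose-rate kernel for one station
        y = h @ true_q + rng.normal(0, 0.05)   # simulated gamma dose rate measurement
        x, P = scalar_update(x, P, h, y, r=0.05 ** 2)
        # Damp negative estimates with an artificial zero-observation of large variance
        for i in np.where(x < 0)[0]:
            e = np.zeros(n_nuclides)
            e[i] = 1.0
            x, P = scalar_update(x, P, e, 0.0, r=10.0)

    print("estimated emission rates:", np.round(x, 2))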

  19. Plant simulator

    International Nuclear Information System (INIS)

    Fukumitsu, Hiroyuki

    1998-01-01

    A simulator of a reactor plant of the present invention comprises a plurality of distributed computers, an indication processing section and an operation section. The simulation calculation functions for the various plant models are shared by the plurality of computers. The indication processing section controls the collection of data of the plant simulated by the computers and the instructions of an operator. The operation section is operated by the operator, and the results of the operation are transmitted to the indication processing section, to conduct operator training and to display the results of the simulation. Each of the computers and the indication processing section are connected with each other by a network having a memory for common use. Data such as the results of calculation of the plant models and various plant parameters required in common by the computers and the indication processing section are stored in the common memory, and are adapted to be used by way of the network. (N.H.)

  20. Sequential Foreign Investments, Regional Technology Platforms and the Evolution of Japanese Multinationals in East Asia

    OpenAIRE

    Song, Jaeyong

    2001-01-01

    In this paper, we investigate the firm-level mechanisms that underlie the sequential foreign direct investment (FDI) decisions of multinational corporations (MNCs). To understand inter-firm heterogeneity in the sequential FDI behaviors of MNCs, we develop a firm capability-based model of sequential FDI decisions. In the setting of Japanese electronics MNCs in East Asia, we empirically examine how prior investments in firm capabilities affect sequential investments into existing produ...

  1. Sequential cancer immunotherapy: targeted activity of dimeric TNF and IL-8

    Science.gov (United States)

    Adrian, Nicole; Siebenborn, Uta; Fadle, Natalie; Plesko, Margarita; Fischer, Eliane; Wüest, Thomas; Stenner, Frank; Mertens, Joachim C.; Knuth, Alexander; Ritter, Gerd; Old, Lloyd J.; Renner, Christoph

    2009-01-01

    Polymorphonuclear neutrophils (PMNs) are potent effectors of inflammation and their attempts to respond to cancer are suggested by their systemic, regional and intratumoral activation. We previously reported on the recruitment of CD11b+ leukocytes due to tumor site-specific enrichment of TNF activity after intravenous administration of a dimeric TNF immunokine with specificity for fibroblast activation protein (FAP). However, TNF-induced chemo-attraction and extravasation of PMNs from blood into the tumor is a multistep process essentially mediated by interleukin 8. With the aim to amplify the TNF-induced and IL-8-mediated chemotactic response, we generated immunocytokines by N-terminal fusion of a human anti-FAP scFv fragment with human IL-8 (IL-8(72)) and its N-terminally truncated form IL-8(3-72). Due to the dramatic difference in chemotaxis induction in vitro, we favored the mature chemokine fused to the anti-FAP scFv for further investigation in vivo. BALB/c nu/nu mice were simultaneously xenografted with FAP-positive or -negative tumors and extended chemo-attraction of PMNs was only detectable in FAP-expressing tissue after intravenous administration of the anti-FAP scFv-IL-8(72) construct. As TNF-activated PMNs are likewise producers and primary targets for IL-8, we investigated the therapeutic efficacy of co-administration of both effectors: Sequential application of scFv-IL-8(72) and dimeric IgG1-TNF fusion proteins significantly enhanced anti-tumor activity when compared either to a single-effector treatment regimen or sequential application of non-targeted cytokines, indicating that the tumor-restricted sequential application of IL-8(72) and TNF is a promising approach for cancer therapy. PMID:19267427

  2. Weighted-Bit-Flipping-Based Sequential Scheduling Decoding Algorithms for LDPC Codes

    Directory of Open Access Journals (Sweden)

    Qing Zhu

    2013-01-01

    Full Text Available Low-density parity-check (LDPC) codes can be applied in many different scenarios such as video broadcasting and satellite communications. LDPC codes are commonly decoded by an iterative algorithm called belief propagation (BP) over the corresponding Tanner graph. The original BP updates all the variable nodes simultaneously, followed by all the check nodes simultaneously as well. We propose a sequential scheduling algorithm based on the weighted bit-flipping (WBF) algorithm for the sake of improving the convergence speed. Notably, WBF is a simple, low-complexity algorithm. We combine it with BP to obtain the advantages of both algorithms. The flipping function used in WBF is borrowed to determine the priority of scheduling. Simulation results show that it can provide a good tradeoff between FER performance and computational complexity for short-length LDPC codes.
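
    A minimal version of the weighted bit-flipping idea on which the proposed scheduling is built might look like the sketch below; the toy parity-check matrix and channel model are illustrative, and the sequential BP scheduling itself is not reproduced here.

    import numpy as np

    def wbf_decode(H, y, max_iter=50):
        """Weighted bit-flipping decoding: H is an (m x n) parity-check matrix,
        y are received real values (BPSK mapping: bit 0 -> +1, bit 1 -> -1)."""
        hard = (y < 0).astype(int)                   # hard decisions
        # Reliability of each check = smallest |y| among the bits it involves
        w = np.array([np.min(np.abs(y[H[m] == 1])) for m in range(H.shape[0])])
        for _ in range(max_iter):
            s = H @ hard % 2                         # syndrome
            if not s.any():
                break                                # all checks satisfied
            # Flipping metric: unsatisfied checks cast weighted votes to flip their bits
            E = ((2 * s - 1) * w) @ H
            hard[np.argmax(E)] ^= 1                  # flip the most suspicious bit
        return hard

    # Toy example: (7,4) Hamming code over an AWGN channel with BPSK
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
    rng = np.random.default_rng(2)
    codeword = np.zeros(7, dtype=int)                # all-zero codeword
    y = (1 - 2 * codeword) + rng.normal(0, 0.6, 7)   # transmitted BPSK symbols plus noise
    print("decoded bits:", wbf_decode(H, y))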

  3. Differences in Sequential Eye Movement Behavior between Taiwanese and American Viewers

    Directory of Open Access Journals (Sweden)

    Yen Ju eLee

    2016-05-01

    Full Text Available Knowledge of how information is sought in the visual world is useful for predicting and simulating human behavior. Taiwanese participants and American participants were instructed to judge the facial expression of a focal face that was flanked horizontally by other faces while their eye movements were monitored. The Taiwanese participants distributed their eye fixations more widely than the American participants, started to look away from the focal face earlier than the American participants, and spent a higher percentage of time looking at the flanking faces. Eye movement transition matrices also provided evidence that Taiwanese participants continually and systematically shifted gaze between focal and flanking faces. Eye movement patterns were less systematic and less prevalent in American participants. This suggests that the two cultures utilized different attention allocation strategies. The results highlight the importance of determining sequential eye movement statistics in cross-cultural research on the utilization of visual context.

  4. Multistrain models predict sequential multidrug treatment strategies to result in less antimicrobial resistance than combination treatment

    DEFF Research Database (Denmark)

    Ahmad, Amais; Zachariasen, Camilla; Christiansen, Lasse Engbo

    2016-01-01

    Background: Combination treatment is increasingly used to fight infections caused by bacteria resistant to two or more antimicrobials. While multiple studies have evaluated treatment strategies to minimize the emergence of resistant strains for single antimicrobial treatment, fewer studies have ... generated by a mathematical model of the competitive growth of multiple strains of Escherichia coli. Results: Simulation studies showed that sequential use of tetracycline and ampicillin reduced the level of double resistance, when compared to the combination treatment. The effect of the cycling frequency ... frequency did not play a role in suppressing the growth of resistant strains, but the specific order of the two antimicrobials did. Predictions made from the study could be used to redesign multidrug treatment strategies not only for intramuscular treatment in pigs, but also for other dosing routes.

  5. A novel approach for small sample size family-based association studies: sequential tests.

    Science.gov (United States)

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
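
    For readers unfamiliar with the mechanics, a generic Wald SPRT for a Bernoulli proportion (not the TDT-specific statistic of the paper) can be sketched as follows; the hypothesized proportions and error rates are illustrative.

    import math

    def sprt(observations, p0=0.5, p1=0.65, alpha=0.05, beta=0.05):
        """Wald's sequential probability ratio test for a Bernoulli proportion.
        Scans the observations in order and stops as soon as a boundary is crossed."""
        upper = math.log((1 - beta) / alpha)      # accept H1 above this boundary
        lower = math.log(beta / (1 - alpha))      # accept H0 below this boundary
        llr = 0.0
        for x in observations:                    # x = 1 (e.g., allele transmitted) or 0
            llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
            if llr >= upper:
                return "accept H1 (associated)"
            if llr <= lower:
                return "accept H0 (not associated)"
        return "keep sampling"

    # Hypothetical transmission data for one SNP
    data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0,
            1, 1, 1, 0, 1, 1, 1, 1, 0, 1]
    print(sprt(data))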

  6. Performance analysis of coherent free space optical communications with sequential pyramid wavefront sensor

    Science.gov (United States)

    Liu, Wei; Yao, Kainan; Chen, Lu; Huang, Danian; Cao, Jingtai; Gu, Haijun

    2018-03-01

    Based on the previous study of the theory of the sequential pyramid wavefront sensor (SPWFS), in this paper the SPWFS is first applied to coherent free space optical communications (FSOC), offering more flexible spatial resolution and higher sensitivity than the Shack-Hartmann wavefront sensor, as well as higher uniformity of intensity distribution and a much simpler setup than the pyramid wavefront sensor. Then, the mixing efficiency (ME) and the bit error rate (BER) of the coherent FSOC are analyzed during aberration correction through numerical simulation with binary phase shift keying (BPSK) modulation. Finally, an experimental AO system based on the SPWFS is set up, and the experimental data are used to analyze the ME and BER of homodyne detection with BPSK modulation. The results show that the AO system based on the SPWFS can increase the ME and decrease the BER effectively. The conclusions of this paper provide a new method of wavefront sensing for designing the AO system of a coherent FSOC system.

  7. An Improved Sequential Initiation Method for Multitarget Track in Clutter with Large Noise Measurement

    Directory of Open Access Journals (Sweden)

    Daxiong Ji

    2014-01-01

    Full Text Available This paper proposes an improved sequential method for initiating tracks of multiple underwater objects in clutter, estimating the initial position of each trajectory. The underwater environment is complex and changeable, and sonar data are not ideal. When the detection distance is large, the error of the measured data is also large. In addition, clutter has a severe effect on track initiation, so it is hard to initialize a track and estimate its initial position. In the new track initiation rule, when at least six of ten points meet the requirements, we declare a new track, and the initial state parameters are estimated by the linear least-squares method. Compared to conventional track initiation methods, our method not only considers the kinematic information of the targets but also treats the error of the sonar sensors as an important element. Computer simulations confirm that the method performs well.
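
    The initial-state estimation step, fitting a constant-velocity model to the accepted detections by linear least squares, can be illustrated roughly as below; the gating outcome and the sonar noise level are simplified placeholders.

    import numpy as np

    rng = np.random.default_rng(3)

    # Ten scans of a constant-velocity target in 2D, observed with large sonar noise
    dt = 1.0
    t = np.arange(10) * dt
    true_x0, true_v = np.array([100.0, 50.0]), np.array([2.0, -1.5])
    meas = true_x0 + np.outer(t, true_v) + rng.normal(0, 3.0, (10, 2))

    accepted = np.ones(10, dtype=bool)       # assume gating kept all ten detections here
    if accepted.sum() >= 6:                  # initiation rule: at least six of ten points
        # Linear least squares for position(t) = x0 + v * t
        A = np.column_stack([np.ones(accepted.sum()), t[accepted]])
        coef, *_ = np.linalg.lstsq(A, meas[accepted], rcond=None)
        x0_hat, v_hat = coef[0], coef[1]
        print("initial position estimate:", np.round(x0_hat, 2))
        print("velocity estimate:", np.round(v_hat, 2))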

  8. Particle connectedness and cluster formation in sequential depositions of particles: integral-equation theory.

    Science.gov (United States)

    Danwanichakul, Panu; Glandt, Eduardo D

    2004-11-15

    We applied the integral-equation theory to the connectedness problem. The method originally applied to the study of continuum percolation in various equilibrium systems was modified for our sequential quenching model, a particular limit of an irreversible adsorption. The development of the theory based on the (quenched-annealed) binary-mixture approximation includes the Ornstein-Zernike equation, the Percus-Yevick closure, and an additional term involving the three-body connectedness function. This function is simplified by introducing a Kirkwood-like superposition approximation. We studied the three-dimensional (3D) system of randomly placed spheres and 2D systems of square-well particles, both with a narrow and with a wide well. The results from our integral-equation theory are in good accordance with simulation results within a certain range of densities.

  9. A sequential threshold cure model for genetic analysis of time-to-event data

    DEFF Research Database (Denmark)

    Ødegård, J; Madsen, Per; Labouriau, Rodrigo S.

    2011-01-01

    In analysis of time-to-event data, classical survival models ignore the presence of potential nonsusceptible (cured) individuals, which, if present, will invalidate the inference procedures. Existence of nonsusceptible individuals is particularly relevant under challenge testing with specific pathogens, which is a common procedure in aquaculture breeding schemes. A cure model is a survival model accounting for a fraction of nonsusceptible individuals in the population. This study proposes a mixed cure model for time-to-event data, measured as sequential binary records. In a simulation study, survival data were generated through 2 underlying traits: susceptibility and endurance (risk of dying per time-unit), associated with 2 sets of underlying liabilities. Despite considerable phenotypic confounding, the proposed model was largely able to distinguish the 2 traits. Furthermore, if selection...

  10. Acoustical source reconstruction from non-synchronous sequential measurements by Fast Iterative Shrinkage Thresholding Algorithm

    Science.gov (United States)

    Yu, Liang; Antoni, Jerome; Leclere, Quentin; Jiang, Weikang

    2017-11-01

    Acoustical source reconstruction is a typical inverse problem, whose minimum frequency of reconstruction hinges on the size of the array and whose maximum frequency depends on the spacing distance between the microphones. For the sake of enlarging the frequency range of reconstruction and reducing the cost of an acquisition system, Cyclic Projection (CP), a method of sequential measurements without reference, was recently investigated (JSV, 2016, 372:31-49). In this paper, the Propagation-based Fast Iterative Shrinkage Thresholding Algorithm (Propagation-FISTA) is introduced, which improves CP in two aspects: (1) the number of acoustic sources is no longer needed, and the only assumption made is that of a "weakly sparse" eigenvalue spectrum; (2) the construction of the spatial basis is much easier and adaptive to practical scenarios of acoustical measurements, benefiting from the introduction of a propagation-based spatial basis. The proposed Propagation-FISTA is first investigated with different simulations and experimental setups and is then illustrated with an industrial case.
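
    The FISTA iteration itself, here applied to a generic l1-regularized least-squares problem rather than to the propagation-based acoustic model of the paper, is short enough to sketch:

    import numpy as np

    def fista(A, b, lam, n_iter=200):
        """FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1 (Beck & Teboulle, 2009)."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        z, t = x.copy(), 1.0
        for _ in range(n_iter):
            grad = A.T @ (A @ z - b)
            u = z - grad / L                                             # gradient step
            x_new = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)    # soft threshold
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            z = x_new + ((t - 1) / t_new) * (x_new - x)                  # momentum step
            x, t = x_new, t_new
        return x

    # Toy demo: recover a sparse vector from noisy linear measurements
    rng = np.random.default_rng(4)
    A = rng.normal(size=(60, 100))
    x_true = np.zeros(100)
    x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
    b = A @ x_true + 0.01 * rng.normal(size=60)
    x_hat = fista(A, b, lam=0.1)
    print("largest recovered entries at indices:", np.argsort(-np.abs(x_hat))[:3])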

  11. Programme for test generation for combinatorial and sequential systems

    International Nuclear Information System (INIS)

    Tran Huy Hoan

    1973-01-01

    This research thesis reports the computer-assisted search for tests aimed at failure detection in combinatorial and sequential logic circuits. As he wanted to deal with complex circuits containing many modules, such as those met in large-scale integrated circuits (LSI), the author used propagation paths. He reports the development of a method which is valid for combinatorial systems and for several sequential circuits comprising elementary logic modules and JK and RS flip-flops. This method was developed on an IBM 360/91 computer in the PL/1 language. The memory space used is limited and adjustable with respect to circuit size. Computing time is short compared to that needed by other programmes. The solution is practical and efficient for failure testing and localisation.

  12. Moving mesh generation with a sequential approach for solving PDEs

    DEFF Research Database (Denmark)

    In moving mesh methods, physical PDEs and a mesh equation derived from equidistribution of an error metric (the so-called monitor function) are simultaneously solved, and meshes are dynamically concentrated on steep regions (Lim et al., 2001). However, the simultaneous solution procedure ... a simple and robust moving mesh algorithm in one or multiple dimensions. In this study, we propose a sequential solution procedure including two separate parts: a prediction step to obtain an approximate solution at the next time level (integration of the physical PDEs) and a regridding step at the next time level (mesh generation and solution interpolation). Convection terms, which appear in the physical PDEs and the mesh equation, are discretized by a WENO (Weighted Essentially Non-Oscillatory) scheme in conservative form. This sequential approach is intended to keep the advantages of robustness and simplicity of the static...

  13. Competence and Praxis: Sequential Analysis in German Sociology

    Directory of Open Access Journals (Sweden)

    Kai-Olaf Maiwald

    2005-09-01

    Full Text Available In German social research nowadays most qualitative methodologies employ sequential analysis. This article explores the similarities and differences in conceptualising and practising this method. First, the working consensus, conceived as a shared set of methodological assumptions, is explicated. Second, with regard to three major paradigms of qualitative research in Germany—conversation analysis, objective hermeneutics, and hermeneutic sociology of knowledge—the different ways of doing sequential analysis are investigated to locate the points of departure from a working consensus. It is argued that differences arise from different case-perspectives and, relative to that, from different modes of introducing general knowledge, i.e. knowledge that is not specific for the analysed case, into the interpretation. An important notion to emerge from the comparison is the distinction between competence and praxis. URN: urn:nbn:de:0114-fqs0503310

  14. Bidding in sequential electricity markets: The Nordic case

    DEFF Research Database (Denmark)

    Boomsma, Trine Krogh; Juul, Nina; Fleten, Stein-Erik

    2014-01-01

    For electricity market participants trading in sequential markets with differences in price levels and risk exposure, coordinated bidding is highly relevant. We consider a Nordic power producer who engages in the day-ahead spot market and the near real-time balancing market. In both markets, clearing prices and dispatched volumes are unknown at the time of bidding. However, in the balancing market, the agent faces an additional risk of not being dispatched. Taking into account the sequential clearing of these markets and the gradual realization of market prices, we formulate the bidding problem as a multi-stage stochastic program. We investigate whether higher risk exposure can explain the hesitation, often observed in practice, to bid into the balancing market, even in cases of higher expected price levels. Furthermore, we quantify the gain from coordinated bidding, and by deriving ...

  15. POLYP: an automatic device for drawing sequential samples of gas

    Energy Technology Data Exchange (ETDEWEB)

    Gaglione, P; Koechler, C; Stanchi, L

    1974-12-01

    POLYP is an automatic device consisting of electronic equipment which sequentially drives 8 small pumps for drawing samples of gas. The electronic circuit is driven by a quartz oscillator and allows for the preselection of a waiting time, in such a manner that a set of similar instruments placed at suitable positions in the open country will start simultaneously. At the same time the first pump of each instrument will inflate a plastic bag for a preset time. The other seven pumps will then inflate the other bags sequentially. The instrument is powered by rechargeable batteries and realized with C-MOS integrated circuits for nearly negligible power consumption. As it is intended for field operation, it is waterproof.

  16. Variation among heritage speakers: Sequential vs. simultaneous bilinguals

    Directory of Open Access Journals (Sweden)

    Teresa Lee

    2013-08-01

    Full Text Available This study examines the differences in the grammatical knowledge of two types of heritage speakers of Korean. Early simultaneous bilinguals are exposed to both English and the heritage language from birth, whereas early sequential bilinguals are exposed to the heritage language first and then to English upon schooling. A listening comprehension task involving relative clauses was conducted with 51 beginning-level Korean heritage speakers. The results showed that the early sequential bilinguals exhibited much more accurate knowledge than the early simultaneous bilinguals, who lacked rudimentary knowledge of Korean relative clauses. Drawing on the findings of adult and child Korean L1 data on the acquisition of relative clauses, the performance of each group is discussed with respect to attrition and incomplete acquisition of the heritage language.

  17. Comparisons of memory for nonverbal auditory and visual sequential stimuli.

    Science.gov (United States)

    McFarland, D J; Cacace, A T

    1995-01-01

    Properties of auditory and visual sensory memory were compared by examining subjects' recognition performance of randomly generated binary auditory sequential frequency patterns and binary visual sequential color patterns within a forced-choice paradigm. Experiment 1 demonstrated serial-position effects in auditory and visual modalities consisting of both primacy and recency effects. Experiment 2 found that retention of auditory and visual information was remarkably similar when assessed across a 10-s interval. Experiments 3 and 4, taken together, showed that the recency effect in sensory memory is affected more by the type of response required (recognition vs. reproduction) than by the sensory modality employed. These studies suggest that auditory and visual sensory memory stores for nonverbal stimuli share similar properties with respect to serial-position effects and persistence over time.

  18. Sequential function approximation on arbitrarily distributed point sets

    Science.gov (United States)

    Wu, Kailiang; Xiu, Dongbin

    2018-02-01

    We present a randomized iterative method for approximating an unknown function sequentially on an arbitrary point set. The method is based on a recently developed sequential approximation (SA) method, which approximates a target function using one data point at each step and avoids matrix operations. The focus of this paper is on data sets with a highly irregular distribution of points. We present a nearest neighbor replacement (NNR) algorithm, which allows one to sample irregular data sets in a near-optimal manner. We provide mathematical justification and error estimates for the NNR algorithm. Extensive numerical examples are also presented to demonstrate that the NNR algorithm can deliver satisfactory convergence for the SA method on data sets with high irregularity in their point distributions.
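
    The underlying one-point-at-a-time update of the SA method is a Kaczmarz-type correction of the basis coefficients; a sketch under an assumed Fourier basis and target function is given below (the nearest neighbor replacement step of the paper is not reproduced).

    import numpy as np

    rng = np.random.default_rng(5)

    def basis(x, n_pairs=3):
        """Small Fourier basis on [0, 1] (illustrative choice)."""
        feats = [1.0]
        for k in range(1, n_pairs + 1):
            feats += [np.sin(2 * np.pi * k * x), np.cos(2 * np.pi * k * x)]
        return np.array(feats)

    f = lambda x: np.sin(2 * np.pi * x)          # target function (assumed for the demo)
    points = rng.random(2000)                    # arbitrarily distributed point set

    c = np.zeros(7)                              # basis coefficients
    for x in points:                             # one data point per step, no matrix operations
        phi = basis(x)
        residual = f(x) - phi @ c
        c += residual * phi / (phi @ phi)        # Kaczmarz-type sequential correction

    xs = np.linspace(0, 1, 200)
    approx = np.array([basis(x) @ c for x in xs])
    print("max error on a test grid:", np.max(np.abs(approx - f(xs))))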

  19. Sequential and parallel image restoration: neural network implementations.

    Science.gov (United States)

    Figueiredo, M T; Leitao, J N

    1994-01-01

    Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high-dimensional convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem onto the energy of a predefined network, taking advantage of its energy minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm supported on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.

  20. Sequentially generated states for the study of two dimensional systems

    Energy Technology Data Exchange (ETDEWEB)

    Banuls, Mari-Carmen; Cirac, J. Ignacio [Max-Planck-Institut fuer Quantenoptik, Garching (Germany); Perez-Garcia, David [Depto. Analisis Matematico, Universidad Complutense de Madrid (Spain); Wolf, Michael M. [Niels Bohr Institut, Copenhagen (Denmark); Verstraete, Frank [Fakultaet fuer Physik, Universitaet Wien (Austria)

    2009-07-01

    The family of Matrix Product States represents a powerful tool for the study of physical one-dimensional quantum many-body systems, such as spin chains. Moreover, Matrix Product States can be defined as the family of quantum states that can be sequentially generated in a one-dimensional system. We have introduced a new family of states which extends this sequential definition to two dimensions. As with Matrix Product States, expectation values of few-body observables can be efficiently evaluated and, for the case of translationally invariant systems, the correlation functions decay exponentially with the distance. We show that such states are a subclass of Projected Entangled Pair States and investigate their suitability for approximating the ground states of local Hamiltonians.
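
    As a reminder of why such tensor-network states are numerically convenient, the squared norm of a random matrix product state (and, analogously, local expectation values) can be contracted at a cost polynomial in the bond dimension, e.g.:

    import numpy as np

    rng = np.random.default_rng(6)
    L, d, D = 10, 2, 4                 # sites, physical dimension, bond dimension

    # Random open-boundary MPS tensors A[i] with shapes (D_left, d, D_right)
    dims = [1] + [D] * (L - 1) + [1]
    tensors = [rng.normal(size=(dims[i], d, dims[i + 1])) for i in range(L)]

    # <psi|psi> via transfer matrices: cost polynomial in D, linear in L
    E = np.ones((1, 1))
    for A in tensors:
        # contract the boundary matrix E with A twice over the physical index
        # (tensors are real here, so complex conjugation is trivial)
        E = np.einsum('ab,asc,bsd->cd', E, A, A)
    print("squared norm of the MPS:", float(E[0, 0]))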